00:00:00.000 Started by upstream project "autotest-per-patch" build number 120987 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.099 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.100 The recommended git tool is: git 00:00:00.100 using credential 00000000-0000-0000-0000-000000000002 00:00:00.102 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.128 Fetching changes from the remote Git repository 00:00:00.129 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.156 Using shallow fetch with depth 1 00:00:00.156 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.156 > git --version # timeout=10 00:00:00.177 > git --version # 'git version 2.39.2' 00:00:00.177 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.177 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.177 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.226 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.236 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.248 Checking out Revision 6e1fadd1eee50389429f9abb33dde5face8ca717 (FETCH_HEAD) 00:00:06.248 > git config core.sparsecheckout # timeout=10 00:00:06.260 > git read-tree -mu HEAD # timeout=10 00:00:06.276 > git checkout -f 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=5 00:00:06.294 Commit message: "pool: attach build logs for failed merge builds" 00:00:06.295 > git rev-list --no-walk 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=10 00:00:06.377 [Pipeline] Start of Pipeline 00:00:06.393 [Pipeline] library 00:00:06.395 Loading library shm_lib@master 00:00:06.395 Library shm_lib@master is cached. Copying from home. 00:00:06.409 [Pipeline] node 00:00:21.411 Still waiting to schedule task 00:00:21.411 Waiting for next available executor on ‘vagrant-vm-host’ 00:02:41.291 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:41.294 [Pipeline] { 00:02:41.305 [Pipeline] catchError 00:02:41.307 [Pipeline] { 00:02:41.320 [Pipeline] wrap 00:02:41.327 [Pipeline] { 00:02:41.333 [Pipeline] stage 00:02:41.334 [Pipeline] { (Prologue) 00:02:41.353 [Pipeline] echo 00:02:41.354 Node: VM-host-SM16 00:02:41.359 [Pipeline] cleanWs 00:02:41.366 [WS-CLEANUP] Deleting project workspace... 00:02:41.366 [WS-CLEANUP] Deferred wipeout is used... 
00:02:41.372 [WS-CLEANUP] done 00:02:41.552 [Pipeline] setCustomBuildProperty 00:02:41.629 [Pipeline] nodesByLabel 00:02:41.630 Found a total of 1 nodes with the 'sorcerer' label 00:02:41.641 [Pipeline] httpRequest 00:02:41.645 HttpMethod: GET 00:02:41.646 URL: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:02:41.648 Sending request to url: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:02:41.649 Response Code: HTTP/1.1 200 OK 00:02:41.650 Success: Status code 200 is in the accepted range: 200,404 00:02:41.650 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:02:41.789 [Pipeline] sh 00:02:42.070 + tar --no-same-owner -xf jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:02:42.091 [Pipeline] httpRequest 00:02:42.096 HttpMethod: GET 00:02:42.097 URL: http://10.211.164.96/packages/spdk_0d1f30fbf8d2a002d60a3252a65d4ffbff392cdb.tar.gz 00:02:42.098 Sending request to url: http://10.211.164.96/packages/spdk_0d1f30fbf8d2a002d60a3252a65d4ffbff392cdb.tar.gz 00:02:42.098 Response Code: HTTP/1.1 200 OK 00:02:42.099 Success: Status code 200 is in the accepted range: 200,404 00:02:42.099 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_0d1f30fbf8d2a002d60a3252a65d4ffbff392cdb.tar.gz 00:02:44.408 [Pipeline] sh 00:02:44.686 + tar --no-same-owner -xf spdk_0d1f30fbf8d2a002d60a3252a65d4ffbff392cdb.tar.gz 00:02:47.997 [Pipeline] sh 00:02:48.272 + git -C spdk log --oneline -n5 00:02:48.272 0d1f30fbf sma: add listener check on vfio device creation 00:02:48.272 129e6ba3b test/nvmf: add missing remove listener discovery 00:02:48.272 38dca48f0 libvfio-user: update submodule to point to `spdk` branch 00:02:48.272 7a71abf69 fuzz/llvm_vfio_fuzz: limit length of generated data to `bytes_per_cmd` 00:02:48.273 fe11fef3a fuzz/llvm_vfio_fuzz: fix `fuzz_vfio_user_irq_set` incorrect data length 00:02:48.288 [Pipeline] writeFile 00:02:48.301 [Pipeline] sh 00:02:48.574 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:48.585 [Pipeline] sh 00:02:48.862 + cat autorun-spdk.conf 00:02:48.862 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:48.862 SPDK_TEST_NVMF=1 00:02:48.862 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:48.862 SPDK_TEST_URING=1 00:02:48.862 SPDK_TEST_USDT=1 00:02:48.862 SPDK_RUN_UBSAN=1 00:02:48.862 NET_TYPE=virt 00:02:48.862 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:48.867 RUN_NIGHTLY=0 00:02:48.871 [Pipeline] } 00:02:48.886 [Pipeline] // stage 00:02:48.900 [Pipeline] stage 00:02:48.902 [Pipeline] { (Run VM) 00:02:48.914 [Pipeline] sh 00:02:49.219 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:49.219 + echo 'Start stage prepare_nvme.sh' 00:02:49.219 Start stage prepare_nvme.sh 00:02:49.219 + [[ -n 4 ]] 00:02:49.219 + disk_prefix=ex4 00:02:49.219 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:02:49.219 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:02:49.219 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:02:49.219 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:49.219 ++ SPDK_TEST_NVMF=1 00:02:49.219 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:49.219 ++ SPDK_TEST_URING=1 00:02:49.219 ++ SPDK_TEST_USDT=1 00:02:49.219 ++ SPDK_RUN_UBSAN=1 00:02:49.219 ++ NET_TYPE=virt 00:02:49.219 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:49.219 ++ RUN_NIGHTLY=0 00:02:49.219 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:49.219 + nvme_files=() 
00:02:49.219 + declare -A nvme_files 00:02:49.219 + backend_dir=/var/lib/libvirt/images/backends 00:02:49.219 + nvme_files['nvme.img']=5G 00:02:49.219 + nvme_files['nvme-cmb.img']=5G 00:02:49.219 + nvme_files['nvme-multi0.img']=4G 00:02:49.219 + nvme_files['nvme-multi1.img']=4G 00:02:49.219 + nvme_files['nvme-multi2.img']=4G 00:02:49.219 + nvme_files['nvme-openstack.img']=8G 00:02:49.219 + nvme_files['nvme-zns.img']=5G 00:02:49.219 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:49.219 + (( SPDK_TEST_FTL == 1 )) 00:02:49.219 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:49.219 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:49.219 + for nvme in "${!nvme_files[@]}" 00:02:49.219 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:02:49.219 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:49.219 + for nvme in "${!nvme_files[@]}" 00:02:49.219 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:02:49.790 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:49.790 + for nvme in "${!nvme_files[@]}" 00:02:49.790 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:02:49.790 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:49.790 + for nvme in "${!nvme_files[@]}" 00:02:49.790 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:02:49.790 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:49.790 + for nvme in "${!nvme_files[@]}" 00:02:49.790 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:02:49.790 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:49.790 + for nvme in "${!nvme_files[@]}" 00:02:49.790 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:02:49.790 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:49.790 + for nvme in "${!nvme_files[@]}" 00:02:49.790 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:02:50.356 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:50.356 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:02:50.356 + echo 'End stage prepare_nvme.sh' 00:02:50.356 End stage prepare_nvme.sh 00:02:50.368 [Pipeline] sh 00:02:50.646 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:50.646 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38 00:02:50.646 00:02:50.646 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:02:50.646 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:02:50.646 
VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:50.646 HELP=0 00:02:50.646 DRY_RUN=0 00:02:50.646 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:02:50.646 NVME_DISKS_TYPE=nvme,nvme, 00:02:50.646 NVME_AUTO_CREATE=0 00:02:50.646 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:02:50.646 NVME_CMB=,, 00:02:50.646 NVME_PMR=,, 00:02:50.646 NVME_ZNS=,, 00:02:50.646 NVME_MS=,, 00:02:50.646 NVME_FDP=,, 00:02:50.646 SPDK_VAGRANT_DISTRO=fedora38 00:02:50.646 SPDK_VAGRANT_VMCPU=10 00:02:50.646 SPDK_VAGRANT_VMRAM=12288 00:02:50.646 SPDK_VAGRANT_PROVIDER=libvirt 00:02:50.646 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:50.646 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:50.646 SPDK_OPENSTACK_NETWORK=0 00:02:50.646 VAGRANT_PACKAGE_BOX=0 00:02:50.646 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:50.646 FORCE_DISTRO=true 00:02:50.646 VAGRANT_BOX_VERSION= 00:02:50.646 EXTRA_VAGRANTFILES= 00:02:50.646 NIC_MODEL=e1000 00:02:50.646 00:02:50.646 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:02:50.646 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:53.998 Bringing machine 'default' up with 'libvirt' provider... 00:02:54.579 ==> default: Creating image (snapshot of base box volume). 00:02:54.579 ==> default: Creating domain with the following settings... 00:02:54.579 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1713971223_2a92977c1299cba42d4f 00:02:54.579 ==> default: -- Domain type: kvm 00:02:54.579 ==> default: -- Cpus: 10 00:02:54.579 ==> default: -- Feature: acpi 00:02:54.579 ==> default: -- Feature: apic 00:02:54.579 ==> default: -- Feature: pae 00:02:54.579 ==> default: -- Memory: 12288M 00:02:54.579 ==> default: -- Memory Backing: hugepages: 00:02:54.579 ==> default: -- Management MAC: 00:02:54.579 ==> default: -- Loader: 00:02:54.579 ==> default: -- Nvram: 00:02:54.579 ==> default: -- Base box: spdk/fedora38 00:02:54.579 ==> default: -- Storage pool: default 00:02:54.579 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1713971223_2a92977c1299cba42d4f.img (20G) 00:02:54.579 ==> default: -- Volume Cache: default 00:02:54.579 ==> default: -- Kernel: 00:02:54.579 ==> default: -- Initrd: 00:02:54.579 ==> default: -- Graphics Type: vnc 00:02:54.579 ==> default: -- Graphics Port: -1 00:02:54.579 ==> default: -- Graphics IP: 127.0.0.1 00:02:54.579 ==> default: -- Graphics Password: Not defined 00:02:54.579 ==> default: -- Video Type: cirrus 00:02:54.579 ==> default: -- Video VRAM: 9216 00:02:54.579 ==> default: -- Sound Type: 00:02:54.579 ==> default: -- Keymap: en-us 00:02:54.579 ==> default: -- TPM Path: 00:02:54.579 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:54.579 ==> default: -- Command line args: 00:02:54.579 ==> default: -> value=-device, 00:02:54.579 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:54.579 ==> default: -> value=-drive, 00:02:54.579 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:02:54.579 ==> default: -> value=-device, 00:02:54.579 ==> default: -> 
value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:54.579 ==> default: -> value=-device, 00:02:54.579 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:54.579 ==> default: -> value=-drive, 00:02:54.579 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:54.579 ==> default: -> value=-device, 00:02:54.579 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:54.579 ==> default: -> value=-drive, 00:02:54.579 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:54.579 ==> default: -> value=-device, 00:02:54.579 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:54.579 ==> default: -> value=-drive, 00:02:54.579 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:54.579 ==> default: -> value=-device, 00:02:54.579 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:54.864 ==> default: Creating shared folders metadata... 00:02:54.864 ==> default: Starting domain. 00:02:56.311 ==> default: Waiting for domain to get an IP address... 00:03:14.394 ==> default: Waiting for SSH to become available... 00:03:14.394 ==> default: Configuring and enabling network interfaces... 00:03:18.583 default: SSH address: 192.168.121.155:22 00:03:18.583 default: SSH username: vagrant 00:03:18.583 default: SSH auth method: private key 00:03:21.126 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:29.237 ==> default: Mounting SSHFS shared folder... 00:03:30.170 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:03:30.170 ==> default: Checking Mount.. 00:03:31.546 ==> default: Folder Successfully Mounted! 00:03:31.546 ==> default: Running provisioner: file... 00:03:32.481 default: ~/.gitconfig => .gitconfig 00:03:32.738 00:03:32.738 SUCCESS! 00:03:32.738 00:03:32.738 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:03:32.738 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:32.738 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
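The Vagrant stage above attaches two emulated NVMe controllers to the guest: nvme-0 (serial 12340) with a single namespace backed by ex4-nvme.img, and nvme-1 (serial 12341) with three namespaces backed by the ex4-nvme-multi0/1/2 images. A minimal sketch of the same wiring as hand-written qemu-system-x86_64 arguments follows; it is a hypothetical reconstruction from the command-line args printed above, assuming the raw backing images created in the prepare_nvme.sh stage and the QEMU binary path used by this job, and it omits the boot disk, network, and other guest options that libvirt adds.

  # Hypothetical stand-alone reconstruction of the NVMe topology shown in this log.
  BACKENDS=/var/lib/libvirt/images/backends
  nvme_args=(
    -device nvme,id=nvme-0,serial=12340
    -drive "format=raw,file=$BACKENDS/ex4-nvme.img,if=none,id=nvme-0-drive0"
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096
    -device nvme,id=nvme-1,serial=12341
    -drive "format=raw,file=$BACKENDS/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0"
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096
    -drive "format=raw,file=$BACKENDS/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1"
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096
    -drive "format=raw,file=$BACKENDS/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2"
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,logical_block_size=4096,physical_block_size=4096
  )
  # VM sizing matches SPDK_VAGRANT_VMCPU/VMRAM above; remaining options left to the caller.
  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -enable-kvm -smp 10 -m 12288 "${nvme_args[@]}"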
00:03:32.738 00:03:32.747 [Pipeline] } 00:03:32.768 [Pipeline] // stage 00:03:32.778 [Pipeline] dir 00:03:32.779 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:03:32.781 [Pipeline] { 00:03:32.796 [Pipeline] catchError 00:03:32.798 [Pipeline] { 00:03:32.811 [Pipeline] sh 00:03:33.110 + vagrant ssh-config --host vagrant 00:03:33.110 + sed -ne /^Host/,$p 00:03:33.110 + tee ssh_conf 00:03:37.317 Host vagrant 00:03:37.317 HostName 192.168.121.155 00:03:37.317 User vagrant 00:03:37.317 Port 22 00:03:37.317 UserKnownHostsFile /dev/null 00:03:37.317 StrictHostKeyChecking no 00:03:37.317 PasswordAuthentication no 00:03:37.317 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:03:37.317 IdentitiesOnly yes 00:03:37.317 LogLevel FATAL 00:03:37.317 ForwardAgent yes 00:03:37.317 ForwardX11 yes 00:03:37.317 00:03:37.330 [Pipeline] withEnv 00:03:37.332 [Pipeline] { 00:03:37.346 [Pipeline] sh 00:03:37.620 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:37.620 source /etc/os-release 00:03:37.620 [[ -e /image.version ]] && img=$(< /image.version) 00:03:37.620 # Minimal, systemd-like check. 00:03:37.620 if [[ -e /.dockerenv ]]; then 00:03:37.620 # Clear garbage from the node's name: 00:03:37.620 # agt-er_autotest_547-896 -> autotest_547-896 00:03:37.620 # $HOSTNAME is the actual container id 00:03:37.620 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:37.620 if mountpoint -q /etc/hostname; then 00:03:37.620 # We can assume this is a mount from a host where container is running, 00:03:37.620 # so fetch its hostname to easily identify the target swarm worker. 00:03:37.620 container="$(< /etc/hostname) ($agent)" 00:03:37.620 else 00:03:37.620 # Fallback 00:03:37.620 container=$agent 00:03:37.620 fi 00:03:37.620 fi 00:03:37.620 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:37.620 00:03:37.888 [Pipeline] } 00:03:37.905 [Pipeline] // withEnv 00:03:37.912 [Pipeline] setCustomBuildProperty 00:03:37.930 [Pipeline] stage 00:03:37.934 [Pipeline] { (Tests) 00:03:37.953 [Pipeline] sh 00:03:38.230 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:38.501 [Pipeline] timeout 00:03:38.501 Timeout set to expire in 30 min 00:03:38.503 [Pipeline] { 00:03:38.518 [Pipeline] sh 00:03:38.800 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:39.367 HEAD is now at 0d1f30fbf sma: add listener check on vfio device creation 00:03:39.382 [Pipeline] sh 00:03:39.660 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:39.932 [Pipeline] sh 00:03:40.216 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:40.231 [Pipeline] sh 00:03:40.509 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:03:40.766 ++ readlink -f spdk_repo 00:03:40.767 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:40.767 + [[ -n /home/vagrant/spdk_repo ]] 00:03:40.767 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:40.767 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:40.767 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:40.767 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:40.767 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:40.767 + cd /home/vagrant/spdk_repo 00:03:40.767 + source /etc/os-release 00:03:40.767 ++ NAME='Fedora Linux' 00:03:40.767 ++ VERSION='38 (Cloud Edition)' 00:03:40.767 ++ ID=fedora 00:03:40.767 ++ VERSION_ID=38 00:03:40.767 ++ VERSION_CODENAME= 00:03:40.767 ++ PLATFORM_ID=platform:f38 00:03:40.767 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:03:40.767 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:40.767 ++ LOGO=fedora-logo-icon 00:03:40.767 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:03:40.767 ++ HOME_URL=https://fedoraproject.org/ 00:03:40.767 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:03:40.767 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:40.767 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:40.767 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:40.767 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:03:40.767 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:40.767 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:03:40.767 ++ SUPPORT_END=2024-05-14 00:03:40.767 ++ VARIANT='Cloud Edition' 00:03:40.767 ++ VARIANT_ID=cloud 00:03:40.767 + uname -a 00:03:40.767 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:03:40.767 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:41.025 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:41.025 Hugepages 00:03:41.025 node hugesize free / total 00:03:41.025 node0 1048576kB 0 / 0 00:03:41.025 node0 2048kB 0 / 0 00:03:41.025 00:03:41.025 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:41.283 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:41.283 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:41.283 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:41.283 + rm -f /tmp/spdk-ld-path 00:03:41.283 + source autorun-spdk.conf 00:03:41.283 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:41.283 ++ SPDK_TEST_NVMF=1 00:03:41.283 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:41.283 ++ SPDK_TEST_URING=1 00:03:41.283 ++ SPDK_TEST_USDT=1 00:03:41.283 ++ SPDK_RUN_UBSAN=1 00:03:41.283 ++ NET_TYPE=virt 00:03:41.283 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:41.283 ++ RUN_NIGHTLY=0 00:03:41.283 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:41.283 + [[ -n '' ]] 00:03:41.283 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:41.283 + for M in /var/spdk/build-*-manifest.txt 00:03:41.283 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:41.283 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:41.283 + for M in /var/spdk/build-*-manifest.txt 00:03:41.283 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:41.283 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:41.283 ++ uname 00:03:41.283 + [[ Linux == \L\i\n\u\x ]] 00:03:41.283 + sudo dmesg -T 00:03:41.283 + sudo dmesg --clear 00:03:41.283 + dmesg_pid=5256 00:03:41.283 + sudo dmesg -Tw 00:03:41.283 + [[ Fedora Linux == FreeBSD ]] 00:03:41.283 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:41.283 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:41.283 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:41.283 + [[ -x /usr/src/fio-static/fio ]] 00:03:41.283 + export FIO_BIN=/usr/src/fio-static/fio 00:03:41.283 + FIO_BIN=/usr/src/fio-static/fio 00:03:41.283 + [[ '' == 
\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:41.283 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:41.283 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:41.283 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:41.283 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:41.283 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:41.283 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:41.283 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:41.283 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:41.283 Test configuration: 00:03:41.283 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:41.283 SPDK_TEST_NVMF=1 00:03:41.283 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:41.283 SPDK_TEST_URING=1 00:03:41.283 SPDK_TEST_USDT=1 00:03:41.283 SPDK_RUN_UBSAN=1 00:03:41.283 NET_TYPE=virt 00:03:41.283 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:41.541 RUN_NIGHTLY=0 15:07:50 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:41.541 15:07:50 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:41.541 15:07:50 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:41.541 15:07:50 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:41.541 15:07:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.541 15:07:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.541 15:07:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.541 15:07:50 -- paths/export.sh@5 -- $ export PATH 00:03:41.541 15:07:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.541 15:07:50 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:41.541 15:07:50 -- common/autobuild_common.sh@435 -- $ date +%s 00:03:41.541 15:07:50 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713971270.XXXXXX 00:03:41.541 15:07:50 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713971270.b0Zr0F 00:03:41.541 15:07:50 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:03:41.541 15:07:50 -- 
common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:03:41.541 15:07:50 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:41.541 15:07:50 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:41.541 15:07:50 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:41.541 15:07:50 -- common/autobuild_common.sh@451 -- $ get_config_params 00:03:41.541 15:07:50 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:03:41.541 15:07:50 -- common/autotest_common.sh@10 -- $ set +x 00:03:41.541 15:07:50 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:03:41.541 15:07:50 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:03:41.541 15:07:50 -- pm/common@17 -- $ local monitor 00:03:41.541 15:07:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.541 15:07:50 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5290 00:03:41.541 15:07:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.541 15:07:50 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5292 00:03:41.541 15:07:50 -- pm/common@21 -- $ date +%s 00:03:41.541 15:07:50 -- pm/common@26 -- $ sleep 1 00:03:41.541 15:07:50 -- pm/common@21 -- $ date +%s 00:03:41.541 15:07:50 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713971270 00:03:41.541 15:07:50 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713971270 00:03:41.541 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713971270_collect-cpu-load.pm.log 00:03:41.541 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713971270_collect-vmstat.pm.log 00:03:42.475 15:07:51 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:03:42.475 15:07:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:42.475 15:07:51 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:42.475 15:07:51 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:42.475 15:07:51 -- spdk/autobuild.sh@16 -- $ date -u 00:03:42.475 Wed Apr 24 03:07:51 PM UTC 2024 00:03:42.475 15:07:51 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:42.475 v24.05-pre-412-g0d1f30fbf 00:03:42.475 15:07:51 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:42.475 15:07:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:42.475 15:07:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:42.475 15:07:51 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:42.475 15:07:51 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:42.475 15:07:51 -- common/autotest_common.sh@10 -- $ set +x 00:03:42.475 ************************************ 00:03:42.475 START TEST ubsan 00:03:42.475 ************************************ 00:03:42.475 using ubsan 00:03:42.475 15:07:51 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 
00:03:42.475 00:03:42.475 real 0m0.000s 00:03:42.475 user 0m0.000s 00:03:42.475 sys 0m0.000s 00:03:42.475 15:07:51 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:03:42.475 ************************************ 00:03:42.475 END TEST ubsan 00:03:42.475 ************************************ 00:03:42.475 15:07:51 -- common/autotest_common.sh@10 -- $ set +x 00:03:42.734 15:07:51 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:42.734 15:07:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:42.734 15:07:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:42.734 15:07:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:42.734 15:07:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:42.734 15:07:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:42.734 15:07:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:42.734 15:07:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:42.734 15:07:51 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:03:42.734 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:42.734 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:43.303 Using 'verbs' RDMA provider 00:03:56.443 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:11.320 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:11.320 Creating mk/config.mk...done. 00:04:11.320 Creating mk/cc.flags.mk...done. 00:04:11.320 Type 'make' to build. 00:04:11.320 15:08:19 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:04:11.320 15:08:19 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:04:11.320 15:08:19 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:04:11.320 15:08:19 -- common/autotest_common.sh@10 -- $ set +x 00:04:11.320 ************************************ 00:04:11.320 START TEST make 00:04:11.320 ************************************ 00:04:11.320 15:08:19 -- common/autotest_common.sh@1111 -- $ make -j10 00:04:11.320 make[1]: Nothing to be done for 'all'. 
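Before the Meson output that follows, autobuild has already run SPDK's configure with the flags captured earlier in this log and has started make, which first builds the bundled DPDK submodule (the Meson/ninja output below) and then SPDK itself. A condensed sketch of the equivalent manual steps, using only the commands and flags that appear in this log:

  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
  make -j10   # builds dpdk/ via Meson+ninja first, then the SPDK libraries and apps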
00:04:21.295 The Meson build system 00:04:21.295 Version: 1.3.1 00:04:21.295 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:21.295 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:21.295 Build type: native build 00:04:21.295 Program cat found: YES (/usr/bin/cat) 00:04:21.295 Project name: DPDK 00:04:21.295 Project version: 23.11.0 00:04:21.295 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:04:21.295 C linker for the host machine: cc ld.bfd 2.39-16 00:04:21.295 Host machine cpu family: x86_64 00:04:21.295 Host machine cpu: x86_64 00:04:21.295 Message: ## Building in Developer Mode ## 00:04:21.295 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:21.295 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:21.295 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:21.295 Program python3 found: YES (/usr/bin/python3) 00:04:21.295 Program cat found: YES (/usr/bin/cat) 00:04:21.295 Compiler for C supports arguments -march=native: YES 00:04:21.295 Checking for size of "void *" : 8 00:04:21.295 Checking for size of "void *" : 8 (cached) 00:04:21.295 Library m found: YES 00:04:21.295 Library numa found: YES 00:04:21.295 Has header "numaif.h" : YES 00:04:21.295 Library fdt found: NO 00:04:21.295 Library execinfo found: NO 00:04:21.295 Has header "execinfo.h" : YES 00:04:21.295 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:04:21.295 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:21.295 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:21.295 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:21.295 Run-time dependency openssl found: YES 3.0.9 00:04:21.295 Run-time dependency libpcap found: YES 1.10.4 00:04:21.295 Has header "pcap.h" with dependency libpcap: YES 00:04:21.295 Compiler for C supports arguments -Wcast-qual: YES 00:04:21.295 Compiler for C supports arguments -Wdeprecated: YES 00:04:21.295 Compiler for C supports arguments -Wformat: YES 00:04:21.295 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:21.295 Compiler for C supports arguments -Wformat-security: NO 00:04:21.295 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:21.295 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:21.295 Compiler for C supports arguments -Wnested-externs: YES 00:04:21.295 Compiler for C supports arguments -Wold-style-definition: YES 00:04:21.295 Compiler for C supports arguments -Wpointer-arith: YES 00:04:21.295 Compiler for C supports arguments -Wsign-compare: YES 00:04:21.295 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:21.295 Compiler for C supports arguments -Wundef: YES 00:04:21.295 Compiler for C supports arguments -Wwrite-strings: YES 00:04:21.295 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:21.295 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:21.295 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:21.295 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:21.295 Program objdump found: YES (/usr/bin/objdump) 00:04:21.295 Compiler for C supports arguments -mavx512f: YES 00:04:21.295 Checking if "AVX512 checking" compiles: YES 00:04:21.295 Fetching value of define "__SSE4_2__" : 1 00:04:21.295 Fetching value of define "__AES__" : 1 00:04:21.295 Fetching value of define "__AVX__" : 1 00:04:21.295 
Fetching value of define "__AVX2__" : 1 00:04:21.295 Fetching value of define "__AVX512BW__" : (undefined) 00:04:21.295 Fetching value of define "__AVX512CD__" : (undefined) 00:04:21.295 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:21.295 Fetching value of define "__AVX512F__" : (undefined) 00:04:21.295 Fetching value of define "__AVX512VL__" : (undefined) 00:04:21.295 Fetching value of define "__PCLMUL__" : 1 00:04:21.295 Fetching value of define "__RDRND__" : 1 00:04:21.295 Fetching value of define "__RDSEED__" : 1 00:04:21.295 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:21.295 Fetching value of define "__znver1__" : (undefined) 00:04:21.295 Fetching value of define "__znver2__" : (undefined) 00:04:21.295 Fetching value of define "__znver3__" : (undefined) 00:04:21.295 Fetching value of define "__znver4__" : (undefined) 00:04:21.295 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:21.295 Message: lib/log: Defining dependency "log" 00:04:21.295 Message: lib/kvargs: Defining dependency "kvargs" 00:04:21.295 Message: lib/telemetry: Defining dependency "telemetry" 00:04:21.295 Checking for function "getentropy" : NO 00:04:21.295 Message: lib/eal: Defining dependency "eal" 00:04:21.295 Message: lib/ring: Defining dependency "ring" 00:04:21.295 Message: lib/rcu: Defining dependency "rcu" 00:04:21.295 Message: lib/mempool: Defining dependency "mempool" 00:04:21.295 Message: lib/mbuf: Defining dependency "mbuf" 00:04:21.295 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:21.295 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:21.295 Compiler for C supports arguments -mpclmul: YES 00:04:21.295 Compiler for C supports arguments -maes: YES 00:04:21.295 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:21.295 Compiler for C supports arguments -mavx512bw: YES 00:04:21.295 Compiler for C supports arguments -mavx512dq: YES 00:04:21.295 Compiler for C supports arguments -mavx512vl: YES 00:04:21.295 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:21.295 Compiler for C supports arguments -mavx2: YES 00:04:21.295 Compiler for C supports arguments -mavx: YES 00:04:21.295 Message: lib/net: Defining dependency "net" 00:04:21.295 Message: lib/meter: Defining dependency "meter" 00:04:21.295 Message: lib/ethdev: Defining dependency "ethdev" 00:04:21.295 Message: lib/pci: Defining dependency "pci" 00:04:21.295 Message: lib/cmdline: Defining dependency "cmdline" 00:04:21.295 Message: lib/hash: Defining dependency "hash" 00:04:21.295 Message: lib/timer: Defining dependency "timer" 00:04:21.295 Message: lib/compressdev: Defining dependency "compressdev" 00:04:21.296 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:21.296 Message: lib/dmadev: Defining dependency "dmadev" 00:04:21.296 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:21.296 Message: lib/power: Defining dependency "power" 00:04:21.296 Message: lib/reorder: Defining dependency "reorder" 00:04:21.296 Message: lib/security: Defining dependency "security" 00:04:21.296 Has header "linux/userfaultfd.h" : YES 00:04:21.296 Has header "linux/vduse.h" : YES 00:04:21.296 Message: lib/vhost: Defining dependency "vhost" 00:04:21.296 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:21.296 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:21.296 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:21.296 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:21.296 Message: 
Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:21.296 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:21.296 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:21.296 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:21.296 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:21.296 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:21.296 Program doxygen found: YES (/usr/bin/doxygen) 00:04:21.296 Configuring doxy-api-html.conf using configuration 00:04:21.296 Configuring doxy-api-man.conf using configuration 00:04:21.296 Program mandb found: YES (/usr/bin/mandb) 00:04:21.296 Program sphinx-build found: NO 00:04:21.296 Configuring rte_build_config.h using configuration 00:04:21.296 Message: 00:04:21.296 ================= 00:04:21.296 Applications Enabled 00:04:21.296 ================= 00:04:21.296 00:04:21.296 apps: 00:04:21.296 00:04:21.296 00:04:21.296 Message: 00:04:21.296 ================= 00:04:21.296 Libraries Enabled 00:04:21.296 ================= 00:04:21.296 00:04:21.296 libs: 00:04:21.296 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:21.296 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:21.296 cryptodev, dmadev, power, reorder, security, vhost, 00:04:21.296 00:04:21.296 Message: 00:04:21.296 =============== 00:04:21.296 Drivers Enabled 00:04:21.296 =============== 00:04:21.296 00:04:21.296 common: 00:04:21.296 00:04:21.296 bus: 00:04:21.296 pci, vdev, 00:04:21.296 mempool: 00:04:21.296 ring, 00:04:21.296 dma: 00:04:21.296 00:04:21.296 net: 00:04:21.296 00:04:21.296 crypto: 00:04:21.296 00:04:21.296 compress: 00:04:21.296 00:04:21.296 vdpa: 00:04:21.296 00:04:21.296 00:04:21.296 Message: 00:04:21.296 ================= 00:04:21.296 Content Skipped 00:04:21.296 ================= 00:04:21.296 00:04:21.296 apps: 00:04:21.296 dumpcap: explicitly disabled via build config 00:04:21.296 graph: explicitly disabled via build config 00:04:21.296 pdump: explicitly disabled via build config 00:04:21.296 proc-info: explicitly disabled via build config 00:04:21.296 test-acl: explicitly disabled via build config 00:04:21.296 test-bbdev: explicitly disabled via build config 00:04:21.296 test-cmdline: explicitly disabled via build config 00:04:21.296 test-compress-perf: explicitly disabled via build config 00:04:21.296 test-crypto-perf: explicitly disabled via build config 00:04:21.296 test-dma-perf: explicitly disabled via build config 00:04:21.296 test-eventdev: explicitly disabled via build config 00:04:21.296 test-fib: explicitly disabled via build config 00:04:21.296 test-flow-perf: explicitly disabled via build config 00:04:21.296 test-gpudev: explicitly disabled via build config 00:04:21.296 test-mldev: explicitly disabled via build config 00:04:21.296 test-pipeline: explicitly disabled via build config 00:04:21.296 test-pmd: explicitly disabled via build config 00:04:21.296 test-regex: explicitly disabled via build config 00:04:21.296 test-sad: explicitly disabled via build config 00:04:21.296 test-security-perf: explicitly disabled via build config 00:04:21.296 00:04:21.296 libs: 00:04:21.296 metrics: explicitly disabled via build config 00:04:21.296 acl: explicitly disabled via build config 00:04:21.296 bbdev: explicitly disabled via build config 00:04:21.296 bitratestats: explicitly disabled via build config 00:04:21.296 bpf: explicitly disabled via build config 00:04:21.296 cfgfile: explicitly 
disabled via build config 00:04:21.296 distributor: explicitly disabled via build config 00:04:21.296 efd: explicitly disabled via build config 00:04:21.296 eventdev: explicitly disabled via build config 00:04:21.296 dispatcher: explicitly disabled via build config 00:04:21.296 gpudev: explicitly disabled via build config 00:04:21.296 gro: explicitly disabled via build config 00:04:21.296 gso: explicitly disabled via build config 00:04:21.296 ip_frag: explicitly disabled via build config 00:04:21.296 jobstats: explicitly disabled via build config 00:04:21.296 latencystats: explicitly disabled via build config 00:04:21.296 lpm: explicitly disabled via build config 00:04:21.296 member: explicitly disabled via build config 00:04:21.296 pcapng: explicitly disabled via build config 00:04:21.296 rawdev: explicitly disabled via build config 00:04:21.296 regexdev: explicitly disabled via build config 00:04:21.296 mldev: explicitly disabled via build config 00:04:21.296 rib: explicitly disabled via build config 00:04:21.296 sched: explicitly disabled via build config 00:04:21.296 stack: explicitly disabled via build config 00:04:21.296 ipsec: explicitly disabled via build config 00:04:21.296 pdcp: explicitly disabled via build config 00:04:21.296 fib: explicitly disabled via build config 00:04:21.296 port: explicitly disabled via build config 00:04:21.296 pdump: explicitly disabled via build config 00:04:21.296 table: explicitly disabled via build config 00:04:21.296 pipeline: explicitly disabled via build config 00:04:21.296 graph: explicitly disabled via build config 00:04:21.296 node: explicitly disabled via build config 00:04:21.296 00:04:21.296 drivers: 00:04:21.296 common/cpt: not in enabled drivers build config 00:04:21.296 common/dpaax: not in enabled drivers build config 00:04:21.296 common/iavf: not in enabled drivers build config 00:04:21.296 common/idpf: not in enabled drivers build config 00:04:21.296 common/mvep: not in enabled drivers build config 00:04:21.296 common/octeontx: not in enabled drivers build config 00:04:21.296 bus/auxiliary: not in enabled drivers build config 00:04:21.296 bus/cdx: not in enabled drivers build config 00:04:21.296 bus/dpaa: not in enabled drivers build config 00:04:21.296 bus/fslmc: not in enabled drivers build config 00:04:21.296 bus/ifpga: not in enabled drivers build config 00:04:21.296 bus/platform: not in enabled drivers build config 00:04:21.296 bus/vmbus: not in enabled drivers build config 00:04:21.296 common/cnxk: not in enabled drivers build config 00:04:21.296 common/mlx5: not in enabled drivers build config 00:04:21.296 common/nfp: not in enabled drivers build config 00:04:21.296 common/qat: not in enabled drivers build config 00:04:21.296 common/sfc_efx: not in enabled drivers build config 00:04:21.296 mempool/bucket: not in enabled drivers build config 00:04:21.296 mempool/cnxk: not in enabled drivers build config 00:04:21.296 mempool/dpaa: not in enabled drivers build config 00:04:21.296 mempool/dpaa2: not in enabled drivers build config 00:04:21.296 mempool/octeontx: not in enabled drivers build config 00:04:21.296 mempool/stack: not in enabled drivers build config 00:04:21.296 dma/cnxk: not in enabled drivers build config 00:04:21.296 dma/dpaa: not in enabled drivers build config 00:04:21.296 dma/dpaa2: not in enabled drivers build config 00:04:21.296 dma/hisilicon: not in enabled drivers build config 00:04:21.296 dma/idxd: not in enabled drivers build config 00:04:21.296 dma/ioat: not in enabled drivers build config 00:04:21.296 
dma/skeleton: not in enabled drivers build config 00:04:21.296 net/af_packet: not in enabled drivers build config 00:04:21.296 net/af_xdp: not in enabled drivers build config 00:04:21.296 net/ark: not in enabled drivers build config 00:04:21.296 net/atlantic: not in enabled drivers build config 00:04:21.296 net/avp: not in enabled drivers build config 00:04:21.296 net/axgbe: not in enabled drivers build config 00:04:21.296 net/bnx2x: not in enabled drivers build config 00:04:21.296 net/bnxt: not in enabled drivers build config 00:04:21.296 net/bonding: not in enabled drivers build config 00:04:21.296 net/cnxk: not in enabled drivers build config 00:04:21.296 net/cpfl: not in enabled drivers build config 00:04:21.296 net/cxgbe: not in enabled drivers build config 00:04:21.296 net/dpaa: not in enabled drivers build config 00:04:21.296 net/dpaa2: not in enabled drivers build config 00:04:21.297 net/e1000: not in enabled drivers build config 00:04:21.297 net/ena: not in enabled drivers build config 00:04:21.297 net/enetc: not in enabled drivers build config 00:04:21.297 net/enetfec: not in enabled drivers build config 00:04:21.297 net/enic: not in enabled drivers build config 00:04:21.297 net/failsafe: not in enabled drivers build config 00:04:21.297 net/fm10k: not in enabled drivers build config 00:04:21.297 net/gve: not in enabled drivers build config 00:04:21.297 net/hinic: not in enabled drivers build config 00:04:21.297 net/hns3: not in enabled drivers build config 00:04:21.297 net/i40e: not in enabled drivers build config 00:04:21.297 net/iavf: not in enabled drivers build config 00:04:21.297 net/ice: not in enabled drivers build config 00:04:21.297 net/idpf: not in enabled drivers build config 00:04:21.297 net/igc: not in enabled drivers build config 00:04:21.297 net/ionic: not in enabled drivers build config 00:04:21.297 net/ipn3ke: not in enabled drivers build config 00:04:21.297 net/ixgbe: not in enabled drivers build config 00:04:21.297 net/mana: not in enabled drivers build config 00:04:21.297 net/memif: not in enabled drivers build config 00:04:21.297 net/mlx4: not in enabled drivers build config 00:04:21.297 net/mlx5: not in enabled drivers build config 00:04:21.297 net/mvneta: not in enabled drivers build config 00:04:21.297 net/mvpp2: not in enabled drivers build config 00:04:21.297 net/netvsc: not in enabled drivers build config 00:04:21.297 net/nfb: not in enabled drivers build config 00:04:21.297 net/nfp: not in enabled drivers build config 00:04:21.297 net/ngbe: not in enabled drivers build config 00:04:21.297 net/null: not in enabled drivers build config 00:04:21.297 net/octeontx: not in enabled drivers build config 00:04:21.297 net/octeon_ep: not in enabled drivers build config 00:04:21.297 net/pcap: not in enabled drivers build config 00:04:21.297 net/pfe: not in enabled drivers build config 00:04:21.297 net/qede: not in enabled drivers build config 00:04:21.297 net/ring: not in enabled drivers build config 00:04:21.297 net/sfc: not in enabled drivers build config 00:04:21.297 net/softnic: not in enabled drivers build config 00:04:21.297 net/tap: not in enabled drivers build config 00:04:21.297 net/thunderx: not in enabled drivers build config 00:04:21.297 net/txgbe: not in enabled drivers build config 00:04:21.297 net/vdev_netvsc: not in enabled drivers build config 00:04:21.297 net/vhost: not in enabled drivers build config 00:04:21.297 net/virtio: not in enabled drivers build config 00:04:21.297 net/vmxnet3: not in enabled drivers build config 00:04:21.297 raw/*: 
missing internal dependency, "rawdev" 00:04:21.297 crypto/armv8: not in enabled drivers build config 00:04:21.297 crypto/bcmfs: not in enabled drivers build config 00:04:21.297 crypto/caam_jr: not in enabled drivers build config 00:04:21.297 crypto/ccp: not in enabled drivers build config 00:04:21.297 crypto/cnxk: not in enabled drivers build config 00:04:21.297 crypto/dpaa_sec: not in enabled drivers build config 00:04:21.297 crypto/dpaa2_sec: not in enabled drivers build config 00:04:21.297 crypto/ipsec_mb: not in enabled drivers build config 00:04:21.297 crypto/mlx5: not in enabled drivers build config 00:04:21.297 crypto/mvsam: not in enabled drivers build config 00:04:21.297 crypto/nitrox: not in enabled drivers build config 00:04:21.297 crypto/null: not in enabled drivers build config 00:04:21.297 crypto/octeontx: not in enabled drivers build config 00:04:21.297 crypto/openssl: not in enabled drivers build config 00:04:21.297 crypto/scheduler: not in enabled drivers build config 00:04:21.297 crypto/uadk: not in enabled drivers build config 00:04:21.297 crypto/virtio: not in enabled drivers build config 00:04:21.297 compress/isal: not in enabled drivers build config 00:04:21.297 compress/mlx5: not in enabled drivers build config 00:04:21.297 compress/octeontx: not in enabled drivers build config 00:04:21.297 compress/zlib: not in enabled drivers build config 00:04:21.297 regex/*: missing internal dependency, "regexdev" 00:04:21.297 ml/*: missing internal dependency, "mldev" 00:04:21.297 vdpa/ifc: not in enabled drivers build config 00:04:21.297 vdpa/mlx5: not in enabled drivers build config 00:04:21.297 vdpa/nfp: not in enabled drivers build config 00:04:21.297 vdpa/sfc: not in enabled drivers build config 00:04:21.297 event/*: missing internal dependency, "eventdev" 00:04:21.297 baseband/*: missing internal dependency, "bbdev" 00:04:21.297 gpu/*: missing internal dependency, "gpudev" 00:04:21.297 00:04:21.297 00:04:21.555 Build targets in project: 85 00:04:21.555 00:04:21.555 DPDK 23.11.0 00:04:21.555 00:04:21.555 User defined options 00:04:21.555 buildtype : debug 00:04:21.555 default_library : shared 00:04:21.555 libdir : lib 00:04:21.555 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:21.555 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:21.555 c_link_args : 00:04:21.555 cpu_instruction_set: native 00:04:21.556 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:21.556 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:21.556 enable_docs : false 00:04:21.556 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:21.556 enable_kmods : false 00:04:21.556 tests : false 00:04:21.556 00:04:21.556 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:22.121 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:22.121 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:22.121 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:22.121 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:22.121 [4/265] 
Linking static target lib/librte_kvargs.a 00:04:22.121 [5/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:22.121 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:22.380 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:22.380 [8/265] Linking static target lib/librte_log.a 00:04:22.380 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:22.380 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:22.638 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.897 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:23.155 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:23.155 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:23.155 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:23.155 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:23.155 [17/265] Linking static target lib/librte_telemetry.a 00:04:23.155 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:23.413 [19/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:23.413 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:23.413 [21/265] Linking target lib/librte_log.so.24.0 00:04:23.413 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:23.686 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:23.686 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:04:23.686 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:23.686 [26/265] Linking target lib/librte_kvargs.so.24.0 00:04:23.686 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:23.944 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:23.944 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:23.944 [30/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:04:23.944 [31/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:23.944 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:24.201 [33/265] Linking target lib/librte_telemetry.so.24.0 00:04:24.201 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:24.201 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:24.201 [36/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:04:24.460 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:24.460 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:24.460 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:24.460 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:24.460 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:24.460 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:24.718 [43/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:24.718 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:24.976 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:24.976 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:24.976 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:24.976 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:25.235 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:25.235 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:25.235 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:25.494 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:25.494 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:25.494 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:25.752 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:25.752 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:25.752 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:25.752 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:25.752 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:26.012 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:26.012 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:26.012 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:26.272 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:26.272 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:26.531 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:26.531 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:26.531 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:26.531 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:26.789 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:26.789 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:26.789 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:26.789 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:26.789 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:26.789 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:27.048 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:27.048 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:27.048 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:27.048 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:27.307 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:27.566 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:27.566 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:27.566 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:27.824 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 
00:04:27.824 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:27.824 [85/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:27.824 [86/265] Linking static target lib/librte_eal.a 00:04:28.083 [87/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:28.083 [88/265] Linking static target lib/librte_rcu.a 00:04:28.083 [89/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:28.083 [90/265] Linking static target lib/librte_ring.a 00:04:28.083 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:28.342 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:28.600 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:28.600 [94/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.600 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:28.600 [96/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.600 [97/265] Linking static target lib/librte_mempool.a 00:04:28.600 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:28.600 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:28.859 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:28.859 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:28.859 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:28.859 [103/265] Linking static target lib/librte_mbuf.a 00:04:29.117 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:29.117 [105/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:29.117 [106/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:29.117 [107/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:29.375 [108/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:29.375 [109/265] Linking static target lib/librte_net.a 00:04:29.375 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:29.375 [111/265] Linking static target lib/librte_meter.a 00:04:29.375 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:29.942 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:29.942 [114/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.942 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:29.942 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.942 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:29.942 [118/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.200 [119/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.459 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:30.718 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:30.976 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:30.976 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:30.976 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:30.976 [125/265] Linking static target 
lib/librte_pci.a 00:04:30.976 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:30.976 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:30.976 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:31.235 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:31.235 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:31.235 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:31.235 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:31.235 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:31.235 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:31.235 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:31.235 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:31.235 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:31.235 [138/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.493 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:31.493 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:31.493 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:31.493 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:31.493 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:31.493 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:31.493 [145/265] Linking static target lib/librte_ethdev.a 00:04:31.751 [146/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:31.751 [147/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:32.010 [148/265] Linking static target lib/librte_cmdline.a 00:04:32.010 [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:32.010 [150/265] Linking static target lib/librte_timer.a 00:04:32.268 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:32.268 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:32.526 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:32.526 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:32.526 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:32.526 [156/265] Linking static target lib/librte_hash.a 00:04:32.526 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:32.526 [158/265] Linking static target lib/librte_compressdev.a 00:04:32.526 [159/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.784 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:32.784 [161/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:32.784 [162/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:33.352 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:33.352 [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:33.352 [165/265] Linking static target lib/librte_dmadev.a 
00:04:33.352 [166/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:33.352 [167/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.352 [168/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:33.352 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:33.628 [170/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:33.628 [171/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.628 [172/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.628 [173/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:33.628 [174/265] Linking static target lib/librte_cryptodev.a 00:04:33.891 [175/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.150 [176/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:34.150 [177/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:34.150 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:34.150 [179/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:34.150 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:34.150 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:34.150 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:34.150 [183/265] Linking static target lib/librte_power.a 00:04:34.716 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:34.716 [185/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:34.716 [186/265] Linking static target lib/librte_reorder.a 00:04:34.716 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:34.716 [188/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:34.975 [189/265] Linking static target lib/librte_security.a 00:04:34.975 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:35.234 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:35.234 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.511 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.511 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.511 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:35.511 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:35.769 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:35.769 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:36.028 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:36.028 [200/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.028 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:36.286 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:36.287 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:36.287 [204/265] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:36.287 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:36.545 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:36.545 [207/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:36.545 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:36.545 [209/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:36.803 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:36.803 [211/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:36.803 [212/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:36.803 [213/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:36.803 [214/265] Linking static target drivers/librte_bus_vdev.a 00:04:36.803 [215/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:36.803 [216/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:36.803 [217/265] Linking static target drivers/librte_bus_pci.a 00:04:36.803 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:36.803 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:37.061 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:37.061 [221/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:37.061 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:37.061 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:37.061 [224/265] Linking static target drivers/librte_mempool_ring.a 00:04:37.319 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:37.577 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:37.834 [227/265] Linking static target lib/librte_vhost.a 00:04:38.766 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.766 [229/265] Linking target lib/librte_eal.so.24.0 00:04:39.024 [230/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.024 [231/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:04:39.024 [232/265] Linking target lib/librte_pci.so.24.0 00:04:39.024 [233/265] Linking target lib/librte_meter.so.24.0 00:04:39.024 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:04:39.024 [235/265] Linking target lib/librte_ring.so.24.0 00:04:39.024 [236/265] Linking target lib/librte_dmadev.so.24.0 00:04:39.024 [237/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.024 [238/265] Linking target lib/librte_timer.so.24.0 00:04:39.282 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:04:39.282 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:04:39.282 [241/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:04:39.282 [242/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:04:39.282 [243/265] Generating symbol file 
lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:04:39.282 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:04:39.282 [245/265] Linking target lib/librte_rcu.so.24.0 00:04:39.282 [246/265] Linking target lib/librte_mempool.so.24.0 00:04:39.541 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:04:39.541 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:04:39.541 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:04:39.541 [250/265] Linking target lib/librte_mbuf.so.24.0 00:04:39.541 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:04:39.800 [252/265] Linking target lib/librte_compressdev.so.24.0 00:04:39.800 [253/265] Linking target lib/librte_net.so.24.0 00:04:39.800 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:04:39.800 [255/265] Linking target lib/librte_reorder.so.24.0 00:04:39.800 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:04:39.800 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:04:40.058 [258/265] Linking target lib/librte_cmdline.so.24.0 00:04:40.058 [259/265] Linking target lib/librte_hash.so.24.0 00:04:40.058 [260/265] Linking target lib/librte_security.so.24.0 00:04:40.058 [261/265] Linking target lib/librte_ethdev.so.24.0 00:04:40.058 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:04:40.058 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:04:40.316 [264/265] Linking target lib/librte_power.so.24.0 00:04:40.316 [265/265] Linking target lib/librte_vhost.so.24.0 00:04:40.316 INFO: autodetecting backend as ninja 00:04:40.316 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:41.689 CC lib/ut/ut.o 00:04:41.689 CC lib/log/log.o 00:04:41.689 CC lib/log/log_flags.o 00:04:41.689 CC lib/log/log_deprecated.o 00:04:41.689 CC lib/ut_mock/mock.o 00:04:41.689 LIB libspdk_ut_mock.a 00:04:41.689 LIB libspdk_log.a 00:04:41.689 SO libspdk_ut_mock.so.6.0 00:04:41.689 LIB libspdk_ut.a 00:04:41.689 SO libspdk_log.so.7.0 00:04:41.689 SYMLINK libspdk_ut_mock.so 00:04:41.689 SO libspdk_ut.so.2.0 00:04:41.689 SYMLINK libspdk_log.so 00:04:41.689 SYMLINK libspdk_ut.so 00:04:41.947 CXX lib/trace_parser/trace.o 00:04:41.947 CC lib/util/base64.o 00:04:41.947 CC lib/util/bit_array.o 00:04:41.947 CC lib/util/cpuset.o 00:04:41.947 CC lib/util/crc16.o 00:04:41.947 CC lib/ioat/ioat.o 00:04:41.947 CC lib/util/crc32.o 00:04:41.947 CC lib/util/crc32c.o 00:04:41.947 CC lib/dma/dma.o 00:04:41.947 CC lib/vfio_user/host/vfio_user_pci.o 00:04:42.205 CC lib/vfio_user/host/vfio_user.o 00:04:42.205 CC lib/util/crc32_ieee.o 00:04:42.205 CC lib/util/crc64.o 00:04:42.205 CC lib/util/dif.o 00:04:42.205 CC lib/util/fd.o 00:04:42.205 CC lib/util/file.o 00:04:42.205 LIB libspdk_dma.a 00:04:42.205 SO libspdk_dma.so.4.0 00:04:42.205 CC lib/util/hexlify.o 00:04:42.205 CC lib/util/iov.o 00:04:42.205 SYMLINK libspdk_dma.so 00:04:42.205 CC lib/util/math.o 00:04:42.463 CC lib/util/pipe.o 00:04:42.463 CC lib/util/strerror_tls.o 00:04:42.463 LIB libspdk_ioat.a 00:04:42.463 CC lib/util/string.o 00:04:42.463 LIB libspdk_vfio_user.a 00:04:42.463 SO libspdk_ioat.so.7.0 00:04:42.463 SO libspdk_vfio_user.so.5.0 00:04:42.463 CC lib/util/uuid.o 00:04:42.463 SYMLINK libspdk_ioat.so 00:04:42.463 CC lib/util/fd_group.o 00:04:42.463 
CC lib/util/xor.o 00:04:42.463 SYMLINK libspdk_vfio_user.so 00:04:42.463 CC lib/util/zipf.o 00:04:42.722 LIB libspdk_util.a 00:04:42.722 SO libspdk_util.so.9.0 00:04:42.982 LIB libspdk_trace_parser.a 00:04:42.982 SO libspdk_trace_parser.so.5.0 00:04:42.982 SYMLINK libspdk_util.so 00:04:42.982 SYMLINK libspdk_trace_parser.so 00:04:43.242 CC lib/rdma/common.o 00:04:43.242 CC lib/rdma/rdma_verbs.o 00:04:43.242 CC lib/env_dpdk/env.o 00:04:43.242 CC lib/env_dpdk/memory.o 00:04:43.242 CC lib/json/json_parse.o 00:04:43.242 CC lib/env_dpdk/pci.o 00:04:43.242 CC lib/env_dpdk/init.o 00:04:43.242 CC lib/idxd/idxd.o 00:04:43.242 CC lib/vmd/vmd.o 00:04:43.242 CC lib/conf/conf.o 00:04:43.500 CC lib/json/json_util.o 00:04:43.500 CC lib/idxd/idxd_user.o 00:04:43.500 LIB libspdk_conf.a 00:04:43.500 SO libspdk_conf.so.6.0 00:04:43.500 CC lib/env_dpdk/threads.o 00:04:43.500 LIB libspdk_rdma.a 00:04:43.500 CC lib/env_dpdk/pci_ioat.o 00:04:43.500 SO libspdk_rdma.so.6.0 00:04:43.500 SYMLINK libspdk_conf.so 00:04:43.500 CC lib/env_dpdk/pci_virtio.o 00:04:43.500 CC lib/json/json_write.o 00:04:43.758 SYMLINK libspdk_rdma.so 00:04:43.758 CC lib/env_dpdk/pci_vmd.o 00:04:43.758 CC lib/env_dpdk/pci_idxd.o 00:04:43.758 CC lib/env_dpdk/pci_event.o 00:04:43.758 CC lib/env_dpdk/sigbus_handler.o 00:04:43.758 LIB libspdk_idxd.a 00:04:43.758 CC lib/env_dpdk/pci_dpdk.o 00:04:43.758 SO libspdk_idxd.so.12.0 00:04:43.758 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:43.758 CC lib/vmd/led.o 00:04:43.758 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:43.758 SYMLINK libspdk_idxd.so 00:04:44.016 LIB libspdk_json.a 00:04:44.016 LIB libspdk_vmd.a 00:04:44.016 SO libspdk_json.so.6.0 00:04:44.016 SO libspdk_vmd.so.6.0 00:04:44.016 SYMLINK libspdk_json.so 00:04:44.016 SYMLINK libspdk_vmd.so 00:04:44.273 CC lib/jsonrpc/jsonrpc_server.o 00:04:44.273 CC lib/jsonrpc/jsonrpc_client.o 00:04:44.273 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:44.273 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:44.531 LIB libspdk_jsonrpc.a 00:04:44.531 LIB libspdk_env_dpdk.a 00:04:44.531 SO libspdk_jsonrpc.so.6.0 00:04:44.791 SO libspdk_env_dpdk.so.14.0 00:04:44.791 SYMLINK libspdk_jsonrpc.so 00:04:44.791 SYMLINK libspdk_env_dpdk.so 00:04:44.791 CC lib/rpc/rpc.o 00:04:45.049 LIB libspdk_rpc.a 00:04:45.309 SO libspdk_rpc.so.6.0 00:04:45.309 SYMLINK libspdk_rpc.so 00:04:45.567 CC lib/trace/trace.o 00:04:45.568 CC lib/trace/trace_flags.o 00:04:45.568 CC lib/trace/trace_rpc.o 00:04:45.568 CC lib/keyring/keyring_rpc.o 00:04:45.568 CC lib/keyring/keyring.o 00:04:45.568 CC lib/notify/notify.o 00:04:45.568 CC lib/notify/notify_rpc.o 00:04:45.826 LIB libspdk_notify.a 00:04:45.826 SO libspdk_notify.so.6.0 00:04:45.826 LIB libspdk_trace.a 00:04:45.826 LIB libspdk_keyring.a 00:04:45.826 SYMLINK libspdk_notify.so 00:04:45.826 SO libspdk_keyring.so.1.0 00:04:45.826 SO libspdk_trace.so.10.0 00:04:45.826 SYMLINK libspdk_keyring.so 00:04:45.826 SYMLINK libspdk_trace.so 00:04:46.085 CC lib/sock/sock.o 00:04:46.085 CC lib/sock/sock_rpc.o 00:04:46.085 CC lib/thread/thread.o 00:04:46.085 CC lib/thread/iobuf.o 00:04:46.652 LIB libspdk_sock.a 00:04:46.652 SO libspdk_sock.so.9.0 00:04:46.652 SYMLINK libspdk_sock.so 00:04:47.220 CC lib/nvme/nvme_ctrlr.o 00:04:47.220 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:47.220 CC lib/nvme/nvme_fabric.o 00:04:47.220 CC lib/nvme/nvme_ns_cmd.o 00:04:47.220 CC lib/nvme/nvme_ns.o 00:04:47.220 CC lib/nvme/nvme_pcie.o 00:04:47.220 CC lib/nvme/nvme_pcie_common.o 00:04:47.220 CC lib/nvme/nvme.o 00:04:47.220 CC lib/nvme/nvme_qpair.o 00:04:47.787 LIB libspdk_thread.a 00:04:47.787 CC 
lib/nvme/nvme_quirks.o 00:04:47.787 SO libspdk_thread.so.10.0 00:04:47.787 CC lib/nvme/nvme_transport.o 00:04:47.787 SYMLINK libspdk_thread.so 00:04:48.045 CC lib/nvme/nvme_discovery.o 00:04:48.045 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:48.045 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:48.045 CC lib/nvme/nvme_tcp.o 00:04:48.045 CC lib/nvme/nvme_opal.o 00:04:48.045 CC lib/accel/accel.o 00:04:48.303 CC lib/accel/accel_rpc.o 00:04:48.303 CC lib/nvme/nvme_io_msg.o 00:04:48.303 CC lib/nvme/nvme_poll_group.o 00:04:48.561 CC lib/nvme/nvme_zns.o 00:04:48.561 CC lib/nvme/nvme_stubs.o 00:04:48.561 CC lib/nvme/nvme_auth.o 00:04:48.561 CC lib/accel/accel_sw.o 00:04:48.561 CC lib/nvme/nvme_cuse.o 00:04:49.129 CC lib/nvme/nvme_rdma.o 00:04:49.129 LIB libspdk_accel.a 00:04:49.129 CC lib/blob/blobstore.o 00:04:49.129 SO libspdk_accel.so.15.0 00:04:49.129 SYMLINK libspdk_accel.so 00:04:49.129 CC lib/blob/request.o 00:04:49.129 CC lib/blob/zeroes.o 00:04:49.387 CC lib/init/json_config.o 00:04:49.387 CC lib/init/subsystem.o 00:04:49.387 CC lib/blob/blob_bs_dev.o 00:04:49.387 CC lib/virtio/virtio.o 00:04:49.387 CC lib/init/subsystem_rpc.o 00:04:49.646 CC lib/init/rpc.o 00:04:49.646 CC lib/virtio/virtio_vhost_user.o 00:04:49.646 CC lib/virtio/virtio_vfio_user.o 00:04:49.646 CC lib/virtio/virtio_pci.o 00:04:49.646 CC lib/bdev/bdev.o 00:04:49.646 CC lib/bdev/bdev_rpc.o 00:04:49.646 CC lib/bdev/bdev_zone.o 00:04:49.646 LIB libspdk_init.a 00:04:49.646 SO libspdk_init.so.5.0 00:04:49.905 CC lib/bdev/part.o 00:04:49.905 CC lib/bdev/scsi_nvme.o 00:04:49.905 SYMLINK libspdk_init.so 00:04:49.905 LIB libspdk_virtio.a 00:04:49.905 SO libspdk_virtio.so.7.0 00:04:49.905 CC lib/event/app.o 00:04:49.905 CC lib/event/reactor.o 00:04:49.905 CC lib/event/log_rpc.o 00:04:49.905 CC lib/event/app_rpc.o 00:04:49.905 CC lib/event/scheduler_static.o 00:04:50.163 SYMLINK libspdk_virtio.so 00:04:50.442 LIB libspdk_nvme.a 00:04:50.442 LIB libspdk_event.a 00:04:50.442 SO libspdk_event.so.13.0 00:04:50.715 SYMLINK libspdk_event.so 00:04:50.715 SO libspdk_nvme.so.13.0 00:04:50.974 SYMLINK libspdk_nvme.so 00:04:51.908 LIB libspdk_blob.a 00:04:51.908 SO libspdk_blob.so.11.0 00:04:52.209 SYMLINK libspdk_blob.so 00:04:52.209 LIB libspdk_bdev.a 00:04:52.209 CC lib/lvol/lvol.o 00:04:52.209 CC lib/blobfs/blobfs.o 00:04:52.209 CC lib/blobfs/tree.o 00:04:52.468 SO libspdk_bdev.so.15.0 00:04:52.468 SYMLINK libspdk_bdev.so 00:04:52.725 CC lib/scsi/dev.o 00:04:52.725 CC lib/scsi/lun.o 00:04:52.725 CC lib/scsi/port.o 00:04:52.725 CC lib/scsi/scsi.o 00:04:52.725 CC lib/nvmf/ctrlr.o 00:04:52.725 CC lib/ublk/ublk.o 00:04:52.725 CC lib/ftl/ftl_core.o 00:04:52.725 CC lib/nbd/nbd.o 00:04:52.982 CC lib/nbd/nbd_rpc.o 00:04:52.982 CC lib/scsi/scsi_bdev.o 00:04:52.982 CC lib/scsi/scsi_pr.o 00:04:52.982 CC lib/scsi/scsi_rpc.o 00:04:52.982 CC lib/ftl/ftl_init.o 00:04:53.240 CC lib/scsi/task.o 00:04:53.240 CC lib/ublk/ublk_rpc.o 00:04:53.240 LIB libspdk_nbd.a 00:04:53.240 LIB libspdk_blobfs.a 00:04:53.240 SO libspdk_nbd.so.7.0 00:04:53.240 SO libspdk_blobfs.so.10.0 00:04:53.240 SYMLINK libspdk_nbd.so 00:04:53.240 CC lib/ftl/ftl_layout.o 00:04:53.240 CC lib/nvmf/ctrlr_discovery.o 00:04:53.241 CC lib/ftl/ftl_debug.o 00:04:53.241 CC lib/nvmf/ctrlr_bdev.o 00:04:53.241 SYMLINK libspdk_blobfs.so 00:04:53.241 CC lib/nvmf/subsystem.o 00:04:53.498 CC lib/ftl/ftl_io.o 00:04:53.498 LIB libspdk_ublk.a 00:04:53.498 LIB libspdk_scsi.a 00:04:53.498 LIB libspdk_lvol.a 00:04:53.498 SO libspdk_ublk.so.3.0 00:04:53.498 SO libspdk_lvol.so.10.0 00:04:53.498 SO libspdk_scsi.so.9.0 
00:04:53.498 SYMLINK libspdk_ublk.so 00:04:53.498 SYMLINK libspdk_lvol.so 00:04:53.498 CC lib/ftl/ftl_sb.o 00:04:53.498 CC lib/nvmf/nvmf.o 00:04:53.498 CC lib/ftl/ftl_l2p.o 00:04:53.756 SYMLINK libspdk_scsi.so 00:04:53.756 CC lib/nvmf/nvmf_rpc.o 00:04:53.756 CC lib/ftl/ftl_l2p_flat.o 00:04:53.756 CC lib/ftl/ftl_nv_cache.o 00:04:53.756 CC lib/nvmf/transport.o 00:04:53.756 CC lib/ftl/ftl_band.o 00:04:53.756 CC lib/ftl/ftl_band_ops.o 00:04:54.016 CC lib/ftl/ftl_writer.o 00:04:54.016 CC lib/iscsi/conn.o 00:04:54.283 CC lib/iscsi/init_grp.o 00:04:54.283 CC lib/ftl/ftl_rq.o 00:04:54.283 CC lib/ftl/ftl_reloc.o 00:04:54.283 CC lib/vhost/vhost.o 00:04:54.541 CC lib/vhost/vhost_rpc.o 00:04:54.541 CC lib/nvmf/tcp.o 00:04:54.541 CC lib/nvmf/rdma.o 00:04:54.541 CC lib/iscsi/iscsi.o 00:04:54.541 CC lib/iscsi/md5.o 00:04:54.541 CC lib/iscsi/param.o 00:04:54.541 CC lib/iscsi/portal_grp.o 00:04:54.541 CC lib/ftl/ftl_l2p_cache.o 00:04:54.798 CC lib/iscsi/tgt_node.o 00:04:54.798 CC lib/iscsi/iscsi_subsystem.o 00:04:54.798 CC lib/iscsi/iscsi_rpc.o 00:04:54.798 CC lib/iscsi/task.o 00:04:55.055 CC lib/vhost/vhost_scsi.o 00:04:55.055 CC lib/vhost/vhost_blk.o 00:04:55.055 CC lib/ftl/ftl_p2l.o 00:04:55.055 CC lib/ftl/mngt/ftl_mngt.o 00:04:55.055 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:55.313 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:55.313 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:55.313 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:55.313 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:55.313 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:55.571 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:55.571 CC lib/vhost/rte_vhost_user.o 00:04:55.571 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:55.571 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:55.571 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:55.829 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:55.829 LIB libspdk_iscsi.a 00:04:55.829 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:55.829 CC lib/ftl/utils/ftl_conf.o 00:04:55.829 SO libspdk_iscsi.so.8.0 00:04:55.829 CC lib/ftl/utils/ftl_md.o 00:04:56.087 CC lib/ftl/utils/ftl_mempool.o 00:04:56.087 CC lib/ftl/utils/ftl_bitmap.o 00:04:56.087 CC lib/ftl/utils/ftl_property.o 00:04:56.087 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:56.087 SYMLINK libspdk_iscsi.so 00:04:56.087 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:56.087 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:56.087 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:56.087 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:56.087 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:56.346 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:56.346 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:56.346 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:56.346 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:56.346 CC lib/ftl/base/ftl_base_dev.o 00:04:56.346 CC lib/ftl/base/ftl_base_bdev.o 00:04:56.346 CC lib/ftl/ftl_trace.o 00:04:56.604 LIB libspdk_nvmf.a 00:04:56.605 LIB libspdk_vhost.a 00:04:56.605 SO libspdk_nvmf.so.18.0 00:04:56.605 LIB libspdk_ftl.a 00:04:56.605 SO libspdk_vhost.so.8.0 00:04:56.863 SYMLINK libspdk_vhost.so 00:04:56.863 SYMLINK libspdk_nvmf.so 00:04:56.863 SO libspdk_ftl.so.9.0 00:04:57.122 SYMLINK libspdk_ftl.so 00:04:57.688 CC module/env_dpdk/env_dpdk_rpc.o 00:04:57.688 CC module/blob/bdev/blob_bdev.o 00:04:57.688 CC module/sock/uring/uring.o 00:04:57.688 CC module/sock/posix/posix.o 00:04:57.688 CC module/accel/iaa/accel_iaa.o 00:04:57.688 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:57.688 CC module/accel/ioat/accel_ioat.o 00:04:57.688 CC module/keyring/file/keyring.o 00:04:57.688 CC module/accel/dsa/accel_dsa.o 00:04:57.688 CC module/accel/error/accel_error.o 
00:04:57.688 LIB libspdk_env_dpdk_rpc.a 00:04:57.688 SO libspdk_env_dpdk_rpc.so.6.0 00:04:57.688 SYMLINK libspdk_env_dpdk_rpc.so 00:04:57.688 CC module/accel/error/accel_error_rpc.o 00:04:57.946 CC module/keyring/file/keyring_rpc.o 00:04:57.946 CC module/accel/ioat/accel_ioat_rpc.o 00:04:57.946 LIB libspdk_scheduler_dynamic.a 00:04:57.946 CC module/accel/iaa/accel_iaa_rpc.o 00:04:57.946 CC module/accel/dsa/accel_dsa_rpc.o 00:04:57.946 SO libspdk_scheduler_dynamic.so.4.0 00:04:57.946 LIB libspdk_blob_bdev.a 00:04:57.946 SYMLINK libspdk_scheduler_dynamic.so 00:04:57.946 SO libspdk_blob_bdev.so.11.0 00:04:57.946 LIB libspdk_accel_error.a 00:04:57.946 LIB libspdk_keyring_file.a 00:04:57.946 SYMLINK libspdk_blob_bdev.so 00:04:57.946 SO libspdk_accel_error.so.2.0 00:04:57.946 LIB libspdk_accel_ioat.a 00:04:57.946 LIB libspdk_accel_iaa.a 00:04:57.946 LIB libspdk_accel_dsa.a 00:04:57.946 SO libspdk_keyring_file.so.1.0 00:04:57.946 SO libspdk_accel_ioat.so.6.0 00:04:57.946 SO libspdk_accel_iaa.so.3.0 00:04:57.946 SO libspdk_accel_dsa.so.5.0 00:04:57.946 SYMLINK libspdk_accel_error.so 00:04:58.204 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:58.204 SYMLINK libspdk_accel_ioat.so 00:04:58.204 SYMLINK libspdk_accel_iaa.so 00:04:58.204 SYMLINK libspdk_keyring_file.so 00:04:58.204 SYMLINK libspdk_accel_dsa.so 00:04:58.204 CC module/scheduler/gscheduler/gscheduler.o 00:04:58.204 LIB libspdk_scheduler_dpdk_governor.a 00:04:58.204 CC module/bdev/malloc/bdev_malloc.o 00:04:58.204 CC module/bdev/error/vbdev_error.o 00:04:58.204 CC module/bdev/delay/vbdev_delay.o 00:04:58.204 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:58.204 CC module/bdev/lvol/vbdev_lvol.o 00:04:58.204 LIB libspdk_scheduler_gscheduler.a 00:04:58.204 CC module/bdev/gpt/gpt.o 00:04:58.204 SO libspdk_scheduler_gscheduler.so.4.0 00:04:58.204 LIB libspdk_sock_uring.a 00:04:58.462 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:58.462 CC module/bdev/error/vbdev_error_rpc.o 00:04:58.462 SO libspdk_sock_uring.so.5.0 00:04:58.462 CC module/blobfs/bdev/blobfs_bdev.o 00:04:58.462 SYMLINK libspdk_scheduler_gscheduler.so 00:04:58.462 LIB libspdk_sock_posix.a 00:04:58.462 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:58.462 SO libspdk_sock_posix.so.6.0 00:04:58.462 SYMLINK libspdk_sock_uring.so 00:04:58.462 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:58.462 SYMLINK libspdk_sock_posix.so 00:04:58.462 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:58.462 CC module/bdev/gpt/vbdev_gpt.o 00:04:58.462 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:58.462 LIB libspdk_bdev_error.a 00:04:58.462 LIB libspdk_blobfs_bdev.a 00:04:58.462 SO libspdk_bdev_error.so.6.0 00:04:58.720 SO libspdk_blobfs_bdev.so.6.0 00:04:58.720 LIB libspdk_bdev_malloc.a 00:04:58.720 SYMLINK libspdk_bdev_error.so 00:04:58.720 SYMLINK libspdk_blobfs_bdev.so 00:04:58.720 LIB libspdk_bdev_delay.a 00:04:58.720 SO libspdk_bdev_malloc.so.6.0 00:04:58.720 CC module/bdev/null/bdev_null.o 00:04:58.720 SO libspdk_bdev_delay.so.6.0 00:04:58.720 SYMLINK libspdk_bdev_malloc.so 00:04:58.720 CC module/bdev/null/bdev_null_rpc.o 00:04:58.720 SYMLINK libspdk_bdev_delay.so 00:04:58.720 LIB libspdk_bdev_gpt.a 00:04:58.720 CC module/bdev/passthru/vbdev_passthru.o 00:04:58.720 LIB libspdk_bdev_lvol.a 00:04:58.720 CC module/bdev/raid/bdev_raid.o 00:04:58.720 CC module/bdev/nvme/bdev_nvme.o 00:04:58.720 SO libspdk_bdev_gpt.so.6.0 00:04:58.978 CC module/bdev/split/vbdev_split.o 00:04:58.978 SO libspdk_bdev_lvol.so.6.0 00:04:58.978 SYMLINK libspdk_bdev_gpt.so 00:04:58.978 CC 
module/bdev/zone_block/vbdev_zone_block.o 00:04:58.978 SYMLINK libspdk_bdev_lvol.so 00:04:58.978 CC module/bdev/uring/bdev_uring.o 00:04:58.978 CC module/bdev/raid/bdev_raid_rpc.o 00:04:58.978 CC module/bdev/raid/bdev_raid_sb.o 00:04:58.978 LIB libspdk_bdev_null.a 00:04:58.978 SO libspdk_bdev_null.so.6.0 00:04:58.978 CC module/bdev/aio/bdev_aio.o 00:04:59.236 CC module/bdev/split/vbdev_split_rpc.o 00:04:59.236 SYMLINK libspdk_bdev_null.so 00:04:59.236 CC module/bdev/aio/bdev_aio_rpc.o 00:04:59.236 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:59.236 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:59.236 CC module/bdev/nvme/nvme_rpc.o 00:04:59.236 LIB libspdk_bdev_split.a 00:04:59.236 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:59.236 CC module/bdev/raid/raid0.o 00:04:59.236 LIB libspdk_bdev_passthru.a 00:04:59.236 CC module/bdev/uring/bdev_uring_rpc.o 00:04:59.236 SO libspdk_bdev_split.so.6.0 00:04:59.236 SO libspdk_bdev_passthru.so.6.0 00:04:59.494 SYMLINK libspdk_bdev_split.so 00:04:59.494 SYMLINK libspdk_bdev_passthru.so 00:04:59.494 LIB libspdk_bdev_aio.a 00:04:59.494 LIB libspdk_bdev_zone_block.a 00:04:59.494 CC module/bdev/nvme/bdev_mdns_client.o 00:04:59.494 SO libspdk_bdev_aio.so.6.0 00:04:59.494 LIB libspdk_bdev_uring.a 00:04:59.494 SO libspdk_bdev_zone_block.so.6.0 00:04:59.494 SO libspdk_bdev_uring.so.6.0 00:04:59.494 SYMLINK libspdk_bdev_aio.so 00:04:59.494 CC module/bdev/raid/raid1.o 00:04:59.494 SYMLINK libspdk_bdev_uring.so 00:04:59.494 SYMLINK libspdk_bdev_zone_block.so 00:04:59.494 CC module/bdev/iscsi/bdev_iscsi.o 00:04:59.494 CC module/bdev/nvme/vbdev_opal.o 00:04:59.494 CC module/bdev/raid/concat.o 00:04:59.752 CC module/bdev/ftl/bdev_ftl.o 00:04:59.752 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:59.752 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:59.752 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:59.752 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:59.752 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:59.752 LIB libspdk_bdev_raid.a 00:05:00.010 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:00.010 SO libspdk_bdev_raid.so.6.0 00:05:00.010 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:00.010 LIB libspdk_bdev_ftl.a 00:05:00.010 SYMLINK libspdk_bdev_raid.so 00:05:00.010 SO libspdk_bdev_ftl.so.6.0 00:05:00.010 SYMLINK libspdk_bdev_ftl.so 00:05:00.010 LIB libspdk_bdev_iscsi.a 00:05:00.010 SO libspdk_bdev_iscsi.so.6.0 00:05:00.268 SYMLINK libspdk_bdev_iscsi.so 00:05:00.268 LIB libspdk_bdev_virtio.a 00:05:00.268 SO libspdk_bdev_virtio.so.6.0 00:05:00.527 SYMLINK libspdk_bdev_virtio.so 00:05:01.109 LIB libspdk_bdev_nvme.a 00:05:01.109 SO libspdk_bdev_nvme.so.7.0 00:05:01.366 SYMLINK libspdk_bdev_nvme.so 00:05:01.624 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:01.624 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:01.624 CC module/event/subsystems/vmd/vmd.o 00:05:01.624 CC module/event/subsystems/iobuf/iobuf.o 00:05:01.624 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:01.624 CC module/event/subsystems/keyring/keyring.o 00:05:01.624 CC module/event/subsystems/sock/sock.o 00:05:01.624 CC module/event/subsystems/scheduler/scheduler.o 00:05:01.883 LIB libspdk_event_keyring.a 00:05:01.883 LIB libspdk_event_sock.a 00:05:01.883 LIB libspdk_event_vhost_blk.a 00:05:01.883 LIB libspdk_event_vmd.a 00:05:01.883 LIB libspdk_event_scheduler.a 00:05:01.883 LIB libspdk_event_iobuf.a 00:05:01.883 SO libspdk_event_sock.so.5.0 00:05:01.883 SO libspdk_event_keyring.so.1.0 00:05:01.883 SO libspdk_event_vmd.so.6.0 00:05:01.883 SO libspdk_event_vhost_blk.so.3.0 00:05:01.883 SO 
libspdk_event_scheduler.so.4.0 00:05:01.883 SO libspdk_event_iobuf.so.3.0 00:05:01.883 SYMLINK libspdk_event_sock.so 00:05:01.883 SYMLINK libspdk_event_keyring.so 00:05:01.883 SYMLINK libspdk_event_vhost_blk.so 00:05:01.883 SYMLINK libspdk_event_scheduler.so 00:05:01.883 SYMLINK libspdk_event_vmd.so 00:05:02.142 SYMLINK libspdk_event_iobuf.so 00:05:02.401 CC module/event/subsystems/accel/accel.o 00:05:02.401 LIB libspdk_event_accel.a 00:05:02.401 SO libspdk_event_accel.so.6.0 00:05:02.660 SYMLINK libspdk_event_accel.so 00:05:02.918 CC module/event/subsystems/bdev/bdev.o 00:05:02.918 LIB libspdk_event_bdev.a 00:05:03.176 SO libspdk_event_bdev.so.6.0 00:05:03.176 SYMLINK libspdk_event_bdev.so 00:05:03.435 CC module/event/subsystems/scsi/scsi.o 00:05:03.435 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:03.435 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:03.435 CC module/event/subsystems/ublk/ublk.o 00:05:03.435 CC module/event/subsystems/nbd/nbd.o 00:05:03.435 LIB libspdk_event_ublk.a 00:05:03.436 LIB libspdk_event_scsi.a 00:05:03.436 LIB libspdk_event_nbd.a 00:05:03.436 SO libspdk_event_ublk.so.3.0 00:05:03.436 SO libspdk_event_scsi.so.6.0 00:05:03.694 SO libspdk_event_nbd.so.6.0 00:05:03.694 SYMLINK libspdk_event_scsi.so 00:05:03.694 LIB libspdk_event_nvmf.a 00:05:03.694 SYMLINK libspdk_event_ublk.so 00:05:03.694 SYMLINK libspdk_event_nbd.so 00:05:03.694 SO libspdk_event_nvmf.so.6.0 00:05:03.694 SYMLINK libspdk_event_nvmf.so 00:05:03.952 CC module/event/subsystems/iscsi/iscsi.o 00:05:03.952 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:03.952 LIB libspdk_event_vhost_scsi.a 00:05:03.952 LIB libspdk_event_iscsi.a 00:05:03.952 SO libspdk_event_vhost_scsi.so.3.0 00:05:04.209 SO libspdk_event_iscsi.so.6.0 00:05:04.209 SYMLINK libspdk_event_vhost_scsi.so 00:05:04.209 SYMLINK libspdk_event_iscsi.so 00:05:04.209 SO libspdk.so.6.0 00:05:04.210 SYMLINK libspdk.so 00:05:04.467 CXX app/trace/trace.o 00:05:04.726 CC examples/ioat/perf/perf.o 00:05:04.726 CC examples/nvme/hello_world/hello_world.o 00:05:04.726 CC examples/accel/perf/accel_perf.o 00:05:04.726 CC examples/vmd/lsvmd/lsvmd.o 00:05:04.726 CC examples/sock/hello_world/hello_sock.o 00:05:04.726 CC examples/nvmf/nvmf/nvmf.o 00:05:04.726 CC examples/bdev/hello_world/hello_bdev.o 00:05:04.726 CC test/accel/dif/dif.o 00:05:04.726 CC examples/blob/hello_world/hello_blob.o 00:05:04.984 LINK lsvmd 00:05:04.984 LINK ioat_perf 00:05:04.984 LINK hello_world 00:05:04.984 LINK hello_sock 00:05:04.984 LINK hello_bdev 00:05:04.984 LINK hello_blob 00:05:04.984 LINK nvmf 00:05:04.984 LINK spdk_trace 00:05:04.984 LINK dif 00:05:05.242 LINK accel_perf 00:05:05.242 CC examples/vmd/led/led.o 00:05:05.242 CC examples/ioat/verify/verify.o 00:05:05.242 CC examples/nvme/reconnect/reconnect.o 00:05:05.242 CC examples/util/zipf/zipf.o 00:05:05.242 CC examples/bdev/bdevperf/bdevperf.o 00:05:05.242 LINK led 00:05:05.243 CC app/trace_record/trace_record.o 00:05:05.243 CC examples/blob/cli/blobcli.o 00:05:05.501 LINK verify 00:05:05.501 LINK zipf 00:05:05.501 CC examples/thread/thread/thread_ex.o 00:05:05.501 CC test/app/bdev_svc/bdev_svc.o 00:05:05.501 CC test/bdev/bdevio/bdevio.o 00:05:05.501 LINK reconnect 00:05:05.501 LINK spdk_trace_record 00:05:05.759 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:05.759 CC examples/nvme/arbitration/arbitration.o 00:05:05.759 CC examples/idxd/perf/perf.o 00:05:05.759 LINK bdev_svc 00:05:05.759 LINK thread 00:05:05.759 CC app/nvmf_tgt/nvmf_main.o 00:05:05.759 LINK blobcli 00:05:06.018 LINK bdevio 00:05:06.018 LINK 
arbitration 00:05:06.018 LINK idxd_perf 00:05:06.018 CC test/blobfs/mkfs/mkfs.o 00:05:06.018 LINK nvmf_tgt 00:05:06.018 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:06.018 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:06.018 LINK bdevperf 00:05:06.018 LINK nvme_manage 00:05:06.277 CC examples/nvme/hotplug/hotplug.o 00:05:06.277 TEST_HEADER include/spdk/accel.h 00:05:06.277 TEST_HEADER include/spdk/accel_module.h 00:05:06.277 TEST_HEADER include/spdk/assert.h 00:05:06.277 TEST_HEADER include/spdk/barrier.h 00:05:06.277 TEST_HEADER include/spdk/base64.h 00:05:06.277 TEST_HEADER include/spdk/bdev.h 00:05:06.277 TEST_HEADER include/spdk/bdev_module.h 00:05:06.277 TEST_HEADER include/spdk/bdev_zone.h 00:05:06.277 TEST_HEADER include/spdk/bit_array.h 00:05:06.277 TEST_HEADER include/spdk/bit_pool.h 00:05:06.277 TEST_HEADER include/spdk/blob_bdev.h 00:05:06.277 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:06.277 TEST_HEADER include/spdk/blobfs.h 00:05:06.277 TEST_HEADER include/spdk/blob.h 00:05:06.277 TEST_HEADER include/spdk/conf.h 00:05:06.277 TEST_HEADER include/spdk/config.h 00:05:06.277 TEST_HEADER include/spdk/cpuset.h 00:05:06.277 TEST_HEADER include/spdk/crc16.h 00:05:06.277 TEST_HEADER include/spdk/crc32.h 00:05:06.277 TEST_HEADER include/spdk/crc64.h 00:05:06.277 TEST_HEADER include/spdk/dif.h 00:05:06.277 TEST_HEADER include/spdk/dma.h 00:05:06.277 TEST_HEADER include/spdk/endian.h 00:05:06.277 TEST_HEADER include/spdk/env_dpdk.h 00:05:06.277 TEST_HEADER include/spdk/env.h 00:05:06.277 TEST_HEADER include/spdk/event.h 00:05:06.277 TEST_HEADER include/spdk/fd_group.h 00:05:06.277 LINK interrupt_tgt 00:05:06.277 TEST_HEADER include/spdk/fd.h 00:05:06.277 TEST_HEADER include/spdk/file.h 00:05:06.277 TEST_HEADER include/spdk/ftl.h 00:05:06.277 TEST_HEADER include/spdk/gpt_spec.h 00:05:06.277 TEST_HEADER include/spdk/hexlify.h 00:05:06.277 TEST_HEADER include/spdk/histogram_data.h 00:05:06.277 LINK mkfs 00:05:06.277 TEST_HEADER include/spdk/idxd.h 00:05:06.277 TEST_HEADER include/spdk/idxd_spec.h 00:05:06.277 TEST_HEADER include/spdk/init.h 00:05:06.277 TEST_HEADER include/spdk/ioat.h 00:05:06.277 TEST_HEADER include/spdk/ioat_spec.h 00:05:06.277 TEST_HEADER include/spdk/iscsi_spec.h 00:05:06.277 TEST_HEADER include/spdk/json.h 00:05:06.277 TEST_HEADER include/spdk/jsonrpc.h 00:05:06.277 TEST_HEADER include/spdk/keyring.h 00:05:06.277 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:06.277 TEST_HEADER include/spdk/keyring_module.h 00:05:06.277 TEST_HEADER include/spdk/likely.h 00:05:06.277 TEST_HEADER include/spdk/log.h 00:05:06.277 TEST_HEADER include/spdk/lvol.h 00:05:06.277 TEST_HEADER include/spdk/memory.h 00:05:06.277 TEST_HEADER include/spdk/mmio.h 00:05:06.277 TEST_HEADER include/spdk/nbd.h 00:05:06.277 TEST_HEADER include/spdk/notify.h 00:05:06.277 TEST_HEADER include/spdk/nvme.h 00:05:06.277 TEST_HEADER include/spdk/nvme_intel.h 00:05:06.277 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:06.277 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:06.277 TEST_HEADER include/spdk/nvme_spec.h 00:05:06.277 TEST_HEADER include/spdk/nvme_zns.h 00:05:06.277 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:06.277 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:06.277 TEST_HEADER include/spdk/nvmf.h 00:05:06.277 TEST_HEADER include/spdk/nvmf_spec.h 00:05:06.277 TEST_HEADER include/spdk/nvmf_transport.h 00:05:06.277 TEST_HEADER include/spdk/opal.h 00:05:06.277 TEST_HEADER include/spdk/opal_spec.h 00:05:06.277 TEST_HEADER include/spdk/pci_ids.h 00:05:06.277 TEST_HEADER include/spdk/pipe.h 00:05:06.277 
TEST_HEADER include/spdk/queue.h 00:05:06.277 TEST_HEADER include/spdk/reduce.h 00:05:06.277 TEST_HEADER include/spdk/rpc.h 00:05:06.277 TEST_HEADER include/spdk/scheduler.h 00:05:06.277 TEST_HEADER include/spdk/scsi.h 00:05:06.277 TEST_HEADER include/spdk/scsi_spec.h 00:05:06.277 CC test/dma/test_dma/test_dma.o 00:05:06.277 TEST_HEADER include/spdk/sock.h 00:05:06.277 TEST_HEADER include/spdk/stdinc.h 00:05:06.277 TEST_HEADER include/spdk/string.h 00:05:06.277 TEST_HEADER include/spdk/thread.h 00:05:06.277 TEST_HEADER include/spdk/trace.h 00:05:06.277 TEST_HEADER include/spdk/trace_parser.h 00:05:06.277 CC app/iscsi_tgt/iscsi_tgt.o 00:05:06.277 TEST_HEADER include/spdk/tree.h 00:05:06.277 TEST_HEADER include/spdk/ublk.h 00:05:06.277 TEST_HEADER include/spdk/util.h 00:05:06.277 TEST_HEADER include/spdk/uuid.h 00:05:06.277 TEST_HEADER include/spdk/version.h 00:05:06.277 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:06.277 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:06.277 TEST_HEADER include/spdk/vhost.h 00:05:06.277 TEST_HEADER include/spdk/vmd.h 00:05:06.536 TEST_HEADER include/spdk/xor.h 00:05:06.536 TEST_HEADER include/spdk/zipf.h 00:05:06.536 CXX test/cpp_headers/accel.o 00:05:06.536 LINK hotplug 00:05:06.536 CXX test/cpp_headers/accel_module.o 00:05:06.536 CC app/spdk_lspci/spdk_lspci.o 00:05:06.536 LINK nvme_fuzz 00:05:06.536 LINK cmb_copy 00:05:06.536 CC app/spdk_tgt/spdk_tgt.o 00:05:06.536 CXX test/cpp_headers/assert.o 00:05:06.536 LINK iscsi_tgt 00:05:06.536 LINK spdk_lspci 00:05:06.848 CXX test/cpp_headers/barrier.o 00:05:06.848 CC examples/nvme/abort/abort.o 00:05:06.848 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:06.848 LINK spdk_tgt 00:05:06.848 CC app/spdk_nvme_perf/perf.o 00:05:06.848 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:06.848 LINK test_dma 00:05:06.848 CXX test/cpp_headers/base64.o 00:05:06.848 CC app/spdk_nvme_identify/identify.o 00:05:06.848 CC test/app/histogram_perf/histogram_perf.o 00:05:06.848 LINK pmr_persistence 00:05:06.848 CXX test/cpp_headers/bdev.o 00:05:06.848 CC test/env/mem_callbacks/mem_callbacks.o 00:05:07.106 CXX test/cpp_headers/bdev_module.o 00:05:07.106 LINK histogram_perf 00:05:07.106 CC app/spdk_nvme_discover/discovery_aer.o 00:05:07.106 LINK abort 00:05:07.106 CC app/spdk_top/spdk_top.o 00:05:07.364 CXX test/cpp_headers/bdev_zone.o 00:05:07.364 CC app/vhost/vhost.o 00:05:07.364 LINK spdk_nvme_discover 00:05:07.364 CC app/spdk_dd/spdk_dd.o 00:05:07.364 CXX test/cpp_headers/bit_array.o 00:05:07.364 LINK vhost 00:05:07.364 CC app/fio/nvme/fio_plugin.o 00:05:07.623 LINK mem_callbacks 00:05:07.623 LINK spdk_nvme_perf 00:05:07.623 CXX test/cpp_headers/bit_pool.o 00:05:07.623 CC app/fio/bdev/fio_plugin.o 00:05:07.623 LINK spdk_nvme_identify 00:05:07.881 CXX test/cpp_headers/blob_bdev.o 00:05:07.881 CC test/app/jsoncat/jsoncat.o 00:05:07.881 CC test/env/vtophys/vtophys.o 00:05:07.881 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:07.881 LINK spdk_dd 00:05:07.881 CC test/env/memory/memory_ut.o 00:05:07.881 LINK jsoncat 00:05:07.881 LINK vtophys 00:05:07.881 LINK spdk_top 00:05:07.881 CXX test/cpp_headers/blobfs_bdev.o 00:05:08.140 LINK spdk_nvme 00:05:08.140 LINK env_dpdk_post_init 00:05:08.140 CXX test/cpp_headers/blobfs.o 00:05:08.140 LINK spdk_bdev 00:05:08.140 CXX test/cpp_headers/blob.o 00:05:08.140 CXX test/cpp_headers/conf.o 00:05:08.140 CXX test/cpp_headers/config.o 00:05:08.140 CXX test/cpp_headers/cpuset.o 00:05:08.140 CC test/app/stub/stub.o 00:05:08.399 CC test/env/pci/pci_ut.o 00:05:08.399 CXX 
test/cpp_headers/crc16.o 00:05:08.399 CXX test/cpp_headers/crc32.o 00:05:08.399 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:08.399 CXX test/cpp_headers/crc64.o 00:05:08.399 LINK iscsi_fuzz 00:05:08.399 CC test/event/event_perf/event_perf.o 00:05:08.399 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:08.399 LINK stub 00:05:08.399 CXX test/cpp_headers/dif.o 00:05:08.658 CC test/event/reactor/reactor.o 00:05:08.658 LINK event_perf 00:05:08.658 CC test/nvme/aer/aer.o 00:05:08.658 CC test/lvol/esnap/esnap.o 00:05:08.658 CXX test/cpp_headers/dma.o 00:05:08.658 LINK pci_ut 00:05:08.658 LINK reactor 00:05:08.658 CC test/nvme/reset/reset.o 00:05:08.658 CC test/rpc_client/rpc_client_test.o 00:05:08.916 CXX test/cpp_headers/endian.o 00:05:08.916 LINK vhost_fuzz 00:05:08.916 CC test/thread/poller_perf/poller_perf.o 00:05:08.916 LINK memory_ut 00:05:08.916 LINK aer 00:05:08.916 LINK rpc_client_test 00:05:08.916 CC test/event/reactor_perf/reactor_perf.o 00:05:08.916 LINK reset 00:05:08.916 CXX test/cpp_headers/env_dpdk.o 00:05:08.916 LINK poller_perf 00:05:09.175 CC test/event/app_repeat/app_repeat.o 00:05:09.175 CC test/nvme/sgl/sgl.o 00:05:09.175 LINK reactor_perf 00:05:09.175 CXX test/cpp_headers/env.o 00:05:09.175 CXX test/cpp_headers/event.o 00:05:09.175 CC test/nvme/overhead/overhead.o 00:05:09.175 LINK app_repeat 00:05:09.175 CC test/nvme/e2edp/nvme_dp.o 00:05:09.175 CC test/event/scheduler/scheduler.o 00:05:09.175 CC test/nvme/err_injection/err_injection.o 00:05:09.175 CXX test/cpp_headers/fd_group.o 00:05:09.433 CXX test/cpp_headers/fd.o 00:05:09.433 LINK sgl 00:05:09.433 CXX test/cpp_headers/file.o 00:05:09.433 CC test/nvme/startup/startup.o 00:05:09.433 LINK err_injection 00:05:09.433 CXX test/cpp_headers/ftl.o 00:05:09.433 LINK scheduler 00:05:09.433 LINK nvme_dp 00:05:09.433 LINK overhead 00:05:09.691 LINK startup 00:05:09.691 CC test/nvme/reserve/reserve.o 00:05:09.691 CC test/nvme/connect_stress/connect_stress.o 00:05:09.691 CC test/nvme/simple_copy/simple_copy.o 00:05:09.691 CXX test/cpp_headers/gpt_spec.o 00:05:09.691 CC test/nvme/boot_partition/boot_partition.o 00:05:09.691 CC test/nvme/compliance/nvme_compliance.o 00:05:09.691 CC test/nvme/fused_ordering/fused_ordering.o 00:05:09.691 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:09.950 CXX test/cpp_headers/hexlify.o 00:05:09.950 LINK connect_stress 00:05:09.950 LINK reserve 00:05:09.950 CC test/nvme/fdp/fdp.o 00:05:09.950 LINK simple_copy 00:05:09.950 LINK boot_partition 00:05:09.950 LINK fused_ordering 00:05:09.950 LINK doorbell_aers 00:05:09.950 CXX test/cpp_headers/histogram_data.o 00:05:09.950 CXX test/cpp_headers/idxd.o 00:05:09.950 CXX test/cpp_headers/idxd_spec.o 00:05:10.208 CXX test/cpp_headers/init.o 00:05:10.208 LINK nvme_compliance 00:05:10.208 CC test/nvme/cuse/cuse.o 00:05:10.208 CXX test/cpp_headers/ioat.o 00:05:10.208 CXX test/cpp_headers/ioat_spec.o 00:05:10.208 CXX test/cpp_headers/iscsi_spec.o 00:05:10.208 CXX test/cpp_headers/json.o 00:05:10.208 LINK fdp 00:05:10.208 CXX test/cpp_headers/jsonrpc.o 00:05:10.208 CXX test/cpp_headers/keyring.o 00:05:10.208 CXX test/cpp_headers/keyring_module.o 00:05:10.208 CXX test/cpp_headers/likely.o 00:05:10.467 CXX test/cpp_headers/log.o 00:05:10.467 CXX test/cpp_headers/lvol.o 00:05:10.467 CXX test/cpp_headers/memory.o 00:05:10.467 CXX test/cpp_headers/mmio.o 00:05:10.467 CXX test/cpp_headers/nbd.o 00:05:10.467 CXX test/cpp_headers/notify.o 00:05:10.467 CXX test/cpp_headers/nvme.o 00:05:10.467 CXX test/cpp_headers/nvme_intel.o 00:05:10.467 CXX 
test/cpp_headers/nvme_ocssd.o 00:05:10.467 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:10.467 CXX test/cpp_headers/nvme_spec.o 00:05:10.467 CXX test/cpp_headers/nvme_zns.o 00:05:10.467 CXX test/cpp_headers/nvmf_cmd.o 00:05:10.467 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:10.725 CXX test/cpp_headers/nvmf.o 00:05:10.725 CXX test/cpp_headers/nvmf_spec.o 00:05:10.725 CXX test/cpp_headers/nvmf_transport.o 00:05:10.725 CXX test/cpp_headers/opal.o 00:05:10.725 CXX test/cpp_headers/opal_spec.o 00:05:10.725 CXX test/cpp_headers/pci_ids.o 00:05:10.725 CXX test/cpp_headers/pipe.o 00:05:10.725 CXX test/cpp_headers/queue.o 00:05:10.984 CXX test/cpp_headers/reduce.o 00:05:10.984 CXX test/cpp_headers/rpc.o 00:05:10.984 CXX test/cpp_headers/scheduler.o 00:05:10.984 CXX test/cpp_headers/scsi.o 00:05:10.984 CXX test/cpp_headers/scsi_spec.o 00:05:10.984 CXX test/cpp_headers/sock.o 00:05:10.984 CXX test/cpp_headers/stdinc.o 00:05:10.984 CXX test/cpp_headers/string.o 00:05:10.984 CXX test/cpp_headers/thread.o 00:05:10.984 CXX test/cpp_headers/trace.o 00:05:10.984 CXX test/cpp_headers/trace_parser.o 00:05:10.984 CXX test/cpp_headers/tree.o 00:05:11.241 CXX test/cpp_headers/ublk.o 00:05:11.241 CXX test/cpp_headers/util.o 00:05:11.241 CXX test/cpp_headers/uuid.o 00:05:11.241 CXX test/cpp_headers/version.o 00:05:11.241 LINK cuse 00:05:11.241 CXX test/cpp_headers/vfio_user_pci.o 00:05:11.241 CXX test/cpp_headers/vfio_user_spec.o 00:05:11.241 CXX test/cpp_headers/vhost.o 00:05:11.241 CXX test/cpp_headers/vmd.o 00:05:11.241 CXX test/cpp_headers/xor.o 00:05:11.241 CXX test/cpp_headers/zipf.o 00:05:13.143 LINK esnap 00:05:13.710 00:05:13.710 real 1m3.589s 00:05:13.710 user 6m21.994s 00:05:13.710 sys 1m32.369s 00:05:13.710 15:09:22 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:05:13.710 15:09:22 -- common/autotest_common.sh@10 -- $ set +x 00:05:13.710 ************************************ 00:05:13.710 END TEST make 00:05:13.710 ************************************ 00:05:13.710 15:09:22 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:13.710 15:09:22 -- pm/common@30 -- $ signal_monitor_resources TERM 00:05:13.710 15:09:22 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:05:13.710 15:09:22 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:13.710 15:09:22 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:13.710 15:09:22 -- pm/common@45 -- $ pid=5298 00:05:13.710 15:09:22 -- pm/common@52 -- $ sudo kill -TERM 5298 00:05:13.710 15:09:22 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:13.710 15:09:22 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:13.710 15:09:22 -- pm/common@45 -- $ pid=5301 00:05:13.710 15:09:22 -- pm/common@52 -- $ sudo kill -TERM 5301 00:05:13.710 15:09:22 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:13.710 15:09:22 -- nvmf/common.sh@7 -- # uname -s 00:05:13.710 15:09:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.710 15:09:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.710 15:09:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.710 15:09:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.710 15:09:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.710 15:09:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.710 15:09:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.710 15:09:22 -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:05:13.710 15:09:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.710 15:09:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.710 15:09:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:05:13.710 15:09:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:05:13.710 15:09:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.710 15:09:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.710 15:09:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:13.710 15:09:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.710 15:09:22 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:13.710 15:09:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.710 15:09:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.710 15:09:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.710 15:09:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.710 15:09:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.710 15:09:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.710 15:09:22 -- paths/export.sh@5 -- # export PATH 00:05:13.710 15:09:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.710 15:09:22 -- nvmf/common.sh@47 -- # : 0 00:05:13.710 15:09:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:13.710 15:09:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:13.710 15:09:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.710 15:09:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.710 15:09:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.710 15:09:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:13.710 15:09:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:13.710 15:09:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:13.711 15:09:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:13.711 15:09:22 -- spdk/autotest.sh@32 -- # uname -s 00:05:13.711 15:09:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:13.711 15:09:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:13.711 15:09:22 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:13.711 15:09:22 -- spdk/autotest.sh@39 -- # echo 
'|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:13.711 15:09:22 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:13.711 15:09:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:13.970 15:09:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:13.970 15:09:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:13.970 15:09:22 -- spdk/autotest.sh@48 -- # udevadm_pid=52301 00:05:13.970 15:09:22 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:13.970 15:09:22 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:13.970 15:09:22 -- pm/common@17 -- # local monitor 00:05:13.970 15:09:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:13.970 15:09:22 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=52302 00:05:13.970 15:09:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:13.970 15:09:22 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=52304 00:05:13.970 15:09:22 -- pm/common@26 -- # sleep 1 00:05:13.970 15:09:22 -- pm/common@21 -- # date +%s 00:05:13.970 15:09:22 -- pm/common@21 -- # date +%s 00:05:13.970 15:09:22 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713971362 00:05:13.970 15:09:22 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713971362 00:05:13.970 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713971362_collect-vmstat.pm.log 00:05:13.970 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713971362_collect-cpu-load.pm.log 00:05:14.905 15:09:23 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:14.905 15:09:23 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:14.905 15:09:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:14.905 15:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:14.905 15:09:23 -- spdk/autotest.sh@59 -- # create_test_list 00:05:14.905 15:09:23 -- common/autotest_common.sh@734 -- # xtrace_disable 00:05:14.905 15:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:14.905 15:09:24 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:14.905 15:09:24 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:14.905 15:09:24 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:14.905 15:09:24 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:14.905 15:09:24 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:14.905 15:09:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:14.905 15:09:24 -- common/autotest_common.sh@1441 -- # uname 00:05:14.905 15:09:24 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:05:14.905 15:09:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:14.905 15:09:24 -- common/autotest_common.sh@1461 -- # uname 00:05:14.905 15:09:24 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:05:14.905 15:09:24 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:05:14.905 15:09:24 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:05:14.906 15:09:24 -- spdk/autotest.sh@72 -- # hash lcov 00:05:14.906 15:09:24 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:14.906 15:09:24 -- 
spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:05:14.906 --rc lcov_branch_coverage=1 00:05:14.906 --rc lcov_function_coverage=1 00:05:14.906 --rc genhtml_branch_coverage=1 00:05:14.906 --rc genhtml_function_coverage=1 00:05:14.906 --rc genhtml_legend=1 00:05:14.906 --rc geninfo_all_blocks=1 00:05:14.906 ' 00:05:14.906 15:09:24 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:05:14.906 --rc lcov_branch_coverage=1 00:05:14.906 --rc lcov_function_coverage=1 00:05:14.906 --rc genhtml_branch_coverage=1 00:05:14.906 --rc genhtml_function_coverage=1 00:05:14.906 --rc genhtml_legend=1 00:05:14.906 --rc geninfo_all_blocks=1 00:05:14.906 ' 00:05:14.906 15:09:24 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:05:14.906 --rc lcov_branch_coverage=1 00:05:14.906 --rc lcov_function_coverage=1 00:05:14.906 --rc genhtml_branch_coverage=1 00:05:14.906 --rc genhtml_function_coverage=1 00:05:14.906 --rc genhtml_legend=1 00:05:14.906 --rc geninfo_all_blocks=1 00:05:14.906 --no-external' 00:05:14.906 15:09:24 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:05:14.906 --rc lcov_branch_coverage=1 00:05:14.906 --rc lcov_function_coverage=1 00:05:14.906 --rc genhtml_branch_coverage=1 00:05:14.906 --rc genhtml_function_coverage=1 00:05:14.906 --rc genhtml_legend=1 00:05:14.906 --rc geninfo_all_blocks=1 00:05:14.906 --no-external' 00:05:14.906 15:09:24 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:05:14.906 lcov: LCOV version 1.14 00:05:14.906 15:09:24 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:24.880 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:24.880 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:24.880 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:24.880 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:24.880 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:24.880 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:30.146 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:30.146 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:42.345 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:42.345 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:42.345 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:42.345 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:42.345 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:42.345 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:42.345 
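(For orientation: the coverage baseline being captured above reduces to roughly the shell below. The lcov flags, source tree and output paths mirror this log; the post-test capture and merge at the end are an assumption about how such a baseline is usually consumed, not output from this run.)

  # Illustrative sketch, not an exact replay of autotest.sh
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1"
  SRC=/home/vagrant/spdk_repo/spdk
  OUT=$SRC/../output
  # -c -i records a zero-hit "Baseline" entry for every instrumented object, so files the
  # tests never touch still appear (at 0%) once the baseline is merged with test data.
  lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d "$SRC" -o "$OUT/cov_base.info"
  # assumed follow-up once the tests have run (not part of this excerpt):
  # lcov $LCOV_OPTS --no-external -q -c -t Tests -d "$SRC" -o "$OUT/cov_test.info"
  # lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"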
/home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:42.345 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:42.345 [... the same "no functions found" / "GCOV did not produce any data" warning pair repeats for every remaining test/cpp_headers/*.gcno file (base64 through vhost) ...] 00:05:42.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:42.604 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:42.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:42.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:42.604 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:42.604 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:45.884 15:09:54 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:45.884 15:09:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:45.884 15:09:54 -- common/autotest_common.sh@10 -- # set +x 00:05:45.884 15:09:54 -- spdk/autotest.sh@91 -- # rm -f 00:05:45.884 15:09:54 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:46.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.450 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:46.709 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:46.709 15:09:55 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:46.709 15:09:55 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:46.709 15:09:55 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:46.709 15:09:55 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:46.709 15:09:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:46.709 15:09:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:46.709 15:09:55 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:46.709 15:09:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:46.709 15:09:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:46.709 15:09:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:46.709 15:09:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:46.709 15:09:55 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:46.709 15:09:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:46.709 15:09:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:46.709 15:09:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:46.709 15:09:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:46.709 15:09:55 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:46.709 15:09:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:46.709 15:09:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:46.709 15:09:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:46.709 15:09:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:46.709 15:09:55 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:46.709 15:09:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:46.709 15:09:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:46.709 15:09:55 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:46.709 15:09:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:46.709 15:09:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:46.709 15:09:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:46.709 15:09:55 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:46.709 
15:09:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:46.709 No valid GPT data, bailing 00:05:46.709 15:09:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:46.709 15:09:55 -- scripts/common.sh@391 -- # pt= 00:05:46.709 15:09:55 -- scripts/common.sh@392 -- # return 1 00:05:46.709 15:09:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:46.709 1+0 records in 00:05:46.709 1+0 records out 00:05:46.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049305 s, 213 MB/s 00:05:46.709 15:09:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:46.709 15:09:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:46.709 15:09:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:46.709 15:09:55 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:46.709 15:09:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:46.709 No valid GPT data, bailing 00:05:46.709 15:09:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:46.709 15:09:55 -- scripts/common.sh@391 -- # pt= 00:05:46.709 15:09:55 -- scripts/common.sh@392 -- # return 1 00:05:46.709 15:09:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:46.709 1+0 records in 00:05:46.709 1+0 records out 00:05:46.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00447992 s, 234 MB/s 00:05:46.709 15:09:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:46.709 15:09:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:46.709 15:09:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:05:46.709 15:09:55 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:05:46.709 15:09:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:46.709 No valid GPT data, bailing 00:05:46.709 15:09:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:46.709 15:09:55 -- scripts/common.sh@391 -- # pt= 00:05:46.709 15:09:55 -- scripts/common.sh@392 -- # return 1 00:05:46.709 15:09:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:46.709 1+0 records in 00:05:46.709 1+0 records out 00:05:46.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0044269 s, 237 MB/s 00:05:46.709 15:09:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:46.709 15:09:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:46.709 15:09:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:05:46.709 15:09:55 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:05:46.709 15:09:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:46.967 No valid GPT data, bailing 00:05:46.967 15:09:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:46.967 15:09:55 -- scripts/common.sh@391 -- # pt= 00:05:46.967 15:09:55 -- scripts/common.sh@392 -- # return 1 00:05:46.967 15:09:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:46.968 1+0 records in 00:05:46.968 1+0 records out 00:05:46.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00384844 s, 272 MB/s 00:05:46.968 15:09:56 -- spdk/autotest.sh@118 -- # sync 00:05:46.968 15:09:56 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:46.968 15:09:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:46.968 15:09:56 -- common/autotest_common.sh@22 
-- # reap_spdk_processes 00:05:48.875 15:09:57 -- spdk/autotest.sh@124 -- # uname -s 00:05:48.875 15:09:57 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:48.875 15:09:57 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:48.875 15:09:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.875 15:09:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.875 15:09:57 -- common/autotest_common.sh@10 -- # set +x 00:05:48.875 ************************************ 00:05:48.875 START TEST setup.sh 00:05:48.875 ************************************ 00:05:48.875 15:09:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:48.875 * Looking for test storage... 00:05:48.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:48.875 15:09:57 -- setup/test-setup.sh@10 -- # uname -s 00:05:48.875 15:09:57 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:48.875 15:09:57 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:48.875 15:09:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.875 15:09:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.875 15:09:57 -- common/autotest_common.sh@10 -- # set +x 00:05:48.875 ************************************ 00:05:48.875 START TEST acl 00:05:48.875 ************************************ 00:05:48.875 15:09:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:49.133 * Looking for test storage... 00:05:49.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:49.133 15:09:58 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:49.133 15:09:58 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:49.133 15:09:58 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:49.133 15:09:58 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:49.133 15:09:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:49.133 15:09:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:49.133 15:09:58 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:49.133 15:09:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:49.133 15:09:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:49.133 15:09:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:49.133 15:09:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:49.133 15:09:58 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:49.133 15:09:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:49.133 15:09:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:49.133 15:09:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:49.133 15:09:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:49.133 15:09:58 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:49.133 15:09:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:49.133 15:09:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:49.133 15:09:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:49.133 15:09:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:49.133 15:09:58 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 
00:05:49.133 15:09:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:49.133 15:09:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:49.133 15:09:58 -- setup/acl.sh@12 -- # devs=() 00:05:49.133 15:09:58 -- setup/acl.sh@12 -- # declare -a devs 00:05:49.133 15:09:58 -- setup/acl.sh@13 -- # drivers=() 00:05:49.133 15:09:58 -- setup/acl.sh@13 -- # declare -A drivers 00:05:49.133 15:09:58 -- setup/acl.sh@51 -- # setup reset 00:05:49.133 15:09:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:49.133 15:09:58 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:49.699 15:09:58 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:49.699 15:09:58 -- setup/acl.sh@16 -- # local dev driver 00:05:49.699 15:09:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:49.699 15:09:58 -- setup/acl.sh@15 -- # setup output status 00:05:49.699 15:09:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.699 15:09:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:50.266 15:09:59 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:50.266 15:09:59 -- setup/acl.sh@19 -- # continue 00:05:50.266 15:09:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:50.266 Hugepages 00:05:50.266 node hugesize free / total 00:05:50.266 15:09:59 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:50.266 15:09:59 -- setup/acl.sh@19 -- # continue 00:05:50.266 15:09:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:50.266 00:05:50.266 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:50.266 15:09:59 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:50.266 15:09:59 -- setup/acl.sh@19 -- # continue 00:05:50.266 15:09:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:50.524 15:09:59 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:50.524 15:09:59 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:50.524 15:09:59 -- setup/acl.sh@20 -- # continue 00:05:50.524 15:09:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:50.524 15:09:59 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:50.524 15:09:59 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:50.524 15:09:59 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:50.524 15:09:59 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:50.524 15:09:59 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:50.524 15:09:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:50.524 15:09:59 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:50.524 15:09:59 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:50.524 15:09:59 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:50.524 15:09:59 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:50.524 15:09:59 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:50.524 15:09:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:50.524 15:09:59 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:50.524 15:09:59 -- setup/acl.sh@54 -- # run_test denied denied 00:05:50.524 15:09:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.524 15:09:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.524 15:09:59 -- common/autotest_common.sh@10 -- # set +x 00:05:50.524 ************************************ 00:05:50.524 START TEST denied 00:05:50.524 ************************************ 00:05:50.524 15:09:59 -- common/autotest_common.sh@1111 -- # denied 00:05:50.524 15:09:59 -- 
setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:50.524 15:09:59 -- setup/acl.sh@38 -- # setup output config 00:05:50.524 15:09:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:50.524 15:09:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:50.524 15:09:59 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:51.458 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:51.458 15:10:00 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:51.458 15:10:00 -- setup/acl.sh@28 -- # local dev driver 00:05:51.458 15:10:00 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:51.458 15:10:00 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:51.458 15:10:00 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:51.458 15:10:00 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:51.458 15:10:00 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:51.458 15:10:00 -- setup/acl.sh@41 -- # setup reset 00:05:51.458 15:10:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:51.458 15:10:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:52.024 00:05:52.024 real 0m1.350s 00:05:52.024 user 0m0.539s 00:05:52.024 sys 0m0.776s 00:05:52.024 15:10:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:52.024 ************************************ 00:05:52.024 END TEST denied 00:05:52.024 15:10:01 -- common/autotest_common.sh@10 -- # set +x 00:05:52.024 ************************************ 00:05:52.024 15:10:01 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:52.024 15:10:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.024 15:10:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.024 15:10:01 -- common/autotest_common.sh@10 -- # set +x 00:05:52.024 ************************************ 00:05:52.024 START TEST allowed 00:05:52.024 ************************************ 00:05:52.024 15:10:01 -- common/autotest_common.sh@1111 -- # allowed 00:05:52.024 15:10:01 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:52.024 15:10:01 -- setup/acl.sh@45 -- # setup output config 00:05:52.024 15:10:01 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:52.024 15:10:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.024 15:10:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:52.959 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:52.959 15:10:01 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:05:52.959 15:10:01 -- setup/acl.sh@28 -- # local dev driver 00:05:52.959 15:10:01 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:52.959 15:10:01 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:52.959 15:10:01 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:52.959 15:10:01 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:52.959 15:10:01 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:52.959 15:10:01 -- setup/acl.sh@48 -- # setup reset 00:05:52.959 15:10:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:52.959 15:10:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:53.561 00:05:53.561 real 0m1.483s 00:05:53.561 user 0m0.652s 00:05:53.561 sys 0m0.816s 00:05:53.561 15:10:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:53.561 15:10:02 -- common/autotest_common.sh@10 -- # set +x 00:05:53.561 
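(For orientation: the denied/allowed checks above drive scripts/setup.sh through its PCI filter environment variables; a rough by-hand equivalent is sketched below. The controller address comes from this log, while the standalone invocations are illustrative only and need root.)

  # Sketch of the setup.sh knobs the acl tests exercise
  PCI_BLOCKED="0000:00:10.0" ./scripts/setup.sh config   # denied: "Skipping denied controller at 0000:00:10.0"
  PCI_ALLOWED="0000:00:10.0" ./scripts/setup.sh config   # allowed: controller rebinds, "nvme -> uio_pci_generic"
  ./scripts/setup.sh reset                               # hand devices back to the kernel nvme driver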
************************************ 00:05:53.561 END TEST allowed 00:05:53.561 ************************************ 00:05:53.561 00:05:53.561 real 0m4.662s 00:05:53.561 user 0m2.052s 00:05:53.561 sys 0m2.539s 00:05:53.561 15:10:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:53.561 15:10:02 -- common/autotest_common.sh@10 -- # set +x 00:05:53.561 ************************************ 00:05:53.561 END TEST acl 00:05:53.561 ************************************ 00:05:53.561 15:10:02 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:53.561 15:10:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.561 15:10:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.561 15:10:02 -- common/autotest_common.sh@10 -- # set +x 00:05:53.820 ************************************ 00:05:53.820 START TEST hugepages 00:05:53.820 ************************************ 00:05:53.820 15:10:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:53.820 * Looking for test storage... 00:05:53.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:53.820 15:10:02 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:53.820 15:10:02 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:53.820 15:10:02 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:53.820 15:10:02 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:53.820 15:10:02 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:53.820 15:10:02 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:53.820 15:10:02 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:53.820 15:10:02 -- setup/common.sh@18 -- # local node= 00:05:53.820 15:10:02 -- setup/common.sh@19 -- # local var val 00:05:53.820 15:10:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:53.820 15:10:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:53.820 15:10:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:53.820 15:10:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:53.820 15:10:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:53.820 15:10:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:53.820 15:10:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:53.820 15:10:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:53.820 15:10:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5590420 kB' 'MemAvailable: 7394308 kB' 'Buffers: 2436 kB' 'Cached: 2016552 kB' 'SwapCached: 0 kB' 'Active: 835744 kB' 'Inactive: 1290636 kB' 'Active(anon): 117880 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 109092 kB' 'Mapped: 48704 kB' 'Shmem: 10488 kB' 'KReclaimable: 64652 kB' 'Slab: 137760 kB' 'SReclaimable: 64652 kB' 'SUnreclaim: 73108 kB' 'KernelStack: 6560 kB' 'PageTables: 4660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 342376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:53.820 [... xtrace of setup/common.sh get_meminfo elided: the IFS=': ' / read -r var val _ / [[ $var == Hugepagesize ]] / continue sequence repeats for each /proc/meminfo field listed above until the Hugepagesize line is reached ...] 00:05:53.821 15:10:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:53.821 15:10:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:53.821 15:10:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:53.821 15:10:02 -- setup/common.sh@32 -- # continue 00:05:53.821 15:10:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:53.821 15:10:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:53.821 15:10:02 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:53.821 15:10:02 -- setup/common.sh@33 -- # echo 2048 00:05:53.821 15:10:02 -- setup/common.sh@33 -- # return 0 00:05:53.821 15:10:02 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:53.821 15:10:02 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:53.821 15:10:02 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:53.821 15:10:02 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:53.821 15:10:02 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:53.821 15:10:02 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:53.821 15:10:02 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:53.821 15:10:02 -- setup/hugepages.sh@207 -- # get_nodes 00:05:53.821 15:10:02 -- setup/hugepages.sh@27 -- # local node 00:05:53.821 15:10:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:53.821 15:10:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:53.821 15:10:02 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:53.821 15:10:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:53.821 15:10:02 -- setup/hugepages.sh@208 -- # clear_hp 00:05:53.821 15:10:02 -- setup/hugepages.sh@37 -- # local node hp 00:05:53.821 15:10:02 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:53.821 15:10:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:53.821 15:10:02 -- setup/hugepages.sh@41 -- # echo 0 00:05:53.821 15:10:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:53.821 15:10:02 -- setup/hugepages.sh@41 -- # echo 0 00:05:53.821 15:10:02 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:53.822 15:10:02 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:53.822 15:10:02 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:53.822 15:10:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.822 15:10:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.822 15:10:02 -- common/autotest_common.sh@10 -- # set +x 00:05:53.822 ************************************ 00:05:53.822 START TEST default_setup 00:05:53.822 ************************************ 00:05:53.822 15:10:03 -- common/autotest_common.sh@1111 -- # default_setup 00:05:53.822 15:10:03 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:53.822 15:10:03 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:53.822 15:10:03 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:53.822 15:10:03 -- setup/hugepages.sh@51 -- # shift 00:05:53.822 15:10:03 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:53.822 15:10:03 -- setup/hugepages.sh@52 -- # local node_ids 00:05:53.822 15:10:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:53.822 15:10:03 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:53.822 15:10:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:53.822 15:10:03 -- setup/hugepages.sh@62 -- # 
user_nodes=('0') 00:05:53.822 15:10:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:53.822 15:10:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:53.822 15:10:03 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:53.822 15:10:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:53.822 15:10:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:53.822 15:10:03 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:53.822 15:10:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:53.822 15:10:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:53.822 15:10:03 -- setup/hugepages.sh@73 -- # return 0 00:05:53.822 15:10:03 -- setup/hugepages.sh@137 -- # setup output 00:05:53.822 15:10:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:53.822 15:10:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:54.759 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:54.759 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:54.759 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:54.759 15:10:03 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:54.759 15:10:03 -- setup/hugepages.sh@89 -- # local node 00:05:54.759 15:10:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:54.759 15:10:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:54.759 15:10:03 -- setup/hugepages.sh@92 -- # local surp 00:05:54.759 15:10:03 -- setup/hugepages.sh@93 -- # local resv 00:05:54.759 15:10:03 -- setup/hugepages.sh@94 -- # local anon 00:05:54.759 15:10:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:54.759 15:10:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:54.759 15:10:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:54.759 15:10:03 -- setup/common.sh@18 -- # local node= 00:05:54.759 15:10:03 -- setup/common.sh@19 -- # local var val 00:05:54.759 15:10:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:54.759 15:10:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.759 15:10:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.759 15:10:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:54.759 15:10:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.759 15:10:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7699236 kB' 'MemAvailable: 9502980 kB' 'Buffers: 2436 kB' 'Cached: 2016584 kB' 'SwapCached: 0 kB' 'Active: 851664 kB' 'Inactive: 1290684 kB' 'Active(anon): 133800 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 125228 kB' 'Mapped: 48740 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137208 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 72936 kB' 'KernelStack: 6464 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.759 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.759 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.760 15:10:03 -- 
setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- 
setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.760 15:10:03 -- setup/common.sh@33 -- # echo 0 00:05:54.760 15:10:03 -- setup/common.sh@33 -- # return 0 00:05:54.760 15:10:03 -- setup/hugepages.sh@97 -- # anon=0 00:05:54.760 15:10:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:54.760 15:10:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:54.760 15:10:03 -- setup/common.sh@18 -- # local node= 00:05:54.760 15:10:03 -- setup/common.sh@19 -- # local var val 00:05:54.760 15:10:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:54.760 15:10:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.760 15:10:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.760 15:10:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:54.760 15:10:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.760 15:10:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.760 15:10:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7699236 kB' 'MemAvailable: 9502980 kB' 'Buffers: 2436 kB' 'Cached: 2016584 kB' 'SwapCached: 0 kB' 'Active: 852088 kB' 'Inactive: 1290684 kB' 'Active(anon): 134224 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 125368 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137208 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 72936 kB' 'KernelStack: 6500 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': 
' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 
-- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.760 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.760 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.761 15:10:03 -- setup/common.sh@33 -- # echo 0 00:05:54.761 
15:10:03 -- setup/common.sh@33 -- # return 0 00:05:54.761 15:10:03 -- setup/hugepages.sh@99 -- # surp=0 00:05:54.761 15:10:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:54.761 15:10:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:54.761 15:10:03 -- setup/common.sh@18 -- # local node= 00:05:54.761 15:10:03 -- setup/common.sh@19 -- # local var val 00:05:54.761 15:10:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:54.761 15:10:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.761 15:10:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.761 15:10:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:54.761 15:10:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.761 15:10:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7698984 kB' 'MemAvailable: 9502732 kB' 'Buffers: 2436 kB' 'Cached: 2016588 kB' 'SwapCached: 0 kB' 'Active: 851756 kB' 'Inactive: 1290688 kB' 'Active(anon): 133892 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 125284 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137100 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 72828 kB' 'KernelStack: 6480 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 
-- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.761 15:10:03 -- setup/common.sh@33 -- # echo 0 00:05:54.761 15:10:03 -- setup/common.sh@33 -- # return 0 00:05:54.761 15:10:03 -- setup/hugepages.sh@100 -- # resv=0 00:05:54.761 15:10:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:54.761 nr_hugepages=1024 00:05:54.761 resv_hugepages=0 00:05:54.761 15:10:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:54.761 surplus_hugepages=0 00:05:54.761 15:10:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:54.761 anon_hugepages=0 00:05:54.761 15:10:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:54.761 15:10:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:54.761 15:10:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:54.761 15:10:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:54.761 15:10:03 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:54.761 15:10:03 -- setup/common.sh@18 -- # local node= 00:05:54.761 15:10:03 -- setup/common.sh@19 -- # local var val 00:05:54.761 15:10:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:54.761 15:10:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.761 15:10:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.761 15:10:03 -- setup/common.sh@25 -- # 
[[ -n '' ]] 00:05:54.761 15:10:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.761 15:10:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7698984 kB' 'MemAvailable: 9502732 kB' 'Buffers: 2436 kB' 'Cached: 2016588 kB' 'SwapCached: 0 kB' 'Active: 851632 kB' 'Inactive: 1290688 kB' 'Active(anon): 133768 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 124936 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137100 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 72828 kB' 'KernelStack: 6496 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.761 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.761 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # 
continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # 
continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.762 15:10:03 -- setup/common.sh@33 -- # echo 1024 00:05:54.762 15:10:03 -- setup/common.sh@33 -- # return 0 00:05:54.762 15:10:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:54.762 15:10:03 -- setup/hugepages.sh@112 -- # get_nodes 00:05:54.762 15:10:03 -- setup/hugepages.sh@27 -- # local node 00:05:54.762 15:10:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:54.762 15:10:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:54.762 15:10:03 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:54.762 15:10:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:54.762 15:10:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:54.762 15:10:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:54.762 15:10:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:54.762 15:10:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:54.762 15:10:03 -- setup/common.sh@18 -- # local node=0 00:05:54.762 15:10:03 -- setup/common.sh@19 -- # local var val 00:05:54.762 15:10:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:54.762 15:10:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.762 15:10:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:54.762 15:10:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:54.762 15:10:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.762 15:10:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7698984 kB' 'MemUsed: 4542988 kB' 'SwapCached: 0 kB' 'Active: 851892 kB' 'Inactive: 1290688 kB' 'Active(anon): 134028 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'FilePages: 2019024 kB' 'Mapped: 48720 kB' 'AnonPages: 125196 kB' 'Shmem: 10464 kB' 'KernelStack: 6496 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64272 kB' 'Slab: 137100 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 72828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.762 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.762 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # continue 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.763 15:10:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.763 15:10:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.763 15:10:03 -- setup/common.sh@33 -- # echo 0 00:05:54.763 15:10:03 -- setup/common.sh@33 -- # return 0 00:05:54.763 15:10:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:54.763 15:10:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:54.763 
15:10:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:54.763 15:10:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:54.763 node0=1024 expecting 1024 00:05:54.763 15:10:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:54.763 15:10:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:54.763 00:05:54.763 real 0m0.950s 00:05:54.763 user 0m0.443s 00:05:54.763 sys 0m0.462s 00:05:54.763 15:10:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.763 15:10:03 -- common/autotest_common.sh@10 -- # set +x 00:05:54.763 ************************************ 00:05:54.763 END TEST default_setup 00:05:54.763 ************************************ 00:05:55.023 15:10:04 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:55.023 15:10:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.023 15:10:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.023 15:10:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.023 ************************************ 00:05:55.023 START TEST per_node_1G_alloc 00:05:55.023 ************************************ 00:05:55.023 15:10:04 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:05:55.023 15:10:04 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:55.023 15:10:04 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:55.023 15:10:04 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:55.023 15:10:04 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:55.023 15:10:04 -- setup/hugepages.sh@51 -- # shift 00:05:55.023 15:10:04 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:55.023 15:10:04 -- setup/hugepages.sh@52 -- # local node_ids 00:05:55.023 15:10:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:55.023 15:10:04 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:55.023 15:10:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:55.023 15:10:04 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:55.023 15:10:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:55.023 15:10:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:55.023 15:10:04 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:55.023 15:10:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:55.023 15:10:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:55.023 15:10:04 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:55.023 15:10:04 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:55.023 15:10:04 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:55.023 15:10:04 -- setup/hugepages.sh@73 -- # return 0 00:05:55.023 15:10:04 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:55.023 15:10:04 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:55.023 15:10:04 -- setup/hugepages.sh@146 -- # setup output 00:05:55.023 15:10:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:55.023 15:10:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:55.283 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:55.283 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:55.283 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:55.283 15:10:04 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:55.283 15:10:04 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:55.283 15:10:04 -- setup/hugepages.sh@89 -- # local node 00:05:55.283 
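The per_node_1G_alloc prologue above turns the requested 1048576 kB for node 0 into nr_hugepages=512. A minimal sketch of that size-to-pages arithmetic, assuming the 2048 kB hugepage size reported by the meminfo dumps in this log (variable names mirror the trace, but this is not the verbatim hugepages.sh):

# Sketch: convert a requested size in kB into a per-node 2 MB hugepage count.
default_hugepages=2048              # kB per hugepage ('Hugepagesize: 2048 kB')
size=1048576                        # 1 GiB requested for node 0
node_ids=('0')

nr_hugepages=$(( size / default_hugepages ))    # 1048576 / 2048 = 512

# Spread the count over the requested nodes (only node 0 here).
nodes_test=()
for node in "${node_ids[@]}"; do
  nodes_test[node]=$nr_hugepages
done
echo "node0=${nodes_test[0]}"                   # node0=512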
15:10:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:55.283 15:10:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:55.283 15:10:04 -- setup/hugepages.sh@92 -- # local surp 00:05:55.283 15:10:04 -- setup/hugepages.sh@93 -- # local resv 00:05:55.283 15:10:04 -- setup/hugepages.sh@94 -- # local anon 00:05:55.283 15:10:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:55.283 15:10:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:55.283 15:10:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:55.283 15:10:04 -- setup/common.sh@18 -- # local node= 00:05:55.283 15:10:04 -- setup/common.sh@19 -- # local var val 00:05:55.283 15:10:04 -- setup/common.sh@20 -- # local mem_f mem 00:05:55.283 15:10:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:55.283 15:10:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:55.283 15:10:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:55.283 15:10:04 -- setup/common.sh@28 -- # mapfile -t mem 00:05:55.283 15:10:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.283 15:10:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8747008 kB' 'MemAvailable: 10550768 kB' 'Buffers: 2436 kB' 'Cached: 2016592 kB' 'SwapCached: 0 kB' 'Active: 852556 kB' 'Inactive: 1290700 kB' 'Active(anon): 134692 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 828 kB' 'Writeback: 0 kB' 'AnonPages: 125620 kB' 'Mapped: 48860 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137244 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 72972 kB' 'KernelStack: 6468 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # continue 
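The long runs of '[[ <field> == ... ]]' / 'continue' records surrounding this point are common.sh's get_meminfo scanning every meminfo key until it reaches the one it was asked for. A rough sketch of that scan, assuming a plain /proc/meminfo-style file (the helper name get_meminfo_sketch is hypothetical; the real common.sh uses mapfile plus the same IFS=': ' split visible in the trace):

# Sketch: return one field from /proc/meminfo or a per-node meminfo file.
get_meminfo_sketch() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo
  # Prefer the per-node view when a node id is given and the file exists.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  local line var val _
  while IFS= read -r line; do
    # Per-node files prefix each line with "Node <n> "; strip that first.
    [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < "$mem_f"
  return 1
}

get_meminfo_sketch AnonHugePages        # 0 kB in the snapshot above
get_meminfo_sketch HugePages_Surp 0     # surplus pages on node 0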
00:05:55.283 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.283 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.283 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 
15:10:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.284 15:10:04 -- setup/common.sh@33 -- # echo 0 00:05:55.284 15:10:04 -- setup/common.sh@33 -- # return 0 00:05:55.284 15:10:04 -- setup/hugepages.sh@97 -- # anon=0 00:05:55.284 15:10:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:55.284 15:10:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:55.284 15:10:04 -- setup/common.sh@18 -- # local 
node= 00:05:55.284 15:10:04 -- setup/common.sh@19 -- # local var val 00:05:55.284 15:10:04 -- setup/common.sh@20 -- # local mem_f mem 00:05:55.284 15:10:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:55.284 15:10:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:55.284 15:10:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:55.284 15:10:04 -- setup/common.sh@28 -- # mapfile -t mem 00:05:55.284 15:10:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8747008 kB' 'MemAvailable: 10550768 kB' 'Buffers: 2436 kB' 'Cached: 2016592 kB' 'SwapCached: 0 kB' 'Active: 852116 kB' 'Inactive: 1290700 kB' 'Active(anon): 134252 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 828 kB' 'Writeback: 0 kB' 'AnonPages: 125352 kB' 'Mapped: 48860 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137256 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 72984 kB' 'KernelStack: 6436 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.284 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.284 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 
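At this point verify_nr_hugepages has already read AnonHugePages (anon=0) and is re-reading HugePages_Surp and HugePages_Rsvd for the new 512-page configuration; the default_setup pass earlier closed the same bookkeeping with the '(( 1024 == nr_hugepages + surp + resv ))' check. A compact sketch of that accounting, with values hard-coded from the snapshots in this log:

# Sketch: the hugepage accounting behind the verify step traced here.
nr_hugepages=512     # requested via NRHUGE=512 HUGENODE=0
surp=0               # HugePages_Surp in the snapshot above
resv=0               # HugePages_Rsvd in the snapshot above
total=512            # HugePages_Total after setup.sh ran

if (( total == nr_hugepages + surp + resv )); then
  echo "hugepage total matches: $total"
else
  echo "hugepage total mismatch: $total != $(( nr_hugepages + surp + resv ))" >&2
fi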
00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- 
setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.285 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.285 15:10:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.546 15:10:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.546 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.546 15:10:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.547 15:10:04 -- setup/common.sh@33 -- # echo 0 00:05:55.547 15:10:04 -- setup/common.sh@33 -- # return 0 00:05:55.547 15:10:04 -- setup/hugepages.sh@99 -- # surp=0 00:05:55.547 15:10:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:55.547 15:10:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:55.547 15:10:04 -- setup/common.sh@18 -- # local node= 00:05:55.547 15:10:04 -- setup/common.sh@19 -- # local var val 00:05:55.547 15:10:04 -- setup/common.sh@20 -- # local mem_f mem 00:05:55.547 15:10:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:55.547 15:10:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:55.547 15:10:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:55.547 15:10:04 -- setup/common.sh@28 -- # mapfile -t mem 00:05:55.547 15:10:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8747008 kB' 'MemAvailable: 10550768 kB' 'Buffers: 2436 kB' 'Cached: 2016592 kB' 'SwapCached: 0 kB' 'Active: 851652 kB' 'Inactive: 1290700 kB' 'Active(anon): 133788 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290700 kB' 'Unevictable: 1536 
kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 828 kB' 'Writeback: 0 kB' 'AnonPages: 124888 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137256 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 72984 kB' 'KernelStack: 6448 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.547 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.547 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 
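For anyone skimming the long runs of "continue" lines above and below: get_meminfo is simply scanning the meminfo file key by key until it reaches the field it was asked for, and every skipped key is traced as one continue. A condensed sketch of that lookup, reconstructed from the xtrace rather than copied from setup/common.sh (the real helper uses mapfile plus an extglob strip where this sketch uses sed):

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            # each key that is not the requested one is one "continue" in the trace
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node files prefix rows with "Node N"
        return 1
    }
    # matching the trace: get_meminfo HugePages_Rsvd   -> 0
    #                     get_meminfo HugePages_Surp 0 -> 0  (node 0)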
00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.548 15:10:04 -- setup/common.sh@33 -- # echo 0 00:05:55.548 15:10:04 -- setup/common.sh@33 -- # return 0 00:05:55.548 15:10:04 -- setup/hugepages.sh@100 -- # resv=0 00:05:55.548 15:10:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:55.548 nr_hugepages=512 00:05:55.548 resv_hugepages=0 00:05:55.548 15:10:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:55.548 surplus_hugepages=0 00:05:55.548 15:10:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:55.548 anon_hugepages=0 00:05:55.548 15:10:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:55.548 15:10:04 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:55.548 15:10:04 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:55.548 15:10:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:55.548 15:10:04 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:55.548 15:10:04 -- setup/common.sh@18 -- # local node= 00:05:55.548 15:10:04 -- setup/common.sh@19 -- # local var val 00:05:55.548 15:10:04 -- setup/common.sh@20 -- # local mem_f mem 00:05:55.548 15:10:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:55.548 15:10:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:55.548 15:10:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:55.548 15:10:04 -- setup/common.sh@28 -- # mapfile -t mem 00:05:55.548 15:10:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8747008 kB' 'MemAvailable: 10550768 kB' 'Buffers: 2436 kB' 'Cached: 2016592 kB' 'SwapCached: 0 kB' 'Active: 851912 kB' 'Inactive: 1290700 kB' 'Active(anon): 134048 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 828 kB' 'Writeback: 0 kB' 'AnonPages: 125148 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137256 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 72984 kB' 'KernelStack: 6448 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
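The bookkeeping being traced around here is the per_node_1G_alloc verification: surplus and reserved pages are read back from meminfo and, together with the allocated count, must add up to the 512 pages the test requested, both globally and on node 0 (hence the later "node0=512 expecting 512" line). A condensed sketch of those checks, assuming the get_meminfo helper sketched above; the real logic lives in setup/hugepages.sh:

    requested=512
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 512
    (( total == requested + surp + resv )) || echo 'global hugepage count mismatch'
    # per NUMA node: reserved plus the node's own surplus is folded into the
    # expected per-node count before the comparison
    node0_test=$(( requested + resv + $(get_meminfo HugePages_Surp 0) ))
    echo "node0=$node0_test expecting $requested"
    [[ $node0_test == "$requested" ]] || echo 'node0 hugepage count mismatch'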
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.548 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.548 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- 
setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 
15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.549 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.549 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.550 15:10:04 -- setup/common.sh@33 -- # echo 512 00:05:55.550 15:10:04 -- setup/common.sh@33 -- # return 0 00:05:55.550 15:10:04 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv 
)) 00:05:55.550 15:10:04 -- setup/hugepages.sh@112 -- # get_nodes 00:05:55.550 15:10:04 -- setup/hugepages.sh@27 -- # local node 00:05:55.550 15:10:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:55.550 15:10:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:55.550 15:10:04 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:55.550 15:10:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:55.550 15:10:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:55.550 15:10:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:55.550 15:10:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:55.550 15:10:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:55.550 15:10:04 -- setup/common.sh@18 -- # local node=0 00:05:55.550 15:10:04 -- setup/common.sh@19 -- # local var val 00:05:55.550 15:10:04 -- setup/common.sh@20 -- # local mem_f mem 00:05:55.550 15:10:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:55.550 15:10:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:55.550 15:10:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:55.550 15:10:04 -- setup/common.sh@28 -- # mapfile -t mem 00:05:55.550 15:10:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8747008 kB' 'MemUsed: 3494964 kB' 'SwapCached: 0 kB' 'Active: 851924 kB' 'Inactive: 1290700 kB' 'Active(anon): 134060 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 828 kB' 'Writeback: 0 kB' 'FilePages: 2019028 kB' 'Mapped: 48732 kB' 'AnonPages: 125188 kB' 'Shmem: 10464 kB' 'KernelStack: 6496 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64272 kB' 'Slab: 137256 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 72984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.550 15:10:04 -- 
setup/common.sh@32 -- # continue 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.550 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.550 15:10:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.551 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.551 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.551 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.551 15:10:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.551 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.551 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.551 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.551 15:10:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.551 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.551 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.551 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.551 15:10:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.551 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.551 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.551 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.551 15:10:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.551 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.551 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.551 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.551 15:10:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.551 15:10:04 -- setup/common.sh@32 -- # continue 00:05:55.551 15:10:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:55.551 15:10:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:55.551 15:10:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.551 15:10:04 -- setup/common.sh@33 -- # echo 0 00:05:55.551 15:10:04 -- setup/common.sh@33 -- # return 0 00:05:55.551 15:10:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:55.551 15:10:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:55.551 15:10:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:55.551 15:10:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:55.551 node0=512 expecting 512 00:05:55.551 15:10:04 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:55.551 15:10:04 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:55.551 00:05:55.551 real 0m0.520s 00:05:55.551 user 0m0.255s 00:05:55.551 sys 0m0.299s 00:05:55.551 15:10:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.551 15:10:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.551 ************************************ 00:05:55.551 END TEST per_node_1G_alloc 00:05:55.551 ************************************ 00:05:55.551 15:10:04 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:55.551 15:10:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.551 15:10:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.551 15:10:04 -- common/autotest_common.sh@10 -- # set +x 00:05:55.551 ************************************ 00:05:55.551 START TEST even_2G_alloc 00:05:55.551 ************************************ 00:05:55.551 15:10:04 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:05:55.551 15:10:04 -- setup/hugepages.sh@152 -- # 
get_test_nr_hugepages 2097152 00:05:55.551 15:10:04 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:55.551 15:10:04 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:55.551 15:10:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:55.551 15:10:04 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:55.551 15:10:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:55.551 15:10:04 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:55.551 15:10:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:55.551 15:10:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:55.551 15:10:04 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:55.551 15:10:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:55.551 15:10:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:55.551 15:10:04 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:55.551 15:10:04 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:55.551 15:10:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:55.551 15:10:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:55.551 15:10:04 -- setup/hugepages.sh@83 -- # : 0 00:05:55.551 15:10:04 -- setup/hugepages.sh@84 -- # : 0 00:05:55.551 15:10:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:55.551 15:10:04 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:55.551 15:10:04 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:55.551 15:10:04 -- setup/hugepages.sh@153 -- # setup output 00:05:55.551 15:10:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:55.551 15:10:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:56.122 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:56.122 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:56.122 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:56.122 15:10:05 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:56.122 15:10:05 -- setup/hugepages.sh@89 -- # local node 00:05:56.122 15:10:05 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:56.122 15:10:05 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:56.122 15:10:05 -- setup/hugepages.sh@92 -- # local surp 00:05:56.122 15:10:05 -- setup/hugepages.sh@93 -- # local resv 00:05:56.122 15:10:05 -- setup/hugepages.sh@94 -- # local anon 00:05:56.122 15:10:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:56.122 15:10:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:56.122 15:10:05 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:56.122 15:10:05 -- setup/common.sh@18 -- # local node= 00:05:56.122 15:10:05 -- setup/common.sh@19 -- # local var val 00:05:56.122 15:10:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:56.122 15:10:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.122 15:10:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:56.122 15:10:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:56.122 15:10:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.122 15:10:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7695060 kB' 'MemAvailable: 9498820 kB' 'Buffers: 2436 kB' 'Cached: 2016592 kB' 'SwapCached: 0 kB' 'Active: 852168 kB' 'Inactive: 1290700 kB' 'Active(anon): 134304 kB' 'Inactive(anon): 
0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1004 kB' 'Writeback: 0 kB' 'AnonPages: 125412 kB' 'Mapped: 48868 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137288 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 73016 kB' 'KernelStack: 6452 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 
-- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 
15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.122 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.122 15:10:05 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.123 15:10:05 -- setup/common.sh@33 -- # echo 0 00:05:56.123 15:10:05 -- setup/common.sh@33 -- # return 0 00:05:56.123 15:10:05 -- setup/hugepages.sh@97 -- # anon=0 00:05:56.123 15:10:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:56.123 15:10:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:56.123 15:10:05 -- setup/common.sh@18 -- # local node= 00:05:56.123 15:10:05 -- setup/common.sh@19 -- # local var val 00:05:56.123 15:10:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:56.123 15:10:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.123 15:10:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:56.123 15:10:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:56.123 15:10:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.123 15:10:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7695060 kB' 'MemAvailable: 9498820 kB' 'Buffers: 2436 kB' 'Cached: 2016592 kB' 'SwapCached: 0 kB' 'Active: 851924 kB' 'Inactive: 1290700 kB' 'Active(anon): 134060 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1004 kB' 'Writeback: 0 kB' 'AnonPages: 125168 kB' 'Mapped: 48740 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137280 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 73008 kB' 'KernelStack: 6480 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359088 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 
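
The long runs of [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue pairs above and below are bash xtrace (set -x) output from the get_meminfo helper in SPDK's test/setup/common.sh: it snapshots /proc/meminfo (or a per-node meminfo file) into an array and scans it line by line for a single key, skipping every non-matching field. A minimal sketch of that logic, reconstructed from the trace rather than copied from the source, so names and details are approximate:

#!/usr/bin/env bash
# Sketch of a get_meminfo-style helper, reconstructed from the xtrace above.
# Not the verbatim SPDK implementation (that lives in test/setup/common.sh).
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ line
    local mem_f=/proc/meminfo mem

    # With a node argument, read that node's meminfo file instead,
    # as the trace does for /sys/devices/system/node/node0/meminfo.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")

    # The traced loop: compare each field name with the requested key,
    # 'continue' on a mismatch, print the value and return on a match.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

On the system being traced, get_meminfo HugePages_Surp prints 0 and get_meminfo HugePages_Total prints 1024, matching the echo 0 / echo 1024 visible at the end of each scan in the log.
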
00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.123 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.123 15:10:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
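
The backslash-escaped right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not literal script text; they are how bash xtrace renders a quoted word on the pattern side of == inside [[ ]], marking it as a literal (non-glob) comparison. A hypothetical stand-alone reproduction, not taken from the SPDK scripts:

#!/usr/bin/env bash
set -x
get=HugePages_Surp
# Expected trace output, approximately: + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[[ MemTotal == "$get" ]] || echo "MemTotal is not $get"
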
00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 
-- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.124 15:10:05 -- setup/common.sh@33 -- # echo 0 00:05:56.124 15:10:05 -- setup/common.sh@33 -- # return 0 00:05:56.124 15:10:05 -- setup/hugepages.sh@99 -- # surp=0 00:05:56.124 15:10:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:56.124 15:10:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:56.124 15:10:05 -- setup/common.sh@18 -- # local node= 00:05:56.124 15:10:05 -- setup/common.sh@19 -- # local var val 00:05:56.124 15:10:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:56.124 15:10:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.124 15:10:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:56.124 15:10:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:56.124 15:10:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.124 15:10:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7695320 kB' 'MemAvailable: 9499080 kB' 'Buffers: 2436 kB' 'Cached: 2016592 kB' 'SwapCached: 0 kB' 'Active: 851664 kB' 'Inactive: 1290700 kB' 'Active(anon): 133800 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1004 kB' 'Writeback: 0 kB' 'AnonPages: 124908 kB' 'Mapped: 48740 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137280 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 73008 kB' 'KernelStack: 6480 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.124 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.124 15:10:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:56.124 15:10:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 
-- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 
15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.125 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.125 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.126 15:10:05 -- setup/common.sh@33 -- # echo 0 00:05:56.126 15:10:05 -- setup/common.sh@33 -- # return 0 00:05:56.126 15:10:05 -- setup/hugepages.sh@100 -- # resv=0 00:05:56.126 nr_hugepages=1024 00:05:56.126 15:10:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:56.126 resv_hugepages=0 00:05:56.126 15:10:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:56.126 surplus_hugepages=0 00:05:56.126 15:10:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:56.126 anon_hugepages=0 00:05:56.126 15:10:05 -- 
setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:56.126 15:10:05 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:56.126 15:10:05 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:56.126 15:10:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:56.126 15:10:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:56.126 15:10:05 -- setup/common.sh@18 -- # local node= 00:05:56.126 15:10:05 -- setup/common.sh@19 -- # local var val 00:05:56.126 15:10:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:56.126 15:10:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.126 15:10:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:56.126 15:10:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:56.126 15:10:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.126 15:10:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7695320 kB' 'MemAvailable: 9499080 kB' 'Buffers: 2436 kB' 'Cached: 2016592 kB' 'SwapCached: 0 kB' 'Active: 851692 kB' 'Inactive: 1290700 kB' 'Active(anon): 133828 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1004 kB' 'Writeback: 0 kB' 'AnonPages: 125200 kB' 'Mapped: 48740 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137280 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 73008 kB' 'KernelStack: 6496 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- 
# [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.126 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.126 15:10:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': 
' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.127 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.127 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.127 15:10:05 -- setup/common.sh@33 -- # echo 1024 00:05:56.127 15:10:05 -- setup/common.sh@33 -- # return 0 00:05:56.127 15:10:05 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:56.127 15:10:05 -- setup/hugepages.sh@112 -- # get_nodes 00:05:56.127 15:10:05 -- setup/hugepages.sh@27 -- # local node 00:05:56.127 15:10:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:56.127 15:10:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:56.127 15:10:05 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:56.127 15:10:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:56.127 15:10:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:56.127 15:10:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:56.127 15:10:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:56.127 15:10:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:56.127 15:10:05 -- setup/common.sh@18 -- # local node=0 00:05:56.127 15:10:05 -- setup/common.sh@19 -- # local var val 00:05:56.127 15:10:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:56.127 15:10:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.127 15:10:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:56.128 15:10:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:56.128 15:10:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.128 15:10:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 
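
From this point the trace repeats the same scan against node 0: with a node argument the helper switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix, and strips that prefix with the extglob expansion "${mem[@]#Node +([0-9]) }" before parsing. A small stand-alone illustration of that stripping step (values taken from the node-0 snapshot printed just below):

#!/usr/bin/env bash
shopt -s extglob   # required for the +([0-9]) pattern in the expansion below

# Per-node meminfo lines carry a "Node <n> " prefix, unlike /proc/meminfo.
mem=('Node 0 MemTotal: 12241972 kB'
     'Node 0 HugePages_Total: 1024'
     'Node 0 HugePages_Surp: 0')

# Remove the leading "Node <n> " so the fields parse the same way as /proc/meminfo.
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
# MemTotal: 12241972 kB
# HugePages_Total: 1024
# HugePages_Surp: 0
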
00:05:56.128 15:10:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7695360 kB' 'MemUsed: 4546612 kB' 'SwapCached: 0 kB' 'Active: 851664 kB' 'Inactive: 1290700 kB' 'Active(anon): 133800 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1004 kB' 'Writeback: 0 kB' 'FilePages: 2019028 kB' 'Mapped: 48740 kB' 'AnonPages: 125164 kB' 'Shmem: 10464 kB' 'KernelStack: 6480 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64272 kB' 'Slab: 137276 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 73004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 
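
The assertion behind this even_2G_alloc run: the HugePages_Total reported by the kernel (1024) must equal the requested nr_hugepages plus the surplus and reserved counts just collected (1024 + 0 + 0), and the single NUMA node must hold the full allocation (node0=1024 expecting 1024, further down in the trace). A stand-alone sketch that re-evaluates the same expression the trace shows at setup/hugepages.sh@107; reading nr_hugepages from the sysctl is an assumption here, since in the test it is the requested value rather than a value read back from the kernel:

#!/usr/bin/env bash
# Re-evaluate the accounting check from the trace:
#   HugePages_Total == nr_hugepages + HugePages_Surp + HugePages_Rsvd
meminfo_val() { awk -v k="$1" -F'[: ]+' '$1 == k { print $2 }' /proc/meminfo; }

nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)   # assumption: stands in for the test's requested count
total=$(meminfo_val HugePages_Total)
surp=$(meminfo_val HugePages_Surp)
resv=$(meminfo_val HugePages_Rsvd)

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: ${total} pages"
else
    echo "mismatch: total=${total} nr=${nr_hugepages} surp=${surp} resv=${resv}" >&2
    exit 1
fi
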
00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 
15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.128 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.128 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.129 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.129 15:10:05 -- setup/common.sh@33 -- # echo 0 00:05:56.129 15:10:05 -- setup/common.sh@33 -- # return 0 00:05:56.129 15:10:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:56.129 15:10:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:56.129 15:10:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:56.129 15:10:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:56.129 node0=1024 expecting 1024 00:05:56.129 15:10:05 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:56.129 15:10:05 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:56.129 00:05:56.129 real 0m0.514s 00:05:56.129 user 0m0.269s 00:05:56.129 sys 0m0.278s 00:05:56.129 15:10:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.129 15:10:05 -- common/autotest_common.sh@10 -- # set +x 00:05:56.129 ************************************ 00:05:56.129 END TEST even_2G_alloc 00:05:56.129 ************************************ 00:05:56.129 15:10:05 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:56.129 15:10:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.129 15:10:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.129 15:10:05 -- common/autotest_common.sh@10 -- # set +x 00:05:56.129 ************************************ 00:05:56.129 START TEST odd_alloc 00:05:56.129 ************************************ 00:05:56.129 15:10:05 -- common/autotest_common.sh@1111 -- # odd_alloc 00:05:56.129 15:10:05 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:56.129 15:10:05 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:56.129 15:10:05 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:56.129 15:10:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:56.129 15:10:05 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:56.129 15:10:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:56.387 15:10:05 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:56.387 15:10:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:56.387 15:10:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:56.387 15:10:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:56.387 15:10:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:56.387 15:10:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:56.387 15:10:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:56.387 15:10:05 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:56.387 15:10:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:56.387 15:10:05 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:56.387 15:10:05 -- setup/hugepages.sh@83 -- # : 0 00:05:56.387 15:10:05 -- setup/hugepages.sh@84 -- # : 0 00:05:56.387 15:10:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:56.387 15:10:05 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:56.387 15:10:05 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:56.387 15:10:05 -- setup/hugepages.sh@160 -- # setup output 00:05:56.387 15:10:05 -- setup/common.sh@9 -- 
# [[ output == output ]] 00:05:56.387 15:10:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:56.696 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:56.696 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:56.696 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:56.696 15:10:05 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:56.696 15:10:05 -- setup/hugepages.sh@89 -- # local node 00:05:56.696 15:10:05 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:56.696 15:10:05 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:56.696 15:10:05 -- setup/hugepages.sh@92 -- # local surp 00:05:56.696 15:10:05 -- setup/hugepages.sh@93 -- # local resv 00:05:56.696 15:10:05 -- setup/hugepages.sh@94 -- # local anon 00:05:56.696 15:10:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:56.696 15:10:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:56.696 15:10:05 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:56.696 15:10:05 -- setup/common.sh@18 -- # local node= 00:05:56.696 15:10:05 -- setup/common.sh@19 -- # local var val 00:05:56.696 15:10:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:56.696 15:10:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.696 15:10:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:56.696 15:10:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:56.696 15:10:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.696 15:10:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7691424 kB' 'MemAvailable: 9495188 kB' 'Buffers: 2436 kB' 'Cached: 2016596 kB' 'SwapCached: 0 kB' 'Active: 851932 kB' 'Inactive: 1290704 kB' 'Active(anon): 134068 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1156 kB' 'Writeback: 0 kB' 'AnonPages: 125004 kB' 'Mapped: 48852 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137268 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 72996 kB' 'KernelStack: 6468 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 357576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 
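Each get_meminfo call traced here walks an in-memory copy of /proc/meminfo one "Key: value" pair at a time: every key that is not the one requested (AnonHugePages in this pass) produces one [[ ... == ... ]] test followed by a continue, and the matching key's value is echoed back to the caller. A self-contained sketch of that lookup pattern, with a hypothetical function name and simplified details rather than the exact setup/common.sh source, assuming bash with extglob:

#!/usr/bin/env bash
# Sketch only: mirrors the meminfo-scan pattern visible in the xtrace above;
# the function name and minor details are assumptions, not the real helper.
shopt -s extglob
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # A per-NUMA-node lookup switches to that node's meminfo file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other key
        echo "$val"                        # e.g. "0" for AnonHugePages
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
get_meminfo_value AnonHugePages   # prints the AnonHugePages value in kB

The 0 echoed at the end of this pass is what the caller stores as anon=0 a little further down in the trace.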
00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.696 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.696 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 
15:10:05 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.697 15:10:05 -- setup/common.sh@33 -- # echo 0 00:05:56.697 15:10:05 -- setup/common.sh@33 -- # return 0 00:05:56.697 15:10:05 -- setup/hugepages.sh@97 -- # anon=0 00:05:56.697 15:10:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:56.697 15:10:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:56.697 15:10:05 -- setup/common.sh@18 -- # local node= 00:05:56.697 15:10:05 -- setup/common.sh@19 -- # local var val 00:05:56.697 15:10:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:56.697 15:10:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.697 15:10:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:56.697 15:10:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:56.697 15:10:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.697 15:10:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7691424 kB' 'MemAvailable: 9495188 kB' 'Buffers: 2436 kB' 'Cached: 2016596 kB' 'SwapCached: 0 kB' 'Active: 851044 kB' 'Inactive: 1290704 kB' 'Active(anon): 133180 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1156 kB' 'Writeback: 0 kB' 'AnonPages: 124336 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137268 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 72996 kB' 'KernelStack: 6436 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 357576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 
00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.697 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.697 15:10:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- 
setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.698 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.698 15:10:05 -- setup/common.sh@33 -- # echo 0 00:05:56.698 15:10:05 -- setup/common.sh@33 -- # return 0 00:05:56.698 15:10:05 -- setup/hugepages.sh@99 -- # surp=0 00:05:56.698 15:10:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:56.698 15:10:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:56.698 15:10:05 -- setup/common.sh@18 -- # local node= 00:05:56.698 15:10:05 -- setup/common.sh@19 -- # local var val 00:05:56.698 15:10:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:56.698 15:10:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.698 15:10:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:56.698 15:10:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:56.698 15:10:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.698 15:10:05 
-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.698 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7691424 kB' 'MemAvailable: 9495188 kB' 'Buffers: 2436 kB' 'Cached: 2016596 kB' 'SwapCached: 0 kB' 'Active: 851076 kB' 'Inactive: 1290704 kB' 'Active(anon): 133212 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1156 kB' 'Writeback: 0 kB' 'AnonPages: 124320 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137268 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 72996 kB' 'KernelStack: 6404 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 357576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 
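By this point the same /proc/meminfo snapshot has already been scanned once for AnonHugePages and once for HugePages_Surp, this pass repeats the walk for HugePages_Rsvd, and a fourth pass for HugePages_Total follows below. Purely as an illustration of what those four lookups add up to, and not as a description of how setup/common.sh is actually written, the hugepage counters could be gathered in a single pass:

# Illustration only: collect the hugepage-related counters in one read of /proc/meminfo.
declare -A hp
while IFS=': ' read -r key val _; do
    case $key in
        AnonHugePages|HugePages_Total|HugePages_Free|HugePages_Rsvd|HugePages_Surp)
            hp[$key]=$val ;;
    esac
done < /proc/meminfo
printf '%s\n' "total=${hp[HugePages_Total]} free=${hp[HugePages_Free]}" \
              "rsvd=${hp[HugePages_Rsvd]} surp=${hp[HugePages_Surp]} anon=${hp[AnonHugePages]}"

On the snapshot printed above this would report total=1025 free=1025 rsvd=0 surp=0 anon=0, the same numbers the per-key scans return one at a time.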
00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.699 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.699 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ 
CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.700 15:10:05 -- setup/common.sh@33 -- # echo 0 00:05:56.700 15:10:05 -- setup/common.sh@33 -- # return 0 00:05:56.700 15:10:05 -- setup/hugepages.sh@100 -- # resv=0 00:05:56.700 nr_hugepages=1025 00:05:56.700 15:10:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:56.700 resv_hugepages=0 00:05:56.700 15:10:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:56.700 surplus_hugepages=0 00:05:56.700 15:10:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:56.700 anon_hugepages=0 00:05:56.700 15:10:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:56.700 15:10:05 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:56.700 15:10:05 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:56.700 15:10:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:56.700 15:10:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:56.700 15:10:05 -- setup/common.sh@18 -- # local node= 00:05:56.700 15:10:05 -- setup/common.sh@19 -- # local var val 00:05:56.700 15:10:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:56.700 15:10:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.700 15:10:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:56.700 15:10:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:56.700 15:10:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.700 15:10:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7691424 kB' 'MemAvailable: 9495188 kB' 'Buffers: 2436 kB' 'Cached: 2016596 kB' 'SwapCached: 0 kB' 'Active: 851120 kB' 'Inactive: 1290704 kB' 'Active(anon): 133256 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1156 kB' 'Writeback: 0 kB' 'AnonPages: 124332 kB' 'Mapped: 48692 kB' 'Shmem: 
10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137268 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 72996 kB' 'KernelStack: 6432 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 357576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.700 
15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.700 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.700 15:10:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 
00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.701 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.701 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.701 15:10:05 -- setup/common.sh@33 -- # echo 1025 00:05:56.701 15:10:05 -- setup/common.sh@33 -- # return 0 00:05:56.701 15:10:05 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:56.701 15:10:05 -- setup/hugepages.sh@112 -- # get_nodes 00:05:56.701 15:10:05 -- setup/hugepages.sh@27 -- # local node 00:05:56.701 15:10:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:56.701 15:10:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:56.701 15:10:05 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:56.701 15:10:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:56.701 15:10:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:56.702 15:10:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:56.702 15:10:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:56.702 15:10:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:56.702 15:10:05 -- setup/common.sh@18 -- # local node=0 00:05:56.702 15:10:05 -- setup/common.sh@19 -- # local var val 00:05:56.702 15:10:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:56.702 15:10:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.702 15:10:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:56.702 15:10:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:56.702 15:10:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.702 15:10:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7691424 kB' 'MemUsed: 4550548 kB' 'SwapCached: 0 kB' 'Active: 851088 kB' 'Inactive: 1290704 kB' 'Active(anon): 133224 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1156 kB' 'Writeback: 0 kB' 'FilePages: 2019032 kB' 'Mapped: 48952 kB' 'AnonPages: 124348 kB' 'Shmem: 10464 kB' 'KernelStack: 6484 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64272 kB' 'Slab: 137268 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 72996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 
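The long runs of '[[ <key> == HugePages_Total ]] ... continue' and '[[ <key> == HugePages_Surp ]] ... continue' entries above are the suite's get_meminfo helper walking a meminfo file one field at a time: it selects /proc/meminfo for system-wide queries or /sys/devices/system/node/nodeN/meminfo when a node index is given, strips the 'Node N ' prefix from per-node lines, and keeps reading 'Key: value' pairs until it reaches the requested field, which it then echoes. The shell sketch below is only a rough reconstruction of that behavior from the trace; the _sketch suffix and the sed-based prefix strip are illustrative choices, not the helper that actually ran.

  # Rough sketch of the behavior traced above (not the exact SPDK helper).
  # Returns one meminfo field, system-wide by default or for NUMA node $2.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node files prefix every line with "Node N "; drop it, then scan
      # "Key:   value" pairs until the requested key matches.
      sed 's/^Node [0-9]* //' "$mem_f" | while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; break; }
      done
  }

  # In the trace above these resolve to 1025 and 0 respectively.
  get_meminfo_sketch HugePages_Total
  get_meminfo_sketch HugePages_Surp 0

The scan resumes below with the remaining node-0 fields until HugePages_Surp is reached.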
00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.702 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.702 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.703 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.703 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.703 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.703 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.703 15:10:05 -- setup/common.sh@32 -- # continue 00:05:56.703 15:10:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:56.703 15:10:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:56.703 15:10:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.703 15:10:05 -- setup/common.sh@33 -- # echo 0 00:05:56.703 15:10:05 -- setup/common.sh@33 -- # return 0 00:05:56.703 15:10:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:56.703 15:10:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:56.703 15:10:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:56.703 15:10:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:56.703 node0=1025 expecting 1025 00:05:56.703 15:10:05 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:56.703 15:10:05 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:56.703 00:05:56.703 real 0m0.524s 00:05:56.703 user 0m0.271s 00:05:56.703 sys 0m0.290s 00:05:56.703 15:10:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.703 15:10:05 -- common/autotest_common.sh@10 -- # set +x 00:05:56.703 ************************************ 00:05:56.703 END TEST odd_alloc 00:05:56.703 ************************************ 00:05:56.703 15:10:05 -- setup/hugepages.sh@214 -- # run_test 
custom_alloc custom_alloc 00:05:56.703 15:10:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.703 15:10:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.703 15:10:05 -- common/autotest_common.sh@10 -- # set +x 00:05:56.962 ************************************ 00:05:56.962 START TEST custom_alloc 00:05:56.962 ************************************ 00:05:56.962 15:10:05 -- common/autotest_common.sh@1111 -- # custom_alloc 00:05:56.962 15:10:05 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:56.962 15:10:06 -- setup/hugepages.sh@169 -- # local node 00:05:56.962 15:10:06 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:56.962 15:10:06 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:56.962 15:10:06 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:56.962 15:10:06 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:56.962 15:10:06 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:56.962 15:10:06 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:56.962 15:10:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:56.962 15:10:06 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:56.962 15:10:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:56.962 15:10:06 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:56.962 15:10:06 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:56.962 15:10:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:56.962 15:10:06 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:56.962 15:10:06 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:56.962 15:10:06 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:56.962 15:10:06 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:56.962 15:10:06 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:56.962 15:10:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:56.962 15:10:06 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:56.962 15:10:06 -- setup/hugepages.sh@83 -- # : 0 00:05:56.962 15:10:06 -- setup/hugepages.sh@84 -- # : 0 00:05:56.962 15:10:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:56.962 15:10:06 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:56.962 15:10:06 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:56.962 15:10:06 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:56.962 15:10:06 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:56.962 15:10:06 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:56.962 15:10:06 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:56.962 15:10:06 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:56.962 15:10:06 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:56.962 15:10:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:56.962 15:10:06 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:56.962 15:10:06 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:56.962 15:10:06 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:56.962 15:10:06 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:56.962 15:10:06 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:56.962 15:10:06 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:56.962 15:10:06 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:56.962 15:10:06 -- setup/hugepages.sh@78 -- # return 0 00:05:56.962 15:10:06 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:56.962 15:10:06 -- setup/hugepages.sh@187 -- # setup 
output 00:05:56.962 15:10:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:56.962 15:10:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:57.223 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:57.223 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:57.223 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:57.223 15:10:06 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:57.223 15:10:06 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:57.223 15:10:06 -- setup/hugepages.sh@89 -- # local node 00:05:57.223 15:10:06 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:57.223 15:10:06 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:57.223 15:10:06 -- setup/hugepages.sh@92 -- # local surp 00:05:57.223 15:10:06 -- setup/hugepages.sh@93 -- # local resv 00:05:57.223 15:10:06 -- setup/hugepages.sh@94 -- # local anon 00:05:57.223 15:10:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:57.223 15:10:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:57.223 15:10:06 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:57.223 15:10:06 -- setup/common.sh@18 -- # local node= 00:05:57.223 15:10:06 -- setup/common.sh@19 -- # local var val 00:05:57.223 15:10:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:57.223 15:10:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:57.223 15:10:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:57.223 15:10:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:57.223 15:10:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:57.223 15:10:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:57.223 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.223 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8737424 kB' 'MemAvailable: 10541228 kB' 'Buffers: 2436 kB' 'Cached: 2016636 kB' 'SwapCached: 0 kB' 'Active: 851452 kB' 'Inactive: 1290744 kB' 'Active(anon): 133588 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1296 kB' 'Writeback: 0 kB' 'AnonPages: 124696 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137272 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 73000 kB' 'KernelStack: 6484 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 357708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 
-- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.224 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.224 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.225 15:10:06 -- setup/common.sh@33 -- # echo 0 00:05:57.225 15:10:06 -- setup/common.sh@33 -- # return 0 00:05:57.225 15:10:06 -- setup/hugepages.sh@97 -- # anon=0 00:05:57.225 15:10:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:57.225 15:10:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:57.225 15:10:06 -- setup/common.sh@18 -- # local node= 00:05:57.225 15:10:06 -- setup/common.sh@19 -- # local var val 00:05:57.225 15:10:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:57.225 15:10:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:57.225 15:10:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:57.225 15:10:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:57.225 15:10:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:57.225 15:10:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8737424 kB' 'MemAvailable: 10541228 kB' 'Buffers: 2436 kB' 'Cached: 2016636 kB' 'SwapCached: 0 kB' 'Active: 850768 kB' 'Inactive: 1290744 kB' 'Active(anon): 132904 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1296 kB' 'Writeback: 0 kB' 'AnonPages: 124296 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137332 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 73060 kB' 'KernelStack: 6480 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 357708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 
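Shortly before this point the trace switched from the odd_alloc case to custom_alloc: get_test_nr_hugepages converts the 1048576 kB request into 512 pages of the default 2048 kB size (1048576 / 2048 = 512), HUGENODE='nodes_hp[0]=512' pins all of them to node 0, setup.sh is re-run, and verify_nr_hugepages then re-reads AnonHugePages (0 here, read only because the THP setting 'always [madvise] never' is not '[never]'), HugePages_Surp and HugePages_Rsvd (the scans traced around this point) before comparing the kernel's totals with what the test configured. Below is a hedged sketch of that bookkeeping, reusing the get_meminfo_sketch helper shown earlier; the variable names are illustrative, not the suite's own.

  # Sketch of the accounting behind verify_nr_hugepages, as inferred from
  # the trace; only the final comparison is shown.
  default_hugepagesize_kb=2048
  requested_kb=1048576
  nr_hugepages=$(( requested_kb / default_hugepagesize_kb ))   # 512

  total=$(get_meminfo_sketch HugePages_Total)
  surp=$(get_meminfo_sketch HugePages_Surp)
  resv=$(get_meminfo_sketch HugePages_Rsvd)

  # The check at setup/hugepages.sh@110 earlier in the trace has the same
  # shape: the kernel-reported total must equal the configured count plus
  # any surplus and reserved pages.
  (( total == nr_hugepages + surp + resv )) && echo 'hugepages OK' || echo 'hugepages mismatch'

The remaining entries below continue the HugePages_Surp scan and then the HugePages_Rsvd scan for this verification pass.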
00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.225 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.225 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # continue 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:57.226 15:10:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:57.226 15:10:06 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.226 15:10:06 -- setup/common.sh@32 -- # continue
[ the remaining /proc/meminfo keys (ShmemPmdMapped through HugePages_Rsvd) are compared against HugePages_Surp the same way and skipped with continue; HugePages_Surp itself then matches ]
00:05:57.226 15:10:06 -- setup/common.sh@33 -- # echo 0 00:05:57.226 15:10:06 -- setup/common.sh@33 -- # return 0 00:05:57.226 15:10:06 -- setup/hugepages.sh@99 -- # surp=0 00:05:57.226 15:10:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[ no node is given, so /sys/devices/system/node/node/meminfo does not exist and mem_f stays /proc/meminfo; the snapshot read via mapfile reports MemTotal 12241972 kB, MemFree 8737424 kB, MemAvailable 10541228 kB, Active(anon) 133016 kB, AnonPages 124408 kB, KernelStack 6464 kB, PageTables 4396 kB, HugePages_Total 512, HugePages_Free 512, HugePages_Rsvd 0, HugePages_Surp 0, Hugepagesize 2048 kB, Hugetlb 1048576 kB, DirectMap4k 190316 kB, DirectMap2M 6100992 kB, DirectMap1G 8388608 kB; every key is skipped with continue until HugePages_Rsvd matches ]
00:05:57.227 15:10:06 -- setup/common.sh@33 -- # echo 0 00:05:57.227 15:10:06 -- setup/common.sh@33 -- # return 0 00:05:57.227 15:10:06 -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=512
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
00:05:57.227 15:10:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:57.227 15:10:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:57.228 15:10:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:57.228 15:10:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:57.488 15:10:06 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:57.488 15:10:06 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
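The arithmetic just traced is the core of this verification: the kernel's hugepage total has to equal the requested page count plus anything surplus or reserved. A rough standalone sketch of the same check (the awk-based reader and the variable names are illustrative assumptions, not the repo's setup/common.sh helper):

    # read a single field from /proc/meminfo ("HugePages_Surp" -> its value)
    meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

    nr_hugepages=512                   # count requested by this test
    surp=$(meminfo HugePages_Surp)     # surplus pages, expected to be 0
    resv=$(meminfo HugePages_Rsvd)     # reserved pages, expected to be 0
    total=$(meminfo HugePages_Total)

    # the pool is only considered correct when the kernel's total matches
    # the requested count plus surplus and reserved pages
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool OK (total=$total)"
    else
        echo "unexpected hugepage total: $total" >&2
    fi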
00:05:57.488 15:10:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[ same pattern as before: no node is given, /proc/meminfo is snapshotted again (Active 850808 kB, Active(anon) 132944 kB, AnonPages 124104 kB, KernelStack 6480 kB, PageTables 4444 kB, HugePages_Total 512, HugePages_Free 512, HugePages_Rsvd 0, HugePages_Surp 0, remaining fields unchanged) and every key is skipped with continue until HugePages_Total matches ]
00:05:57.490 15:10:06 -- setup/common.sh@33 -- # echo 512 00:05:57.490 15:10:06 -- setup/common.sh@33 -- # return 0 00:05:57.490 15:10:06 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
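The calls that follow pass a node ID, and the traced mem_f / mem= lines show the helper switching to the per-node sysfs file and stripping its "Node <id> " prefix before parsing. A minimal node-aware reader in the same spirit (a sketch: the sed/awk pipeline is an assumption, the real setup/common.sh helper uses the bash read loop seen in the trace):

    get_meminfo_sketch() {
        local get=$1 node=${2:-} file=/proc/meminfo
        # with a node ID, read that node's meminfo instead of the global one
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        # per-node files prefix every line with "Node <id> "; drop it so both
        # formats parse identically
        sed 's/^Node [0-9]* //' "$file" | awk -v k="$get:" '$1 == k {print $2}'
    }

    get_meminfo_sketch HugePages_Total      # system-wide total
    get_meminfo_sketch HugePages_Surp 0     # surplus pages on NUMA node 0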
00:05:57.490 15:10:06 -- setup/hugepages.sh@112 -- # get_nodes 00:05:57.490 15:10:06 -- setup/hugepages.sh@27 -- # local node 00:05:57.490 15:10:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:57.490 15:10:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:57.490 15:10:06 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:57.490 15:10:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:57.490 15:10:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:57.490 15:10:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:57.490 15:10:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[ a node is given this time: /sys/devices/system/node/node0/meminfo exists and becomes mem_f, its "Node 0" prefixes are stripped, and the per-node snapshot (MemTotal 12241972 kB, MemFree 8737424 kB, MemUsed 3504548 kB, FilePages 2019072 kB, AnonPages 124384 kB, HugePages_Total 512, HugePages_Free 512, HugePages_Surp 0) is scanned key by key until HugePages_Surp matches ]
00:05:57.491 15:10:06 -- setup/common.sh@33 -- # echo 0 00:05:57.491 15:10:06 -- setup/common.sh@33 -- # return 0 00:05:57.491 15:10:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:57.491 15:10:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:57.491 15:10:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:57.491 15:10:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
node0=512 expecting 512
00:05:57.491 15:10:06 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:57.491 15:10:06 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:57.491 real	0m0.522s
00:05:57.491 user	0m0.260s
00:05:57.491 sys	0m0.294s
00:05:57.491 15:10:06 -- common/autotest_common.sh@1112 -- # xtrace_disable
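The 'node0=512 expecting 512' line is the per-node half of the check: every NUMA node's HugePages_Total has to match the split the test configured. A small sketch of that comparison, assuming the standard sysfs layout (the 512-page expectation is taken from this run):

    expected=512
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        # per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
        got=$(sed -n 's/^Node [0-9]* HugePages_Total:[[:space:]]*//p' "$node_dir/meminfo")
        echo "${node_dir##*/}=$got expecting $expected"
        [[ $got == "$expected" ]] || echo "hugepage split mismatch on ${node_dir##*/}" >&2
    done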
common/autotest_common.sh@10 -- # set +x 00:05:57.491 ************************************ 00:05:57.491 END TEST custom_alloc 00:05:57.491 ************************************ 00:05:57.491 15:10:06 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:57.491 15:10:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.491 15:10:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.491 15:10:06 -- common/autotest_common.sh@10 -- # set +x 00:05:57.491 ************************************ 00:05:57.491 START TEST no_shrink_alloc 00:05:57.491 ************************************ 00:05:57.491 15:10:06 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:05:57.491 15:10:06 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:57.491 15:10:06 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:57.491 15:10:06 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:57.491 15:10:06 -- setup/hugepages.sh@51 -- # shift 00:05:57.491 15:10:06 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:57.491 15:10:06 -- setup/hugepages.sh@52 -- # local node_ids 00:05:57.491 15:10:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:57.491 15:10:06 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:57.491 15:10:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:57.491 15:10:06 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:57.491 15:10:06 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:57.491 15:10:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:57.491 15:10:06 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:57.491 15:10:06 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:57.491 15:10:06 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:57.491 15:10:06 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:57.491 15:10:06 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:57.491 15:10:06 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:57.491 15:10:06 -- setup/hugepages.sh@73 -- # return 0 00:05:57.491 15:10:06 -- setup/hugepages.sh@198 -- # setup output 00:05:57.491 15:10:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:57.491 15:10:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:57.750 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:57.750 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:57.750 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:57.750 15:10:06 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:57.750 15:10:06 -- setup/hugepages.sh@89 -- # local node 00:05:57.750 15:10:06 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:57.750 15:10:06 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:58.012 15:10:06 -- setup/hugepages.sh@92 -- # local surp 00:05:58.012 15:10:06 -- setup/hugepages.sh@93 -- # local resv 00:05:58.012 15:10:06 -- setup/hugepages.sh@94 -- # local anon 00:05:58.012 15:10:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:58.012 15:10:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:58.012 15:10:06 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:58.012 15:10:06 -- setup/common.sh@18 -- # local node= 00:05:58.012 15:10:06 -- setup/common.sh@19 -- # local var val 00:05:58.012 15:10:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:58.012 15:10:06 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:05:58.012 15:10:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.012 15:10:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.012 15:10:06 -- setup/common.sh@28 -- # mapfile -t mem
[ the snapshot for this run reflects the larger no_shrink_alloc pool: MemFree 7688404 kB, MemAvailable 9492212 kB, Active(anon) 133736 kB, AnonPages 124816 kB, Mapped 48904 kB, KernelStack 6468 kB, PageTables 4472 kB, HugePages_Total 1024, HugePages_Free 1024, HugePages_Rsvd 0, HugePages_Surp 0, Hugepagesize 2048 kB, Hugetlb 2097152 kB; each key is compared against AnonHugePages and skipped with continue until the match ]
00:05:58.014 15:10:07 -- setup/common.sh@33 -- # echo 0 00:05:58.014 15:10:07 -- setup/common.sh@33 -- # return 0 00:05:58.014 15:10:07 -- setup/hugepages.sh@97 -- # anon=0
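Just before this AnonHugePages lookup the trace checked the transparent_hugepage setting ('always [madvise] never'), since THP-backed anonymous pages would otherwise distort the accounting. A hedged sketch of that step (paths and field names as seen in this trace; the real logic lives in setup/hugepages.sh):

    # only count AnonHugePages when THP is not completely disabled
    thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp_state != *"[never]"* ]]; then
        # kB of anonymous memory currently backed by transparent huge pages
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    else
        anon=0
    fi
    echo "anon_hugepages=$anon"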
0 kB' 'Active: 850804 kB' 'Inactive: 1290748 kB' 'Active(anon): 132940 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1464 kB' 'Writeback: 0 kB' 'AnonPages: 124048 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137352 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 73080 kB' 'KernelStack: 6464 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 357404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.014 15:10:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.014 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.014 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.015 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 
00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.016 15:10:07 -- setup/common.sh@33 -- # echo 0 00:05:58.016 15:10:07 -- setup/common.sh@33 -- # return 0 00:05:58.016 15:10:07 -- setup/hugepages.sh@99 -- # surp=0 00:05:58.016 15:10:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:58.016 15:10:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:58.016 15:10:07 -- setup/common.sh@18 -- # local node= 00:05:58.016 15:10:07 -- setup/common.sh@19 -- # local var val 00:05:58.016 15:10:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:58.016 15:10:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.016 15:10:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.016 15:10:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.016 15:10:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.016 15:10:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7689660 kB' 'MemAvailable: 9493468 kB' 'Buffers: 2436 kB' 'Cached: 2016640 kB' 'SwapCached: 0 kB' 'Active: 851100 kB' 'Inactive: 1290748 kB' 'Active(anon): 133236 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1464 kB' 'Writeback: 0 kB' 'AnonPages: 124376 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137352 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 73080 kB' 'KernelStack: 6496 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 357404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.016 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.016 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 
-- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 
-- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- 
# continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 
-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 15:10:07 -- setup/common.sh@33 -- # echo 0 00:05:58.018 15:10:07 -- setup/common.sh@33 -- # return 0 00:05:58.018 15:10:07 -- setup/hugepages.sh@100 -- # resv=0 00:05:58.018 nr_hugepages=1024 00:05:58.018 15:10:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:58.018 resv_hugepages=0 00:05:58.018 15:10:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:58.018 surplus_hugepages=0 00:05:58.018 15:10:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:58.018 anon_hugepages=0 00:05:58.018 15:10:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:58.018 15:10:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:58.018 15:10:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:58.018 15:10:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:58.018 15:10:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:58.018 15:10:07 -- setup/common.sh@18 -- # local node= 00:05:58.018 15:10:07 -- setup/common.sh@19 -- # local var val 00:05:58.018 15:10:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:58.018 15:10:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.018 15:10:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.018 15:10:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.018 15:10:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.018 15:10:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7689660 kB' 'MemAvailable: 9493468 kB' 'Buffers: 2436 kB' 'Cached: 2016640 kB' 'SwapCached: 0 kB' 'Active: 851076 kB' 'Inactive: 1290748 kB' 'Active(anon): 133212 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1464 kB' 'Writeback: 0 kB' 'AnonPages: 124340 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 64272 kB' 'Slab: 137352 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 73080 kB' 'KernelStack: 6480 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 357404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 15:10:07 -- 
setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 15:10:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 
00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 15:10:07 -- setup/common.sh@33 -- # echo 1024 00:05:58.019 15:10:07 -- setup/common.sh@33 -- # return 0 00:05:58.019 15:10:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:58.019 15:10:07 -- setup/hugepages.sh@112 -- # get_nodes 00:05:58.019 15:10:07 -- setup/hugepages.sh@27 -- # local node 00:05:58.019 15:10:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:58.019 15:10:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:58.019 15:10:07 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:58.019 15:10:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:58.019 15:10:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:58.019 15:10:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:58.019 15:10:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:58.019 15:10:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:58.019 15:10:07 -- setup/common.sh@18 -- # local node=0 
00:05:58.019 15:10:07 -- setup/common.sh@19 -- # local var val 00:05:58.019 15:10:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:58.019 15:10:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.019 15:10:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:58.019 15:10:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:58.019 15:10:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.019 15:10:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 15:10:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7689660 kB' 'MemUsed: 4552312 kB' 'SwapCached: 0 kB' 'Active: 851068 kB' 'Inactive: 1290748 kB' 'Active(anon): 133204 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1464 kB' 'Writeback: 0 kB' 'FilePages: 2019076 kB' 'Mapped: 48776 kB' 'AnonPages: 124348 kB' 'Shmem: 10464 kB' 'KernelStack: 6480 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64272 kB' 'Slab: 137352 kB' 'SReclaimable: 64272 kB' 'SUnreclaim: 73080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:58.019 15:10:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- 
# continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 15:10:07 -- setup/common.sh@33 -- # echo 0 00:05:58.020 15:10:07 -- setup/common.sh@33 -- # return 0 00:05:58.020 15:10:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:58.020 15:10:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:58.020 15:10:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:58.020 15:10:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:58.020 15:10:07 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:58.020 node0=1024 expecting 1024 00:05:58.020 15:10:07 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:58.020 15:10:07 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:58.020 15:10:07 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:58.020 15:10:07 -- setup/hugepages.sh@202 -- # setup output 00:05:58.020 15:10:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:58.020 15:10:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:58.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:58.280 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:58.280 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:58.280 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:58.280 15:10:07 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:58.280 15:10:07 -- setup/hugepages.sh@89 -- # local node 00:05:58.280 15:10:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:58.280 15:10:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:58.280 15:10:07 -- setup/hugepages.sh@92 -- # local surp 00:05:58.280 15:10:07 -- setup/hugepages.sh@93 -- # local resv 00:05:58.280 15:10:07 -- setup/hugepages.sh@94 -- # local anon 00:05:58.280 15:10:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:58.280 15:10:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:58.280 15:10:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:58.280 15:10:07 -- setup/common.sh@18 -- # local node= 00:05:58.280 15:10:07 -- setup/common.sh@19 -- # local var val 00:05:58.280 15:10:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:58.280 15:10:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.280 15:10:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.280 15:10:07 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.280 15:10:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.280 15:10:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.280 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.280 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.280 15:10:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7693544 kB' 'MemAvailable: 9497352 kB' 'Buffers: 2436 kB' 'Cached: 2016640 kB' 'SwapCached: 0 kB' 'Active: 846656 kB' 'Inactive: 1290748 kB' 'Active(anon): 128792 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1464 kB' 'Writeback: 0 kB' 'AnonPages: 119896 kB' 'Mapped: 48156 kB' 'Shmem: 10464 kB' 'KReclaimable: 64268 kB' 'Slab: 137228 kB' 'SReclaimable: 64268 kB' 'SUnreclaim: 72960 kB' 'KernelStack: 6404 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:58.280 15:10:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.280 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.280 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.280 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.280 15:10:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.280 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- 
setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.281 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.281 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.543 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.543 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.543 15:10:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.543 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.543 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.543 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.543 15:10:07 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.543 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.543 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.543 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.543 15:10:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.543 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.543 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.543 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.543 15:10:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.543 15:10:07 -- setup/common.sh@33 -- # echo 0 00:05:58.543 15:10:07 -- setup/common.sh@33 -- # return 0 00:05:58.543 15:10:07 -- setup/hugepages.sh@97 -- # anon=0 00:05:58.543 15:10:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:58.543 15:10:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:58.543 15:10:07 -- setup/common.sh@18 -- # local node= 00:05:58.543 15:10:07 -- setup/common.sh@19 -- # local var val 00:05:58.543 15:10:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:58.543 15:10:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.543 15:10:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.543 15:10:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.543 15:10:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.543 15:10:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.543 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.543 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7693796 kB' 'MemAvailable: 9497604 kB' 'Buffers: 2436 kB' 'Cached: 2016640 kB' 'SwapCached: 0 kB' 'Active: 846196 kB' 'Inactive: 1290748 kB' 'Active(anon): 128332 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 
'Inactive(file): 1290748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1464 kB' 'Writeback: 0 kB' 'AnonPages: 119464 kB' 'Mapped: 48036 kB' 'Shmem: 10464 kB' 'KReclaimable: 64268 kB' 'Slab: 137220 kB' 'SReclaimable: 64268 kB' 'SUnreclaim: 72952 kB' 'KernelStack: 6368 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 
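The long runs of IFS=': ' / read -r / continue entries above and below all come from one helper: setup/common.sh's get_meminfo scans every "Key: value" line of a meminfo file until it reaches the key it was asked for, then echoes that key's value. Below is a minimal sketch reconstructed from the traced commands only; the setup/common.sh line references in the comments are the ones shown in the trace, and the real helper may differ in detail. The \H\u\g\e\P\a\g\e\s\_\S\u\r\p-style strings appear to be nothing more than how bash's xtrace prints the quoted key on the right-hand side of ==.

    shopt -s extglob                        # needed for the +([0-9]) pattern below

    # Hypothetical reconstruction of the traced helper (common.sh@17-33 in the log).
    get_meminfo() {
        local get=$1 node=$2                # e.g. get=HugePages_Surp; empty node = system-wide
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # With a node number, the node-local meminfo file is used instead (common.sh@23).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # node files prefix each line with "Node <n> "

        # Every non-matching key produces one IFS/read/continue triple in the trace.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages               # prints 0 on this run
    get_meminfo HugePages_Total             # prints 1024 on this run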
00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 
15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.544 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.544 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.545 15:10:07 -- setup/common.sh@33 -- # echo 0 00:05:58.545 15:10:07 -- setup/common.sh@33 -- # return 0 00:05:58.545 15:10:07 -- setup/hugepages.sh@99 -- # surp=0 00:05:58.545 15:10:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:58.545 15:10:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:58.545 15:10:07 -- setup/common.sh@18 -- # local node= 00:05:58.545 15:10:07 -- setup/common.sh@19 -- # local var val 00:05:58.545 15:10:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:58.545 15:10:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.545 15:10:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.545 15:10:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.545 15:10:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.545 15:10:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7693796 kB' 'MemAvailable: 9497604 kB' 'Buffers: 2436 kB' 'Cached: 2016640 kB' 'SwapCached: 0 kB' 'Active: 846184 kB' 'Inactive: 1290748 kB' 'Active(anon): 128320 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1464 kB' 'Writeback: 0 kB' 'AnonPages: 119456 kB' 'Mapped: 48036 kB' 'Shmem: 10464 kB' 'KReclaimable: 64268 kB' 'Slab: 137188 kB' 'SReclaimable: 64268 kB' 'SUnreclaim: 72920 kB' 'KernelStack: 6352 kB' 'PageTables: 3832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:58.545 15:10:07 -- setup/common.sh@31 
-- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 
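Each snapshot printed at setup/common.sh@16 carries the hugepage counters this verification cares about, and on this run they never change between calls: 1024 pages allocated, 1024 free, none reserved, none surplus, 2 MiB page size. The same fields can be pulled straight out of /proc/meminfo; the values below are the ones from the snapshots, and the column alignment is approximate.

    grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb' /proc/meminfo
    # HugePages_Total:    1024
    # HugePages_Free:     1024
    # HugePages_Rsvd:        0
    # HugePages_Surp:        0
    # Hugepagesize:       2048 kB
    # Hugetlb:         2097152 kB
    #
    # Sanity check: 1024 pages * 2048 kB/page = 2097152 kB (2 GiB), matching Hugetlb.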
00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.545 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.545 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 
-- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.546 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
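At this point the trace has gathered everything verify_nr_hugepages needs: anon_hugepages=0 (hugepages.sh@97), surplus_hugepages=0 (@99), and the HugePages_Rsvd scan that has just matched returns 0 as well (@100), as the snapshot already showed. With this run's numbers the checks traced at hugepages.sh@107-@110 reduce to plain arithmetic; a condensed restatement, with variable names taken from the trace:

    nr_hugepages=1024    # page count expected by this test
    anon=0               # AnonHugePages
    surp=0               # HugePages_Surp
    resv=0               # HugePages_Rsvd

    (( 1024 == nr_hugepages + surp + resv ))    # hugepages.sh@107: 1024 == 1024 + 0 + 0
    (( 1024 == nr_hugepages ))                  # hugepages.sh@109: 1024 == 1024
    # Both hold, so HugePages_Total is read next (@110) and the same accounting is
    # repeated per NUMA node, which is what produced the "node0=1024 expecting 1024"
    # line earlier in the log.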
00:05:58.546 15:10:07 -- setup/common.sh@33 -- # echo 0 00:05:58.546 15:10:07 -- setup/common.sh@33 -- # return 0 00:05:58.546 nr_hugepages=1024 00:05:58.546 resv_hugepages=0 00:05:58.546 15:10:07 -- setup/hugepages.sh@100 -- # resv=0 00:05:58.546 15:10:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:58.546 15:10:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:58.546 surplus_hugepages=0 00:05:58.546 anon_hugepages=0 00:05:58.546 15:10:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:58.546 15:10:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:58.546 15:10:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:58.546 15:10:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:58.546 15:10:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:58.546 15:10:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:58.546 15:10:07 -- setup/common.sh@18 -- # local node= 00:05:58.546 15:10:07 -- setup/common.sh@19 -- # local var val 00:05:58.546 15:10:07 -- setup/common.sh@20 -- # local mem_f mem 00:05:58.546 15:10:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.546 15:10:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.546 15:10:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.546 15:10:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.546 15:10:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.546 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7693796 kB' 'MemAvailable: 9497604 kB' 'Buffers: 2436 kB' 'Cached: 2016640 kB' 'SwapCached: 0 kB' 'Active: 846240 kB' 'Inactive: 1290748 kB' 'Active(anon): 128376 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1464 kB' 'Writeback: 0 kB' 'AnonPages: 119484 kB' 'Mapped: 48036 kB' 'Shmem: 10464 kB' 'KReclaimable: 64268 kB' 'Slab: 137188 kB' 'SReclaimable: 64268 kB' 'SUnreclaim: 72920 kB' 'KernelStack: 6352 kB' 'PageTables: 3832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
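Once the HugePages_Total scan below completes, the trace moves on to per-node accounting (get_meminfo HugePages_Surp 0 at hugepages.sh@117), reading /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. Since this VM reports a single node (no_nodes=1), the node-local counters mirror the system-wide ones. A spot-check against the standard sysfs files; the expected output is an assumption based on this run's values and the usual node-meminfo layout:

    grep HugePages /sys/devices/system/node/node0/meminfo
    # Node 0 HugePages_Total:  1024
    # Node 0 HugePages_Free:   1024
    # Node 0 HugePages_Surp:      0
    # The "Node 0 " prefix on every line is exactly what the
    # mem=("${mem[@]#Node +([0-9]) }") expansion in the trace strips off.

    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    # 1024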
00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.547 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.547 15:10:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.548 15:10:07 -- setup/common.sh@33 -- # echo 1024 00:05:58.548 15:10:07 -- setup/common.sh@33 -- # return 0 00:05:58.548 15:10:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:58.548 15:10:07 -- setup/hugepages.sh@112 -- # get_nodes 00:05:58.548 15:10:07 -- setup/hugepages.sh@27 -- # local node 00:05:58.548 15:10:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:58.548 15:10:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:58.548 15:10:07 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:58.548 15:10:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:58.548 15:10:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:58.548 15:10:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:58.548 15:10:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:58.548 15:10:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:58.548 15:10:07 -- setup/common.sh@18 -- # local node=0 00:05:58.548 15:10:07 -- setup/common.sh@19 -- # local var val 00:05:58.548 15:10:07 -- 
setup/common.sh@20 -- # local mem_f mem 00:05:58.548 15:10:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.548 15:10:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:58.548 15:10:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:58.548 15:10:07 -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.548 15:10:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.548 15:10:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7693796 kB' 'MemUsed: 4548176 kB' 'SwapCached: 0 kB' 'Active: 846168 kB' 'Inactive: 1290748 kB' 'Active(anon): 128304 kB' 'Inactive(anon): 0 kB' 'Active(file): 717864 kB' 'Inactive(file): 1290748 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1464 kB' 'Writeback: 0 kB' 'FilePages: 2019076 kB' 'Mapped: 48036 kB' 'AnonPages: 119408 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 3832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64268 kB' 'Slab: 137188 kB' 'SReclaimable: 64268 kB' 'SUnreclaim: 72920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 
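The get_meminfo pass traced here opens the per-node meminfo file, strips the "Node 0" prefix, and walks key/value pairs with IFS=': ' until it reaches the requested field. A minimal bash sketch of that lookup pattern, assuming the node-0 path and a hypothetical helper name (this is not the SPDK helper itself):

get_node_meminfo() {                       # illustrative helper name, not the SPDK function
    local key=$1 node=${2:-0} _ var val
    # per-node lines look like "Node 0 HugePages_Surp:       0"
    while IFS=': ' read -r _ _ var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node$node/meminfo"
    return 1
}
get_node_meminfo HugePages_Surp 0          # prints 0 for the node dumped in the trace above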
00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.548 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.548 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 
15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # continue 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.549 15:10:07 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.549 15:10:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.549 15:10:07 -- setup/common.sh@33 -- # echo 0 00:05:58.549 15:10:07 -- setup/common.sh@33 -- # return 0 00:05:58.549 node0=1024 expecting 1024 00:05:58.549 ************************************ 00:05:58.549 END TEST no_shrink_alloc 00:05:58.549 ************************************ 00:05:58.549 15:10:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:58.549 15:10:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:58.549 15:10:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:58.549 15:10:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:58.549 15:10:07 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:58.549 15:10:07 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:58.549 00:05:58.549 real 0m1.053s 00:05:58.549 user 0m0.509s 00:05:58.549 sys 0m0.574s 00:05:58.549 15:10:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.549 15:10:07 -- common/autotest_common.sh@10 -- # set +x 00:05:58.549 15:10:07 -- setup/hugepages.sh@217 -- # clear_hp 00:05:58.549 15:10:07 -- setup/hugepages.sh@37 -- # local node hp 00:05:58.549 15:10:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:58.549 15:10:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:58.549 15:10:07 -- setup/hugepages.sh@41 -- # echo 0 00:05:58.549 15:10:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:58.549 15:10:07 -- setup/hugepages.sh@41 -- # echo 0 00:05:58.549 15:10:07 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:58.549 15:10:07 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:58.549 ************************************ 00:05:58.549 END TEST hugepages 00:05:58.549 ************************************ 00:05:58.549 00:05:58.549 real 0m4.896s 00:05:58.549 user 0m2.303s 00:05:58.549 sys 0m2.629s 00:05:58.549 15:10:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.549 15:10:07 -- common/autotest_common.sh@10 -- # set +x 00:05:58.549 15:10:07 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:58.549 15:10:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.549 15:10:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.549 15:10:07 -- common/autotest_common.sh@10 -- # set +x 00:05:58.808 ************************************ 00:05:58.808 START TEST driver 00:05:58.808 
************************************ 00:05:58.808 15:10:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:58.808 * Looking for test storage... 00:05:58.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:58.808 15:10:07 -- setup/driver.sh@68 -- # setup reset 00:05:58.808 15:10:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:58.808 15:10:07 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:59.375 15:10:08 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:59.375 15:10:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.375 15:10:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.375 15:10:08 -- common/autotest_common.sh@10 -- # set +x 00:05:59.375 ************************************ 00:05:59.375 START TEST guess_driver 00:05:59.375 ************************************ 00:05:59.375 15:10:08 -- common/autotest_common.sh@1111 -- # guess_driver 00:05:59.375 15:10:08 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:59.375 15:10:08 -- setup/driver.sh@47 -- # local fail=0 00:05:59.375 15:10:08 -- setup/driver.sh@49 -- # pick_driver 00:05:59.375 15:10:08 -- setup/driver.sh@36 -- # vfio 00:05:59.375 15:10:08 -- setup/driver.sh@21 -- # local iommu_grups 00:05:59.375 15:10:08 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:59.375 15:10:08 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:59.375 15:10:08 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:59.375 15:10:08 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:59.375 15:10:08 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:59.375 15:10:08 -- setup/driver.sh@32 -- # return 1 00:05:59.375 15:10:08 -- setup/driver.sh@38 -- # uio 00:05:59.375 15:10:08 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:59.375 15:10:08 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:59.375 15:10:08 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:59.375 15:10:08 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:59.375 15:10:08 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:59.375 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:59.376 15:10:08 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:59.376 15:10:08 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:59.376 15:10:08 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:59.376 15:10:08 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:59.376 Looking for driver=uio_pci_generic 00:05:59.376 15:10:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:59.376 15:10:08 -- setup/driver.sh@45 -- # setup output config 00:05:59.376 15:10:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:59.376 15:10:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:59.942 15:10:09 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:59.942 15:10:09 -- setup/driver.sh@58 -- # continue 00:05:59.942 15:10:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:00.200 15:10:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.200 15:10:09 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:00.200 15:10:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:06:00.200 15:10:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:00.200 15:10:09 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:00.200 15:10:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:00.200 15:10:09 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:00.200 15:10:09 -- setup/driver.sh@65 -- # setup reset 00:06:00.200 15:10:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:00.200 15:10:09 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:00.765 00:06:00.765 real 0m1.406s 00:06:00.765 user 0m0.518s 00:06:00.765 sys 0m0.881s 00:06:00.765 15:10:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.765 ************************************ 00:06:00.765 END TEST guess_driver 00:06:00.765 ************************************ 00:06:00.765 15:10:09 -- common/autotest_common.sh@10 -- # set +x 00:06:00.765 ************************************ 00:06:00.765 END TEST driver 00:06:00.765 ************************************ 00:06:00.765 00:06:00.765 real 0m2.126s 00:06:00.765 user 0m0.783s 00:06:00.765 sys 0m1.375s 00:06:00.765 15:10:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.765 15:10:09 -- common/autotest_common.sh@10 -- # set +x 00:06:00.765 15:10:10 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:00.765 15:10:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.765 15:10:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.765 15:10:10 -- common/autotest_common.sh@10 -- # set +x 00:06:01.031 ************************************ 00:06:01.031 START TEST devices 00:06:01.031 ************************************ 00:06:01.032 15:10:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:01.032 * Looking for test storage... 
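The guess_driver run above boils down to: prefer a vfio setup when IOMMU groups exist (or unsafe no-IOMMU mode is enabled), otherwise fall back to uio_pci_generic, confirming the module resolves via modprobe --show-depends. A rough sketch under those assumptions; the vfio-pci module name is an assumption here, and only the uio_pci_generic fallback and the "No valid driver found" string mirror the trace:

shopt -s nullglob                                   # so an empty iommu_groups dir counts as zero entries
pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*) unsafe=
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci                               # assumed module name for the vfio path
    elif modprobe --show-depends uio_pci_generic >/dev/null 2>&1; then
        echo uio_pci_generic                        # the fallback this run lands on
    else
        echo 'No valid driver found'
    fi
}
pick_driver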
00:06:01.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:01.032 15:10:10 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:01.032 15:10:10 -- setup/devices.sh@192 -- # setup reset 00:06:01.032 15:10:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:01.032 15:10:10 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:01.973 15:10:10 -- setup/devices.sh@194 -- # get_zoned_devs 00:06:01.973 15:10:10 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:01.973 15:10:10 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:01.973 15:10:10 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:01.973 15:10:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:01.973 15:10:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:01.973 15:10:10 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:01.973 15:10:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:01.973 15:10:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:01.973 15:10:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:01.973 15:10:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:06:01.973 15:10:10 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:06:01.973 15:10:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:06:01.973 15:10:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:01.973 15:10:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:01.973 15:10:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:06:01.973 15:10:10 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:06:01.973 15:10:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:06:01.973 15:10:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:01.973 15:10:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:01.973 15:10:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:01.973 15:10:10 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:01.973 15:10:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:01.973 15:10:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:01.973 15:10:10 -- setup/devices.sh@196 -- # blocks=() 00:06:01.973 15:10:10 -- setup/devices.sh@196 -- # declare -a blocks 00:06:01.973 15:10:10 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:01.973 15:10:10 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:01.973 15:10:10 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:01.973 15:10:10 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:01.973 15:10:10 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:01.973 15:10:10 -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:01.973 15:10:10 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:01.973 15:10:10 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:01.973 15:10:10 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:01.973 15:10:10 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:01.973 15:10:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:06:01.973 No valid GPT data, bailing 00:06:01.973 15:10:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:01.973 
15:10:10 -- scripts/common.sh@391 -- # pt= 00:06:01.973 15:10:10 -- scripts/common.sh@392 -- # return 1 00:06:01.973 15:10:10 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:01.973 15:10:10 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:01.973 15:10:10 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:01.973 15:10:10 -- setup/common.sh@80 -- # echo 4294967296 00:06:01.973 15:10:10 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:01.973 15:10:10 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:01.973 15:10:10 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:01.974 15:10:10 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:01.974 15:10:10 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:06:01.974 15:10:10 -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:01.974 15:10:10 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:01.974 15:10:10 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:01.974 15:10:10 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:06:01.974 15:10:10 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:06:01.974 15:10:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:06:01.974 No valid GPT data, bailing 00:06:01.974 15:10:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:06:01.974 15:10:11 -- scripts/common.sh@391 -- # pt= 00:06:01.974 15:10:11 -- scripts/common.sh@392 -- # return 1 00:06:01.974 15:10:11 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:06:01.974 15:10:11 -- setup/common.sh@76 -- # local dev=nvme0n2 00:06:01.974 15:10:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:06:01.974 15:10:11 -- setup/common.sh@80 -- # echo 4294967296 00:06:01.974 15:10:11 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:01.974 15:10:11 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:01.974 15:10:11 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:01.974 15:10:11 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:01.974 15:10:11 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:06:01.974 15:10:11 -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:01.974 15:10:11 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:01.974 15:10:11 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:01.974 15:10:11 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:06:01.974 15:10:11 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:06:01.974 15:10:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:06:01.974 No valid GPT data, bailing 00:06:01.974 15:10:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:06:01.974 15:10:11 -- scripts/common.sh@391 -- # pt= 00:06:01.974 15:10:11 -- scripts/common.sh@392 -- # return 1 00:06:01.974 15:10:11 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:06:01.974 15:10:11 -- setup/common.sh@76 -- # local dev=nvme0n3 00:06:01.974 15:10:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:06:01.974 15:10:11 -- setup/common.sh@80 -- # echo 4294967296 00:06:01.974 15:10:11 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:01.974 15:10:11 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:01.974 15:10:11 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:01.974 15:10:11 -- setup/devices.sh@200 -- # for block in 
"/sys/block/nvme"!(*c*) 00:06:01.974 15:10:11 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:06:01.974 15:10:11 -- setup/devices.sh@201 -- # ctrl=nvme1 00:06:01.974 15:10:11 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:06:01.974 15:10:11 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:01.974 15:10:11 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:06:01.974 15:10:11 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:06:01.974 15:10:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:06:01.974 No valid GPT data, bailing 00:06:01.974 15:10:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:01.974 15:10:11 -- scripts/common.sh@391 -- # pt= 00:06:01.974 15:10:11 -- scripts/common.sh@392 -- # return 1 00:06:01.974 15:10:11 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:06:01.974 15:10:11 -- setup/common.sh@76 -- # local dev=nvme1n1 00:06:01.974 15:10:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:06:01.974 15:10:11 -- setup/common.sh@80 -- # echo 5368709120 00:06:01.974 15:10:11 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:01.974 15:10:11 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:01.974 15:10:11 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:06:01.974 15:10:11 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:06:01.974 15:10:11 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:01.974 15:10:11 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:01.974 15:10:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.974 15:10:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.974 15:10:11 -- common/autotest_common.sh@10 -- # set +x 00:06:02.232 ************************************ 00:06:02.232 START TEST nvme_mount 00:06:02.232 ************************************ 00:06:02.232 15:10:11 -- common/autotest_common.sh@1111 -- # nvme_mount 00:06:02.232 15:10:11 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:02.232 15:10:11 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:02.232 15:10:11 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:02.232 15:10:11 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:02.232 15:10:11 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:02.232 15:10:11 -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:02.232 15:10:11 -- setup/common.sh@40 -- # local part_no=1 00:06:02.232 15:10:11 -- setup/common.sh@41 -- # local size=1073741824 00:06:02.232 15:10:11 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:02.232 15:10:11 -- setup/common.sh@44 -- # parts=() 00:06:02.232 15:10:11 -- setup/common.sh@44 -- # local parts 00:06:02.232 15:10:11 -- setup/common.sh@46 -- # (( part = 1 )) 00:06:02.232 15:10:11 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:02.232 15:10:11 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:02.232 15:10:11 -- setup/common.sh@46 -- # (( part++ )) 00:06:02.232 15:10:11 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:02.232 15:10:11 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:02.232 15:10:11 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:02.232 15:10:11 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:03.167 Creating new GPT entries in memory. 
00:06:03.167 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:03.167 other utilities. 00:06:03.167 15:10:12 -- setup/common.sh@57 -- # (( part = 1 )) 00:06:03.167 15:10:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:03.167 15:10:12 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:03.167 15:10:12 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:03.167 15:10:12 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:04.101 Creating new GPT entries in memory. 00:06:04.101 The operation has completed successfully. 00:06:04.101 15:10:13 -- setup/common.sh@57 -- # (( part++ )) 00:06:04.101 15:10:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:04.101 15:10:13 -- setup/common.sh@62 -- # wait 56583 00:06:04.101 15:10:13 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:04.101 15:10:13 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:04.101 15:10:13 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:04.101 15:10:13 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:04.101 15:10:13 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:04.361 15:10:13 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:04.361 15:10:13 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:04.361 15:10:13 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:04.361 15:10:13 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:04.361 15:10:13 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:04.361 15:10:13 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:04.361 15:10:13 -- setup/devices.sh@53 -- # local found=0 00:06:04.361 15:10:13 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:04.361 15:10:13 -- setup/devices.sh@56 -- # : 00:06:04.361 15:10:13 -- setup/devices.sh@59 -- # local pci status 00:06:04.361 15:10:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.361 15:10:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:04.361 15:10:13 -- setup/devices.sh@47 -- # setup output config 00:06:04.361 15:10:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:04.361 15:10:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:04.361 15:10:13 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:04.361 15:10:13 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:04.361 15:10:13 -- setup/devices.sh@63 -- # found=1 00:06:04.361 15:10:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.361 15:10:13 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:04.361 15:10:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.620 15:10:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:04.620 15:10:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 
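The nvme_mount test is running the classic zap / repartition / format / mount sequence, with sgdisk serialized under flock. Condensed into plain commands as a sketch (udevadm settle stands in here for SPDK's sync_dev_uevents.sh, and the mount point is a placeholder):

disk=/dev/nvme0n1
mnt=/home/vagrant/nvme_mount_test                   # placeholder mount point
sgdisk "$disk" --zap-all                            # wipe any existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:264191    # one 128 MiB partition, creation serialized as in the trace
udevadm settle                                      # wait for ${disk}p1 to appear
mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt" && mount "${disk}p1" "$mnt"

Later in the same test the partition is wiped and the whole namespace is formatted instead (mkfs.ext4 -qF /dev/nvme0n1 1024M), reusing the same mount path.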
00:06:04.620 15:10:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:04.620 15:10:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.620 15:10:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:04.620 15:10:13 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:04.620 15:10:13 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:04.620 15:10:13 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:04.620 15:10:13 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:04.620 15:10:13 -- setup/devices.sh@110 -- # cleanup_nvme 00:06:04.620 15:10:13 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:04.620 15:10:13 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:04.878 15:10:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:04.878 15:10:13 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:04.878 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:04.878 15:10:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:04.878 15:10:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:05.136 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:05.136 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:05.136 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:05.136 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:05.136 15:10:14 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:05.136 15:10:14 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:05.136 15:10:14 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.136 15:10:14 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:05.136 15:10:14 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:05.136 15:10:14 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.136 15:10:14 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:05.136 15:10:14 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:05.136 15:10:14 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:05.136 15:10:14 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.136 15:10:14 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:05.136 15:10:14 -- setup/devices.sh@53 -- # local found=0 00:06:05.136 15:10:14 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:05.136 15:10:14 -- setup/devices.sh@56 -- # : 00:06:05.136 15:10:14 -- setup/devices.sh@59 -- # local pci status 00:06:05.136 15:10:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.136 15:10:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:05.136 15:10:14 -- setup/devices.sh@47 -- # setup output config 00:06:05.136 15:10:14 -- setup/common.sh@9 -- # [[ output == 
output ]] 00:06:05.136 15:10:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:05.136 15:10:14 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.136 15:10:14 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:05.136 15:10:14 -- setup/devices.sh@63 -- # found=1 00:06:05.136 15:10:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.136 15:10:14 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.136 15:10:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.393 15:10:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.393 15:10:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.393 15:10:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.393 15:10:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.393 15:10:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:05.393 15:10:14 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:05.393 15:10:14 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.393 15:10:14 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:05.393 15:10:14 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:05.651 15:10:14 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.651 15:10:14 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:06:05.651 15:10:14 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:05.651 15:10:14 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:05.651 15:10:14 -- setup/devices.sh@50 -- # local mount_point= 00:06:05.651 15:10:14 -- setup/devices.sh@51 -- # local test_file= 00:06:05.651 15:10:14 -- setup/devices.sh@53 -- # local found=0 00:06:05.651 15:10:14 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:05.651 15:10:14 -- setup/devices.sh@59 -- # local pci status 00:06:05.651 15:10:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.651 15:10:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:05.652 15:10:14 -- setup/devices.sh@47 -- # setup output config 00:06:05.652 15:10:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:05.652 15:10:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:05.910 15:10:14 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.910 15:10:14 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:05.910 15:10:14 -- setup/devices.sh@63 -- # found=1 00:06:05.910 15:10:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.910 15:10:14 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.910 15:10:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.910 15:10:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.910 15:10:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.910 15:10:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.910 15:10:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.168 
15:10:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:06.168 15:10:15 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:06.168 15:10:15 -- setup/devices.sh@68 -- # return 0 00:06:06.168 15:10:15 -- setup/devices.sh@128 -- # cleanup_nvme 00:06:06.168 15:10:15 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:06.168 15:10:15 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:06.168 15:10:15 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:06.168 15:10:15 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:06.168 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:06.168 00:06:06.168 real 0m3.958s 00:06:06.168 user 0m0.697s 00:06:06.168 sys 0m0.995s 00:06:06.168 15:10:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.168 15:10:15 -- common/autotest_common.sh@10 -- # set +x 00:06:06.168 ************************************ 00:06:06.168 END TEST nvme_mount 00:06:06.168 ************************************ 00:06:06.168 15:10:15 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:06.168 15:10:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.168 15:10:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.168 15:10:15 -- common/autotest_common.sh@10 -- # set +x 00:06:06.168 ************************************ 00:06:06.168 START TEST dm_mount 00:06:06.168 ************************************ 00:06:06.168 15:10:15 -- common/autotest_common.sh@1111 -- # dm_mount 00:06:06.168 15:10:15 -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:06.168 15:10:15 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:06.168 15:10:15 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:06.168 15:10:15 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:06.168 15:10:15 -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:06.168 15:10:15 -- setup/common.sh@40 -- # local part_no=2 00:06:06.168 15:10:15 -- setup/common.sh@41 -- # local size=1073741824 00:06:06.168 15:10:15 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:06.168 15:10:15 -- setup/common.sh@44 -- # parts=() 00:06:06.168 15:10:15 -- setup/common.sh@44 -- # local parts 00:06:06.168 15:10:15 -- setup/common.sh@46 -- # (( part = 1 )) 00:06:06.168 15:10:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:06.168 15:10:15 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:06.168 15:10:15 -- setup/common.sh@46 -- # (( part++ )) 00:06:06.168 15:10:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:06.168 15:10:15 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:06.168 15:10:15 -- setup/common.sh@46 -- # (( part++ )) 00:06:06.168 15:10:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:06.168 15:10:15 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:06.168 15:10:15 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:06.168 15:10:15 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:07.104 Creating new GPT entries in memory. 00:06:07.104 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:07.104 other utilities. 00:06:07.104 15:10:16 -- setup/common.sh@57 -- # (( part = 1 )) 00:06:07.104 15:10:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:07.104 15:10:16 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:06:07.362 15:10:16 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:07.362 15:10:16 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:08.323 Creating new GPT entries in memory. 00:06:08.323 The operation has completed successfully. 00:06:08.323 15:10:17 -- setup/common.sh@57 -- # (( part++ )) 00:06:08.323 15:10:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:08.323 15:10:17 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:08.323 15:10:17 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:08.323 15:10:17 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:09.260 The operation has completed successfully. 00:06:09.260 15:10:18 -- setup/common.sh@57 -- # (( part++ )) 00:06:09.260 15:10:18 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:09.260 15:10:18 -- setup/common.sh@62 -- # wait 57020 00:06:09.260 15:10:18 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:09.260 15:10:18 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:09.260 15:10:18 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:09.260 15:10:18 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:09.260 15:10:18 -- setup/devices.sh@160 -- # for t in {1..5} 00:06:09.260 15:10:18 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:09.260 15:10:18 -- setup/devices.sh@161 -- # break 00:06:09.260 15:10:18 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:09.260 15:10:18 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:09.260 15:10:18 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:09.260 15:10:18 -- setup/devices.sh@166 -- # dm=dm-0 00:06:09.260 15:10:18 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:09.260 15:10:18 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:09.260 15:10:18 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:09.261 15:10:18 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:09.261 15:10:18 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:09.261 15:10:18 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:09.261 15:10:18 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:09.261 15:10:18 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:09.261 15:10:18 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:09.261 15:10:18 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:09.261 15:10:18 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:09.261 15:10:18 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:09.261 15:10:18 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:09.261 15:10:18 -- setup/devices.sh@53 -- # local found=0 00:06:09.261 15:10:18 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 
]] 00:06:09.261 15:10:18 -- setup/devices.sh@56 -- # : 00:06:09.261 15:10:18 -- setup/devices.sh@59 -- # local pci status 00:06:09.261 15:10:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:09.261 15:10:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.261 15:10:18 -- setup/devices.sh@47 -- # setup output config 00:06:09.261 15:10:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:09.261 15:10:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:09.520 15:10:18 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:09.520 15:10:18 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:09.520 15:10:18 -- setup/devices.sh@63 -- # found=1 00:06:09.520 15:10:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.520 15:10:18 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:09.520 15:10:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.778 15:10:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:09.778 15:10:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.778 15:10:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:09.778 15:10:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.778 15:10:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:09.778 15:10:18 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:09.778 15:10:18 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:09.778 15:10:18 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:09.778 15:10:18 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:09.778 15:10:18 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:09.778 15:10:18 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:09.778 15:10:18 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:09.778 15:10:18 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:09.778 15:10:18 -- setup/devices.sh@50 -- # local mount_point= 00:06:09.778 15:10:18 -- setup/devices.sh@51 -- # local test_file= 00:06:09.778 15:10:18 -- setup/devices.sh@53 -- # local found=0 00:06:09.778 15:10:18 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:09.778 15:10:18 -- setup/devices.sh@59 -- # local pci status 00:06:09.778 15:10:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.778 15:10:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:09.778 15:10:18 -- setup/devices.sh@47 -- # setup output config 00:06:09.778 15:10:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:09.778 15:10:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:10.036 15:10:19 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:10.036 15:10:19 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:10.036 15:10:19 -- setup/devices.sh@63 -- # 
found=1 00:06:10.036 15:10:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.036 15:10:19 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:10.036 15:10:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.295 15:10:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:10.295 15:10:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.295 15:10:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:10.295 15:10:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.295 15:10:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:10.295 15:10:19 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:10.295 15:10:19 -- setup/devices.sh@68 -- # return 0 00:06:10.295 15:10:19 -- setup/devices.sh@187 -- # cleanup_dm 00:06:10.295 15:10:19 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:10.295 15:10:19 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:10.295 15:10:19 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:10.295 15:10:19 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:10.295 15:10:19 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:10.295 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:10.295 15:10:19 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:10.295 15:10:19 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:10.295 ************************************ 00:06:10.295 END TEST dm_mount 00:06:10.295 ************************************ 00:06:10.295 00:06:10.295 real 0m4.149s 00:06:10.295 user 0m0.425s 00:06:10.295 sys 0m0.676s 00:06:10.295 15:10:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.295 15:10:19 -- common/autotest_common.sh@10 -- # set +x 00:06:10.295 15:10:19 -- setup/devices.sh@1 -- # cleanup 00:06:10.295 15:10:19 -- setup/devices.sh@11 -- # cleanup_nvme 00:06:10.295 15:10:19 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:10.295 15:10:19 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:10.295 15:10:19 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:10.295 15:10:19 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:10.295 15:10:19 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:10.554 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:10.554 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:10.554 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:10.554 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:10.554 15:10:19 -- setup/devices.sh@12 -- # cleanup_dm 00:06:10.554 15:10:19 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:10.554 15:10:19 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:10.554 15:10:19 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:10.554 15:10:19 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:10.554 15:10:19 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:10.554 15:10:19 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:10.827 ************************************ 00:06:10.828 END TEST devices 00:06:10.828 ************************************ 00:06:10.828 00:06:10.828 real 0m9.729s 00:06:10.828 user 0m1.795s 00:06:10.828 sys 0m2.304s 
00:06:10.828 15:10:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.828 15:10:19 -- common/autotest_common.sh@10 -- # set +x 00:06:10.828 00:06:10.828 real 0m21.952s 00:06:10.828 user 0m7.130s 00:06:10.828 sys 0m9.129s 00:06:10.828 ************************************ 00:06:10.828 END TEST setup.sh 00:06:10.828 ************************************ 00:06:10.828 15:10:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.828 15:10:19 -- common/autotest_common.sh@10 -- # set +x 00:06:10.828 15:10:19 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:11.396 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:11.396 Hugepages 00:06:11.396 node hugesize free / total 00:06:11.396 node0 1048576kB 0 / 0 00:06:11.396 node0 2048kB 2048 / 2048 00:06:11.396 00:06:11.396 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:11.396 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:11.654 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:11.654 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:06:11.654 15:10:20 -- spdk/autotest.sh@130 -- # uname -s 00:06:11.654 15:10:20 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:11.654 15:10:20 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:11.654 15:10:20 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:12.221 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:12.221 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:12.480 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:12.480 15:10:21 -- common/autotest_common.sh@1518 -- # sleep 1 00:06:13.415 15:10:22 -- common/autotest_common.sh@1519 -- # bdfs=() 00:06:13.415 15:10:22 -- common/autotest_common.sh@1519 -- # local bdfs 00:06:13.415 15:10:22 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:13.415 15:10:22 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:13.415 15:10:22 -- common/autotest_common.sh@1499 -- # bdfs=() 00:06:13.415 15:10:22 -- common/autotest_common.sh@1499 -- # local bdfs 00:06:13.415 15:10:22 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:13.415 15:10:22 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:13.415 15:10:22 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:06:13.415 15:10:22 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:06:13.415 15:10:22 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:13.415 15:10:22 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:13.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:13.980 Waiting for block devices as requested 00:06:13.980 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:13.980 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:13.980 15:10:23 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:13.980 15:10:23 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:13.980 15:10:23 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:06:13.980 15:10:23 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 00:06:13.980 15:10:23 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:13.981 15:10:23 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:13.981 15:10:23 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:13.981 15:10:23 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:06:13.981 15:10:23 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:13.981 15:10:23 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:13.981 15:10:23 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:13.981 15:10:23 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:13.981 15:10:23 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:13.981 15:10:23 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:13.981 15:10:23 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:13.981 15:10:23 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:13.981 15:10:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:13.981 15:10:23 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:13.981 15:10:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:13.981 15:10:23 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:13.981 15:10:23 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:13.981 15:10:23 -- common/autotest_common.sh@1543 -- # continue 00:06:13.981 15:10:23 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:13.981 15:10:23 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:13.981 15:10:23 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:13.981 15:10:23 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:06:13.981 15:10:23 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:13.981 15:10:23 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:13.981 15:10:23 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:13.981 15:10:23 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:06:13.981 15:10:23 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:13.981 15:10:23 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:13.981 15:10:23 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:13.981 15:10:23 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:13.981 15:10:23 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:13.981 15:10:23 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:13.981 15:10:23 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:13.981 15:10:23 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:13.981 15:10:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:13.981 15:10:23 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:13.981 15:10:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:13.981 15:10:23 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:13.981 15:10:23 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:13.981 15:10:23 -- common/autotest_common.sh@1543 -- # continue 00:06:13.981 15:10:23 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:13.981 15:10:23 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:06:13.981 15:10:23 -- common/autotest_common.sh@10 -- # set +x 00:06:14.245 15:10:23 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:14.245 15:10:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:14.245 15:10:23 -- common/autotest_common.sh@10 -- # set +x 00:06:14.245 15:10:23 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:14.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:14.812 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:14.812 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:15.070 15:10:24 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:15.070 15:10:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:15.070 15:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:15.070 15:10:24 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:15.070 15:10:24 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:06:15.070 15:10:24 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:06:15.070 15:10:24 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:15.070 15:10:24 -- common/autotest_common.sh@1563 -- # local bdfs 00:06:15.070 15:10:24 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:06:15.070 15:10:24 -- common/autotest_common.sh@1499 -- # bdfs=() 00:06:15.070 15:10:24 -- common/autotest_common.sh@1499 -- # local bdfs 00:06:15.070 15:10:24 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:15.070 15:10:24 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:15.070 15:10:24 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:06:15.070 15:10:24 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:06:15.071 15:10:24 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:15.071 15:10:24 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:06:15.071 15:10:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:15.071 15:10:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:15.071 15:10:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:15.071 15:10:24 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:06:15.071 15:10:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:15.071 15:10:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:15.071 15:10:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:15.071 15:10:24 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:06:15.071 15:10:24 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:06:15.071 15:10:24 -- common/autotest_common.sh@1579 -- # return 0 00:06:15.071 15:10:24 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:15.071 15:10:24 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:15.071 15:10:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:15.071 15:10:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:15.071 15:10:24 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:15.071 15:10:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:15.071 15:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:15.071 15:10:24 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:15.071 15:10:24 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.071 15:10:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.071 15:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:15.071 ************************************ 00:06:15.071 START TEST env 00:06:15.071 ************************************ 00:06:15.071 15:10:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:15.330 * Looking for test storage... 00:06:15.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:15.330 15:10:24 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:15.330 15:10:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.330 15:10:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.330 15:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:15.330 ************************************ 00:06:15.330 START TEST env_memory 00:06:15.330 ************************************ 00:06:15.330 15:10:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:15.330 00:06:15.330 00:06:15.330 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.330 http://cunit.sourceforge.net/ 00:06:15.330 00:06:15.330 00:06:15.330 Suite: memory 00:06:15.330 Test: alloc and free memory map ...[2024-04-24 15:10:24.489275] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:15.330 passed 00:06:15.330 Test: mem map translation ...[2024-04-24 15:10:24.521035] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:15.330 [2024-04-24 15:10:24.521072] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:15.330 [2024-04-24 15:10:24.521127] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:15.330 [2024-04-24 15:10:24.521137] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:15.330 passed 00:06:15.588 Test: mem map registration ...[2024-04-24 15:10:24.584746] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:15.588 [2024-04-24 15:10:24.584790] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:15.588 passed 00:06:15.588 Test: mem map adjacent registrations ...passed 00:06:15.588 00:06:15.588 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.588 suites 1 1 n/a 0 0 00:06:15.588 tests 4 4 4 0 0 00:06:15.588 asserts 152 152 152 0 n/a 00:06:15.588 00:06:15.588 Elapsed time = 0.215 seconds 00:06:15.588 00:06:15.588 real 0m0.228s 00:06:15.588 user 0m0.215s 00:06:15.588 sys 0m0.012s 00:06:15.588 15:10:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.588 15:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:15.588 ************************************ 00:06:15.588 END TEST env_memory 00:06:15.588 ************************************ 00:06:15.588 15:10:24 -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:15.588 15:10:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.588 15:10:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.588 15:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:15.588 ************************************ 00:06:15.588 START TEST env_vtophys 00:06:15.588 ************************************ 00:06:15.588 15:10:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:15.588 EAL: lib.eal log level changed from notice to debug 00:06:15.588 EAL: Detected lcore 0 as core 0 on socket 0 00:06:15.588 EAL: Detected lcore 1 as core 0 on socket 0 00:06:15.588 EAL: Detected lcore 2 as core 0 on socket 0 00:06:15.588 EAL: Detected lcore 3 as core 0 on socket 0 00:06:15.588 EAL: Detected lcore 4 as core 0 on socket 0 00:06:15.588 EAL: Detected lcore 5 as core 0 on socket 0 00:06:15.588 EAL: Detected lcore 6 as core 0 on socket 0 00:06:15.588 EAL: Detected lcore 7 as core 0 on socket 0 00:06:15.588 EAL: Detected lcore 8 as core 0 on socket 0 00:06:15.588 EAL: Detected lcore 9 as core 0 on socket 0 00:06:15.588 EAL: Maximum logical cores by configuration: 128 00:06:15.588 EAL: Detected CPU lcores: 10 00:06:15.589 EAL: Detected NUMA nodes: 1 00:06:15.589 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:15.589 EAL: Detected shared linkage of DPDK 00:06:15.589 EAL: No shared files mode enabled, IPC will be disabled 00:06:15.847 EAL: Selected IOVA mode 'PA' 00:06:15.847 EAL: Probing VFIO support... 00:06:15.847 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:15.847 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:15.847 EAL: Ask a virtual area of 0x2e000 bytes 00:06:15.847 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:15.847 EAL: Setting up physically contiguous memory... 
00:06:15.847 EAL: Setting maximum number of open files to 524288 00:06:15.847 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:15.847 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:15.847 EAL: Ask a virtual area of 0x61000 bytes 00:06:15.847 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:15.847 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:15.847 EAL: Ask a virtual area of 0x400000000 bytes 00:06:15.847 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:15.847 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:15.847 EAL: Ask a virtual area of 0x61000 bytes 00:06:15.847 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:15.847 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:15.847 EAL: Ask a virtual area of 0x400000000 bytes 00:06:15.847 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:15.847 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:15.847 EAL: Ask a virtual area of 0x61000 bytes 00:06:15.847 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:15.847 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:15.847 EAL: Ask a virtual area of 0x400000000 bytes 00:06:15.847 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:15.847 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:15.847 EAL: Ask a virtual area of 0x61000 bytes 00:06:15.847 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:15.847 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:15.847 EAL: Ask a virtual area of 0x400000000 bytes 00:06:15.847 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:15.847 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:15.847 EAL: Hugepages will be freed exactly as allocated. 00:06:15.847 EAL: No shared files mode enabled, IPC is disabled 00:06:15.847 EAL: No shared files mode enabled, IPC is disabled 00:06:15.847 EAL: TSC frequency is ~2200000 KHz 00:06:15.847 EAL: Main lcore 0 is ready (tid=7f88fc018a00;cpuset=[0]) 00:06:15.847 EAL: Trying to obtain current memory policy. 00:06:15.847 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.847 EAL: Restoring previous memory policy: 0 00:06:15.847 EAL: request: mp_malloc_sync 00:06:15.847 EAL: No shared files mode enabled, IPC is disabled 00:06:15.847 EAL: Heap on socket 0 was expanded by 2MB 00:06:15.847 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:15.847 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:15.847 EAL: Mem event callback 'spdk:(nil)' registered 00:06:15.847 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:15.847 00:06:15.847 00:06:15.847 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.847 http://cunit.sourceforge.net/ 00:06:15.847 00:06:15.847 00:06:15.847 Suite: components_suite 00:06:15.847 Test: vtophys_malloc_test ...passed 00:06:15.847 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:06:15.847 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.847 EAL: Restoring previous memory policy: 4 00:06:15.847 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.847 EAL: request: mp_malloc_sync 00:06:15.847 EAL: No shared files mode enabled, IPC is disabled 00:06:15.847 EAL: Heap on socket 0 was expanded by 4MB 00:06:15.847 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.847 EAL: request: mp_malloc_sync 00:06:15.847 EAL: No shared files mode enabled, IPC is disabled 00:06:15.847 EAL: Heap on socket 0 was shrunk by 4MB 00:06:15.847 EAL: Trying to obtain current memory policy. 00:06:15.847 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.847 EAL: Restoring previous memory policy: 4 00:06:15.847 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.847 EAL: request: mp_malloc_sync 00:06:15.847 EAL: No shared files mode enabled, IPC is disabled 00:06:15.847 EAL: Heap on socket 0 was expanded by 6MB 00:06:15.847 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.847 EAL: request: mp_malloc_sync 00:06:15.847 EAL: No shared files mode enabled, IPC is disabled 00:06:15.847 EAL: Heap on socket 0 was shrunk by 6MB 00:06:15.847 EAL: Trying to obtain current memory policy. 00:06:15.847 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.847 EAL: Restoring previous memory policy: 4 00:06:15.847 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.847 EAL: request: mp_malloc_sync 00:06:15.847 EAL: No shared files mode enabled, IPC is disabled 00:06:15.847 EAL: Heap on socket 0 was expanded by 10MB 00:06:15.847 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.847 EAL: request: mp_malloc_sync 00:06:15.847 EAL: No shared files mode enabled, IPC is disabled 00:06:15.847 EAL: Heap on socket 0 was shrunk by 10MB 00:06:15.847 EAL: Trying to obtain current memory policy. 00:06:15.847 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.848 EAL: Restoring previous memory policy: 4 00:06:15.848 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.848 EAL: request: mp_malloc_sync 00:06:15.848 EAL: No shared files mode enabled, IPC is disabled 00:06:15.848 EAL: Heap on socket 0 was expanded by 18MB 00:06:15.848 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.848 EAL: request: mp_malloc_sync 00:06:15.848 EAL: No shared files mode enabled, IPC is disabled 00:06:15.848 EAL: Heap on socket 0 was shrunk by 18MB 00:06:15.848 EAL: Trying to obtain current memory policy. 00:06:15.848 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.848 EAL: Restoring previous memory policy: 4 00:06:15.848 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.848 EAL: request: mp_malloc_sync 00:06:15.848 EAL: No shared files mode enabled, IPC is disabled 00:06:15.848 EAL: Heap on socket 0 was expanded by 34MB 00:06:15.848 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.848 EAL: request: mp_malloc_sync 00:06:15.848 EAL: No shared files mode enabled, IPC is disabled 00:06:15.848 EAL: Heap on socket 0 was shrunk by 34MB 00:06:15.848 EAL: Trying to obtain current memory policy. 
00:06:15.848 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.848 EAL: Restoring previous memory policy: 4 00:06:15.848 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.848 EAL: request: mp_malloc_sync 00:06:15.848 EAL: No shared files mode enabled, IPC is disabled 00:06:15.848 EAL: Heap on socket 0 was expanded by 66MB 00:06:15.848 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.848 EAL: request: mp_malloc_sync 00:06:15.848 EAL: No shared files mode enabled, IPC is disabled 00:06:15.848 EAL: Heap on socket 0 was shrunk by 66MB 00:06:15.848 EAL: Trying to obtain current memory policy. 00:06:15.848 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:15.848 EAL: Restoring previous memory policy: 4 00:06:15.848 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.848 EAL: request: mp_malloc_sync 00:06:15.848 EAL: No shared files mode enabled, IPC is disabled 00:06:15.848 EAL: Heap on socket 0 was expanded by 130MB 00:06:15.848 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.106 EAL: request: mp_malloc_sync 00:06:16.106 EAL: No shared files mode enabled, IPC is disabled 00:06:16.106 EAL: Heap on socket 0 was shrunk by 130MB 00:06:16.106 EAL: Trying to obtain current memory policy. 00:06:16.106 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.106 EAL: Restoring previous memory policy: 4 00:06:16.106 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.106 EAL: request: mp_malloc_sync 00:06:16.106 EAL: No shared files mode enabled, IPC is disabled 00:06:16.106 EAL: Heap on socket 0 was expanded by 258MB 00:06:16.106 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.106 EAL: request: mp_malloc_sync 00:06:16.106 EAL: No shared files mode enabled, IPC is disabled 00:06:16.106 EAL: Heap on socket 0 was shrunk by 258MB 00:06:16.106 EAL: Trying to obtain current memory policy. 00:06:16.106 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.364 EAL: Restoring previous memory policy: 4 00:06:16.364 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.364 EAL: request: mp_malloc_sync 00:06:16.364 EAL: No shared files mode enabled, IPC is disabled 00:06:16.364 EAL: Heap on socket 0 was expanded by 514MB 00:06:16.364 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.364 EAL: request: mp_malloc_sync 00:06:16.364 EAL: No shared files mode enabled, IPC is disabled 00:06:16.364 EAL: Heap on socket 0 was shrunk by 514MB 00:06:16.364 EAL: Trying to obtain current memory policy. 
00:06:16.364 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.930 EAL: Restoring previous memory policy: 4 00:06:16.930 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.930 EAL: request: mp_malloc_sync 00:06:16.930 EAL: No shared files mode enabled, IPC is disabled 00:06:16.930 EAL: Heap on socket 0 was expanded by 1026MB 00:06:16.930 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.188 passed 00:06:17.188 00:06:17.188 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.188 suites 1 1 n/a 0 0 00:06:17.188 tests 2 2 2 0 0 00:06:17.188 asserts 5358 5358 5358 0 n/a 00:06:17.188 00:06:17.188 Elapsed time = 1.278 seconds 00:06:17.188 EAL: request: mp_malloc_sync 00:06:17.188 EAL: No shared files mode enabled, IPC is disabled 00:06:17.188 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:17.188 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.188 EAL: request: mp_malloc_sync 00:06:17.188 EAL: No shared files mode enabled, IPC is disabled 00:06:17.188 EAL: Heap on socket 0 was shrunk by 2MB 00:06:17.188 EAL: No shared files mode enabled, IPC is disabled 00:06:17.188 EAL: No shared files mode enabled, IPC is disabled 00:06:17.188 EAL: No shared files mode enabled, IPC is disabled 00:06:17.188 00:06:17.188 real 0m1.480s 00:06:17.188 user 0m0.811s 00:06:17.188 sys 0m0.535s 00:06:17.188 15:10:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.188 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:06:17.188 ************************************ 00:06:17.188 END TEST env_vtophys 00:06:17.188 ************************************ 00:06:17.188 15:10:26 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:17.188 15:10:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.188 15:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.188 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:06:17.188 ************************************ 00:06:17.188 START TEST env_pci 00:06:17.188 ************************************ 00:06:17.188 15:10:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:17.188 00:06:17.188 00:06:17.188 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.188 http://cunit.sourceforge.net/ 00:06:17.188 00:06:17.188 00:06:17.188 Suite: pci 00:06:17.188 Test: pci_hook ...[2024-04-24 15:10:26.402246] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58230 has claimed it 00:06:17.188 passed 00:06:17.188 00:06:17.188 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.188 suites 1 1 n/a 0 0 00:06:17.188 tests 1 1 1 0 0 00:06:17.188 asserts 25 25 25 0 n/a 00:06:17.188 00:06:17.188 Elapsed time = 0.002 seconds 00:06:17.188 EAL: Cannot find device (10000:00:01.0) 00:06:17.188 EAL: Failed to attach device on primary process 00:06:17.188 00:06:17.188 real 0m0.017s 00:06:17.188 user 0m0.007s 00:06:17.188 sys 0m0.010s 00:06:17.188 15:10:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.188 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:06:17.188 ************************************ 00:06:17.188 END TEST env_pci 00:06:17.188 ************************************ 00:06:17.446 15:10:26 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:17.446 15:10:26 -- env/env.sh@15 -- # uname 00:06:17.446 15:10:26 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:17.446 15:10:26 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:06:17.446 15:10:26 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:17.446 15:10:26 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:06:17.446 15:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.446 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:06:17.446 ************************************ 00:06:17.446 START TEST env_dpdk_post_init 00:06:17.446 ************************************ 00:06:17.446 15:10:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:17.446 EAL: Detected CPU lcores: 10 00:06:17.446 EAL: Detected NUMA nodes: 1 00:06:17.446 EAL: Detected shared linkage of DPDK 00:06:17.446 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:17.446 EAL: Selected IOVA mode 'PA' 00:06:17.446 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:17.704 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:17.704 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:17.704 Starting DPDK initialization... 00:06:17.704 Starting SPDK post initialization... 00:06:17.704 SPDK NVMe probe 00:06:17.704 Attaching to 0000:00:10.0 00:06:17.704 Attaching to 0000:00:11.0 00:06:17.704 Attached to 0000:00:10.0 00:06:17.704 Attached to 0000:00:11.0 00:06:17.704 Cleaning up... 00:06:17.704 00:06:17.704 real 0m0.191s 00:06:17.704 user 0m0.057s 00:06:17.704 sys 0m0.033s 00:06:17.704 15:10:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.704 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:06:17.704 ************************************ 00:06:17.704 END TEST env_dpdk_post_init 00:06:17.704 ************************************ 00:06:17.704 15:10:26 -- env/env.sh@26 -- # uname 00:06:17.704 15:10:26 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:17.704 15:10:26 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:17.704 15:10:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.704 15:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.704 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:06:17.704 ************************************ 00:06:17.704 START TEST env_mem_callbacks 00:06:17.704 ************************************ 00:06:17.704 15:10:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:17.704 EAL: Detected CPU lcores: 10 00:06:17.704 EAL: Detected NUMA nodes: 1 00:06:17.704 EAL: Detected shared linkage of DPDK 00:06:17.704 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:17.704 EAL: Selected IOVA mode 'PA' 00:06:17.963 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:17.963 00:06:17.963 00:06:17.963 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.963 http://cunit.sourceforge.net/ 00:06:17.963 00:06:17.963 00:06:17.963 Suite: memory 00:06:17.963 Test: test ... 
00:06:17.963 register 0x200000200000 2097152 00:06:17.963 malloc 3145728 00:06:17.963 register 0x200000400000 4194304 00:06:17.963 buf 0x200000500000 len 3145728 PASSED 00:06:17.963 malloc 64 00:06:17.963 buf 0x2000004fff40 len 64 PASSED 00:06:17.963 malloc 4194304 00:06:17.963 register 0x200000800000 6291456 00:06:17.963 buf 0x200000a00000 len 4194304 PASSED 00:06:17.963 free 0x200000500000 3145728 00:06:17.963 free 0x2000004fff40 64 00:06:17.963 unregister 0x200000400000 4194304 PASSED 00:06:17.963 free 0x200000a00000 4194304 00:06:17.963 unregister 0x200000800000 6291456 PASSED 00:06:17.963 malloc 8388608 00:06:17.963 register 0x200000400000 10485760 00:06:17.963 buf 0x200000600000 len 8388608 PASSED 00:06:17.963 free 0x200000600000 8388608 00:06:17.963 unregister 0x200000400000 10485760 PASSED 00:06:17.963 passed 00:06:17.963 00:06:17.963 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.963 suites 1 1 n/a 0 0 00:06:17.963 tests 1 1 1 0 0 00:06:17.963 asserts 15 15 15 0 n/a 00:06:17.963 00:06:17.963 Elapsed time = 0.006 seconds 00:06:17.963 00:06:17.963 real 0m0.143s 00:06:17.963 user 0m0.022s 00:06:17.963 sys 0m0.021s 00:06:17.963 15:10:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.963 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:06:17.963 ************************************ 00:06:17.963 END TEST env_mem_callbacks 00:06:17.963 ************************************ 00:06:17.963 00:06:17.963 real 0m2.748s 00:06:17.963 user 0m1.364s 00:06:17.963 sys 0m0.968s 00:06:17.963 15:10:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.963 15:10:27 -- common/autotest_common.sh@10 -- # set +x 00:06:17.963 ************************************ 00:06:17.963 END TEST env 00:06:17.963 ************************************ 00:06:17.963 15:10:27 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:17.963 15:10:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.963 15:10:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.963 15:10:27 -- common/autotest_common.sh@10 -- # set +x 00:06:17.963 ************************************ 00:06:17.963 START TEST rpc 00:06:17.963 ************************************ 00:06:17.963 15:10:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:17.963 * Looking for test storage... 00:06:18.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:18.222 15:10:27 -- rpc/rpc.sh@65 -- # spdk_pid=58353 00:06:18.222 15:10:27 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.222 15:10:27 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:18.222 15:10:27 -- rpc/rpc.sh@67 -- # waitforlisten 58353 00:06:18.222 15:10:27 -- common/autotest_common.sh@817 -- # '[' -z 58353 ']' 00:06:18.222 15:10:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.222 15:10:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:18.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.222 15:10:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:18.222 15:10:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:18.222 15:10:27 -- common/autotest_common.sh@10 -- # set +x 00:06:18.222 [2024-04-24 15:10:27.275265] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:06:18.222 [2024-04-24 15:10:27.275364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58353 ] 00:06:18.222 [2024-04-24 15:10:27.413423] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.481 [2024-04-24 15:10:27.543177] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:18.481 [2024-04-24 15:10:27.543240] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58353' to capture a snapshot of events at runtime. 00:06:18.481 [2024-04-24 15:10:27.543256] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:18.481 [2024-04-24 15:10:27.543266] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:18.481 [2024-04-24 15:10:27.543275] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58353 for offline analysis/debug. 00:06:18.481 [2024-04-24 15:10:27.543324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.047 15:10:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:19.047 15:10:28 -- common/autotest_common.sh@850 -- # return 0 00:06:19.047 15:10:28 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:19.047 15:10:28 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:19.047 15:10:28 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:19.047 15:10:28 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:19.047 15:10:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:19.047 15:10:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.047 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.305 ************************************ 00:06:19.305 START TEST rpc_integrity 00:06:19.305 ************************************ 00:06:19.306 15:10:28 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:06:19.306 15:10:28 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:19.306 15:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.306 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.306 15:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.306 15:10:28 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:19.306 15:10:28 -- rpc/rpc.sh@13 -- # jq length 00:06:19.306 15:10:28 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:19.306 15:10:28 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:19.306 15:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.306 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.306 15:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.306 15:10:28 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:19.306 15:10:28 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 
00:06:19.306 15:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.306 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.306 15:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.306 15:10:28 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:19.306 { 00:06:19.306 "name": "Malloc0", 00:06:19.306 "aliases": [ 00:06:19.306 "be0ebd4e-9590-4301-95be-e613e882e1fc" 00:06:19.306 ], 00:06:19.306 "product_name": "Malloc disk", 00:06:19.306 "block_size": 512, 00:06:19.306 "num_blocks": 16384, 00:06:19.306 "uuid": "be0ebd4e-9590-4301-95be-e613e882e1fc", 00:06:19.306 "assigned_rate_limits": { 00:06:19.306 "rw_ios_per_sec": 0, 00:06:19.306 "rw_mbytes_per_sec": 0, 00:06:19.306 "r_mbytes_per_sec": 0, 00:06:19.306 "w_mbytes_per_sec": 0 00:06:19.306 }, 00:06:19.306 "claimed": false, 00:06:19.306 "zoned": false, 00:06:19.306 "supported_io_types": { 00:06:19.306 "read": true, 00:06:19.306 "write": true, 00:06:19.306 "unmap": true, 00:06:19.306 "write_zeroes": true, 00:06:19.306 "flush": true, 00:06:19.306 "reset": true, 00:06:19.306 "compare": false, 00:06:19.306 "compare_and_write": false, 00:06:19.306 "abort": true, 00:06:19.306 "nvme_admin": false, 00:06:19.306 "nvme_io": false 00:06:19.306 }, 00:06:19.306 "memory_domains": [ 00:06:19.306 { 00:06:19.306 "dma_device_id": "system", 00:06:19.306 "dma_device_type": 1 00:06:19.306 }, 00:06:19.306 { 00:06:19.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.306 "dma_device_type": 2 00:06:19.306 } 00:06:19.306 ], 00:06:19.306 "driver_specific": {} 00:06:19.306 } 00:06:19.306 ]' 00:06:19.306 15:10:28 -- rpc/rpc.sh@17 -- # jq length 00:06:19.306 15:10:28 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:19.306 15:10:28 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:19.306 15:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.306 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.306 [2024-04-24 15:10:28.508754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:19.306 [2024-04-24 15:10:28.508871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:19.306 [2024-04-24 15:10:28.508890] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c43b10 00:06:19.306 [2024-04-24 15:10:28.508899] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:19.306 [2024-04-24 15:10:28.510734] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:19.306 [2024-04-24 15:10:28.510797] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:19.306 Passthru0 00:06:19.306 15:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.306 15:10:28 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:19.306 15:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.306 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.306 15:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.306 15:10:28 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:19.306 { 00:06:19.306 "name": "Malloc0", 00:06:19.306 "aliases": [ 00:06:19.306 "be0ebd4e-9590-4301-95be-e613e882e1fc" 00:06:19.306 ], 00:06:19.306 "product_name": "Malloc disk", 00:06:19.306 "block_size": 512, 00:06:19.306 "num_blocks": 16384, 00:06:19.306 "uuid": "be0ebd4e-9590-4301-95be-e613e882e1fc", 00:06:19.306 "assigned_rate_limits": { 00:06:19.306 "rw_ios_per_sec": 0, 00:06:19.306 "rw_mbytes_per_sec": 0, 00:06:19.306 "r_mbytes_per_sec": 0, 00:06:19.306 
"w_mbytes_per_sec": 0 00:06:19.306 }, 00:06:19.306 "claimed": true, 00:06:19.306 "claim_type": "exclusive_write", 00:06:19.306 "zoned": false, 00:06:19.306 "supported_io_types": { 00:06:19.306 "read": true, 00:06:19.306 "write": true, 00:06:19.306 "unmap": true, 00:06:19.306 "write_zeroes": true, 00:06:19.306 "flush": true, 00:06:19.306 "reset": true, 00:06:19.306 "compare": false, 00:06:19.306 "compare_and_write": false, 00:06:19.306 "abort": true, 00:06:19.306 "nvme_admin": false, 00:06:19.306 "nvme_io": false 00:06:19.306 }, 00:06:19.306 "memory_domains": [ 00:06:19.306 { 00:06:19.306 "dma_device_id": "system", 00:06:19.306 "dma_device_type": 1 00:06:19.306 }, 00:06:19.306 { 00:06:19.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.306 "dma_device_type": 2 00:06:19.306 } 00:06:19.306 ], 00:06:19.306 "driver_specific": {} 00:06:19.306 }, 00:06:19.306 { 00:06:19.306 "name": "Passthru0", 00:06:19.306 "aliases": [ 00:06:19.306 "2d0b1dfb-5a60-56af-965d-5fbbbac2577b" 00:06:19.306 ], 00:06:19.306 "product_name": "passthru", 00:06:19.306 "block_size": 512, 00:06:19.306 "num_blocks": 16384, 00:06:19.306 "uuid": "2d0b1dfb-5a60-56af-965d-5fbbbac2577b", 00:06:19.306 "assigned_rate_limits": { 00:06:19.306 "rw_ios_per_sec": 0, 00:06:19.306 "rw_mbytes_per_sec": 0, 00:06:19.306 "r_mbytes_per_sec": 0, 00:06:19.306 "w_mbytes_per_sec": 0 00:06:19.306 }, 00:06:19.306 "claimed": false, 00:06:19.306 "zoned": false, 00:06:19.306 "supported_io_types": { 00:06:19.306 "read": true, 00:06:19.306 "write": true, 00:06:19.306 "unmap": true, 00:06:19.306 "write_zeroes": true, 00:06:19.306 "flush": true, 00:06:19.306 "reset": true, 00:06:19.306 "compare": false, 00:06:19.306 "compare_and_write": false, 00:06:19.306 "abort": true, 00:06:19.306 "nvme_admin": false, 00:06:19.306 "nvme_io": false 00:06:19.306 }, 00:06:19.306 "memory_domains": [ 00:06:19.306 { 00:06:19.306 "dma_device_id": "system", 00:06:19.306 "dma_device_type": 1 00:06:19.306 }, 00:06:19.306 { 00:06:19.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.306 "dma_device_type": 2 00:06:19.306 } 00:06:19.306 ], 00:06:19.306 "driver_specific": { 00:06:19.306 "passthru": { 00:06:19.306 "name": "Passthru0", 00:06:19.306 "base_bdev_name": "Malloc0" 00:06:19.306 } 00:06:19.306 } 00:06:19.306 } 00:06:19.306 ]' 00:06:19.306 15:10:28 -- rpc/rpc.sh@21 -- # jq length 00:06:19.564 15:10:28 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:19.564 15:10:28 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:19.564 15:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.564 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.564 15:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.564 15:10:28 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:19.564 15:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.564 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.564 15:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.564 15:10:28 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:19.564 15:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.564 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.564 15:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.564 15:10:28 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:19.564 15:10:28 -- rpc/rpc.sh@26 -- # jq length 00:06:19.564 ************************************ 00:06:19.564 END TEST rpc_integrity 00:06:19.564 ************************************ 00:06:19.564 15:10:28 -- rpc/rpc.sh@26 
-- # '[' 0 == 0 ']' 00:06:19.564 00:06:19.564 real 0m0.317s 00:06:19.564 user 0m0.206s 00:06:19.564 sys 0m0.042s 00:06:19.564 15:10:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.564 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.564 15:10:28 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:19.564 15:10:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:19.564 15:10:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.564 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.564 ************************************ 00:06:19.564 START TEST rpc_plugins 00:06:19.564 ************************************ 00:06:19.564 15:10:28 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:06:19.564 15:10:28 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:19.564 15:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.565 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.565 15:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.565 15:10:28 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:19.565 15:10:28 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:19.565 15:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.565 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.823 15:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.823 15:10:28 -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:19.823 { 00:06:19.823 "name": "Malloc1", 00:06:19.823 "aliases": [ 00:06:19.823 "77344ae5-4055-4200-a139-23e76ac6b7e8" 00:06:19.823 ], 00:06:19.823 "product_name": "Malloc disk", 00:06:19.823 "block_size": 4096, 00:06:19.823 "num_blocks": 256, 00:06:19.823 "uuid": "77344ae5-4055-4200-a139-23e76ac6b7e8", 00:06:19.823 "assigned_rate_limits": { 00:06:19.823 "rw_ios_per_sec": 0, 00:06:19.823 "rw_mbytes_per_sec": 0, 00:06:19.823 "r_mbytes_per_sec": 0, 00:06:19.823 "w_mbytes_per_sec": 0 00:06:19.823 }, 00:06:19.823 "claimed": false, 00:06:19.823 "zoned": false, 00:06:19.823 "supported_io_types": { 00:06:19.823 "read": true, 00:06:19.823 "write": true, 00:06:19.823 "unmap": true, 00:06:19.823 "write_zeroes": true, 00:06:19.823 "flush": true, 00:06:19.823 "reset": true, 00:06:19.823 "compare": false, 00:06:19.823 "compare_and_write": false, 00:06:19.823 "abort": true, 00:06:19.823 "nvme_admin": false, 00:06:19.823 "nvme_io": false 00:06:19.823 }, 00:06:19.823 "memory_domains": [ 00:06:19.823 { 00:06:19.823 "dma_device_id": "system", 00:06:19.823 "dma_device_type": 1 00:06:19.823 }, 00:06:19.823 { 00:06:19.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.823 "dma_device_type": 2 00:06:19.823 } 00:06:19.823 ], 00:06:19.823 "driver_specific": {} 00:06:19.823 } 00:06:19.823 ]' 00:06:19.823 15:10:28 -- rpc/rpc.sh@32 -- # jq length 00:06:19.823 15:10:28 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:19.823 15:10:28 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:19.823 15:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.823 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.823 15:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.823 15:10:28 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:19.823 15:10:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.823 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.823 15:10:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.823 15:10:28 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:19.823 15:10:28 -- rpc/rpc.sh@36 -- # jq 
length 00:06:19.823 ************************************ 00:06:19.823 END TEST rpc_plugins 00:06:19.823 ************************************ 00:06:19.823 15:10:28 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:19.823 00:06:19.823 real 0m0.161s 00:06:19.823 user 0m0.103s 00:06:19.823 sys 0m0.023s 00:06:19.823 15:10:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.823 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.823 15:10:28 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:19.823 15:10:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:19.823 15:10:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.823 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.823 ************************************ 00:06:19.823 START TEST rpc_trace_cmd_test 00:06:19.823 ************************************ 00:06:19.823 15:10:29 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:06:20.082 15:10:29 -- rpc/rpc.sh@40 -- # local info 00:06:20.082 15:10:29 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:20.082 15:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.082 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:06:20.082 15:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.082 15:10:29 -- rpc/rpc.sh@42 -- # info='{ 00:06:20.082 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58353", 00:06:20.082 "tpoint_group_mask": "0x8", 00:06:20.082 "iscsi_conn": { 00:06:20.082 "mask": "0x2", 00:06:20.082 "tpoint_mask": "0x0" 00:06:20.082 }, 00:06:20.082 "scsi": { 00:06:20.082 "mask": "0x4", 00:06:20.082 "tpoint_mask": "0x0" 00:06:20.082 }, 00:06:20.082 "bdev": { 00:06:20.082 "mask": "0x8", 00:06:20.082 "tpoint_mask": "0xffffffffffffffff" 00:06:20.082 }, 00:06:20.082 "nvmf_rdma": { 00:06:20.082 "mask": "0x10", 00:06:20.082 "tpoint_mask": "0x0" 00:06:20.082 }, 00:06:20.082 "nvmf_tcp": { 00:06:20.082 "mask": "0x20", 00:06:20.082 "tpoint_mask": "0x0" 00:06:20.082 }, 00:06:20.082 "ftl": { 00:06:20.082 "mask": "0x40", 00:06:20.082 "tpoint_mask": "0x0" 00:06:20.082 }, 00:06:20.082 "blobfs": { 00:06:20.082 "mask": "0x80", 00:06:20.082 "tpoint_mask": "0x0" 00:06:20.082 }, 00:06:20.082 "dsa": { 00:06:20.082 "mask": "0x200", 00:06:20.082 "tpoint_mask": "0x0" 00:06:20.082 }, 00:06:20.082 "thread": { 00:06:20.082 "mask": "0x400", 00:06:20.082 "tpoint_mask": "0x0" 00:06:20.082 }, 00:06:20.082 "nvme_pcie": { 00:06:20.082 "mask": "0x800", 00:06:20.082 "tpoint_mask": "0x0" 00:06:20.082 }, 00:06:20.082 "iaa": { 00:06:20.082 "mask": "0x1000", 00:06:20.082 "tpoint_mask": "0x0" 00:06:20.082 }, 00:06:20.082 "nvme_tcp": { 00:06:20.082 "mask": "0x2000", 00:06:20.082 "tpoint_mask": "0x0" 00:06:20.082 }, 00:06:20.082 "bdev_nvme": { 00:06:20.082 "mask": "0x4000", 00:06:20.082 "tpoint_mask": "0x0" 00:06:20.082 }, 00:06:20.082 "sock": { 00:06:20.082 "mask": "0x8000", 00:06:20.082 "tpoint_mask": "0x0" 00:06:20.082 } 00:06:20.082 }' 00:06:20.082 15:10:29 -- rpc/rpc.sh@43 -- # jq length 00:06:20.082 15:10:29 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:20.082 15:10:29 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:20.082 15:10:29 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:20.082 15:10:29 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:20.082 15:10:29 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:20.082 15:10:29 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:20.082 15:10:29 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:20.082 15:10:29 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 
00:06:20.341 ************************************ 00:06:20.341 END TEST rpc_trace_cmd_test 00:06:20.341 ************************************ 00:06:20.341 15:10:29 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:20.341 00:06:20.341 real 0m0.285s 00:06:20.341 user 0m0.241s 00:06:20.341 sys 0m0.031s 00:06:20.341 15:10:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.341 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:06:20.341 15:10:29 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:20.341 15:10:29 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:20.341 15:10:29 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:20.341 15:10:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:20.341 15:10:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.341 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:06:20.341 ************************************ 00:06:20.341 START TEST rpc_daemon_integrity 00:06:20.341 ************************************ 00:06:20.341 15:10:29 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:06:20.341 15:10:29 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:20.341 15:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.341 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:06:20.341 15:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.341 15:10:29 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:20.341 15:10:29 -- rpc/rpc.sh@13 -- # jq length 00:06:20.341 15:10:29 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:20.341 15:10:29 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:20.341 15:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.341 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:06:20.341 15:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.341 15:10:29 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:20.341 15:10:29 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:20.341 15:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.341 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:06:20.341 15:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.341 15:10:29 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:20.341 { 00:06:20.341 "name": "Malloc2", 00:06:20.341 "aliases": [ 00:06:20.341 "8fb90fa4-ff88-45e6-bf72-f7e7aebf1ffc" 00:06:20.341 ], 00:06:20.341 "product_name": "Malloc disk", 00:06:20.341 "block_size": 512, 00:06:20.341 "num_blocks": 16384, 00:06:20.341 "uuid": "8fb90fa4-ff88-45e6-bf72-f7e7aebf1ffc", 00:06:20.341 "assigned_rate_limits": { 00:06:20.341 "rw_ios_per_sec": 0, 00:06:20.341 "rw_mbytes_per_sec": 0, 00:06:20.341 "r_mbytes_per_sec": 0, 00:06:20.341 "w_mbytes_per_sec": 0 00:06:20.341 }, 00:06:20.341 "claimed": false, 00:06:20.341 "zoned": false, 00:06:20.341 "supported_io_types": { 00:06:20.341 "read": true, 00:06:20.341 "write": true, 00:06:20.341 "unmap": true, 00:06:20.341 "write_zeroes": true, 00:06:20.341 "flush": true, 00:06:20.341 "reset": true, 00:06:20.341 "compare": false, 00:06:20.341 "compare_and_write": false, 00:06:20.341 "abort": true, 00:06:20.341 "nvme_admin": false, 00:06:20.341 "nvme_io": false 00:06:20.341 }, 00:06:20.341 "memory_domains": [ 00:06:20.341 { 00:06:20.341 "dma_device_id": "system", 00:06:20.341 "dma_device_type": 1 00:06:20.341 }, 00:06:20.341 { 00:06:20.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.341 "dma_device_type": 2 00:06:20.341 } 00:06:20.341 ], 00:06:20.341 "driver_specific": {} 00:06:20.341 } 00:06:20.341 ]' 00:06:20.341 15:10:29 -- 
rpc/rpc.sh@17 -- # jq length 00:06:20.600 15:10:29 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:20.600 15:10:29 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:20.600 15:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.600 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:06:20.600 [2024-04-24 15:10:29.625571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:20.600 [2024-04-24 15:10:29.625632] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:20.600 [2024-04-24 15:10:29.625656] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c99bc0 00:06:20.600 [2024-04-24 15:10:29.625667] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:20.600 [2024-04-24 15:10:29.627274] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:20.600 [2024-04-24 15:10:29.627307] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:20.600 Passthru0 00:06:20.600 15:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.600 15:10:29 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:20.600 15:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.600 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:06:20.600 15:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.600 15:10:29 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:20.600 { 00:06:20.600 "name": "Malloc2", 00:06:20.600 "aliases": [ 00:06:20.600 "8fb90fa4-ff88-45e6-bf72-f7e7aebf1ffc" 00:06:20.600 ], 00:06:20.600 "product_name": "Malloc disk", 00:06:20.600 "block_size": 512, 00:06:20.600 "num_blocks": 16384, 00:06:20.600 "uuid": "8fb90fa4-ff88-45e6-bf72-f7e7aebf1ffc", 00:06:20.600 "assigned_rate_limits": { 00:06:20.600 "rw_ios_per_sec": 0, 00:06:20.600 "rw_mbytes_per_sec": 0, 00:06:20.600 "r_mbytes_per_sec": 0, 00:06:20.600 "w_mbytes_per_sec": 0 00:06:20.600 }, 00:06:20.600 "claimed": true, 00:06:20.600 "claim_type": "exclusive_write", 00:06:20.600 "zoned": false, 00:06:20.600 "supported_io_types": { 00:06:20.600 "read": true, 00:06:20.600 "write": true, 00:06:20.600 "unmap": true, 00:06:20.600 "write_zeroes": true, 00:06:20.600 "flush": true, 00:06:20.600 "reset": true, 00:06:20.600 "compare": false, 00:06:20.600 "compare_and_write": false, 00:06:20.600 "abort": true, 00:06:20.600 "nvme_admin": false, 00:06:20.600 "nvme_io": false 00:06:20.600 }, 00:06:20.600 "memory_domains": [ 00:06:20.600 { 00:06:20.600 "dma_device_id": "system", 00:06:20.600 "dma_device_type": 1 00:06:20.600 }, 00:06:20.600 { 00:06:20.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.600 "dma_device_type": 2 00:06:20.600 } 00:06:20.600 ], 00:06:20.600 "driver_specific": {} 00:06:20.600 }, 00:06:20.600 { 00:06:20.600 "name": "Passthru0", 00:06:20.600 "aliases": [ 00:06:20.600 "20ecd466-6815-57d4-81be-c9e1f57f28e9" 00:06:20.600 ], 00:06:20.600 "product_name": "passthru", 00:06:20.600 "block_size": 512, 00:06:20.600 "num_blocks": 16384, 00:06:20.600 "uuid": "20ecd466-6815-57d4-81be-c9e1f57f28e9", 00:06:20.600 "assigned_rate_limits": { 00:06:20.600 "rw_ios_per_sec": 0, 00:06:20.600 "rw_mbytes_per_sec": 0, 00:06:20.600 "r_mbytes_per_sec": 0, 00:06:20.600 "w_mbytes_per_sec": 0 00:06:20.600 }, 00:06:20.600 "claimed": false, 00:06:20.600 "zoned": false, 00:06:20.600 "supported_io_types": { 00:06:20.600 "read": true, 00:06:20.600 "write": true, 00:06:20.600 "unmap": true, 00:06:20.600 "write_zeroes": true, 00:06:20.600 "flush": 
true, 00:06:20.600 "reset": true, 00:06:20.600 "compare": false, 00:06:20.600 "compare_and_write": false, 00:06:20.600 "abort": true, 00:06:20.600 "nvme_admin": false, 00:06:20.600 "nvme_io": false 00:06:20.600 }, 00:06:20.600 "memory_domains": [ 00:06:20.600 { 00:06:20.600 "dma_device_id": "system", 00:06:20.600 "dma_device_type": 1 00:06:20.600 }, 00:06:20.600 { 00:06:20.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.600 "dma_device_type": 2 00:06:20.600 } 00:06:20.600 ], 00:06:20.600 "driver_specific": { 00:06:20.600 "passthru": { 00:06:20.600 "name": "Passthru0", 00:06:20.600 "base_bdev_name": "Malloc2" 00:06:20.600 } 00:06:20.600 } 00:06:20.600 } 00:06:20.600 ]' 00:06:20.600 15:10:29 -- rpc/rpc.sh@21 -- # jq length 00:06:20.600 15:10:29 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:20.600 15:10:29 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:20.600 15:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.600 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:06:20.600 15:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.600 15:10:29 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:20.600 15:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.600 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:06:20.600 15:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.600 15:10:29 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:20.600 15:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.600 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:06:20.600 15:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.600 15:10:29 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:20.600 15:10:29 -- rpc/rpc.sh@26 -- # jq length 00:06:20.600 ************************************ 00:06:20.600 END TEST rpc_daemon_integrity 00:06:20.600 ************************************ 00:06:20.600 15:10:29 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:20.600 00:06:20.600 real 0m0.321s 00:06:20.600 user 0m0.212s 00:06:20.600 sys 0m0.043s 00:06:20.600 15:10:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.600 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:06:20.600 15:10:29 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:20.600 15:10:29 -- rpc/rpc.sh@84 -- # killprocess 58353 00:06:20.600 15:10:29 -- common/autotest_common.sh@936 -- # '[' -z 58353 ']' 00:06:20.600 15:10:29 -- common/autotest_common.sh@940 -- # kill -0 58353 00:06:20.600 15:10:29 -- common/autotest_common.sh@941 -- # uname 00:06:20.858 15:10:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:20.858 15:10:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58353 00:06:20.858 killing process with pid 58353 00:06:20.858 15:10:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:20.858 15:10:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:20.858 15:10:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58353' 00:06:20.858 15:10:29 -- common/autotest_common.sh@955 -- # kill 58353 00:06:20.858 15:10:29 -- common/autotest_common.sh@960 -- # wait 58353 00:06:21.116 00:06:21.116 real 0m3.157s 00:06:21.116 user 0m4.124s 00:06:21.116 sys 0m0.795s 00:06:21.116 15:10:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.116 ************************************ 00:06:21.116 END TEST rpc 00:06:21.116 ************************************ 00:06:21.116 15:10:30 -- common/autotest_common.sh@10 -- # set +x 
00:06:21.116 15:10:30 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:21.116 15:10:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:21.116 15:10:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.117 15:10:30 -- common/autotest_common.sh@10 -- # set +x 00:06:21.375 ************************************ 00:06:21.375 START TEST skip_rpc 00:06:21.375 ************************************ 00:06:21.375 15:10:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:21.375 * Looking for test storage... 00:06:21.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:21.375 15:10:30 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:21.375 15:10:30 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:21.375 15:10:30 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:21.375 15:10:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:21.375 15:10:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.375 15:10:30 -- common/autotest_common.sh@10 -- # set +x 00:06:21.375 ************************************ 00:06:21.375 START TEST skip_rpc 00:06:21.375 ************************************ 00:06:21.375 15:10:30 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:06:21.375 15:10:30 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58584 00:06:21.375 15:10:30 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:21.375 15:10:30 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.375 15:10:30 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:21.635 [2024-04-24 15:10:30.624258] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:06:21.635 [2024-04-24 15:10:30.624372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58584 ] 00:06:21.635 [2024-04-24 15:10:30.766339] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.893 [2024-04-24 15:10:30.905840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.182 15:10:35 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:27.182 15:10:35 -- common/autotest_common.sh@638 -- # local es=0 00:06:27.182 15:10:35 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:27.182 15:10:35 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:06:27.182 15:10:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:27.182 15:10:35 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:06:27.182 15:10:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:27.182 15:10:35 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:06:27.182 15:10:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.183 15:10:35 -- common/autotest_common.sh@10 -- # set +x 00:06:27.183 15:10:35 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:27.183 15:10:35 -- common/autotest_common.sh@641 -- # es=1 00:06:27.183 15:10:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:27.183 15:10:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:27.183 15:10:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:27.183 15:10:35 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:27.183 15:10:35 -- rpc/skip_rpc.sh@23 -- # killprocess 58584 00:06:27.183 15:10:35 -- common/autotest_common.sh@936 -- # '[' -z 58584 ']' 00:06:27.183 15:10:35 -- common/autotest_common.sh@940 -- # kill -0 58584 00:06:27.183 15:10:35 -- common/autotest_common.sh@941 -- # uname 00:06:27.183 15:10:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:27.183 15:10:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58584 00:06:27.183 15:10:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:27.183 15:10:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:27.183 15:10:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58584' 00:06:27.183 killing process with pid 58584 00:06:27.183 15:10:35 -- common/autotest_common.sh@955 -- # kill 58584 00:06:27.183 15:10:35 -- common/autotest_common.sh@960 -- # wait 58584 00:06:27.183 00:06:27.183 real 0m5.476s 00:06:27.183 user 0m5.076s 00:06:27.183 sys 0m0.301s 00:06:27.183 15:10:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.183 15:10:36 -- common/autotest_common.sh@10 -- # set +x 00:06:27.183 ************************************ 00:06:27.183 END TEST skip_rpc 00:06:27.183 ************************************ 00:06:27.183 15:10:36 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:27.183 15:10:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.183 15:10:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.183 15:10:36 -- common/autotest_common.sh@10 -- # set +x 00:06:27.183 ************************************ 00:06:27.183 START TEST skip_rpc_with_json 00:06:27.183 ************************************ 00:06:27.183 15:10:36 -- common/autotest_common.sh@1111 -- # 
test_skip_rpc_with_json 00:06:27.183 15:10:36 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:27.183 15:10:36 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58669 00:06:27.183 15:10:36 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.183 15:10:36 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.183 15:10:36 -- rpc/skip_rpc.sh@31 -- # waitforlisten 58669 00:06:27.183 15:10:36 -- common/autotest_common.sh@817 -- # '[' -z 58669 ']' 00:06:27.183 15:10:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.183 15:10:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:27.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.183 15:10:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.183 15:10:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:27.183 15:10:36 -- common/autotest_common.sh@10 -- # set +x 00:06:27.183 [2024-04-24 15:10:36.222846] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:06:27.183 [2024-04-24 15:10:36.222965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58669 ] 00:06:27.183 [2024-04-24 15:10:36.359545] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.441 [2024-04-24 15:10:36.479674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.007 15:10:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:28.007 15:10:37 -- common/autotest_common.sh@850 -- # return 0 00:06:28.007 15:10:37 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:28.007 15:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:28.007 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:06:28.007 [2024-04-24 15:10:37.216374] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:28.007 request: 00:06:28.007 { 00:06:28.007 "trtype": "tcp", 00:06:28.007 "method": "nvmf_get_transports", 00:06:28.007 "req_id": 1 00:06:28.007 } 00:06:28.007 Got JSON-RPC error response 00:06:28.007 response: 00:06:28.007 { 00:06:28.007 "code": -19, 00:06:28.007 "message": "No such device" 00:06:28.007 } 00:06:28.007 15:10:37 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:28.007 15:10:37 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:28.007 15:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:28.007 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:06:28.007 [2024-04-24 15:10:37.228464] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.007 15:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:28.007 15:10:37 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:28.007 15:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:28.007 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:06:28.266 15:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:28.266 15:10:37 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:28.266 { 00:06:28.266 "subsystems": [ 00:06:28.266 { 00:06:28.266 "subsystem": "keyring", 00:06:28.266 "config": [] 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 
"subsystem": "iobuf", 00:06:28.266 "config": [ 00:06:28.266 { 00:06:28.266 "method": "iobuf_set_options", 00:06:28.266 "params": { 00:06:28.266 "small_pool_count": 8192, 00:06:28.266 "large_pool_count": 1024, 00:06:28.266 "small_bufsize": 8192, 00:06:28.266 "large_bufsize": 135168 00:06:28.266 } 00:06:28.266 } 00:06:28.266 ] 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "subsystem": "sock", 00:06:28.266 "config": [ 00:06:28.266 { 00:06:28.266 "method": "sock_impl_set_options", 00:06:28.266 "params": { 00:06:28.266 "impl_name": "uring", 00:06:28.266 "recv_buf_size": 2097152, 00:06:28.266 "send_buf_size": 2097152, 00:06:28.266 "enable_recv_pipe": true, 00:06:28.266 "enable_quickack": false, 00:06:28.266 "enable_placement_id": 0, 00:06:28.266 "enable_zerocopy_send_server": false, 00:06:28.266 "enable_zerocopy_send_client": false, 00:06:28.266 "zerocopy_threshold": 0, 00:06:28.266 "tls_version": 0, 00:06:28.266 "enable_ktls": false 00:06:28.266 } 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "method": "sock_impl_set_options", 00:06:28.266 "params": { 00:06:28.266 "impl_name": "posix", 00:06:28.266 "recv_buf_size": 2097152, 00:06:28.266 "send_buf_size": 2097152, 00:06:28.266 "enable_recv_pipe": true, 00:06:28.266 "enable_quickack": false, 00:06:28.266 "enable_placement_id": 0, 00:06:28.266 "enable_zerocopy_send_server": true, 00:06:28.266 "enable_zerocopy_send_client": false, 00:06:28.266 "zerocopy_threshold": 0, 00:06:28.266 "tls_version": 0, 00:06:28.266 "enable_ktls": false 00:06:28.266 } 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "method": "sock_impl_set_options", 00:06:28.266 "params": { 00:06:28.266 "impl_name": "ssl", 00:06:28.266 "recv_buf_size": 4096, 00:06:28.266 "send_buf_size": 4096, 00:06:28.266 "enable_recv_pipe": true, 00:06:28.266 "enable_quickack": false, 00:06:28.266 "enable_placement_id": 0, 00:06:28.266 "enable_zerocopy_send_server": true, 00:06:28.266 "enable_zerocopy_send_client": false, 00:06:28.266 "zerocopy_threshold": 0, 00:06:28.266 "tls_version": 0, 00:06:28.266 "enable_ktls": false 00:06:28.266 } 00:06:28.266 } 00:06:28.266 ] 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "subsystem": "vmd", 00:06:28.266 "config": [] 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "subsystem": "accel", 00:06:28.266 "config": [ 00:06:28.266 { 00:06:28.266 "method": "accel_set_options", 00:06:28.266 "params": { 00:06:28.266 "small_cache_size": 128, 00:06:28.266 "large_cache_size": 16, 00:06:28.266 "task_count": 2048, 00:06:28.266 "sequence_count": 2048, 00:06:28.266 "buf_count": 2048 00:06:28.266 } 00:06:28.266 } 00:06:28.266 ] 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "subsystem": "bdev", 00:06:28.266 "config": [ 00:06:28.266 { 00:06:28.266 "method": "bdev_set_options", 00:06:28.266 "params": { 00:06:28.266 "bdev_io_pool_size": 65535, 00:06:28.266 "bdev_io_cache_size": 256, 00:06:28.266 "bdev_auto_examine": true, 00:06:28.266 "iobuf_small_cache_size": 128, 00:06:28.266 "iobuf_large_cache_size": 16 00:06:28.266 } 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "method": "bdev_raid_set_options", 00:06:28.266 "params": { 00:06:28.266 "process_window_size_kb": 1024 00:06:28.266 } 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "method": "bdev_iscsi_set_options", 00:06:28.266 "params": { 00:06:28.266 "timeout_sec": 30 00:06:28.266 } 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "method": "bdev_nvme_set_options", 00:06:28.266 "params": { 00:06:28.266 "action_on_timeout": "none", 00:06:28.266 "timeout_us": 0, 00:06:28.266 "timeout_admin_us": 0, 00:06:28.266 "keep_alive_timeout_ms": 10000, 00:06:28.266 
"arbitration_burst": 0, 00:06:28.266 "low_priority_weight": 0, 00:06:28.266 "medium_priority_weight": 0, 00:06:28.266 "high_priority_weight": 0, 00:06:28.266 "nvme_adminq_poll_period_us": 10000, 00:06:28.266 "nvme_ioq_poll_period_us": 0, 00:06:28.266 "io_queue_requests": 0, 00:06:28.266 "delay_cmd_submit": true, 00:06:28.266 "transport_retry_count": 4, 00:06:28.266 "bdev_retry_count": 3, 00:06:28.266 "transport_ack_timeout": 0, 00:06:28.266 "ctrlr_loss_timeout_sec": 0, 00:06:28.266 "reconnect_delay_sec": 0, 00:06:28.266 "fast_io_fail_timeout_sec": 0, 00:06:28.266 "disable_auto_failback": false, 00:06:28.266 "generate_uuids": false, 00:06:28.266 "transport_tos": 0, 00:06:28.266 "nvme_error_stat": false, 00:06:28.266 "rdma_srq_size": 0, 00:06:28.266 "io_path_stat": false, 00:06:28.266 "allow_accel_sequence": false, 00:06:28.266 "rdma_max_cq_size": 0, 00:06:28.266 "rdma_cm_event_timeout_ms": 0, 00:06:28.266 "dhchap_digests": [ 00:06:28.266 "sha256", 00:06:28.266 "sha384", 00:06:28.266 "sha512" 00:06:28.266 ], 00:06:28.266 "dhchap_dhgroups": [ 00:06:28.266 "null", 00:06:28.266 "ffdhe2048", 00:06:28.266 "ffdhe3072", 00:06:28.266 "ffdhe4096", 00:06:28.266 "ffdhe6144", 00:06:28.266 "ffdhe8192" 00:06:28.266 ] 00:06:28.266 } 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "method": "bdev_nvme_set_hotplug", 00:06:28.266 "params": { 00:06:28.266 "period_us": 100000, 00:06:28.266 "enable": false 00:06:28.266 } 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "method": "bdev_wait_for_examine" 00:06:28.266 } 00:06:28.266 ] 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "subsystem": "scsi", 00:06:28.266 "config": null 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "subsystem": "scheduler", 00:06:28.266 "config": [ 00:06:28.266 { 00:06:28.266 "method": "framework_set_scheduler", 00:06:28.266 "params": { 00:06:28.266 "name": "static" 00:06:28.266 } 00:06:28.266 } 00:06:28.266 ] 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "subsystem": "vhost_scsi", 00:06:28.266 "config": [] 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "subsystem": "vhost_blk", 00:06:28.266 "config": [] 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "subsystem": "ublk", 00:06:28.266 "config": [] 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "subsystem": "nbd", 00:06:28.266 "config": [] 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "subsystem": "nvmf", 00:06:28.266 "config": [ 00:06:28.266 { 00:06:28.266 "method": "nvmf_set_config", 00:06:28.266 "params": { 00:06:28.266 "discovery_filter": "match_any", 00:06:28.266 "admin_cmd_passthru": { 00:06:28.266 "identify_ctrlr": false 00:06:28.266 } 00:06:28.266 } 00:06:28.266 }, 00:06:28.266 { 00:06:28.266 "method": "nvmf_set_max_subsystems", 00:06:28.266 "params": { 00:06:28.266 "max_subsystems": 1024 00:06:28.267 } 00:06:28.267 }, 00:06:28.267 { 00:06:28.267 "method": "nvmf_set_crdt", 00:06:28.267 "params": { 00:06:28.267 "crdt1": 0, 00:06:28.267 "crdt2": 0, 00:06:28.267 "crdt3": 0 00:06:28.267 } 00:06:28.267 }, 00:06:28.267 { 00:06:28.267 "method": "nvmf_create_transport", 00:06:28.267 "params": { 00:06:28.267 "trtype": "TCP", 00:06:28.267 "max_queue_depth": 128, 00:06:28.267 "max_io_qpairs_per_ctrlr": 127, 00:06:28.267 "in_capsule_data_size": 4096, 00:06:28.267 "max_io_size": 131072, 00:06:28.267 "io_unit_size": 131072, 00:06:28.267 "max_aq_depth": 128, 00:06:28.267 "num_shared_buffers": 511, 00:06:28.267 "buf_cache_size": 4294967295, 00:06:28.267 "dif_insert_or_strip": false, 00:06:28.267 "zcopy": false, 00:06:28.267 "c2h_success": true, 00:06:28.267 "sock_priority": 0, 00:06:28.267 "abort_timeout_sec": 1, 00:06:28.267 
"ack_timeout": 0 00:06:28.267 } 00:06:28.267 } 00:06:28.267 ] 00:06:28.267 }, 00:06:28.267 { 00:06:28.267 "subsystem": "iscsi", 00:06:28.267 "config": [ 00:06:28.267 { 00:06:28.267 "method": "iscsi_set_options", 00:06:28.267 "params": { 00:06:28.267 "node_base": "iqn.2016-06.io.spdk", 00:06:28.267 "max_sessions": 128, 00:06:28.267 "max_connections_per_session": 2, 00:06:28.267 "max_queue_depth": 64, 00:06:28.267 "default_time2wait": 2, 00:06:28.267 "default_time2retain": 20, 00:06:28.267 "first_burst_length": 8192, 00:06:28.267 "immediate_data": true, 00:06:28.267 "allow_duplicated_isid": false, 00:06:28.267 "error_recovery_level": 0, 00:06:28.267 "nop_timeout": 60, 00:06:28.267 "nop_in_interval": 30, 00:06:28.267 "disable_chap": false, 00:06:28.267 "require_chap": false, 00:06:28.267 "mutual_chap": false, 00:06:28.267 "chap_group": 0, 00:06:28.267 "max_large_datain_per_connection": 64, 00:06:28.267 "max_r2t_per_connection": 4, 00:06:28.267 "pdu_pool_size": 36864, 00:06:28.267 "immediate_data_pool_size": 16384, 00:06:28.267 "data_out_pool_size": 2048 00:06:28.267 } 00:06:28.267 } 00:06:28.267 ] 00:06:28.267 } 00:06:28.267 ] 00:06:28.267 } 00:06:28.267 15:10:37 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:28.267 15:10:37 -- rpc/skip_rpc.sh@40 -- # killprocess 58669 00:06:28.267 15:10:37 -- common/autotest_common.sh@936 -- # '[' -z 58669 ']' 00:06:28.267 15:10:37 -- common/autotest_common.sh@940 -- # kill -0 58669 00:06:28.267 15:10:37 -- common/autotest_common.sh@941 -- # uname 00:06:28.267 15:10:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:28.267 15:10:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58669 00:06:28.267 killing process with pid 58669 00:06:28.267 15:10:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:28.267 15:10:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:28.267 15:10:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58669' 00:06:28.267 15:10:37 -- common/autotest_common.sh@955 -- # kill 58669 00:06:28.267 15:10:37 -- common/autotest_common.sh@960 -- # wait 58669 00:06:28.833 15:10:37 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58702 00:06:28.833 15:10:37 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:28.834 15:10:37 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:34.104 15:10:42 -- rpc/skip_rpc.sh@50 -- # killprocess 58702 00:06:34.104 15:10:42 -- common/autotest_common.sh@936 -- # '[' -z 58702 ']' 00:06:34.104 15:10:42 -- common/autotest_common.sh@940 -- # kill -0 58702 00:06:34.104 15:10:42 -- common/autotest_common.sh@941 -- # uname 00:06:34.104 15:10:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:34.104 15:10:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58702 00:06:34.104 15:10:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:34.104 15:10:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:34.104 killing process with pid 58702 00:06:34.104 15:10:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58702' 00:06:34.104 15:10:42 -- common/autotest_common.sh@955 -- # kill 58702 00:06:34.104 15:10:42 -- common/autotest_common.sh@960 -- # wait 58702 00:06:34.104 15:10:43 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:34.104 15:10:43 -- rpc/skip_rpc.sh@52 -- # rm 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:34.104 00:06:34.104 real 0m7.191s 00:06:34.104 user 0m6.924s 00:06:34.104 sys 0m0.669s 00:06:34.104 15:10:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.362 15:10:43 -- common/autotest_common.sh@10 -- # set +x 00:06:34.362 ************************************ 00:06:34.362 END TEST skip_rpc_with_json 00:06:34.362 ************************************ 00:06:34.362 15:10:43 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:34.362 15:10:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:34.362 15:10:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.362 15:10:43 -- common/autotest_common.sh@10 -- # set +x 00:06:34.362 ************************************ 00:06:34.362 START TEST skip_rpc_with_delay 00:06:34.362 ************************************ 00:06:34.362 15:10:43 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:06:34.362 15:10:43 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:34.362 15:10:43 -- common/autotest_common.sh@638 -- # local es=0 00:06:34.362 15:10:43 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:34.362 15:10:43 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:34.362 15:10:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:34.362 15:10:43 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:34.362 15:10:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:34.362 15:10:43 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:34.362 15:10:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:34.362 15:10:43 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:34.362 15:10:43 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:34.362 15:10:43 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:34.362 [2024-04-24 15:10:43.538781] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:34.362 [2024-04-24 15:10:43.538951] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:34.362 15:10:43 -- common/autotest_common.sh@641 -- # es=1 00:06:34.362 15:10:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:34.362 15:10:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:34.362 15:10:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:34.362 00:06:34.362 real 0m0.084s 00:06:34.362 user 0m0.056s 00:06:34.362 sys 0m0.026s 00:06:34.362 15:10:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.362 ************************************ 00:06:34.362 END TEST skip_rpc_with_delay 00:06:34.362 15:10:43 -- common/autotest_common.sh@10 -- # set +x 00:06:34.362 ************************************ 00:06:34.362 15:10:43 -- rpc/skip_rpc.sh@77 -- # uname 00:06:34.362 15:10:43 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:34.362 15:10:43 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:34.362 15:10:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:34.362 15:10:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.362 15:10:43 -- common/autotest_common.sh@10 -- # set +x 00:06:34.621 ************************************ 00:06:34.621 START TEST exit_on_failed_rpc_init 00:06:34.621 ************************************ 00:06:34.621 15:10:43 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:06:34.621 15:10:43 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58826 00:06:34.621 15:10:43 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.621 15:10:43 -- rpc/skip_rpc.sh@63 -- # waitforlisten 58826 00:06:34.621 15:10:43 -- common/autotest_common.sh@817 -- # '[' -z 58826 ']' 00:06:34.621 15:10:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.621 15:10:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:34.621 15:10:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.621 15:10:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:34.621 15:10:43 -- common/autotest_common.sh@10 -- # set +x 00:06:34.621 [2024-04-24 15:10:43.762008] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:06:34.621 [2024-04-24 15:10:43.762140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58826 ] 00:06:34.880 [2024-04-24 15:10:43.902090] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.880 [2024-04-24 15:10:44.028823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.814 15:10:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:35.814 15:10:44 -- common/autotest_common.sh@850 -- # return 0 00:06:35.814 15:10:44 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.814 15:10:44 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:35.814 15:10:44 -- common/autotest_common.sh@638 -- # local es=0 00:06:35.814 15:10:44 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:35.814 15:10:44 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:35.814 15:10:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:35.814 15:10:44 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:35.814 15:10:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:35.814 15:10:44 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:35.814 15:10:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:35.814 15:10:44 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:35.814 15:10:44 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:35.814 15:10:44 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:35.814 [2024-04-24 15:10:44.831988] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:06:35.814 [2024-04-24 15:10:44.832081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58844 ] 00:06:35.814 [2024-04-24 15:10:44.964290] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.086 [2024-04-24 15:10:45.087857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.086 [2024-04-24 15:10:45.087958] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:36.086 [2024-04-24 15:10:45.087974] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:36.086 [2024-04-24 15:10:45.087983] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.086 15:10:45 -- common/autotest_common.sh@641 -- # es=234 00:06:36.086 15:10:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:36.086 15:10:45 -- common/autotest_common.sh@650 -- # es=106 00:06:36.086 15:10:45 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:36.086 15:10:45 -- common/autotest_common.sh@658 -- # es=1 00:06:36.086 15:10:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:36.086 15:10:45 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:36.086 15:10:45 -- rpc/skip_rpc.sh@70 -- # killprocess 58826 00:06:36.086 15:10:45 -- common/autotest_common.sh@936 -- # '[' -z 58826 ']' 00:06:36.086 15:10:45 -- common/autotest_common.sh@940 -- # kill -0 58826 00:06:36.086 15:10:45 -- common/autotest_common.sh@941 -- # uname 00:06:36.086 15:10:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:36.086 15:10:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58826 00:06:36.086 15:10:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:36.086 15:10:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:36.086 killing process with pid 58826 00:06:36.086 15:10:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58826' 00:06:36.086 15:10:45 -- common/autotest_common.sh@955 -- # kill 58826 00:06:36.086 15:10:45 -- common/autotest_common.sh@960 -- # wait 58826 00:06:36.665 00:06:36.665 real 0m2.009s 00:06:36.665 user 0m2.389s 00:06:36.665 sys 0m0.443s 00:06:36.665 15:10:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.665 15:10:45 -- common/autotest_common.sh@10 -- # set +x 00:06:36.665 ************************************ 00:06:36.665 END TEST exit_on_failed_rpc_init 00:06:36.665 ************************************ 00:06:36.665 15:10:45 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:36.665 00:06:36.665 real 0m15.353s 00:06:36.665 user 0m14.647s 00:06:36.665 sys 0m1.763s 00:06:36.665 15:10:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.665 15:10:45 -- common/autotest_common.sh@10 -- # set +x 00:06:36.665 ************************************ 00:06:36.665 END TEST skip_rpc 00:06:36.665 ************************************ 00:06:36.665 15:10:45 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:36.665 15:10:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:36.665 15:10:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.665 15:10:45 -- common/autotest_common.sh@10 -- # set +x 00:06:36.665 ************************************ 00:06:36.665 START TEST rpc_client 00:06:36.665 ************************************ 00:06:36.665 15:10:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:36.924 * Looking for test storage... 
00:06:36.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:36.924 15:10:45 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:36.924 OK 00:06:36.924 15:10:45 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:36.924 00:06:36.924 real 0m0.106s 00:06:36.924 user 0m0.049s 00:06:36.924 sys 0m0.062s 00:06:36.924 15:10:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.924 ************************************ 00:06:36.924 END TEST rpc_client 00:06:36.924 ************************************ 00:06:36.924 15:10:45 -- common/autotest_common.sh@10 -- # set +x 00:06:36.924 15:10:46 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:36.924 15:10:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:36.924 15:10:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.924 15:10:46 -- common/autotest_common.sh@10 -- # set +x 00:06:36.924 ************************************ 00:06:36.924 START TEST json_config 00:06:36.924 ************************************ 00:06:36.924 15:10:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:36.924 15:10:46 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:36.924 15:10:46 -- nvmf/common.sh@7 -- # uname -s 00:06:36.924 15:10:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.924 15:10:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.924 15:10:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.924 15:10:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.924 15:10:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.924 15:10:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.924 15:10:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.924 15:10:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.924 15:10:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.924 15:10:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.924 15:10:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:06:36.924 15:10:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:06:36.924 15:10:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.924 15:10:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.924 15:10:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:36.924 15:10:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.924 15:10:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:36.924 15:10:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.924 15:10:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.924 15:10:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.924 15:10:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.924 15:10:46 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.924 15:10:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.924 15:10:46 -- paths/export.sh@5 -- # export PATH 00:06:36.924 15:10:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.924 15:10:46 -- nvmf/common.sh@47 -- # : 0 00:06:36.924 15:10:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:36.924 15:10:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:36.924 15:10:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.924 15:10:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.924 15:10:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.924 15:10:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:36.924 15:10:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:36.924 15:10:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:36.924 15:10:46 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:36.924 15:10:46 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:36.924 15:10:46 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:36.924 15:10:46 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:36.924 15:10:46 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:36.924 15:10:46 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:36.924 15:10:46 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:37.182 15:10:46 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:37.182 15:10:46 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:37.182 15:10:46 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:37.182 15:10:46 -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:37.182 15:10:46 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:37.182 15:10:46 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:37.182 15:10:46 -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:37.182 
15:10:46 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:37.182 INFO: JSON configuration test init 00:06:37.182 15:10:46 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:37.182 15:10:46 -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:37.182 15:10:46 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:37.182 15:10:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:37.182 15:10:46 -- common/autotest_common.sh@10 -- # set +x 00:06:37.182 15:10:46 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:37.182 15:10:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:37.182 15:10:46 -- common/autotest_common.sh@10 -- # set +x 00:06:37.182 15:10:46 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:37.182 15:10:46 -- json_config/common.sh@9 -- # local app=target 00:06:37.182 15:10:46 -- json_config/common.sh@10 -- # shift 00:06:37.182 15:10:46 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:37.182 15:10:46 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:37.182 15:10:46 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:37.182 15:10:46 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.182 15:10:46 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.182 15:10:46 -- json_config/common.sh@22 -- # app_pid["$app"]=58972 00:06:37.182 15:10:46 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:37.182 Waiting for target to run... 00:06:37.182 15:10:46 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:37.182 15:10:46 -- json_config/common.sh@25 -- # waitforlisten 58972 /var/tmp/spdk_tgt.sock 00:06:37.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:37.182 15:10:46 -- common/autotest_common.sh@817 -- # '[' -z 58972 ']' 00:06:37.182 15:10:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:37.182 15:10:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:37.182 15:10:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:37.182 15:10:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:37.182 15:10:46 -- common/autotest_common.sh@10 -- # set +x 00:06:37.182 [2024-04-24 15:10:46.241693] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:06:37.182 [2024-04-24 15:10:46.242215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58972 ] 00:06:37.440 [2024-04-24 15:10:46.664688] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.698 [2024-04-24 15:10:46.766533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.264 15:10:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:38.265 15:10:47 -- common/autotest_common.sh@850 -- # return 0 00:06:38.265 00:06:38.265 15:10:47 -- json_config/common.sh@26 -- # echo '' 00:06:38.265 15:10:47 -- json_config/json_config.sh@269 -- # create_accel_config 00:06:38.265 15:10:47 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:38.265 15:10:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:38.265 15:10:47 -- common/autotest_common.sh@10 -- # set +x 00:06:38.265 15:10:47 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:38.265 15:10:47 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:38.265 15:10:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:38.265 15:10:47 -- common/autotest_common.sh@10 -- # set +x 00:06:38.265 15:10:47 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:38.265 15:10:47 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:38.265 15:10:47 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:38.831 15:10:47 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:38.831 15:10:47 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:38.831 15:10:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:38.831 15:10:47 -- common/autotest_common.sh@10 -- # set +x 00:06:38.831 15:10:47 -- json_config/json_config.sh@45 -- # local ret=0 00:06:38.831 15:10:47 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:38.831 15:10:47 -- json_config/json_config.sh@46 -- # local enabled_types 00:06:38.831 15:10:47 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:38.831 15:10:47 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:38.831 15:10:47 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:39.088 15:10:48 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:39.089 15:10:48 -- json_config/json_config.sh@48 -- # local get_types 00:06:39.089 15:10:48 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:39.089 15:10:48 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:39.089 15:10:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:39.089 15:10:48 -- common/autotest_common.sh@10 -- # set +x 00:06:39.089 15:10:48 -- json_config/json_config.sh@55 -- # return 0 00:06:39.089 15:10:48 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:39.089 15:10:48 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:39.089 15:10:48 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:39.089 15:10:48 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 
00:06:39.089 15:10:48 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:39.089 15:10:48 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:39.089 15:10:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:39.089 15:10:48 -- common/autotest_common.sh@10 -- # set +x 00:06:39.089 15:10:48 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:39.089 15:10:48 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:39.089 15:10:48 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:39.089 15:10:48 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:39.089 15:10:48 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:39.347 MallocForNvmf0 00:06:39.347 15:10:48 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:39.347 15:10:48 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:39.605 MallocForNvmf1 00:06:39.605 15:10:48 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:39.605 15:10:48 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:39.864 [2024-04-24 15:10:48.886809] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.864 15:10:48 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:39.864 15:10:48 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:40.122 15:10:49 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:40.122 15:10:49 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:40.380 15:10:49 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:40.380 15:10:49 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:40.638 15:10:49 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:40.638 15:10:49 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:40.895 [2024-04-24 15:10:49.883316] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:40.895 15:10:49 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:40.895 15:10:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:40.895 15:10:49 -- common/autotest_common.sh@10 -- # set +x 00:06:40.895 15:10:49 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:40.895 15:10:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:40.895 15:10:49 -- 
common/autotest_common.sh@10 -- # set +x 00:06:40.895 15:10:49 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:40.895 15:10:49 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:40.895 15:10:49 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:41.153 MallocBdevForConfigChangeCheck 00:06:41.153 15:10:50 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:41.153 15:10:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:41.153 15:10:50 -- common/autotest_common.sh@10 -- # set +x 00:06:41.153 15:10:50 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:41.153 15:10:50 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:41.720 INFO: shutting down applications... 00:06:41.720 15:10:50 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:41.720 15:10:50 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:41.720 15:10:50 -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:41.720 15:10:50 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:41.720 15:10:50 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:41.978 Calling clear_iscsi_subsystem 00:06:41.978 Calling clear_nvmf_subsystem 00:06:41.978 Calling clear_nbd_subsystem 00:06:41.978 Calling clear_ublk_subsystem 00:06:41.978 Calling clear_vhost_blk_subsystem 00:06:41.978 Calling clear_vhost_scsi_subsystem 00:06:41.978 Calling clear_bdev_subsystem 00:06:41.978 15:10:51 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:41.978 15:10:51 -- json_config/json_config.sh@343 -- # count=100 00:06:41.978 15:10:51 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:41.978 15:10:51 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:41.978 15:10:51 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:41.978 15:10:51 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:42.235 15:10:51 -- json_config/json_config.sh@345 -- # break 00:06:42.235 15:10:51 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:42.235 15:10:51 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:42.235 15:10:51 -- json_config/common.sh@31 -- # local app=target 00:06:42.235 15:10:51 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:42.235 15:10:51 -- json_config/common.sh@35 -- # [[ -n 58972 ]] 00:06:42.235 15:10:51 -- json_config/common.sh@38 -- # kill -SIGINT 58972 00:06:42.235 15:10:51 -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:42.235 15:10:51 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:42.235 15:10:51 -- json_config/common.sh@41 -- # kill -0 58972 00:06:42.235 15:10:51 -- json_config/common.sh@45 -- # sleep 0.5 00:06:42.800 15:10:51 -- json_config/common.sh@40 -- # (( i++ )) 00:06:42.800 15:10:51 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:42.800 SPDK target shutdown done 00:06:42.800 INFO: relaunching applications... 
00:06:42.801 15:10:51 -- json_config/common.sh@41 -- # kill -0 58972 00:06:42.801 15:10:51 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:42.801 15:10:51 -- json_config/common.sh@43 -- # break 00:06:42.801 15:10:51 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:42.801 15:10:51 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:42.801 15:10:51 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:42.801 15:10:51 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:42.801 15:10:51 -- json_config/common.sh@9 -- # local app=target 00:06:42.801 15:10:51 -- json_config/common.sh@10 -- # shift 00:06:42.801 15:10:51 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:42.801 15:10:51 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:42.801 15:10:51 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:42.801 15:10:51 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:42.801 15:10:51 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:42.801 15:10:51 -- json_config/common.sh@22 -- # app_pid["$app"]=59168 00:06:42.801 15:10:51 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:42.801 15:10:51 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:42.801 Waiting for target to run... 00:06:42.801 15:10:51 -- json_config/common.sh@25 -- # waitforlisten 59168 /var/tmp/spdk_tgt.sock 00:06:42.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:42.801 15:10:51 -- common/autotest_common.sh@817 -- # '[' -z 59168 ']' 00:06:42.801 15:10:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:42.801 15:10:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:42.801 15:10:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:42.801 15:10:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:42.801 15:10:51 -- common/autotest_common.sh@10 -- # set +x 00:06:42.801 [2024-04-24 15:10:52.042880] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:06:42.801 [2024-04-24 15:10:52.042977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59168 ] 00:06:43.367 [2024-04-24 15:10:52.460616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.367 [2024-04-24 15:10:52.559077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.933 [2024-04-24 15:10:52.873287] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.933 [2024-04-24 15:10:52.905345] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:43.933 00:06:43.933 15:10:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:43.933 15:10:53 -- common/autotest_common.sh@850 -- # return 0 00:06:43.933 15:10:53 -- json_config/common.sh@26 -- # echo '' 00:06:43.933 15:10:53 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:43.933 15:10:53 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
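The shutdown-and-relaunch sequence above follows a simple pattern: send SIGINT to the running target, poll its PID for up to 30 half-second intervals, then start a fresh target from the JSON that was just saved. A rough shell equivalent of what the trace shows (PID variable and paths are those of this run):

  kill -SIGINT "$pid"                          # ask spdk_tgt to shut down cleanly
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break      # stop polling once the process is gone
      sleep 0.5
  done
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &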
00:06:43.933 INFO: Checking if target configuration is the same... 00:06:43.933 15:10:53 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:43.933 15:10:53 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:43.933 15:10:53 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:43.933 + '[' 2 -ne 2 ']' 00:06:43.933 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:43.933 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:43.933 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:43.933 +++ basename /dev/fd/62 00:06:43.933 ++ mktemp /tmp/62.XXX 00:06:43.933 + tmp_file_1=/tmp/62.h4z 00:06:43.933 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:43.933 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:43.933 + tmp_file_2=/tmp/spdk_tgt_config.json.84G 00:06:43.933 + ret=0 00:06:43.933 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:44.192 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:44.450 + diff -u /tmp/62.h4z /tmp/spdk_tgt_config.json.84G 00:06:44.450 INFO: JSON config files are the same 00:06:44.450 + echo 'INFO: JSON config files are the same' 00:06:44.450 + rm /tmp/62.h4z /tmp/spdk_tgt_config.json.84G 00:06:44.450 + exit 0 00:06:44.450 INFO: changing configuration and checking if this can be detected... 00:06:44.450 15:10:53 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:44.450 15:10:53 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:44.450 15:10:53 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:44.450 15:10:53 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:44.708 15:10:53 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:44.708 15:10:53 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:44.708 15:10:53 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:44.708 + '[' 2 -ne 2 ']' 00:06:44.708 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:44.708 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
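The comparison above (json_diff.sh) does not diff the files directly: both the live configuration pulled over RPC and the on-disk spdk_tgt_config.json are first normalized with config_filter.py -method sort, so only real content differences show up. Roughly, with the paths used in this run (temporary file names here are illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
  $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/disk.json
  diff -u /tmp/live.json /tmp/disk.json && echo 'INFO: JSON config files are the same'

The second pass below deletes MallocBdevForConfigChangeCheck over RPC and repeats the same comparison, this time expecting the diff to fail (ret=1), which proves that a configuration change is actually detected.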
00:06:44.708 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:44.708 +++ basename /dev/fd/62 00:06:44.708 ++ mktemp /tmp/62.XXX 00:06:44.708 + tmp_file_1=/tmp/62.zSP 00:06:44.708 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:44.708 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:44.708 + tmp_file_2=/tmp/spdk_tgt_config.json.fcq 00:06:44.708 + ret=0 00:06:44.708 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:44.965 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:45.224 + diff -u /tmp/62.zSP /tmp/spdk_tgt_config.json.fcq 00:06:45.224 + ret=1 00:06:45.224 + echo '=== Start of file: /tmp/62.zSP ===' 00:06:45.224 + cat /tmp/62.zSP 00:06:45.224 + echo '=== End of file: /tmp/62.zSP ===' 00:06:45.224 + echo '' 00:06:45.224 + echo '=== Start of file: /tmp/spdk_tgt_config.json.fcq ===' 00:06:45.224 + cat /tmp/spdk_tgt_config.json.fcq 00:06:45.224 + echo '=== End of file: /tmp/spdk_tgt_config.json.fcq ===' 00:06:45.224 + echo '' 00:06:45.224 + rm /tmp/62.zSP /tmp/spdk_tgt_config.json.fcq 00:06:45.224 + exit 1 00:06:45.224 INFO: configuration change detected. 00:06:45.224 15:10:54 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:45.224 15:10:54 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:45.224 15:10:54 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:45.224 15:10:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:45.224 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:06:45.224 15:10:54 -- json_config/json_config.sh@307 -- # local ret=0 00:06:45.224 15:10:54 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:45.224 15:10:54 -- json_config/json_config.sh@317 -- # [[ -n 59168 ]] 00:06:45.224 15:10:54 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:45.224 15:10:54 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:45.224 15:10:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:45.224 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:06:45.224 15:10:54 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:45.224 15:10:54 -- json_config/json_config.sh@193 -- # uname -s 00:06:45.224 15:10:54 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:45.224 15:10:54 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:45.224 15:10:54 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:45.224 15:10:54 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:45.224 15:10:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:45.224 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:06:45.224 15:10:54 -- json_config/json_config.sh@323 -- # killprocess 59168 00:06:45.224 15:10:54 -- common/autotest_common.sh@936 -- # '[' -z 59168 ']' 00:06:45.224 15:10:54 -- common/autotest_common.sh@940 -- # kill -0 59168 00:06:45.224 15:10:54 -- common/autotest_common.sh@941 -- # uname 00:06:45.224 15:10:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:45.224 15:10:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59168 00:06:45.224 killing process with pid 59168 00:06:45.224 15:10:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:45.224 15:10:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:45.224 15:10:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59168' 00:06:45.224 
15:10:54 -- common/autotest_common.sh@955 -- # kill 59168 00:06:45.224 15:10:54 -- common/autotest_common.sh@960 -- # wait 59168 00:06:45.482 15:10:54 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:45.482 15:10:54 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:45.482 15:10:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:45.482 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:06:45.482 15:10:54 -- json_config/json_config.sh@328 -- # return 0 00:06:45.482 INFO: Success 00:06:45.482 15:10:54 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:45.482 ************************************ 00:06:45.482 END TEST json_config 00:06:45.482 ************************************ 00:06:45.482 00:06:45.482 real 0m8.550s 00:06:45.482 user 0m12.361s 00:06:45.482 sys 0m1.744s 00:06:45.482 15:10:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:45.482 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:06:45.482 15:10:54 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:45.482 15:10:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:45.482 15:10:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.482 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:06:45.740 ************************************ 00:06:45.740 START TEST json_config_extra_key 00:06:45.740 ************************************ 00:06:45.740 15:10:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:45.740 15:10:54 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:45.740 15:10:54 -- nvmf/common.sh@7 -- # uname -s 00:06:45.740 15:10:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.740 15:10:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.740 15:10:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.740 15:10:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.740 15:10:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.740 15:10:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.740 15:10:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.740 15:10:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.740 15:10:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.740 15:10:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.740 15:10:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:06:45.740 15:10:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:06:45.740 15:10:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.740 15:10:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.740 15:10:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:45.740 15:10:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.740 15:10:54 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:45.740 15:10:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.740 15:10:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.740 15:10:54 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.740 15:10:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.740 15:10:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.740 15:10:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.740 15:10:54 -- paths/export.sh@5 -- # export PATH 00:06:45.740 15:10:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.740 15:10:54 -- nvmf/common.sh@47 -- # : 0 00:06:45.740 15:10:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:45.740 15:10:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:45.740 15:10:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.740 15:10:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.740 15:10:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.740 15:10:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:45.740 15:10:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:45.740 15:10:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:45.740 INFO: launching applications... 
00:06:45.740 15:10:54 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:45.740 15:10:54 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:45.740 15:10:54 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:45.740 15:10:54 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:45.740 15:10:54 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:45.740 15:10:54 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:45.740 15:10:54 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:45.740 15:10:54 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:45.740 15:10:54 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:45.740 15:10:54 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:45.740 15:10:54 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:45.741 15:10:54 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:45.741 15:10:54 -- json_config/common.sh@9 -- # local app=target 00:06:45.741 15:10:54 -- json_config/common.sh@10 -- # shift 00:06:45.741 15:10:54 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:45.741 15:10:54 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:45.741 15:10:54 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:45.741 15:10:54 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:45.741 15:10:54 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:45.741 Waiting for target to run... 00:06:45.741 15:10:54 -- json_config/common.sh@22 -- # app_pid["$app"]=59315 00:06:45.741 15:10:54 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:45.741 15:10:54 -- json_config/common.sh@25 -- # waitforlisten 59315 /var/tmp/spdk_tgt.sock 00:06:45.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:45.741 15:10:54 -- common/autotest_common.sh@817 -- # '[' -z 59315 ']' 00:06:45.741 15:10:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:45.741 15:10:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:45.741 15:10:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:45.741 15:10:54 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:45.741 15:10:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:45.741 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:06:45.741 [2024-04-24 15:10:54.892043] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
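The extra_key variant boots the target directly from a JSON file and only proceeds once the RPC socket answers (waitforlisten, max_retries=100 in the trace). A minimal sketch of that wait, using rpc_get_methods as the readiness probe (the helper in the test tree may probe differently, and the 0.5 s interval is an assumption):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  pid=$!
  for ((i = 0; i < 100; i++)); do              # max_retries=100, as in the trace
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods \
          >/dev/null 2>&1 && break
      sleep 0.5                                # polling interval assumed, not taken from the trace
  done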
00:06:45.741 [2024-04-24 15:10:54.892143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59315 ] 00:06:46.307 [2024-04-24 15:10:55.328577] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.307 [2024-04-24 15:10:55.431516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.872 15:10:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:46.872 00:06:46.872 15:10:55 -- common/autotest_common.sh@850 -- # return 0 00:06:46.872 15:10:55 -- json_config/common.sh@26 -- # echo '' 00:06:46.872 INFO: shutting down applications... 00:06:46.872 15:10:55 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:46.872 15:10:55 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:46.872 15:10:55 -- json_config/common.sh@31 -- # local app=target 00:06:46.873 15:10:55 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:46.873 15:10:55 -- json_config/common.sh@35 -- # [[ -n 59315 ]] 00:06:46.873 15:10:55 -- json_config/common.sh@38 -- # kill -SIGINT 59315 00:06:46.873 15:10:55 -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:46.873 15:10:55 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:46.873 15:10:55 -- json_config/common.sh@41 -- # kill -0 59315 00:06:46.873 15:10:55 -- json_config/common.sh@45 -- # sleep 0.5 00:06:47.438 15:10:56 -- json_config/common.sh@40 -- # (( i++ )) 00:06:47.438 15:10:56 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:47.438 15:10:56 -- json_config/common.sh@41 -- # kill -0 59315 00:06:47.438 15:10:56 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:47.438 15:10:56 -- json_config/common.sh@43 -- # break 00:06:47.438 15:10:56 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:47.438 15:10:56 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:47.438 SPDK target shutdown done 00:06:47.438 Success 00:06:47.438 15:10:56 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:47.438 00:06:47.438 real 0m1.635s 00:06:47.438 user 0m1.566s 00:06:47.438 sys 0m0.440s 00:06:47.438 15:10:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:47.438 ************************************ 00:06:47.438 END TEST json_config_extra_key 00:06:47.438 ************************************ 00:06:47.438 15:10:56 -- common/autotest_common.sh@10 -- # set +x 00:06:47.438 15:10:56 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:47.438 15:10:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.438 15:10:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.438 15:10:56 -- common/autotest_common.sh@10 -- # set +x 00:06:47.438 ************************************ 00:06:47.438 START TEST alias_rpc 00:06:47.438 ************************************ 00:06:47.438 15:10:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:47.438 * Looking for test storage... 00:06:47.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:47.438 15:10:56 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:47.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
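The alias_rpc run that starts here launches a plain spdk_tgt on the default socket (/var/tmp/spdk.sock) and, as the trace a few lines below shows, feeds a configuration back through scripts/rpc.py load_config -i. Sketch only; the input redirection is not visible in the xtrace, so the file name is illustrative:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < saved_config.json   # -i as passed by alias_rpc.sh

The killprocess at the end of the run then tears the target down again.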
00:06:47.438 15:10:56 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59385 00:06:47.438 15:10:56 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59385 00:06:47.438 15:10:56 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:47.438 15:10:56 -- common/autotest_common.sh@817 -- # '[' -z 59385 ']' 00:06:47.438 15:10:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.438 15:10:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:47.438 15:10:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.438 15:10:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:47.438 15:10:56 -- common/autotest_common.sh@10 -- # set +x 00:06:47.438 [2024-04-24 15:10:56.650686] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:06:47.438 [2024-04-24 15:10:56.650787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59385 ] 00:06:47.695 [2024-04-24 15:10:56.792513] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.695 [2024-04-24 15:10:56.923797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.628 15:10:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:48.628 15:10:57 -- common/autotest_common.sh@850 -- # return 0 00:06:48.628 15:10:57 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:48.886 15:10:57 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59385 00:06:48.886 15:10:57 -- common/autotest_common.sh@936 -- # '[' -z 59385 ']' 00:06:48.886 15:10:57 -- common/autotest_common.sh@940 -- # kill -0 59385 00:06:48.886 15:10:57 -- common/autotest_common.sh@941 -- # uname 00:06:48.886 15:10:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:48.886 15:10:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59385 00:06:48.886 killing process with pid 59385 00:06:48.886 15:10:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:48.886 15:10:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:48.886 15:10:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59385' 00:06:48.886 15:10:57 -- common/autotest_common.sh@955 -- # kill 59385 00:06:48.886 15:10:57 -- common/autotest_common.sh@960 -- # wait 59385 00:06:49.452 00:06:49.452 real 0m1.920s 00:06:49.452 user 0m2.206s 00:06:49.452 sys 0m0.446s 00:06:49.452 ************************************ 00:06:49.452 END TEST alias_rpc 00:06:49.452 ************************************ 00:06:49.452 15:10:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:49.452 15:10:58 -- common/autotest_common.sh@10 -- # set +x 00:06:49.452 15:10:58 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:49.452 15:10:58 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:49.452 15:10:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:49.452 15:10:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.452 15:10:58 -- common/autotest_common.sh@10 -- # set +x 00:06:49.452 ************************************ 00:06:49.452 START TEST spdkcli_tcp 00:06:49.452 ************************************ 00:06:49.452 15:10:58 -- 
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:49.452 * Looking for test storage... 00:06:49.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:49.452 15:10:58 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:49.452 15:10:58 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:49.452 15:10:58 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:49.452 15:10:58 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:49.452 15:10:58 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:49.452 15:10:58 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:49.452 15:10:58 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:49.452 15:10:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:49.452 15:10:58 -- common/autotest_common.sh@10 -- # set +x 00:06:49.452 15:10:58 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59466 00:06:49.452 15:10:58 -- spdkcli/tcp.sh@27 -- # waitforlisten 59466 00:06:49.452 15:10:58 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:49.452 15:10:58 -- common/autotest_common.sh@817 -- # '[' -z 59466 ']' 00:06:49.452 15:10:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.452 15:10:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:49.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.452 15:10:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.452 15:10:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:49.452 15:10:58 -- common/autotest_common.sh@10 -- # set +x 00:06:49.452 [2024-04-24 15:10:58.668180] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:06:49.452 [2024-04-24 15:10:58.668272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59466 ] 00:06:49.709 [2024-04-24 15:10:58.806369] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.709 [2024-04-24 15:10:58.936138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.709 [2024-04-24 15:10:58.936150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.643 15:10:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:50.643 15:10:59 -- common/autotest_common.sh@850 -- # return 0 00:06:50.643 15:10:59 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:50.643 15:10:59 -- spdkcli/tcp.sh@31 -- # socat_pid=59483 00:06:50.643 15:10:59 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:50.902 [ 00:06:50.902 "bdev_malloc_delete", 00:06:50.902 "bdev_malloc_create", 00:06:50.902 "bdev_null_resize", 00:06:50.902 "bdev_null_delete", 00:06:50.902 "bdev_null_create", 00:06:50.902 "bdev_nvme_cuse_unregister", 00:06:50.902 "bdev_nvme_cuse_register", 00:06:50.902 "bdev_opal_new_user", 00:06:50.902 "bdev_opal_set_lock_state", 00:06:50.902 "bdev_opal_delete", 00:06:50.902 "bdev_opal_get_info", 00:06:50.902 "bdev_opal_create", 00:06:50.902 "bdev_nvme_opal_revert", 00:06:50.902 "bdev_nvme_opal_init", 00:06:50.902 "bdev_nvme_send_cmd", 00:06:50.902 "bdev_nvme_get_path_iostat", 00:06:50.902 "bdev_nvme_get_mdns_discovery_info", 00:06:50.902 "bdev_nvme_stop_mdns_discovery", 00:06:50.902 "bdev_nvme_start_mdns_discovery", 00:06:50.902 "bdev_nvme_set_multipath_policy", 00:06:50.902 "bdev_nvme_set_preferred_path", 00:06:50.902 "bdev_nvme_get_io_paths", 00:06:50.902 "bdev_nvme_remove_error_injection", 00:06:50.902 "bdev_nvme_add_error_injection", 00:06:50.902 "bdev_nvme_get_discovery_info", 00:06:50.902 "bdev_nvme_stop_discovery", 00:06:50.902 "bdev_nvme_start_discovery", 00:06:50.902 "bdev_nvme_get_controller_health_info", 00:06:50.902 "bdev_nvme_disable_controller", 00:06:50.902 "bdev_nvme_enable_controller", 00:06:50.902 "bdev_nvme_reset_controller", 00:06:50.902 "bdev_nvme_get_transport_statistics", 00:06:50.902 "bdev_nvme_apply_firmware", 00:06:50.902 "bdev_nvme_detach_controller", 00:06:50.902 "bdev_nvme_get_controllers", 00:06:50.902 "bdev_nvme_attach_controller", 00:06:50.902 "bdev_nvme_set_hotplug", 00:06:50.902 "bdev_nvme_set_options", 00:06:50.902 "bdev_passthru_delete", 00:06:50.902 "bdev_passthru_create", 00:06:50.902 "bdev_lvol_grow_lvstore", 00:06:50.902 "bdev_lvol_get_lvols", 00:06:50.902 "bdev_lvol_get_lvstores", 00:06:50.902 "bdev_lvol_delete", 00:06:50.902 "bdev_lvol_set_read_only", 00:06:50.902 "bdev_lvol_resize", 00:06:50.902 "bdev_lvol_decouple_parent", 00:06:50.902 "bdev_lvol_inflate", 00:06:50.902 "bdev_lvol_rename", 00:06:50.902 "bdev_lvol_clone_bdev", 00:06:50.902 "bdev_lvol_clone", 00:06:50.902 "bdev_lvol_snapshot", 00:06:50.902 "bdev_lvol_create", 00:06:50.902 "bdev_lvol_delete_lvstore", 00:06:50.902 "bdev_lvol_rename_lvstore", 00:06:50.902 "bdev_lvol_create_lvstore", 00:06:50.902 "bdev_raid_set_options", 00:06:50.902 "bdev_raid_remove_base_bdev", 00:06:50.902 "bdev_raid_add_base_bdev", 00:06:50.902 "bdev_raid_delete", 00:06:50.902 "bdev_raid_create", 00:06:50.902 "bdev_raid_get_bdevs", 00:06:50.902 "bdev_error_inject_error", 
00:06:50.902 "bdev_error_delete", 00:06:50.902 "bdev_error_create", 00:06:50.902 "bdev_split_delete", 00:06:50.902 "bdev_split_create", 00:06:50.902 "bdev_delay_delete", 00:06:50.902 "bdev_delay_create", 00:06:50.902 "bdev_delay_update_latency", 00:06:50.902 "bdev_zone_block_delete", 00:06:50.902 "bdev_zone_block_create", 00:06:50.902 "blobfs_create", 00:06:50.902 "blobfs_detect", 00:06:50.902 "blobfs_set_cache_size", 00:06:50.902 "bdev_aio_delete", 00:06:50.902 "bdev_aio_rescan", 00:06:50.902 "bdev_aio_create", 00:06:50.902 "bdev_ftl_set_property", 00:06:50.902 "bdev_ftl_get_properties", 00:06:50.902 "bdev_ftl_get_stats", 00:06:50.903 "bdev_ftl_unmap", 00:06:50.903 "bdev_ftl_unload", 00:06:50.903 "bdev_ftl_delete", 00:06:50.903 "bdev_ftl_load", 00:06:50.903 "bdev_ftl_create", 00:06:50.903 "bdev_virtio_attach_controller", 00:06:50.903 "bdev_virtio_scsi_get_devices", 00:06:50.903 "bdev_virtio_detach_controller", 00:06:50.903 "bdev_virtio_blk_set_hotplug", 00:06:50.903 "bdev_iscsi_delete", 00:06:50.903 "bdev_iscsi_create", 00:06:50.903 "bdev_iscsi_set_options", 00:06:50.903 "bdev_uring_delete", 00:06:50.903 "bdev_uring_rescan", 00:06:50.903 "bdev_uring_create", 00:06:50.903 "accel_error_inject_error", 00:06:50.903 "ioat_scan_accel_module", 00:06:50.903 "dsa_scan_accel_module", 00:06:50.903 "iaa_scan_accel_module", 00:06:50.903 "keyring_file_remove_key", 00:06:50.903 "keyring_file_add_key", 00:06:50.903 "iscsi_set_options", 00:06:50.903 "iscsi_get_auth_groups", 00:06:50.903 "iscsi_auth_group_remove_secret", 00:06:50.903 "iscsi_auth_group_add_secret", 00:06:50.903 "iscsi_delete_auth_group", 00:06:50.903 "iscsi_create_auth_group", 00:06:50.903 "iscsi_set_discovery_auth", 00:06:50.903 "iscsi_get_options", 00:06:50.903 "iscsi_target_node_request_logout", 00:06:50.903 "iscsi_target_node_set_redirect", 00:06:50.903 "iscsi_target_node_set_auth", 00:06:50.903 "iscsi_target_node_add_lun", 00:06:50.903 "iscsi_get_stats", 00:06:50.903 "iscsi_get_connections", 00:06:50.903 "iscsi_portal_group_set_auth", 00:06:50.903 "iscsi_start_portal_group", 00:06:50.903 "iscsi_delete_portal_group", 00:06:50.903 "iscsi_create_portal_group", 00:06:50.903 "iscsi_get_portal_groups", 00:06:50.903 "iscsi_delete_target_node", 00:06:50.903 "iscsi_target_node_remove_pg_ig_maps", 00:06:50.903 "iscsi_target_node_add_pg_ig_maps", 00:06:50.903 "iscsi_create_target_node", 00:06:50.903 "iscsi_get_target_nodes", 00:06:50.903 "iscsi_delete_initiator_group", 00:06:50.903 "iscsi_initiator_group_remove_initiators", 00:06:50.903 "iscsi_initiator_group_add_initiators", 00:06:50.903 "iscsi_create_initiator_group", 00:06:50.903 "iscsi_get_initiator_groups", 00:06:50.903 "nvmf_set_crdt", 00:06:50.903 "nvmf_set_config", 00:06:50.903 "nvmf_set_max_subsystems", 00:06:50.903 "nvmf_subsystem_get_listeners", 00:06:50.903 "nvmf_subsystem_get_qpairs", 00:06:50.903 "nvmf_subsystem_get_controllers", 00:06:50.903 "nvmf_get_stats", 00:06:50.903 "nvmf_get_transports", 00:06:50.903 "nvmf_create_transport", 00:06:50.903 "nvmf_get_targets", 00:06:50.903 "nvmf_delete_target", 00:06:50.903 "nvmf_create_target", 00:06:50.903 "nvmf_subsystem_allow_any_host", 00:06:50.903 "nvmf_subsystem_remove_host", 00:06:50.903 "nvmf_subsystem_add_host", 00:06:50.903 "nvmf_ns_remove_host", 00:06:50.903 "nvmf_ns_add_host", 00:06:50.903 "nvmf_subsystem_remove_ns", 00:06:50.903 "nvmf_subsystem_add_ns", 00:06:50.903 "nvmf_subsystem_listener_set_ana_state", 00:06:50.903 "nvmf_discovery_get_referrals", 00:06:50.903 "nvmf_discovery_remove_referral", 00:06:50.903 
"nvmf_discovery_add_referral", 00:06:50.903 "nvmf_subsystem_remove_listener", 00:06:50.903 "nvmf_subsystem_add_listener", 00:06:50.903 "nvmf_delete_subsystem", 00:06:50.903 "nvmf_create_subsystem", 00:06:50.903 "nvmf_get_subsystems", 00:06:50.903 "env_dpdk_get_mem_stats", 00:06:50.903 "nbd_get_disks", 00:06:50.903 "nbd_stop_disk", 00:06:50.903 "nbd_start_disk", 00:06:50.903 "ublk_recover_disk", 00:06:50.903 "ublk_get_disks", 00:06:50.903 "ublk_stop_disk", 00:06:50.903 "ublk_start_disk", 00:06:50.903 "ublk_destroy_target", 00:06:50.903 "ublk_create_target", 00:06:50.903 "virtio_blk_create_transport", 00:06:50.903 "virtio_blk_get_transports", 00:06:50.903 "vhost_controller_set_coalescing", 00:06:50.903 "vhost_get_controllers", 00:06:50.903 "vhost_delete_controller", 00:06:50.903 "vhost_create_blk_controller", 00:06:50.903 "vhost_scsi_controller_remove_target", 00:06:50.903 "vhost_scsi_controller_add_target", 00:06:50.903 "vhost_start_scsi_controller", 00:06:50.903 "vhost_create_scsi_controller", 00:06:50.903 "thread_set_cpumask", 00:06:50.903 "framework_get_scheduler", 00:06:50.903 "framework_set_scheduler", 00:06:50.903 "framework_get_reactors", 00:06:50.903 "thread_get_io_channels", 00:06:50.903 "thread_get_pollers", 00:06:50.903 "thread_get_stats", 00:06:50.903 "framework_monitor_context_switch", 00:06:50.903 "spdk_kill_instance", 00:06:50.903 "log_enable_timestamps", 00:06:50.903 "log_get_flags", 00:06:50.903 "log_clear_flag", 00:06:50.903 "log_set_flag", 00:06:50.903 "log_get_level", 00:06:50.903 "log_set_level", 00:06:50.903 "log_get_print_level", 00:06:50.903 "log_set_print_level", 00:06:50.903 "framework_enable_cpumask_locks", 00:06:50.903 "framework_disable_cpumask_locks", 00:06:50.903 "framework_wait_init", 00:06:50.903 "framework_start_init", 00:06:50.903 "scsi_get_devices", 00:06:50.903 "bdev_get_histogram", 00:06:50.903 "bdev_enable_histogram", 00:06:50.903 "bdev_set_qos_limit", 00:06:50.903 "bdev_set_qd_sampling_period", 00:06:50.903 "bdev_get_bdevs", 00:06:50.903 "bdev_reset_iostat", 00:06:50.903 "bdev_get_iostat", 00:06:50.903 "bdev_examine", 00:06:50.903 "bdev_wait_for_examine", 00:06:50.903 "bdev_set_options", 00:06:50.903 "notify_get_notifications", 00:06:50.903 "notify_get_types", 00:06:50.903 "accel_get_stats", 00:06:50.903 "accel_set_options", 00:06:50.903 "accel_set_driver", 00:06:50.903 "accel_crypto_key_destroy", 00:06:50.903 "accel_crypto_keys_get", 00:06:50.903 "accel_crypto_key_create", 00:06:50.903 "accel_assign_opc", 00:06:50.903 "accel_get_module_info", 00:06:50.903 "accel_get_opc_assignments", 00:06:50.903 "vmd_rescan", 00:06:50.903 "vmd_remove_device", 00:06:50.903 "vmd_enable", 00:06:50.903 "sock_set_default_impl", 00:06:50.903 "sock_impl_set_options", 00:06:50.903 "sock_impl_get_options", 00:06:50.903 "iobuf_get_stats", 00:06:50.903 "iobuf_set_options", 00:06:50.903 "framework_get_pci_devices", 00:06:50.903 "framework_get_config", 00:06:50.903 "framework_get_subsystems", 00:06:50.903 "trace_get_info", 00:06:50.903 "trace_get_tpoint_group_mask", 00:06:50.903 "trace_disable_tpoint_group", 00:06:50.903 "trace_enable_tpoint_group", 00:06:50.903 "trace_clear_tpoint_mask", 00:06:50.903 "trace_set_tpoint_mask", 00:06:50.903 "keyring_get_keys", 00:06:50.903 "spdk_get_version", 00:06:50.903 "rpc_get_methods" 00:06:50.903 ] 00:06:50.903 15:10:59 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:50.903 15:10:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:50.903 15:10:59 -- common/autotest_common.sh@10 -- # set +x 00:06:50.903 15:10:59 -- 
spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:50.903 15:10:59 -- spdkcli/tcp.sh@38 -- # killprocess 59466 00:06:50.903 15:10:59 -- common/autotest_common.sh@936 -- # '[' -z 59466 ']' 00:06:50.903 15:10:59 -- common/autotest_common.sh@940 -- # kill -0 59466 00:06:50.903 15:10:59 -- common/autotest_common.sh@941 -- # uname 00:06:50.903 15:10:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:50.903 15:10:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59466 00:06:50.903 killing process with pid 59466 00:06:50.903 15:10:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:50.903 15:10:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:50.903 15:10:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59466' 00:06:50.903 15:10:59 -- common/autotest_common.sh@955 -- # kill 59466 00:06:50.903 15:10:59 -- common/autotest_common.sh@960 -- # wait 59466 00:06:51.470 ************************************ 00:06:51.470 END TEST spdkcli_tcp 00:06:51.470 ************************************ 00:06:51.470 00:06:51.470 real 0m1.909s 00:06:51.470 user 0m3.541s 00:06:51.470 sys 0m0.476s 00:06:51.470 15:11:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:51.470 15:11:00 -- common/autotest_common.sh@10 -- # set +x 00:06:51.470 15:11:00 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:51.470 15:11:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:51.470 15:11:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.470 15:11:00 -- common/autotest_common.sh@10 -- # set +x 00:06:51.470 ************************************ 00:06:51.470 START TEST dpdk_mem_utility 00:06:51.470 ************************************ 00:06:51.470 15:11:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:51.470 * Looking for test storage... 00:06:51.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:51.470 15:11:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:51.470 15:11:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59562 00:06:51.470 15:11:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59562 00:06:51.470 15:11:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:51.470 15:11:00 -- common/autotest_common.sh@817 -- # '[' -z 59562 ']' 00:06:51.470 15:11:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.470 15:11:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:51.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.470 15:11:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.470 15:11:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:51.470 15:11:00 -- common/autotest_common.sh@10 -- # set +x 00:06:51.470 [2024-04-24 15:11:00.710146] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
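Two patterns from the runs around this point are worth spelling out. The spdkcli_tcp test that just ended bridges the target's UNIX-domain RPC socket onto TCP with socat and then drives rpc.py against the TCP endpoint; the dpdk_mem_utility run starting here asks the target to dump its DPDK memory layout over RPC and post-processes the dump with dpdk_mem_info.py. Condensed from the traces, with the flags and addresses used in this run:

  # spdkcli_tcp: expose /var/tmp/spdk.sock on 127.0.0.1:9998 and issue an RPC over TCP
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  # dpdk_mem_utility: write /tmp/spdk_mem_dump.txt via RPC, then summarize it
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0     # per-heap detail, as invoked before the dump below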
00:06:51.470 [2024-04-24 15:11:00.710242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59562 ] 00:06:51.729 [2024-04-24 15:11:00.846217] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.729 [2024-04-24 15:11:00.967436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.664 15:11:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:52.664 15:11:01 -- common/autotest_common.sh@850 -- # return 0 00:06:52.664 15:11:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:52.664 15:11:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:52.664 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.664 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:06:52.664 { 00:06:52.664 "filename": "/tmp/spdk_mem_dump.txt" 00:06:52.664 } 00:06:52.664 15:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.664 15:11:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:52.664 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:52.664 1 heaps totaling size 814.000000 MiB 00:06:52.664 size: 814.000000 MiB heap id: 0 00:06:52.664 end heaps---------- 00:06:52.664 8 mempools totaling size 598.116089 MiB 00:06:52.664 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:52.664 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:52.664 size: 84.521057 MiB name: bdev_io_59562 00:06:52.664 size: 51.011292 MiB name: evtpool_59562 00:06:52.664 size: 50.003479 MiB name: msgpool_59562 00:06:52.664 size: 21.763794 MiB name: PDU_Pool 00:06:52.664 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:52.664 size: 0.026123 MiB name: Session_Pool 00:06:52.664 end mempools------- 00:06:52.664 6 memzones totaling size 4.142822 MiB 00:06:52.664 size: 1.000366 MiB name: RG_ring_0_59562 00:06:52.664 size: 1.000366 MiB name: RG_ring_1_59562 00:06:52.664 size: 1.000366 MiB name: RG_ring_4_59562 00:06:52.664 size: 1.000366 MiB name: RG_ring_5_59562 00:06:52.664 size: 0.125366 MiB name: RG_ring_2_59562 00:06:52.664 size: 0.015991 MiB name: RG_ring_3_59562 00:06:52.664 end memzones------- 00:06:52.664 15:11:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:52.664 heap id: 0 total size: 814.000000 MiB number of busy elements: 303 number of free elements: 15 00:06:52.664 list of free elements. 
size: 12.471375 MiB 00:06:52.664 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:52.664 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:52.664 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:52.664 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:52.664 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:52.664 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:52.664 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:52.664 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:52.664 element at address: 0x200000200000 with size: 0.833191 MiB 00:06:52.664 element at address: 0x20001aa00000 with size: 0.568604 MiB 00:06:52.664 element at address: 0x20000b200000 with size: 0.488892 MiB 00:06:52.664 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:52.664 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:52.664 element at address: 0x200027e00000 with size: 0.395935 MiB 00:06:52.664 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:52.664 list of standard malloc elements. size: 199.266052 MiB 00:06:52.664 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:52.664 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:52.664 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:52.664 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:52.664 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:52.664 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:52.664 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:52.664 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:52.664 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:52.664 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d6600 with size: 0.000183 MiB 
00:06:52.664 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:52.664 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:52.664 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:52.664 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:52.664 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:52.664 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:52.664 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:52.664 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:52.664 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:52.664 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:52.664 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:52.664 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:52.664 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:52.664 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:52.664 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:52.664 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:52.664 element at 
address: 0x200003a596c0 with size: 0.000183 MiB 00:06:52.664 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:52.664 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:52.664 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:52.664 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20000b27d7c0 
with size: 0.000183 MiB 00:06:52.665 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:52.665 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:52.665 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:52.665 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa93640 with size: 0.000183 MiB 
00:06:52.665 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:52.665 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200027e65680 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:52.665 element at 
address: 0x200027e6c900 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:52.665 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6edc0 
with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:52.666 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:52.666 list of memzone associated elements. size: 602.262573 MiB 00:06:52.666 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:52.666 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:52.666 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:52.666 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:52.666 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:52.666 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59562_0 00:06:52.666 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:52.666 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59562_0 00:06:52.666 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:52.666 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59562_0 00:06:52.666 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:52.666 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:52.666 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:52.666 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:52.666 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:52.666 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59562 00:06:52.666 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:52.666 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59562 00:06:52.666 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:52.666 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59562 00:06:52.666 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:52.666 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:52.666 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:52.666 
associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:52.666 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:52.666 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:52.666 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:52.666 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:52.666 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:52.666 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59562 00:06:52.666 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:52.666 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59562 00:06:52.666 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:52.666 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59562 00:06:52.666 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:52.666 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59562 00:06:52.666 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:52.666 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59562 00:06:52.666 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:52.666 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:52.666 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:52.666 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:52.666 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:52.666 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:52.666 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:52.666 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59562 00:06:52.666 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:52.666 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:52.666 element at address: 0x200027e65740 with size: 0.023743 MiB 00:06:52.666 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:52.666 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:52.666 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59562 00:06:52.666 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:06:52.666 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:52.666 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:52.666 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59562 00:06:52.666 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:52.666 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59562 00:06:52.666 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:06:52.666 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:52.666 15:11:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:52.666 15:11:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59562 00:06:52.666 15:11:01 -- common/autotest_common.sh@936 -- # '[' -z 59562 ']' 00:06:52.666 15:11:01 -- common/autotest_common.sh@940 -- # kill -0 59562 00:06:52.666 15:11:01 -- common/autotest_common.sh@941 -- # uname 00:06:52.666 15:11:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:52.666 15:11:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59562 00:06:52.666 killing process with pid 59562 00:06:52.666 15:11:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 
00:06:52.666 15:11:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:52.666 15:11:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59562' 00:06:52.666 15:11:01 -- common/autotest_common.sh@955 -- # kill 59562 00:06:52.666 15:11:01 -- common/autotest_common.sh@960 -- # wait 59562 00:06:53.233 00:06:53.233 real 0m1.770s 00:06:53.233 user 0m1.939s 00:06:53.233 sys 0m0.437s 00:06:53.233 ************************************ 00:06:53.233 END TEST dpdk_mem_utility 00:06:53.233 ************************************ 00:06:53.233 15:11:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:53.233 15:11:02 -- common/autotest_common.sh@10 -- # set +x 00:06:53.233 15:11:02 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:53.233 15:11:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:53.233 15:11:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.233 15:11:02 -- common/autotest_common.sh@10 -- # set +x 00:06:53.233 ************************************ 00:06:53.233 START TEST event 00:06:53.233 ************************************ 00:06:53.233 15:11:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:53.491 * Looking for test storage... 00:06:53.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:53.491 15:11:02 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:53.491 15:11:02 -- bdev/nbd_common.sh@6 -- # set -e 00:06:53.491 15:11:02 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:53.491 15:11:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:53.491 15:11:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.491 15:11:02 -- common/autotest_common.sh@10 -- # set +x 00:06:53.491 ************************************ 00:06:53.491 START TEST event_perf 00:06:53.491 ************************************ 00:06:53.491 15:11:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:53.491 Running I/O for 1 seconds...[2024-04-24 15:11:02.613026] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:06:53.491 [2024-04-24 15:11:02.613257] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59648 ] 00:06:53.748 [2024-04-24 15:11:02.747913] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.748 [2024-04-24 15:11:02.868805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.748 [2024-04-24 15:11:02.868955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.748 Running I/O for 1 seconds...[2024-04-24 15:11:02.869061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.748 [2024-04-24 15:11:02.869063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.121 00:06:55.121 lcore 0: 195955 00:06:55.121 lcore 1: 195955 00:06:55.121 lcore 2: 195955 00:06:55.121 lcore 3: 195956 00:06:55.121 done. 
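The killprocess sequence traced just above follows the usual autotest_common.sh pattern: check that a pid was given, probe it with kill -0, resolve its process name, then signal and reap it. A simplified sketch of that helper (assumed condensation, not the verbatim function; the real helper also special-cases processes wrapped by sudo):

    killprocess() {
        local pid=$1 name
        [ -z "$pid" ] && return 1                 # no pid supplied
        kill -0 "$pid" || return 0                # process already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            name=$(ps --no-headers -o comm= "$pid")   # resolves to reactor_0 in the trace above
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap it so the next test starts from a clean state
    }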
00:06:55.121 00:06:55.121 real 0m1.384s 00:06:55.121 user 0m4.203s 00:06:55.121 sys 0m0.060s 00:06:55.121 15:11:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:55.121 15:11:03 -- common/autotest_common.sh@10 -- # set +x 00:06:55.121 ************************************ 00:06:55.121 END TEST event_perf 00:06:55.121 ************************************ 00:06:55.121 15:11:04 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:55.121 15:11:04 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:55.121 15:11:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.121 15:11:04 -- common/autotest_common.sh@10 -- # set +x 00:06:55.121 ************************************ 00:06:55.121 START TEST event_reactor 00:06:55.121 ************************************ 00:06:55.121 15:11:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:55.121 [2024-04-24 15:11:04.113255] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:06:55.121 [2024-04-24 15:11:04.113479] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59691 ] 00:06:55.121 [2024-04-24 15:11:04.249067] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.121 [2024-04-24 15:11:04.356213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.494 test_start 00:06:56.494 oneshot 00:06:56.494 tick 100 00:06:56.494 tick 100 00:06:56.494 tick 250 00:06:56.494 tick 100 00:06:56.494 tick 100 00:06:56.494 tick 100 00:06:56.494 tick 250 00:06:56.494 tick 500 00:06:56.494 tick 100 00:06:56.494 tick 100 00:06:56.494 tick 250 00:06:56.494 tick 100 00:06:56.494 tick 100 00:06:56.494 test_end 00:06:56.494 00:06:56.494 real 0m1.372s 00:06:56.494 user 0m1.210s 00:06:56.494 sys 0m0.054s 00:06:56.494 15:11:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:56.494 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:06:56.494 ************************************ 00:06:56.494 END TEST event_reactor 00:06:56.494 ************************************ 00:06:56.494 15:11:05 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:56.494 15:11:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:56.494 15:11:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.494 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:06:56.494 ************************************ 00:06:56.494 START TEST event_reactor_perf 00:06:56.494 ************************************ 00:06:56.494 15:11:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:56.494 [2024-04-24 15:11:05.594980] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:06:56.494 [2024-04-24 15:11:05.595052] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59730 ] 00:06:56.494 [2024-04-24 15:11:05.726936] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.752 [2024-04-24 15:11:05.839029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.128 test_start 00:06:58.128 test_end 00:06:58.128 Performance: 367187 events per second 00:06:58.128 00:06:58.128 real 0m1.381s 00:06:58.128 user 0m1.215s 00:06:58.128 sys 0m0.059s 00:06:58.128 ************************************ 00:06:58.128 END TEST event_reactor_perf 00:06:58.128 ************************************ 00:06:58.128 15:11:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:58.128 15:11:06 -- common/autotest_common.sh@10 -- # set +x 00:06:58.128 15:11:07 -- event/event.sh@49 -- # uname -s 00:06:58.128 15:11:07 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:58.128 15:11:07 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:58.128 15:11:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:58.128 15:11:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.128 15:11:07 -- common/autotest_common.sh@10 -- # set +x 00:06:58.128 ************************************ 00:06:58.128 START TEST event_scheduler 00:06:58.128 ************************************ 00:06:58.128 15:11:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:58.128 * Looking for test storage... 00:06:58.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:58.128 15:11:07 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:58.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.128 15:11:07 -- scheduler/scheduler.sh@35 -- # scheduler_pid=59798 00:06:58.128 15:11:07 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:58.128 15:11:07 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:58.128 15:11:07 -- scheduler/scheduler.sh@37 -- # waitforlisten 59798 00:06:58.128 15:11:07 -- common/autotest_common.sh@817 -- # '[' -z 59798 ']' 00:06:58.128 15:11:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.128 15:11:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:58.128 15:11:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.128 15:11:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:58.128 15:11:07 -- common/autotest_common.sh@10 -- # set +x 00:06:58.128 [2024-04-24 15:11:07.214196] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:06:58.128 [2024-04-24 15:11:07.214520] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59798 ] 00:06:58.128 [2024-04-24 15:11:07.356589] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.386 [2024-04-24 15:11:07.484193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.386 [2024-04-24 15:11:07.484319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.386 [2024-04-24 15:11:07.484451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.386 [2024-04-24 15:11:07.484451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.320 15:11:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:59.320 15:11:08 -- common/autotest_common.sh@850 -- # return 0 00:06:59.320 15:11:08 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:59.320 15:11:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:59.320 15:11:08 -- common/autotest_common.sh@10 -- # set +x 00:06:59.320 POWER: Env isn't set yet! 00:06:59.320 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:59.320 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:59.320 POWER: Cannot set governor of lcore 0 to userspace 00:06:59.320 POWER: Attempting to initialise PSTAT power management... 00:06:59.320 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:59.320 POWER: Cannot set governor of lcore 0 to performance 00:06:59.320 POWER: Attempting to initialise AMD PSTATE power management... 00:06:59.320 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:59.320 POWER: Cannot set governor of lcore 0 to userspace 00:06:59.320 POWER: Attempting to initialise CPPC power management... 00:06:59.320 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:59.320 POWER: Cannot set governor of lcore 0 to userspace 00:06:59.320 POWER: Attempting to initialise VM power management... 00:06:59.320 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:59.320 POWER: Unable to set Power Management Environment for lcore 0 00:06:59.320 [2024-04-24 15:11:08.266287] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:59.320 [2024-04-24 15:11:08.266301] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:59.320 [2024-04-24 15:11:08.266310] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:59.320 15:11:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:59.320 15:11:08 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:59.320 15:11:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:59.320 15:11:08 -- common/autotest_common.sh@10 -- # set +x 00:06:59.320 [2024-04-24 15:11:08.365197] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
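The POWER messages above are DPDK probing each cpufreq driver in turn; in this VM none of the scaling_governor files can be opened, so the dynamic scheduler initializes without a governor as noted. A quick way to check whether a host exposes a governor at all (cpu0 used as a representative core) is:

    # prints the active governor, or the fallback message when cpufreq is not exposed (as in this VM)
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null \
        || echo 'no cpufreq governor exposed'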
00:06:59.320 15:11:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:59.320 15:11:08 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:59.320 15:11:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:59.320 15:11:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.320 15:11:08 -- common/autotest_common.sh@10 -- # set +x 00:06:59.320 ************************************ 00:06:59.320 START TEST scheduler_create_thread 00:06:59.320 ************************************ 00:06:59.320 15:11:08 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:06:59.320 15:11:08 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:59.320 15:11:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:59.320 15:11:08 -- common/autotest_common.sh@10 -- # set +x 00:06:59.320 2 00:06:59.320 15:11:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:59.320 15:11:08 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:59.320 15:11:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:59.320 15:11:08 -- common/autotest_common.sh@10 -- # set +x 00:06:59.320 3 00:06:59.320 15:11:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:59.320 15:11:08 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:59.320 15:11:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:59.320 15:11:08 -- common/autotest_common.sh@10 -- # set +x 00:06:59.320 4 00:06:59.320 15:11:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:59.321 15:11:08 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:59.321 15:11:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:59.321 15:11:08 -- common/autotest_common.sh@10 -- # set +x 00:06:59.321 5 00:06:59.321 15:11:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:59.321 15:11:08 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:59.321 15:11:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:59.321 15:11:08 -- common/autotest_common.sh@10 -- # set +x 00:06:59.321 6 00:06:59.321 15:11:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:59.321 15:11:08 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:59.321 15:11:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:59.321 15:11:08 -- common/autotest_common.sh@10 -- # set +x 00:06:59.321 7 00:06:59.321 15:11:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:59.321 15:11:08 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:59.321 15:11:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:59.321 15:11:08 -- common/autotest_common.sh@10 -- # set +x 00:06:59.321 8 00:06:59.321 15:11:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:59.321 15:11:08 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:59.321 15:11:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:59.321 15:11:08 -- common/autotest_common.sh@10 -- # set +x 00:06:59.321 9 00:06:59.321 
15:11:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:59.321 15:11:08 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:59.321 15:11:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:59.321 15:11:08 -- common/autotest_common.sh@10 -- # set +x 00:06:59.321 10 00:06:59.321 15:11:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:59.321 15:11:08 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:59.321 15:11:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:59.321 15:11:08 -- common/autotest_common.sh@10 -- # set +x 00:07:00.695 15:11:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.695 15:11:09 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:00.695 15:11:09 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:00.695 15:11:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.695 15:11:09 -- common/autotest_common.sh@10 -- # set +x 00:07:01.630 15:11:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.630 15:11:10 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:01.630 15:11:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.630 15:11:10 -- common/autotest_common.sh@10 -- # set +x 00:07:02.565 15:11:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.565 15:11:11 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:02.565 15:11:11 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:02.565 15:11:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.565 15:11:11 -- common/autotest_common.sh@10 -- # set +x 00:07:03.130 15:11:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.130 ************************************ 00:07:03.130 END TEST scheduler_create_thread 00:07:03.130 ************************************ 00:07:03.130 00:07:03.130 real 0m3.886s 00:07:03.130 user 0m0.018s 00:07:03.130 sys 0m0.008s 00:07:03.130 15:11:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:03.130 15:11:12 -- common/autotest_common.sh@10 -- # set +x 00:07:03.130 15:11:12 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:03.130 15:11:12 -- scheduler/scheduler.sh@46 -- # killprocess 59798 00:07:03.130 15:11:12 -- common/autotest_common.sh@936 -- # '[' -z 59798 ']' 00:07:03.130 15:11:12 -- common/autotest_common.sh@940 -- # kill -0 59798 00:07:03.130 15:11:12 -- common/autotest_common.sh@941 -- # uname 00:07:03.130 15:11:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:03.130 15:11:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59798 00:07:03.389 killing process with pid 59798 00:07:03.389 15:11:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:03.389 15:11:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:03.389 15:11:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59798' 00:07:03.389 15:11:12 -- common/autotest_common.sh@955 -- # kill 59798 00:07:03.389 15:11:12 -- common/autotest_common.sh@960 -- # wait 59798 00:07:03.646 [2024-04-24 15:11:12.704726] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
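The scheduler run above is driven entirely over RPC; the calls traced as rpc_cmd could be replayed by hand with scripts/rpc.py against the app's socket (the default /var/tmp/spdk.sock is assumed here, and the scheduler_plugin module must be importable by rpc.py):

    # manual replay of the scheduler RPCs seen in the trace (illustrative sketch)
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread id 11 as in the trace
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12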
00:07:03.905 ************************************ 00:07:03.905 END TEST event_scheduler 00:07:03.905 ************************************ 00:07:03.905 00:07:03.905 real 0m5.977s 00:07:03.905 user 0m12.908s 00:07:03.905 sys 0m0.404s 00:07:03.905 15:11:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:03.905 15:11:13 -- common/autotest_common.sh@10 -- # set +x 00:07:03.905 15:11:13 -- event/event.sh@51 -- # modprobe -n nbd 00:07:03.905 15:11:13 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:03.905 15:11:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:03.905 15:11:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.905 15:11:13 -- common/autotest_common.sh@10 -- # set +x 00:07:04.163 ************************************ 00:07:04.163 START TEST app_repeat 00:07:04.163 ************************************ 00:07:04.163 15:11:13 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:07:04.163 15:11:13 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.163 15:11:13 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.163 15:11:13 -- event/event.sh@13 -- # local nbd_list 00:07:04.163 15:11:13 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.163 15:11:13 -- event/event.sh@14 -- # local bdev_list 00:07:04.163 15:11:13 -- event/event.sh@15 -- # local repeat_times=4 00:07:04.163 15:11:13 -- event/event.sh@17 -- # modprobe nbd 00:07:04.163 15:11:13 -- event/event.sh@19 -- # repeat_pid=59922 00:07:04.163 15:11:13 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.163 Process app_repeat pid: 59922 00:07:04.163 15:11:13 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59922' 00:07:04.163 15:11:13 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:04.163 spdk_app_start Round 0 00:07:04.163 15:11:13 -- event/event.sh@23 -- # for i in {0..2} 00:07:04.163 15:11:13 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:04.163 15:11:13 -- event/event.sh@25 -- # waitforlisten 59922 /var/tmp/spdk-nbd.sock 00:07:04.163 15:11:13 -- common/autotest_common.sh@817 -- # '[' -z 59922 ']' 00:07:04.163 15:11:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.163 15:11:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:04.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:04.163 15:11:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:04.163 15:11:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:04.163 15:11:13 -- common/autotest_common.sh@10 -- # set +x 00:07:04.163 [2024-04-24 15:11:13.202451] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:07:04.163 [2024-04-24 15:11:13.202582] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59922 ] 00:07:04.163 [2024-04-24 15:11:13.336525] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.431 [2024-04-24 15:11:13.453022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.431 [2024-04-24 15:11:13.453031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.432 15:11:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:04.432 15:11:13 -- common/autotest_common.sh@850 -- # return 0 00:07:04.432 15:11:13 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.690 Malloc0 00:07:04.690 15:11:13 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.948 Malloc1 00:07:04.948 15:11:14 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.948 15:11:14 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.948 15:11:14 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.948 15:11:14 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.948 15:11:14 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.948 15:11:14 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.948 15:11:14 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.948 15:11:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.948 15:11:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.948 15:11:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.948 15:11:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.948 15:11:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.948 15:11:14 -- bdev/nbd_common.sh@12 -- # local i 00:07:04.948 15:11:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.948 15:11:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.948 15:11:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:05.206 /dev/nbd0 00:07:05.206 15:11:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:05.206 15:11:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:05.206 15:11:14 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:07:05.206 15:11:14 -- common/autotest_common.sh@855 -- # local i 00:07:05.206 15:11:14 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:07:05.206 15:11:14 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:07:05.206 15:11:14 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:07:05.206 15:11:14 -- common/autotest_common.sh@859 -- # break 00:07:05.206 15:11:14 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:05.206 15:11:14 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:05.206 15:11:14 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.206 1+0 records in 00:07:05.206 1+0 records out 00:07:05.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401278 s, 10.2 MB/s 00:07:05.206 15:11:14 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.206 15:11:14 -- common/autotest_common.sh@872 -- # size=4096 00:07:05.206 15:11:14 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.206 15:11:14 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:07:05.206 15:11:14 -- common/autotest_common.sh@875 -- # return 0 00:07:05.206 15:11:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.206 15:11:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.206 15:11:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:05.463 /dev/nbd1 00:07:05.720 15:11:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.720 15:11:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.720 15:11:14 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:07:05.720 15:11:14 -- common/autotest_common.sh@855 -- # local i 00:07:05.720 15:11:14 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:07:05.720 15:11:14 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:07:05.720 15:11:14 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:07:05.720 15:11:14 -- common/autotest_common.sh@859 -- # break 00:07:05.720 15:11:14 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:05.720 15:11:14 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:05.721 15:11:14 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.721 1+0 records in 00:07:05.721 1+0 records out 00:07:05.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286483 s, 14.3 MB/s 00:07:05.721 15:11:14 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.721 15:11:14 -- common/autotest_common.sh@872 -- # size=4096 00:07:05.721 15:11:14 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.721 15:11:14 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:07:05.721 15:11:14 -- common/autotest_common.sh@875 -- # return 0 00:07:05.721 15:11:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.721 15:11:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.721 15:11:14 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.721 15:11:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.721 15:11:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.979 { 00:07:05.979 "nbd_device": "/dev/nbd0", 00:07:05.979 "bdev_name": "Malloc0" 00:07:05.979 }, 00:07:05.979 { 00:07:05.979 "nbd_device": "/dev/nbd1", 00:07:05.979 "bdev_name": "Malloc1" 00:07:05.979 } 00:07:05.979 ]' 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.979 { 00:07:05.979 "nbd_device": "/dev/nbd0", 00:07:05.979 "bdev_name": "Malloc0" 00:07:05.979 }, 00:07:05.979 { 00:07:05.979 "nbd_device": "/dev/nbd1", 00:07:05.979 "bdev_name": "Malloc1" 00:07:05.979 } 00:07:05.979 ]' 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.979 /dev/nbd1' 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.979 /dev/nbd1' 00:07:05.979 15:11:15 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@65 -- # count=2 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@66 -- # echo 2 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@95 -- # count=2 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:05.979 256+0 records in 00:07:05.979 256+0 records out 00:07:05.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00907174 s, 116 MB/s 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.979 256+0 records in 00:07:05.979 256+0 records out 00:07:05.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248074 s, 42.3 MB/s 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.979 256+0 records in 00:07:05.979 256+0 records out 00:07:05.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286786 s, 36.6 MB/s 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.979 15:11:15 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:05.980 15:11:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.980 15:11:15 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:05.980 15:11:15 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:05.980 15:11:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.980 15:11:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:05.980 15:11:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.980 15:11:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:05.980 15:11:15 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.980 15:11:15 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:05.980 15:11:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.980 15:11:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.980 15:11:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.980 15:11:15 -- bdev/nbd_common.sh@51 -- # local i 00:07:05.980 15:11:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.980 15:11:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.238 15:11:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.238 15:11:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.238 15:11:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.238 15:11:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.238 15:11:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.238 15:11:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.238 15:11:15 -- bdev/nbd_common.sh@41 -- # break 00:07:06.238 15:11:15 -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.238 15:11:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.238 15:11:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.495 15:11:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.753 15:11:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.753 15:11:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.753 15:11:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.753 15:11:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.753 15:11:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.753 15:11:15 -- bdev/nbd_common.sh@41 -- # break 00:07:06.753 15:11:15 -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.753 15:11:15 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.753 15:11:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.753 15:11:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.011 15:11:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.011 15:11:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.011 15:11:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.011 15:11:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.011 15:11:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.011 15:11:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.011 15:11:16 -- bdev/nbd_common.sh@65 -- # true 00:07:07.011 15:11:16 -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.011 15:11:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.011 15:11:16 -- bdev/nbd_common.sh@104 -- # count=0 00:07:07.011 15:11:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:07.011 15:11:16 -- bdev/nbd_common.sh@109 -- # return 0 00:07:07.011 15:11:16 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:07.269 15:11:16 -- event/event.sh@35 -- # sleep 3 00:07:07.527 [2024-04-24 15:11:16.575879] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.527 [2024-04-24 15:11:16.689478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.527 [2024-04-24 15:11:16.689481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.527 [2024-04-24 15:11:16.743817] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:07.527 [2024-04-24 15:11:16.743878] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:10.807 spdk_app_start Round 1 00:07:10.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
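The dd and cmp lines above are the nbd data-verify step of app_repeat: Malloc0 and Malloc1 are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is written through each device with O_DIRECT, then compared back. A minimal sketch of that loop, reusing the commands and sizes from the trace (file name shortened for illustration):

    # write 1 MiB of random data through each exported bdev and read it back for comparison
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$nbd"
    done
    rm nbdrandtest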
00:07:10.807 15:11:19 -- event/event.sh@23 -- # for i in {0..2} 00:07:10.807 15:11:19 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:10.807 15:11:19 -- event/event.sh@25 -- # waitforlisten 59922 /var/tmp/spdk-nbd.sock 00:07:10.807 15:11:19 -- common/autotest_common.sh@817 -- # '[' -z 59922 ']' 00:07:10.807 15:11:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.807 15:11:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:10.807 15:11:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:10.807 15:11:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:10.807 15:11:19 -- common/autotest_common.sh@10 -- # set +x 00:07:10.807 15:11:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:10.807 15:11:19 -- common/autotest_common.sh@850 -- # return 0 00:07:10.807 15:11:19 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.807 Malloc0 00:07:10.807 15:11:19 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.119 Malloc1 00:07:11.119 15:11:20 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.119 15:11:20 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.119 15:11:20 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.119 15:11:20 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:11.120 15:11:20 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.120 15:11:20 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:11.120 15:11:20 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.120 15:11:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.120 15:11:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.120 15:11:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.120 15:11:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.120 15:11:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.120 15:11:20 -- bdev/nbd_common.sh@12 -- # local i 00:07:11.120 15:11:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.120 15:11:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.120 15:11:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:11.378 /dev/nbd0 00:07:11.378 15:11:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:11.378 15:11:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:11.378 15:11:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:07:11.378 15:11:20 -- common/autotest_common.sh@855 -- # local i 00:07:11.378 15:11:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:07:11.378 15:11:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:07:11.378 15:11:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:07:11.378 15:11:20 -- common/autotest_common.sh@859 -- # break 00:07:11.378 15:11:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:11.378 15:11:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:11.378 15:11:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:11.378 1+0 records in 00:07:11.378 1+0 records out 00:07:11.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287259 s, 14.3 MB/s 00:07:11.378 15:11:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.378 15:11:20 -- common/autotest_common.sh@872 -- # size=4096 00:07:11.378 15:11:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.378 15:11:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:07:11.378 15:11:20 -- common/autotest_common.sh@875 -- # return 0 00:07:11.378 15:11:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.378 15:11:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.378 15:11:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:11.637 /dev/nbd1 00:07:11.637 15:11:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:11.637 15:11:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:11.637 15:11:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:07:11.637 15:11:20 -- common/autotest_common.sh@855 -- # local i 00:07:11.637 15:11:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:07:11.637 15:11:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:07:11.637 15:11:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:07:11.637 15:11:20 -- common/autotest_common.sh@859 -- # break 00:07:11.637 15:11:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:11.637 15:11:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:11.637 15:11:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.637 1+0 records in 00:07:11.637 1+0 records out 00:07:11.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291298 s, 14.1 MB/s 00:07:11.637 15:11:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.637 15:11:20 -- common/autotest_common.sh@872 -- # size=4096 00:07:11.637 15:11:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.637 15:11:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:07:11.637 15:11:20 -- common/autotest_common.sh@875 -- # return 0 00:07:11.637 15:11:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.637 15:11:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.637 15:11:20 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.637 15:11:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.637 15:11:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.894 15:11:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:11.894 { 00:07:11.894 "nbd_device": "/dev/nbd0", 00:07:11.894 "bdev_name": "Malloc0" 00:07:11.894 }, 00:07:11.894 { 00:07:11.894 "nbd_device": "/dev/nbd1", 00:07:11.894 "bdev_name": "Malloc1" 00:07:11.894 } 00:07:11.894 ]' 00:07:11.894 15:11:21 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:11.894 { 00:07:11.894 "nbd_device": "/dev/nbd0", 00:07:11.894 "bdev_name": "Malloc0" 00:07:11.894 }, 00:07:11.894 { 00:07:11.894 "nbd_device": "/dev/nbd1", 00:07:11.894 "bdev_name": "Malloc1" 00:07:11.894 } 00:07:11.894 ]' 00:07:11.894 15:11:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@64 -- 
# nbd_disks_name='/dev/nbd0 00:07:12.153 /dev/nbd1' 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:12.153 /dev/nbd1' 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@65 -- # count=2 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@66 -- # echo 2 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@95 -- # count=2 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:12.153 256+0 records in 00:07:12.153 256+0 records out 00:07:12.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00702611 s, 149 MB/s 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:12.153 256+0 records in 00:07:12.153 256+0 records out 00:07:12.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222899 s, 47.0 MB/s 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:12.153 256+0 records in 00:07:12.153 256+0 records out 00:07:12.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268856 s, 39.0 MB/s 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@51 -- # local i 00:07:12.153 
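The nbd_dd_data_verify steps traced above follow a simple pattern: fill a scratch file from /dev/urandom, copy it onto every exported /dev/nbd* device with O_DIRECT, then re-read each device and byte-compare it against the file. Below is a minimal sketch of that pattern, not the helper itself; the scratch path and the two-device list are illustrative assumptions, while the dd and cmp options are the ones visible in the trace.

# Sketch only: assumes /dev/nbd0 and /dev/nbd1 are already exported and writable.
tmp_file=/tmp/nbdrandtest            # scratch file (path is an assumption)
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write it to each device
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                              # verify the first 1 MiB
done
rm -f "$tmp_file"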
15:11:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.153 15:11:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.412 15:11:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.412 15:11:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.412 15:11:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.412 15:11:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.412 15:11:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.412 15:11:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.412 15:11:21 -- bdev/nbd_common.sh@41 -- # break 00:07:12.412 15:11:21 -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.412 15:11:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.412 15:11:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:12.670 15:11:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:12.670 15:11:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:12.670 15:11:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:12.670 15:11:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.670 15:11:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.670 15:11:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:12.670 15:11:21 -- bdev/nbd_common.sh@41 -- # break 00:07:12.670 15:11:21 -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.670 15:11:21 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.670 15:11:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.670 15:11:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.928 15:11:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:12.928 15:11:22 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:12.928 15:11:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.928 15:11:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:12.928 15:11:22 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:12.928 15:11:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.928 15:11:22 -- bdev/nbd_common.sh@65 -- # true 00:07:12.928 15:11:22 -- bdev/nbd_common.sh@65 -- # count=0 00:07:12.928 15:11:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:12.928 15:11:22 -- bdev/nbd_common.sh@104 -- # count=0 00:07:12.928 15:11:22 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:12.928 15:11:22 -- bdev/nbd_common.sh@109 -- # return 0 00:07:12.928 15:11:22 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:13.187 15:11:22 -- event/event.sh@35 -- # sleep 3 00:07:13.444 [2024-04-24 15:11:22.633488] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.717 [2024-04-24 15:11:22.749322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.717 [2024-04-24 15:11:22.749343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.717 [2024-04-24 15:11:22.809358] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:13.717 [2024-04-24 15:11:22.809444] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
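Both waitfornbd and waitfornbd_exit in the trace poll /proc/partitions up to 20 times, one waiting for the device name to appear (then confirming it is usable with a single 4 KiB O_DIRECT read), the other waiting for it to disappear after nbd_stop_disk. A hedged sketch of that polling follows; the retry count matches the trace, but the sleep interval and the scratch path are assumptions and the size check after the read is omitted.

# Sketch of the readiness/teardown polling seen above, not the real helpers.
wait_for_nbd() {
    local nbd_name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            # one direct 4 KiB read proves the device actually serves I/O
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            rm -f /tmp/nbdtest
            return 0
        fi
        sleep 0.1
    done
    return 1
}

wait_for_nbd_exit() {
    local nbd_name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions || return 0   # device is gone
        sleep 0.1
    done
    return 1
}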
00:07:16.292 spdk_app_start Round 2 00:07:16.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:16.292 15:11:25 -- event/event.sh@23 -- # for i in {0..2} 00:07:16.292 15:11:25 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:16.292 15:11:25 -- event/event.sh@25 -- # waitforlisten 59922 /var/tmp/spdk-nbd.sock 00:07:16.292 15:11:25 -- common/autotest_common.sh@817 -- # '[' -z 59922 ']' 00:07:16.292 15:11:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:16.292 15:11:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:16.292 15:11:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:16.292 15:11:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:16.292 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:07:16.549 15:11:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:16.550 15:11:25 -- common/autotest_common.sh@850 -- # return 0 00:07:16.550 15:11:25 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:16.808 Malloc0 00:07:16.808 15:11:25 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.065 Malloc1 00:07:17.065 15:11:26 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.065 15:11:26 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.065 15:11:26 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.065 15:11:26 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.065 15:11:26 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.065 15:11:26 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.065 15:11:26 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.065 15:11:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.065 15:11:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.065 15:11:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.065 15:11:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.065 15:11:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.065 15:11:26 -- bdev/nbd_common.sh@12 -- # local i 00:07:17.065 15:11:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.065 15:11:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.065 15:11:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:17.323 /dev/nbd0 00:07:17.323 15:11:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.323 15:11:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.323 15:11:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:07:17.323 15:11:26 -- common/autotest_common.sh@855 -- # local i 00:07:17.323 15:11:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:07:17.323 15:11:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:07:17.323 15:11:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:07:17.323 15:11:26 -- common/autotest_common.sh@859 -- # break 00:07:17.323 15:11:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:17.323 15:11:26 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:07:17.323 15:11:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.323 1+0 records in 00:07:17.323 1+0 records out 00:07:17.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206716 s, 19.8 MB/s 00:07:17.324 15:11:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.324 15:11:26 -- common/autotest_common.sh@872 -- # size=4096 00:07:17.324 15:11:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.324 15:11:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:07:17.324 15:11:26 -- common/autotest_common.sh@875 -- # return 0 00:07:17.324 15:11:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.324 15:11:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.324 15:11:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:17.582 /dev/nbd1 00:07:17.582 15:11:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:17.582 15:11:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:17.582 15:11:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:07:17.582 15:11:26 -- common/autotest_common.sh@855 -- # local i 00:07:17.582 15:11:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:07:17.582 15:11:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:07:17.582 15:11:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:07:17.582 15:11:26 -- common/autotest_common.sh@859 -- # break 00:07:17.582 15:11:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:17.582 15:11:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:17.582 15:11:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.582 1+0 records in 00:07:17.582 1+0 records out 00:07:17.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373871 s, 11.0 MB/s 00:07:17.582 15:11:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.582 15:11:26 -- common/autotest_common.sh@872 -- # size=4096 00:07:17.582 15:11:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.582 15:11:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:07:17.582 15:11:26 -- common/autotest_common.sh@875 -- # return 0 00:07:17.582 15:11:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.582 15:11:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.582 15:11:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.582 15:11:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.582 15:11:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:18.149 { 00:07:18.149 "nbd_device": "/dev/nbd0", 00:07:18.149 "bdev_name": "Malloc0" 00:07:18.149 }, 00:07:18.149 { 00:07:18.149 "nbd_device": "/dev/nbd1", 00:07:18.149 "bdev_name": "Malloc1" 00:07:18.149 } 00:07:18.149 ]' 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:18.149 { 00:07:18.149 "nbd_device": "/dev/nbd0", 00:07:18.149 "bdev_name": "Malloc0" 00:07:18.149 }, 00:07:18.149 { 00:07:18.149 
"nbd_device": "/dev/nbd1", 00:07:18.149 "bdev_name": "Malloc1" 00:07:18.149 } 00:07:18.149 ]' 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:18.149 /dev/nbd1' 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:18.149 /dev/nbd1' 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@65 -- # count=2 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@66 -- # echo 2 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@95 -- # count=2 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:18.149 256+0 records in 00:07:18.149 256+0 records out 00:07:18.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00583083 s, 180 MB/s 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.149 256+0 records in 00:07:18.149 256+0 records out 00:07:18.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211181 s, 49.7 MB/s 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.149 256+0 records in 00:07:18.149 256+0 records out 00:07:18.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283594 s, 37.0 MB/s 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:07:18.149 15:11:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@51 -- # local i 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.149 15:11:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.408 15:11:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.408 15:11:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.408 15:11:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.408 15:11:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.408 15:11:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.408 15:11:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.408 15:11:27 -- bdev/nbd_common.sh@41 -- # break 00:07:18.408 15:11:27 -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.408 15:11:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.408 15:11:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.666 15:11:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.666 15:11:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.666 15:11:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.666 15:11:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.666 15:11:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.666 15:11:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.666 15:11:27 -- bdev/nbd_common.sh@41 -- # break 00:07:18.666 15:11:27 -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.666 15:11:27 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.666 15:11:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.666 15:11:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.924 15:11:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:18.924 15:11:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:18.924 15:11:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.924 15:11:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:18.924 15:11:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:18.924 15:11:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.924 15:11:28 -- bdev/nbd_common.sh@65 -- # true 00:07:18.924 15:11:28 -- bdev/nbd_common.sh@65 -- # count=0 00:07:18.924 15:11:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:18.924 15:11:28 -- bdev/nbd_common.sh@104 -- # count=0 00:07:18.924 15:11:28 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:18.924 15:11:28 -- bdev/nbd_common.sh@109 -- # return 0 00:07:18.924 15:11:28 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:19.182 15:11:28 -- event/event.sh@35 -- # sleep 3 00:07:19.440 [2024-04-24 15:11:28.605721] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.699 [2024-04-24 15:11:28.718908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.699 [2024-04-24 15:11:28.718917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.699 [2024-04-24 15:11:28.778286] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
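nbd_get_count, used twice above (returning 2 while the disks are attached and 0 after nbd_stop_disk), asks the target for its disk list over the RPC socket and counts /dev/nbd entries in the JSON. The sketch below uses only the commands visible in the trace; the trailing "|| true" mirrors the trace's handling of grep -c exiting non-zero when there are no matches.

# Sketch: rpc.py path and socket are the ones used in this run.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_sock=/var/tmp/spdk-nbd.sock

nbd_disks_json=$("$rpc_py" -s "$rpc_sock" nbd_get_disks)
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)   # 0 matches -> grep fails, count stays 0
echo "nbd devices attached: $count"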
00:07:19.699 [2024-04-24 15:11:28.778349] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:22.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:22.291 15:11:31 -- event/event.sh@38 -- # waitforlisten 59922 /var/tmp/spdk-nbd.sock 00:07:22.291 15:11:31 -- common/autotest_common.sh@817 -- # '[' -z 59922 ']' 00:07:22.291 15:11:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:22.291 15:11:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:22.291 15:11:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:22.291 15:11:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:22.291 15:11:31 -- common/autotest_common.sh@10 -- # set +x 00:07:22.551 15:11:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:22.551 15:11:31 -- common/autotest_common.sh@850 -- # return 0 00:07:22.551 15:11:31 -- event/event.sh@39 -- # killprocess 59922 00:07:22.551 15:11:31 -- common/autotest_common.sh@936 -- # '[' -z 59922 ']' 00:07:22.551 15:11:31 -- common/autotest_common.sh@940 -- # kill -0 59922 00:07:22.551 15:11:31 -- common/autotest_common.sh@941 -- # uname 00:07:22.551 15:11:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:22.551 15:11:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59922 00:07:22.551 killing process with pid 59922 00:07:22.551 15:11:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:22.551 15:11:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:22.551 15:11:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59922' 00:07:22.551 15:11:31 -- common/autotest_common.sh@955 -- # kill 59922 00:07:22.551 15:11:31 -- common/autotest_common.sh@960 -- # wait 59922 00:07:22.809 spdk_app_start is called in Round 0. 00:07:22.809 Shutdown signal received, stop current app iteration 00:07:22.809 Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 reinitialization... 00:07:22.809 spdk_app_start is called in Round 1. 00:07:22.809 Shutdown signal received, stop current app iteration 00:07:22.809 Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 reinitialization... 00:07:22.809 spdk_app_start is called in Round 2. 00:07:22.809 Shutdown signal received, stop current app iteration 00:07:22.809 Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 reinitialization... 00:07:22.809 spdk_app_start is called in Round 3. 
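killprocess, traced above for pid 59922, is defensive: it checks that the pid still exists and that its command name is an SPDK reactor rather than a sudo wrapper before signalling it, then kills it and waits for it to exit. A minimal sketch of that flow is below; the commands are the ones in the trace, but the sudo branch and error handling are simplified.

# Sketch of the killprocess pattern above; assumes a Linux host and that the
# target was launched from this shell (so `wait` can reap it).
kill_spdk_process() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1                       # still running?
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
    [ "$process_name" = sudo ] && return 1           # simplified: never signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}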
00:07:22.809 Shutdown signal received, stop current app iteration 00:07:22.809 15:11:31 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:22.809 15:11:31 -- event/event.sh@42 -- # return 0 00:07:22.809 00:07:22.809 real 0m18.777s 00:07:22.809 user 0m42.058s 00:07:22.809 sys 0m2.952s 00:07:22.809 15:11:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:22.809 15:11:31 -- common/autotest_common.sh@10 -- # set +x 00:07:22.809 ************************************ 00:07:22.809 END TEST app_repeat 00:07:22.809 ************************************ 00:07:22.809 15:11:31 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:22.809 15:11:31 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:22.809 15:11:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:22.809 15:11:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.809 15:11:31 -- common/autotest_common.sh@10 -- # set +x 00:07:23.068 ************************************ 00:07:23.068 START TEST cpu_locks 00:07:23.068 ************************************ 00:07:23.068 15:11:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:23.068 * Looking for test storage... 00:07:23.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:23.068 15:11:32 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:23.068 15:11:32 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:23.068 15:11:32 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:23.068 15:11:32 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:23.068 15:11:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:23.068 15:11:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.068 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:07:23.068 ************************************ 00:07:23.068 START TEST default_locks 00:07:23.068 ************************************ 00:07:23.068 15:11:32 -- common/autotest_common.sh@1111 -- # default_locks 00:07:23.068 15:11:32 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60358 00:07:23.068 15:11:32 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:23.068 15:11:32 -- event/cpu_locks.sh@47 -- # waitforlisten 60358 00:07:23.068 15:11:32 -- common/autotest_common.sh@817 -- # '[' -z 60358 ']' 00:07:23.068 15:11:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.068 15:11:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:23.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.068 15:11:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.068 15:11:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:23.068 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:07:23.068 [2024-04-24 15:11:32.293920] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:07:23.068 [2024-04-24 15:11:32.294032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60358 ] 00:07:23.327 [2024-04-24 15:11:32.432148] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.327 [2024-04-24 15:11:32.555260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.261 15:11:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:24.261 15:11:33 -- common/autotest_common.sh@850 -- # return 0 00:07:24.261 15:11:33 -- event/cpu_locks.sh@49 -- # locks_exist 60358 00:07:24.261 15:11:33 -- event/cpu_locks.sh@22 -- # lslocks -p 60358 00:07:24.261 15:11:33 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.519 15:11:33 -- event/cpu_locks.sh@50 -- # killprocess 60358 00:07:24.519 15:11:33 -- common/autotest_common.sh@936 -- # '[' -z 60358 ']' 00:07:24.519 15:11:33 -- common/autotest_common.sh@940 -- # kill -0 60358 00:07:24.519 15:11:33 -- common/autotest_common.sh@941 -- # uname 00:07:24.519 15:11:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:24.519 15:11:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60358 00:07:24.519 15:11:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:24.519 15:11:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:24.519 killing process with pid 60358 00:07:24.519 15:11:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60358' 00:07:24.519 15:11:33 -- common/autotest_common.sh@955 -- # kill 60358 00:07:24.519 15:11:33 -- common/autotest_common.sh@960 -- # wait 60358 00:07:25.085 15:11:34 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60358 00:07:25.085 15:11:34 -- common/autotest_common.sh@638 -- # local es=0 00:07:25.085 15:11:34 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 60358 00:07:25.085 15:11:34 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:25.085 15:11:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:25.085 15:11:34 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:25.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.085 15:11:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:25.085 15:11:34 -- common/autotest_common.sh@641 -- # waitforlisten 60358 00:07:25.085 15:11:34 -- common/autotest_common.sh@817 -- # '[' -z 60358 ']' 00:07:25.085 15:11:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.085 15:11:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:25.085 15:11:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:25.085 15:11:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:25.085 15:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.085 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (60358) - No such process 00:07:25.085 ERROR: process (pid: 60358) is no longer running 00:07:25.085 15:11:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:25.085 15:11:34 -- common/autotest_common.sh@850 -- # return 1 00:07:25.085 15:11:34 -- common/autotest_common.sh@641 -- # es=1 00:07:25.085 15:11:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:25.085 15:11:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:25.085 15:11:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:25.085 15:11:34 -- event/cpu_locks.sh@54 -- # no_locks 00:07:25.085 15:11:34 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:25.086 15:11:34 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:25.086 15:11:34 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:25.086 00:07:25.086 real 0m1.962s 00:07:25.086 user 0m2.079s 00:07:25.086 sys 0m0.586s 00:07:25.086 15:11:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:25.086 15:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.086 ************************************ 00:07:25.086 END TEST default_locks 00:07:25.086 ************************************ 00:07:25.086 15:11:34 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:25.086 15:11:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:25.086 15:11:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.086 15:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.086 ************************************ 00:07:25.086 START TEST default_locks_via_rpc 00:07:25.086 ************************************ 00:07:25.086 15:11:34 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:07:25.086 15:11:34 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60414 00:07:25.086 15:11:34 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:25.086 15:11:34 -- event/cpu_locks.sh@63 -- # waitforlisten 60414 00:07:25.086 15:11:34 -- common/autotest_common.sh@817 -- # '[' -z 60414 ']' 00:07:25.086 15:11:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.086 15:11:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:25.086 15:11:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.086 15:11:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:25.086 15:11:34 -- common/autotest_common.sh@10 -- # set +x 00:07:25.345 [2024-04-24 15:11:34.377324] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
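The default_locks test above asserts that a target started with -m 0x1 really holds its per-core lock: locks_exist runs lslocks for the target's pid and greps for an entry containing spdk_cpu_lock. A sketch of that assertion, assuming the target is launched from the same shell so $! is its pid and abbreviating the readiness wait:

# Sketch: spdk_tgt path and core mask are the ones used above.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$spdk_tgt" -m 0x1 &               # claim core 0
spdk_tgt_pid=$!
# ... wait for /var/tmp/spdk.sock to accept RPCs ...
if lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock; then
    echo "pid $spdk_tgt_pid holds its core lock"
fi
kill "$spdk_tgt_pid"; wait "$spdk_tgt_pid"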
00:07:25.345 [2024-04-24 15:11:34.377471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60414 ] 00:07:25.345 [2024-04-24 15:11:34.516739] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.603 [2024-04-24 15:11:34.646556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.171 15:11:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:26.171 15:11:35 -- common/autotest_common.sh@850 -- # return 0 00:07:26.171 15:11:35 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:26.171 15:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:26.171 15:11:35 -- common/autotest_common.sh@10 -- # set +x 00:07:26.171 15:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:26.171 15:11:35 -- event/cpu_locks.sh@67 -- # no_locks 00:07:26.171 15:11:35 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:26.171 15:11:35 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:26.171 15:11:35 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:26.171 15:11:35 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:26.171 15:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:26.171 15:11:35 -- common/autotest_common.sh@10 -- # set +x 00:07:26.171 15:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:26.171 15:11:35 -- event/cpu_locks.sh@71 -- # locks_exist 60414 00:07:26.171 15:11:35 -- event/cpu_locks.sh@22 -- # lslocks -p 60414 00:07:26.171 15:11:35 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:26.738 15:11:35 -- event/cpu_locks.sh@73 -- # killprocess 60414 00:07:26.738 15:11:35 -- common/autotest_common.sh@936 -- # '[' -z 60414 ']' 00:07:26.738 15:11:35 -- common/autotest_common.sh@940 -- # kill -0 60414 00:07:26.738 15:11:35 -- common/autotest_common.sh@941 -- # uname 00:07:26.738 15:11:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:26.738 15:11:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60414 00:07:26.738 15:11:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:26.738 15:11:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:26.738 killing process with pid 60414 00:07:26.738 15:11:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60414' 00:07:26.738 15:11:35 -- common/autotest_common.sh@955 -- # kill 60414 00:07:26.738 15:11:35 -- common/autotest_common.sh@960 -- # wait 60414 00:07:27.305 00:07:27.305 real 0m1.980s 00:07:27.305 user 0m2.164s 00:07:27.305 sys 0m0.565s 00:07:27.305 15:11:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:27.305 ************************************ 00:07:27.305 END TEST default_locks_via_rpc 00:07:27.305 ************************************ 00:07:27.305 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:07:27.306 15:11:36 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:27.306 15:11:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:27.306 15:11:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.306 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:07:27.306 ************************************ 00:07:27.306 START TEST non_locking_app_on_locked_coremask 00:07:27.306 ************************************ 00:07:27.306 15:11:36 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:07:27.306 15:11:36 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60469 00:07:27.306 15:11:36 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:27.306 15:11:36 -- event/cpu_locks.sh@81 -- # waitforlisten 60469 /var/tmp/spdk.sock 00:07:27.306 15:11:36 -- common/autotest_common.sh@817 -- # '[' -z 60469 ']' 00:07:27.306 15:11:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.306 15:11:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:27.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.306 15:11:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.306 15:11:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:27.306 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:07:27.306 [2024-04-24 15:11:36.478480] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:07:27.306 [2024-04-24 15:11:36.478579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60469 ] 00:07:27.564 [2024-04-24 15:11:36.619563] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.564 [2024-04-24 15:11:36.744136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.498 15:11:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:28.498 15:11:37 -- common/autotest_common.sh@850 -- # return 0 00:07:28.498 15:11:37 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:28.498 15:11:37 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60485 00:07:28.498 15:11:37 -- event/cpu_locks.sh@85 -- # waitforlisten 60485 /var/tmp/spdk2.sock 00:07:28.498 15:11:37 -- common/autotest_common.sh@817 -- # '[' -z 60485 ']' 00:07:28.498 15:11:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:28.498 15:11:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:28.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:28.498 15:11:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:28.498 15:11:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:28.498 15:11:37 -- common/autotest_common.sh@10 -- # set +x 00:07:28.498 [2024-04-24 15:11:37.474848] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:07:28.498 [2024-04-24 15:11:37.474966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60485 ] 00:07:28.498 [2024-04-24 15:11:37.616971] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
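The "CPU core locks deactivated." notice above comes from starting the second target with --disable-cpumask-locks on its own RPC socket, which is what lets it share core 0 with the instance that already holds the lock; the same behaviour is toggled at runtime in the previous test with the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs. A sketch of the two-instance arrangement, using the paths, mask and socket names visible in the trace and abbreviating the readiness waits:

# Sketch: both instances pinned to core 0; the second skips taking the core lock.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$spdk_tgt" -m 0x1 &                                                 # first instance claims core 0
first_pid=$!
# ... wait for /var/tmp/spdk.sock ...

"$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # second instance, no core lock
second_pid=$!
# ... wait for /var/tmp/spdk2.sock ...

kill "$second_pid" "$first_pid"
wait "$second_pid" "$first_pid"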
00:07:28.498 [2024-04-24 15:11:37.617050] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.757 [2024-04-24 15:11:37.857439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.325 15:11:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:29.325 15:11:38 -- common/autotest_common.sh@850 -- # return 0 00:07:29.325 15:11:38 -- event/cpu_locks.sh@87 -- # locks_exist 60469 00:07:29.325 15:11:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:29.325 15:11:38 -- event/cpu_locks.sh@22 -- # lslocks -p 60469 00:07:30.262 15:11:39 -- event/cpu_locks.sh@89 -- # killprocess 60469 00:07:30.262 15:11:39 -- common/autotest_common.sh@936 -- # '[' -z 60469 ']' 00:07:30.262 15:11:39 -- common/autotest_common.sh@940 -- # kill -0 60469 00:07:30.262 15:11:39 -- common/autotest_common.sh@941 -- # uname 00:07:30.262 15:11:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:30.262 15:11:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60469 00:07:30.262 15:11:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:30.262 killing process with pid 60469 00:07:30.262 15:11:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:30.262 15:11:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60469' 00:07:30.262 15:11:39 -- common/autotest_common.sh@955 -- # kill 60469 00:07:30.262 15:11:39 -- common/autotest_common.sh@960 -- # wait 60469 00:07:31.196 15:11:40 -- event/cpu_locks.sh@90 -- # killprocess 60485 00:07:31.196 15:11:40 -- common/autotest_common.sh@936 -- # '[' -z 60485 ']' 00:07:31.196 15:11:40 -- common/autotest_common.sh@940 -- # kill -0 60485 00:07:31.196 15:11:40 -- common/autotest_common.sh@941 -- # uname 00:07:31.196 15:11:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:31.196 15:11:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60485 00:07:31.196 15:11:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:31.196 15:11:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:31.196 15:11:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60485' 00:07:31.196 killing process with pid 60485 00:07:31.196 15:11:40 -- common/autotest_common.sh@955 -- # kill 60485 00:07:31.196 15:11:40 -- common/autotest_common.sh@960 -- # wait 60485 00:07:31.454 00:07:31.454 real 0m4.279s 00:07:31.454 user 0m4.750s 00:07:31.454 sys 0m1.144s 00:07:31.454 15:11:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:31.454 15:11:40 -- common/autotest_common.sh@10 -- # set +x 00:07:31.454 ************************************ 00:07:31.454 END TEST non_locking_app_on_locked_coremask 00:07:31.454 ************************************ 00:07:31.713 15:11:40 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:31.713 15:11:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:31.713 15:11:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.713 15:11:40 -- common/autotest_common.sh@10 -- # set +x 00:07:31.713 ************************************ 00:07:31.713 START TEST locking_app_on_unlocked_coremask 00:07:31.713 ************************************ 00:07:31.713 15:11:40 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:07:31.713 15:11:40 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60562 00:07:31.713 15:11:40 -- event/cpu_locks.sh@99 -- # waitforlisten 60562 /var/tmp/spdk.sock 00:07:31.713 
15:11:40 -- common/autotest_common.sh@817 -- # '[' -z 60562 ']' 00:07:31.713 15:11:40 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:31.713 15:11:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.713 15:11:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:31.713 15:11:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.713 15:11:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:31.713 15:11:40 -- common/autotest_common.sh@10 -- # set +x 00:07:31.713 [2024-04-24 15:11:40.858469] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:07:31.713 [2024-04-24 15:11:40.858560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60562 ] 00:07:31.971 [2024-04-24 15:11:40.993697] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:31.971 [2024-04-24 15:11:40.993753] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.971 [2024-04-24 15:11:41.123070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.913 15:11:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:32.913 15:11:41 -- common/autotest_common.sh@850 -- # return 0 00:07:32.913 15:11:41 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60578 00:07:32.913 15:11:41 -- event/cpu_locks.sh@103 -- # waitforlisten 60578 /var/tmp/spdk2.sock 00:07:32.913 15:11:41 -- common/autotest_common.sh@817 -- # '[' -z 60578 ']' 00:07:32.913 15:11:41 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:32.913 15:11:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.913 15:11:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:32.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.913 15:11:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.913 15:11:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:32.913 15:11:41 -- common/autotest_common.sh@10 -- # set +x 00:07:32.913 [2024-04-24 15:11:41.893132] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
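waitforlisten appears before nearly every assertion in these tests: it blocks, up to max_retries (100 here), until the given pid is alive and its RPC socket answers. The real helper lives in autotest_common.sh and may probe the socket differently; the stand-in below is a rough sketch, and using rpc_get_methods (a standard SPDK RPC) as the probe is an assumption.

# Rough stand-in for waitforlisten, not the real implementation.
wait_for_listen() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for (( i = 0; i < 100; i++ )); do            # max_retries=100, as in the trace
        kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
               >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}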
00:07:32.913 [2024-04-24 15:11:41.893228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60578 ] 00:07:32.913 [2024-04-24 15:11:42.034006] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.172 [2024-04-24 15:11:42.268474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.737 15:11:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:33.737 15:11:42 -- common/autotest_common.sh@850 -- # return 0 00:07:33.737 15:11:42 -- event/cpu_locks.sh@105 -- # locks_exist 60578 00:07:33.737 15:11:42 -- event/cpu_locks.sh@22 -- # lslocks -p 60578 00:07:33.737 15:11:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:34.672 15:11:43 -- event/cpu_locks.sh@107 -- # killprocess 60562 00:07:34.672 15:11:43 -- common/autotest_common.sh@936 -- # '[' -z 60562 ']' 00:07:34.672 15:11:43 -- common/autotest_common.sh@940 -- # kill -0 60562 00:07:34.672 15:11:43 -- common/autotest_common.sh@941 -- # uname 00:07:34.672 15:11:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:34.672 15:11:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60562 00:07:34.672 15:11:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:34.672 15:11:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:34.672 killing process with pid 60562 00:07:34.672 15:11:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60562' 00:07:34.672 15:11:43 -- common/autotest_common.sh@955 -- # kill 60562 00:07:34.672 15:11:43 -- common/autotest_common.sh@960 -- # wait 60562 00:07:35.616 15:11:44 -- event/cpu_locks.sh@108 -- # killprocess 60578 00:07:35.616 15:11:44 -- common/autotest_common.sh@936 -- # '[' -z 60578 ']' 00:07:35.616 15:11:44 -- common/autotest_common.sh@940 -- # kill -0 60578 00:07:35.616 15:11:44 -- common/autotest_common.sh@941 -- # uname 00:07:35.616 15:11:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:35.616 15:11:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60578 00:07:35.616 15:11:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:35.616 15:11:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:35.616 15:11:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60578' 00:07:35.616 killing process with pid 60578 00:07:35.616 15:11:44 -- common/autotest_common.sh@955 -- # kill 60578 00:07:35.616 15:11:44 -- common/autotest_common.sh@960 -- # wait 60578 00:07:35.875 00:07:35.875 real 0m4.126s 00:07:35.875 user 0m4.594s 00:07:35.875 sys 0m1.072s 00:07:35.875 15:11:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:35.875 15:11:44 -- common/autotest_common.sh@10 -- # set +x 00:07:35.875 ************************************ 00:07:35.875 END TEST locking_app_on_unlocked_coremask 00:07:35.875 ************************************ 00:07:35.875 15:11:44 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:35.875 15:11:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:35.875 15:11:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.875 15:11:44 -- common/autotest_common.sh@10 -- # set +x 00:07:35.875 ************************************ 00:07:35.875 START TEST locking_app_on_locked_coremask 00:07:35.875 
************************************ 00:07:35.875 15:11:45 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:07:35.875 15:11:45 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60649 00:07:35.875 15:11:45 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:35.875 15:11:45 -- event/cpu_locks.sh@116 -- # waitforlisten 60649 /var/tmp/spdk.sock 00:07:35.875 15:11:45 -- common/autotest_common.sh@817 -- # '[' -z 60649 ']' 00:07:35.875 15:11:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.875 15:11:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:35.875 15:11:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.875 15:11:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:35.875 15:11:45 -- common/autotest_common.sh@10 -- # set +x 00:07:35.875 [2024-04-24 15:11:45.104515] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:07:35.875 [2024-04-24 15:11:45.104626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60649 ] 00:07:36.133 [2024-04-24 15:11:45.244630] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.133 [2024-04-24 15:11:45.374733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.068 15:11:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:37.068 15:11:46 -- common/autotest_common.sh@850 -- # return 0 00:07:37.068 15:11:46 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60665 00:07:37.068 15:11:46 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:37.068 15:11:46 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60665 /var/tmp/spdk2.sock 00:07:37.068 15:11:46 -- common/autotest_common.sh@638 -- # local es=0 00:07:37.068 15:11:46 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 60665 /var/tmp/spdk2.sock 00:07:37.068 15:11:46 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:37.068 15:11:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:37.068 15:11:46 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:37.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.068 15:11:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:37.068 15:11:46 -- common/autotest_common.sh@641 -- # waitforlisten 60665 /var/tmp/spdk2.sock 00:07:37.068 15:11:46 -- common/autotest_common.sh@817 -- # '[' -z 60665 ']' 00:07:37.068 15:11:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.068 15:11:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:37.068 15:11:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.068 15:11:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:37.068 15:11:46 -- common/autotest_common.sh@10 -- # set +x 00:07:37.068 [2024-04-24 15:11:46.197457] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:07:37.068 [2024-04-24 15:11:46.197577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60665 ] 00:07:37.326 [2024-04-24 15:11:46.351779] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60649 has claimed it. 00:07:37.326 [2024-04-24 15:11:46.351857] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:37.892 ERROR: process (pid: 60665) is no longer running 00:07:37.892 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (60665) - No such process 00:07:37.892 15:11:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:37.892 15:11:46 -- common/autotest_common.sh@850 -- # return 1 00:07:37.892 15:11:46 -- common/autotest_common.sh@641 -- # es=1 00:07:37.892 15:11:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:37.892 15:11:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:37.892 15:11:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:37.892 15:11:46 -- event/cpu_locks.sh@122 -- # locks_exist 60649 00:07:37.892 15:11:46 -- event/cpu_locks.sh@22 -- # lslocks -p 60649 00:07:37.892 15:11:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.150 15:11:47 -- event/cpu_locks.sh@124 -- # killprocess 60649 00:07:38.150 15:11:47 -- common/autotest_common.sh@936 -- # '[' -z 60649 ']' 00:07:38.150 15:11:47 -- common/autotest_common.sh@940 -- # kill -0 60649 00:07:38.150 15:11:47 -- common/autotest_common.sh@941 -- # uname 00:07:38.150 15:11:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:38.150 15:11:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60649 00:07:38.150 killing process with pid 60649 00:07:38.150 15:11:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:38.150 15:11:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:38.150 15:11:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60649' 00:07:38.150 15:11:47 -- common/autotest_common.sh@955 -- # kill 60649 00:07:38.150 15:11:47 -- common/autotest_common.sh@960 -- # wait 60649 00:07:38.744 ************************************ 00:07:38.744 END TEST locking_app_on_locked_coremask 00:07:38.744 ************************************ 00:07:38.744 00:07:38.744 real 0m2.693s 00:07:38.744 user 0m3.159s 00:07:38.744 sys 0m0.636s 00:07:38.744 15:11:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.744 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:07:38.744 15:11:47 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:38.744 15:11:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:38.744 15:11:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.744 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:07:38.744 ************************************ 00:07:38.744 START TEST locking_overlapped_coremask 00:07:38.744 ************************************ 00:07:38.744 15:11:47 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:07:38.744 15:11:47 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60720 00:07:38.744 15:11:47 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:38.744 15:11:47 -- event/cpu_locks.sh@133 -- # waitforlisten 60720 /var/tmp/spdk.sock 00:07:38.744 
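The locking_app_on_locked_coremask test above is a negative test: with core 0 already claimed by pid 60649, the second instance is expected to abort with "Cannot create lock on core 0", so its waitforlisten call is wrapped in the NOT helper, which succeeds only when the wrapped command fails. A simplified sketch of that inversion follows; the real NOT in autotest_common.sh also special-cases crash-like exit codes above 128, which is omitted here.

# Simplified sketch of the NOT wrapper seen in the trace.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))      # success only if the wrapped command failed
}

# usage, as in the test above (pid and socket are from this run's trace):
# NOT waitforlisten 60665 /var/tmp/spdk2.sock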
15:11:47 -- common/autotest_common.sh@817 -- # '[' -z 60720 ']' 00:07:38.744 15:11:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.744 15:11:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:38.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.744 15:11:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.744 15:11:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:38.744 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:07:38.744 [2024-04-24 15:11:47.908465] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:07:38.744 [2024-04-24 15:11:47.908835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60720 ] 00:07:39.042 [2024-04-24 15:11:48.049683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.042 [2024-04-24 15:11:48.165589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.042 [2024-04-24 15:11:48.165675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.042 [2024-04-24 15:11:48.165677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.978 15:11:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:39.978 15:11:48 -- common/autotest_common.sh@850 -- # return 0 00:07:39.978 15:11:48 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60738 00:07:39.978 15:11:48 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:39.978 15:11:48 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60738 /var/tmp/spdk2.sock 00:07:39.978 15:11:48 -- common/autotest_common.sh@638 -- # local es=0 00:07:39.978 15:11:48 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 60738 /var/tmp/spdk2.sock 00:07:39.978 15:11:48 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:39.978 15:11:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:39.978 15:11:48 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:39.978 15:11:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:39.978 15:11:48 -- common/autotest_common.sh@641 -- # waitforlisten 60738 /var/tmp/spdk2.sock 00:07:39.978 15:11:48 -- common/autotest_common.sh@817 -- # '[' -z 60738 ']' 00:07:39.978 15:11:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:39.978 15:11:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:39.978 15:11:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:39.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:39.978 15:11:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:39.978 15:11:48 -- common/autotest_common.sh@10 -- # set +x 00:07:39.978 [2024-04-24 15:11:48.982125] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:07:39.978 [2024-04-24 15:11:48.982470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60738 ] 00:07:39.978 [2024-04-24 15:11:49.127293] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60720 has claimed it. 00:07:39.978 [2024-04-24 15:11:49.127383] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:40.545 ERROR: process (pid: 60738) is no longer running 00:07:40.545 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (60738) - No such process 00:07:40.545 15:11:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:40.545 15:11:49 -- common/autotest_common.sh@850 -- # return 1 00:07:40.545 15:11:49 -- common/autotest_common.sh@641 -- # es=1 00:07:40.545 15:11:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:40.545 15:11:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:40.545 15:11:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:40.545 15:11:49 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:40.545 15:11:49 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:40.545 15:11:49 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:40.545 15:11:49 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:40.545 15:11:49 -- event/cpu_locks.sh@141 -- # killprocess 60720 00:07:40.545 15:11:49 -- common/autotest_common.sh@936 -- # '[' -z 60720 ']' 00:07:40.545 15:11:49 -- common/autotest_common.sh@940 -- # kill -0 60720 00:07:40.545 15:11:49 -- common/autotest_common.sh@941 -- # uname 00:07:40.545 15:11:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:40.545 15:11:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60720 00:07:40.545 15:11:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:40.545 15:11:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:40.545 15:11:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60720' 00:07:40.545 killing process with pid 60720 00:07:40.545 15:11:49 -- common/autotest_common.sh@955 -- # kill 60720 00:07:40.545 15:11:49 -- common/autotest_common.sh@960 -- # wait 60720 00:07:41.111 00:07:41.111 real 0m2.327s 00:07:41.111 user 0m6.442s 00:07:41.111 sys 0m0.427s 00:07:41.111 15:11:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:41.111 15:11:50 -- common/autotest_common.sh@10 -- # set +x 00:07:41.111 ************************************ 00:07:41.111 END TEST locking_overlapped_coremask 00:07:41.111 ************************************ 00:07:41.111 15:11:50 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:41.111 15:11:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:41.111 15:11:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.111 15:11:50 -- common/autotest_common.sh@10 -- # set +x 00:07:41.111 ************************************ 00:07:41.111 START TEST locking_overlapped_coremask_via_rpc 00:07:41.111 ************************************ 
00:07:41.111 15:11:50 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:07:41.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.111 15:11:50 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60783 00:07:41.111 15:11:50 -- event/cpu_locks.sh@149 -- # waitforlisten 60783 /var/tmp/spdk.sock 00:07:41.111 15:11:50 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:41.111 15:11:50 -- common/autotest_common.sh@817 -- # '[' -z 60783 ']' 00:07:41.111 15:11:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.111 15:11:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:41.111 15:11:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.111 15:11:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:41.111 15:11:50 -- common/autotest_common.sh@10 -- # set +x 00:07:41.369 [2024-04-24 15:11:50.361175] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:07:41.370 [2024-04-24 15:11:50.361303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60783 ] 00:07:41.370 [2024-04-24 15:11:50.507311] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:41.370 [2024-04-24 15:11:50.507394] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.627 [2024-04-24 15:11:50.640335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.627 [2024-04-24 15:11:50.640454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.627 [2024-04-24 15:11:50.640453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.194 15:11:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:42.194 15:11:51 -- common/autotest_common.sh@850 -- # return 0 00:07:42.194 15:11:51 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60801 00:07:42.194 15:11:51 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:42.194 15:11:51 -- event/cpu_locks.sh@153 -- # waitforlisten 60801 /var/tmp/spdk2.sock 00:07:42.194 15:11:51 -- common/autotest_common.sh@817 -- # '[' -z 60801 ']' 00:07:42.194 15:11:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:42.194 15:11:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:42.194 15:11:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:42.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:42.194 15:11:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:42.194 15:11:51 -- common/autotest_common.sh@10 -- # set +x 00:07:42.194 [2024-04-24 15:11:51.424644] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:07:42.194 [2024-04-24 15:11:51.425014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60801 ] 00:07:42.452 [2024-04-24 15:11:51.576708] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:42.452 [2024-04-24 15:11:51.576795] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.708 [2024-04-24 15:11:51.811587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.708 [2024-04-24 15:11:51.815508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:42.708 [2024-04-24 15:11:51.815511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.274 15:11:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:43.274 15:11:52 -- common/autotest_common.sh@850 -- # return 0 00:07:43.274 15:11:52 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:43.274 15:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.274 15:11:52 -- common/autotest_common.sh@10 -- # set +x 00:07:43.274 15:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.274 15:11:52 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:43.274 15:11:52 -- common/autotest_common.sh@638 -- # local es=0 00:07:43.274 15:11:52 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:43.274 15:11:52 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:07:43.274 15:11:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:43.274 15:11:52 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:07:43.274 15:11:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:43.274 15:11:52 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:43.274 15:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.274 15:11:52 -- common/autotest_common.sh@10 -- # set +x 00:07:43.274 [2024-04-24 15:11:52.427586] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60783 has claimed it. 00:07:43.274 request: 00:07:43.274 { 00:07:43.274 "method": "framework_enable_cpumask_locks", 00:07:43.274 "req_id": 1 00:07:43.274 } 00:07:43.274 Got JSON-RPC error response 00:07:43.274 response: 00:07:43.274 { 00:07:43.274 "code": -32603, 00:07:43.274 "message": "Failed to claim CPU core: 2" 00:07:43.274 } 00:07:43.274 15:11:52 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:07:43.274 15:11:52 -- common/autotest_common.sh@641 -- # es=1 00:07:43.274 15:11:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:43.274 15:11:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:43.274 15:11:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:43.274 15:11:52 -- event/cpu_locks.sh@158 -- # waitforlisten 60783 /var/tmp/spdk.sock 00:07:43.274 15:11:52 -- common/autotest_common.sh@817 -- # '[' -z 60783 ']' 00:07:43.274 15:11:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.274 15:11:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:43.274 15:11:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:43.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.274 15:11:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:43.274 15:11:52 -- common/autotest_common.sh@10 -- # set +x 00:07:43.532 15:11:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:43.532 15:11:52 -- common/autotest_common.sh@850 -- # return 0 00:07:43.532 15:11:52 -- event/cpu_locks.sh@159 -- # waitforlisten 60801 /var/tmp/spdk2.sock 00:07:43.532 15:11:52 -- common/autotest_common.sh@817 -- # '[' -z 60801 ']' 00:07:43.532 15:11:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:43.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:43.532 15:11:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:43.532 15:11:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:43.532 15:11:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:43.532 15:11:52 -- common/autotest_common.sh@10 -- # set +x 00:07:44.099 ************************************ 00:07:44.099 END TEST locking_overlapped_coremask_via_rpc 00:07:44.099 ************************************ 00:07:44.099 15:11:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:44.099 15:11:53 -- common/autotest_common.sh@850 -- # return 0 00:07:44.099 15:11:53 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:44.099 15:11:53 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:44.099 15:11:53 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:44.099 15:11:53 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:44.099 00:07:44.099 real 0m2.855s 00:07:44.099 user 0m1.543s 00:07:44.099 sys 0m0.224s 00:07:44.099 15:11:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:44.099 15:11:53 -- common/autotest_common.sh@10 -- # set +x 00:07:44.099 15:11:53 -- event/cpu_locks.sh@174 -- # cleanup 00:07:44.099 15:11:53 -- event/cpu_locks.sh@15 -- # [[ -z 60783 ]] 00:07:44.099 15:11:53 -- event/cpu_locks.sh@15 -- # killprocess 60783 00:07:44.099 15:11:53 -- common/autotest_common.sh@936 -- # '[' -z 60783 ']' 00:07:44.099 15:11:53 -- common/autotest_common.sh@940 -- # kill -0 60783 00:07:44.099 15:11:53 -- common/autotest_common.sh@941 -- # uname 00:07:44.099 15:11:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:44.099 15:11:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60783 00:07:44.099 killing process with pid 60783 00:07:44.099 15:11:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:44.099 15:11:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:44.099 15:11:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60783' 00:07:44.099 15:11:53 -- common/autotest_common.sh@955 -- # kill 60783 00:07:44.099 15:11:53 -- common/autotest_common.sh@960 -- # wait 60783 00:07:44.694 15:11:53 -- event/cpu_locks.sh@16 -- # [[ -z 60801 ]] 00:07:44.694 15:11:53 -- event/cpu_locks.sh@16 -- # killprocess 60801 00:07:44.694 15:11:53 -- common/autotest_common.sh@936 -- # '[' -z 60801 ']' 00:07:44.694 15:11:53 -- common/autotest_common.sh@940 -- # kill -0 
60801 00:07:44.694 15:11:53 -- common/autotest_common.sh@941 -- # uname 00:07:44.694 15:11:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:44.694 15:11:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60801 00:07:44.694 killing process with pid 60801 00:07:44.694 15:11:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:44.694 15:11:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:44.694 15:11:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60801' 00:07:44.694 15:11:53 -- common/autotest_common.sh@955 -- # kill 60801 00:07:44.694 15:11:53 -- common/autotest_common.sh@960 -- # wait 60801 00:07:44.952 15:11:54 -- event/cpu_locks.sh@18 -- # rm -f 00:07:44.952 Process with pid 60783 is not found 00:07:44.952 Process with pid 60801 is not found 00:07:44.952 15:11:54 -- event/cpu_locks.sh@1 -- # cleanup 00:07:44.952 15:11:54 -- event/cpu_locks.sh@15 -- # [[ -z 60783 ]] 00:07:44.952 15:11:54 -- event/cpu_locks.sh@15 -- # killprocess 60783 00:07:44.952 15:11:54 -- common/autotest_common.sh@936 -- # '[' -z 60783 ']' 00:07:44.952 15:11:54 -- common/autotest_common.sh@940 -- # kill -0 60783 00:07:44.952 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (60783) - No such process 00:07:44.952 15:11:54 -- common/autotest_common.sh@963 -- # echo 'Process with pid 60783 is not found' 00:07:44.952 15:11:54 -- event/cpu_locks.sh@16 -- # [[ -z 60801 ]] 00:07:44.952 15:11:54 -- event/cpu_locks.sh@16 -- # killprocess 60801 00:07:44.952 15:11:54 -- common/autotest_common.sh@936 -- # '[' -z 60801 ']' 00:07:44.952 15:11:54 -- common/autotest_common.sh@940 -- # kill -0 60801 00:07:44.952 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (60801) - No such process 00:07:44.952 15:11:54 -- common/autotest_common.sh@963 -- # echo 'Process with pid 60801 is not found' 00:07:44.952 15:11:54 -- event/cpu_locks.sh@18 -- # rm -f 00:07:44.952 00:07:44.952 real 0m22.021s 00:07:44.952 user 0m38.297s 00:07:44.952 sys 0m5.734s 00:07:44.952 15:11:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:44.952 ************************************ 00:07:44.952 END TEST cpu_locks 00:07:44.952 ************************************ 00:07:44.952 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:07:44.952 ************************************ 00:07:44.952 END TEST event 00:07:44.952 ************************************ 00:07:44.952 00:07:44.952 real 0m51.690s 00:07:44.952 user 1m40.144s 00:07:44.952 sys 0m9.683s 00:07:44.952 15:11:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:44.952 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:07:44.952 15:11:54 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:44.952 15:11:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:44.952 15:11:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.952 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:07:45.209 ************************************ 00:07:45.209 START TEST thread 00:07:45.209 ************************************ 00:07:45.209 15:11:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:45.209 * Looking for test storage... 
00:07:45.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:45.209 15:11:54 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:45.209 15:11:54 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:45.209 15:11:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.209 15:11:54 -- common/autotest_common.sh@10 -- # set +x 00:07:45.209 ************************************ 00:07:45.209 START TEST thread_poller_perf 00:07:45.209 ************************************ 00:07:45.209 15:11:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:45.209 [2024-04-24 15:11:54.409831] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:07:45.209 [2024-04-24 15:11:54.409979] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60938 ] 00:07:45.467 [2024-04-24 15:11:54.553400] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.468 [2024-04-24 15:11:54.673347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.468 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:46.842 ====================================== 00:07:46.842 busy:2211688103 (cyc) 00:07:46.842 total_run_count: 315000 00:07:46.842 tsc_hz: 2200000000 (cyc) 00:07:46.842 ====================================== 00:07:46.842 poller_cost: 7021 (cyc), 3191 (nsec) 00:07:46.842 00:07:46.842 real 0m1.408s 00:07:46.842 user 0m1.239s 00:07:46.842 sys 0m0.060s 00:07:46.842 15:11:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:46.842 15:11:55 -- common/autotest_common.sh@10 -- # set +x 00:07:46.842 ************************************ 00:07:46.842 END TEST thread_poller_perf 00:07:46.842 ************************************ 00:07:46.842 15:11:55 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:46.842 15:11:55 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:46.842 15:11:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.842 15:11:55 -- common/autotest_common.sh@10 -- # set +x 00:07:46.842 ************************************ 00:07:46.842 START TEST thread_poller_perf 00:07:46.842 ************************************ 00:07:46.842 15:11:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:46.842 [2024-04-24 15:11:55.924142] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:07:46.842 [2024-04-24 15:11:55.924265] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60978 ] 00:07:46.842 [2024-04-24 15:11:56.056380] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.101 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:47.101 [2024-04-24 15:11:56.186539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.478 ====================================== 00:07:48.478 busy:2201929846 (cyc) 00:07:48.478 total_run_count: 4089000 00:07:48.478 tsc_hz: 2200000000 (cyc) 00:07:48.478 ====================================== 00:07:48.478 poller_cost: 538 (cyc), 244 (nsec) 00:07:48.478 00:07:48.478 real 0m1.398s 00:07:48.478 user 0m1.237s 00:07:48.478 sys 0m0.051s 00:07:48.478 15:11:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:48.478 ************************************ 00:07:48.478 END TEST thread_poller_perf 00:07:48.478 ************************************ 00:07:48.478 15:11:57 -- common/autotest_common.sh@10 -- # set +x 00:07:48.478 15:11:57 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:48.478 ************************************ 00:07:48.478 END TEST thread 00:07:48.478 ************************************ 00:07:48.478 00:07:48.478 real 0m3.104s 00:07:48.478 user 0m2.583s 00:07:48.478 sys 0m0.275s 00:07:48.478 15:11:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:48.478 15:11:57 -- common/autotest_common.sh@10 -- # set +x 00:07:48.478 15:11:57 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:48.478 15:11:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:48.478 15:11:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.478 15:11:57 -- common/autotest_common.sh@10 -- # set +x 00:07:48.478 ************************************ 00:07:48.478 START TEST accel 00:07:48.478 ************************************ 00:07:48.478 15:11:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:48.478 * Looking for test storage... 00:07:48.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:48.478 15:11:57 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:48.478 15:11:57 -- accel/accel.sh@82 -- # get_expected_opcs 00:07:48.478 15:11:57 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:48.478 15:11:57 -- accel/accel.sh@62 -- # spdk_tgt_pid=61058 00:07:48.478 15:11:57 -- accel/accel.sh@63 -- # waitforlisten 61058 00:07:48.478 15:11:57 -- common/autotest_common.sh@817 -- # '[' -z 61058 ']' 00:07:48.478 15:11:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.478 15:11:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:48.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.478 15:11:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.478 15:11:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:48.478 15:11:57 -- common/autotest_common.sh@10 -- # set +x 00:07:48.478 15:11:57 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:48.478 15:11:57 -- accel/accel.sh@61 -- # build_accel_config 00:07:48.478 15:11:57 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.478 15:11:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.478 15:11:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.478 15:11:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.478 15:11:57 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.478 15:11:57 -- accel/accel.sh@40 -- # local IFS=, 00:07:48.478 15:11:57 -- accel/accel.sh@41 -- # jq -r . 
00:07:48.478 [2024-04-24 15:11:57.608604] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:07:48.478 [2024-04-24 15:11:57.608702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61058 ] 00:07:48.736 [2024-04-24 15:11:57.743708] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.736 [2024-04-24 15:11:57.886036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.301 15:11:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:49.301 15:11:58 -- common/autotest_common.sh@850 -- # return 0 00:07:49.301 15:11:58 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:49.301 15:11:58 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:49.301 15:11:58 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:49.301 15:11:58 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:49.301 15:11:58 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:49.301 15:11:58 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:49.301 15:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:49.301 15:11:58 -- common/autotest_common.sh@10 -- # set +x 00:07:49.301 15:11:58 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:49.301 15:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:49.559 15:11:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # IFS== 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # read -r opc module 00:07:49.559 15:11:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:49.559 15:11:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # IFS== 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # read -r opc module 00:07:49.559 15:11:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:49.559 15:11:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # IFS== 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # read -r opc module 00:07:49.559 15:11:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:49.559 15:11:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # IFS== 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # read -r opc module 00:07:49.559 15:11:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:49.559 15:11:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # IFS== 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # read -r opc module 00:07:49.559 15:11:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:49.559 15:11:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # IFS== 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # read -r opc module 00:07:49.559 15:11:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:49.559 15:11:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # IFS== 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # read -r opc module 00:07:49.559 15:11:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:49.559 15:11:58 -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # IFS== 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # read -r opc module 00:07:49.559 15:11:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:49.559 15:11:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # IFS== 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # read -r opc module 00:07:49.559 15:11:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:49.559 15:11:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # IFS== 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # read -r opc module 00:07:49.559 15:11:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:49.559 15:11:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # IFS== 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # read -r opc module 00:07:49.559 15:11:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:49.559 15:11:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # IFS== 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # read -r opc module 00:07:49.559 15:11:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:49.559 15:11:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # IFS== 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # read -r opc module 00:07:49.559 15:11:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:49.559 15:11:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # IFS== 00:07:49.559 15:11:58 -- accel/accel.sh@72 -- # read -r opc module 00:07:49.559 15:11:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:49.559 15:11:58 -- accel/accel.sh@75 -- # killprocess 61058 00:07:49.559 15:11:58 -- common/autotest_common.sh@936 -- # '[' -z 61058 ']' 00:07:49.559 15:11:58 -- common/autotest_common.sh@940 -- # kill -0 61058 00:07:49.559 15:11:58 -- common/autotest_common.sh@941 -- # uname 00:07:49.559 15:11:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:49.559 15:11:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61058 00:07:49.559 15:11:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:49.559 15:11:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:49.559 15:11:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61058' 00:07:49.559 killing process with pid 61058 00:07:49.559 15:11:58 -- common/autotest_common.sh@955 -- # kill 61058 00:07:49.559 15:11:58 -- common/autotest_common.sh@960 -- # wait 61058 00:07:49.817 15:11:59 -- accel/accel.sh@76 -- # trap - ERR 00:07:49.817 15:11:59 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:49.817 15:11:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:49.817 15:11:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.817 15:11:59 -- common/autotest_common.sh@10 -- # set +x 00:07:50.086 15:11:59 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:07:50.086 15:11:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:50.086 15:11:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.086 15:11:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.086 15:11:59 
-- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.086 15:11:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.086 15:11:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.086 15:11:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.086 15:11:59 -- accel/accel.sh@40 -- # local IFS=, 00:07:50.086 15:11:59 -- accel/accel.sh@41 -- # jq -r . 00:07:50.086 15:11:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:50.086 15:11:59 -- common/autotest_common.sh@10 -- # set +x 00:07:50.086 15:11:59 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:50.086 15:11:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:50.086 15:11:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.086 15:11:59 -- common/autotest_common.sh@10 -- # set +x 00:07:50.086 ************************************ 00:07:50.086 START TEST accel_missing_filename 00:07:50.086 ************************************ 00:07:50.086 15:11:59 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:07:50.086 15:11:59 -- common/autotest_common.sh@638 -- # local es=0 00:07:50.086 15:11:59 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:50.086 15:11:59 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:50.086 15:11:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:50.086 15:11:59 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:50.086 15:11:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:50.086 15:11:59 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:07:50.086 15:11:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:50.086 15:11:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.086 15:11:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.086 15:11:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.086 15:11:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.086 15:11:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.086 15:11:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.086 15:11:59 -- accel/accel.sh@40 -- # local IFS=, 00:07:50.087 15:11:59 -- accel/accel.sh@41 -- # jq -r . 00:07:50.087 [2024-04-24 15:11:59.261995] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:07:50.087 [2024-04-24 15:11:59.262077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61124 ] 00:07:50.344 [2024-04-24 15:11:59.395344] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.344 [2024-04-24 15:11:59.516245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.344 [2024-04-24 15:11:59.572837] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.602 [2024-04-24 15:11:59.648490] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:50.602 A filename is required. 
00:07:50.602 15:11:59 -- common/autotest_common.sh@641 -- # es=234 00:07:50.602 15:11:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:50.602 15:11:59 -- common/autotest_common.sh@650 -- # es=106 00:07:50.602 15:11:59 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:50.602 15:11:59 -- common/autotest_common.sh@658 -- # es=1 00:07:50.602 15:11:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:50.602 00:07:50.602 real 0m0.521s 00:07:50.602 user 0m0.354s 00:07:50.602 sys 0m0.110s 00:07:50.602 15:11:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:50.602 ************************************ 00:07:50.602 END TEST accel_missing_filename 00:07:50.602 ************************************ 00:07:50.602 15:11:59 -- common/autotest_common.sh@10 -- # set +x 00:07:50.602 15:11:59 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:50.602 15:11:59 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:50.602 15:11:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.602 15:11:59 -- common/autotest_common.sh@10 -- # set +x 00:07:50.860 ************************************ 00:07:50.860 START TEST accel_compress_verify 00:07:50.860 ************************************ 00:07:50.860 15:11:59 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:50.860 15:11:59 -- common/autotest_common.sh@638 -- # local es=0 00:07:50.860 15:11:59 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:50.860 15:11:59 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:50.860 15:11:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:50.860 15:11:59 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:50.860 15:11:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:50.860 15:11:59 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:50.860 15:11:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:50.860 15:11:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.860 15:11:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.860 15:11:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.860 15:11:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.860 15:11:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.860 15:11:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.860 15:11:59 -- accel/accel.sh@40 -- # local IFS=, 00:07:50.860 15:11:59 -- accel/accel.sh@41 -- # jq -r . 00:07:50.860 [2024-04-24 15:11:59.894406] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:07:50.860 [2024-04-24 15:11:59.894510] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61152 ] 00:07:50.860 [2024-04-24 15:12:00.027270] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.117 [2024-04-24 15:12:00.147981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.117 [2024-04-24 15:12:00.203823] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.117 [2024-04-24 15:12:00.279383] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:51.375 00:07:51.375 Compression does not support the verify option, aborting. 00:07:51.375 15:12:00 -- common/autotest_common.sh@641 -- # es=161 00:07:51.375 15:12:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:51.375 15:12:00 -- common/autotest_common.sh@650 -- # es=33 00:07:51.375 15:12:00 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:51.375 15:12:00 -- common/autotest_common.sh@658 -- # es=1 00:07:51.375 15:12:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:51.375 00:07:51.375 real 0m0.526s 00:07:51.375 user 0m0.355s 00:07:51.375 sys 0m0.109s 00:07:51.375 15:12:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:51.375 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:07:51.375 ************************************ 00:07:51.375 END TEST accel_compress_verify 00:07:51.375 ************************************ 00:07:51.375 15:12:00 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:51.375 15:12:00 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:51.375 15:12:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.375 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:07:51.375 ************************************ 00:07:51.375 START TEST accel_wrong_workload 00:07:51.375 ************************************ 00:07:51.375 15:12:00 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:07:51.375 15:12:00 -- common/autotest_common.sh@638 -- # local es=0 00:07:51.375 15:12:00 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:51.375 15:12:00 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:51.375 15:12:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:51.375 15:12:00 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:51.375 15:12:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:51.375 15:12:00 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:07:51.375 15:12:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:51.375 15:12:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.375 15:12:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.375 15:12:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.375 15:12:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.375 15:12:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.375 15:12:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.375 15:12:00 -- accel/accel.sh@40 -- # local IFS=, 00:07:51.375 15:12:00 -- accel/accel.sh@41 -- # jq -r . 
00:07:51.375 Unsupported workload type: foobar 00:07:51.375 [2024-04-24 15:12:00.531590] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:51.375 accel_perf options: 00:07:51.375 [-h help message] 00:07:51.375 [-q queue depth per core] 00:07:51.375 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:51.375 [-T number of threads per core 00:07:51.375 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:51.375 [-t time in seconds] 00:07:51.375 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:51.375 [ dif_verify, , dif_generate, dif_generate_copy 00:07:51.375 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:51.375 [-l for compress/decompress workloads, name of uncompressed input file 00:07:51.375 [-S for crc32c workload, use this seed value (default 0) 00:07:51.375 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:51.375 [-f for fill workload, use this BYTE value (default 255) 00:07:51.375 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:51.375 [-y verify result if this switch is on] 00:07:51.375 [-a tasks to allocate per core (default: same value as -q)] 00:07:51.375 Can be used to spread operations across a wider range of memory. 00:07:51.375 ************************************ 00:07:51.375 END TEST accel_wrong_workload 00:07:51.375 ************************************ 00:07:51.375 15:12:00 -- common/autotest_common.sh@641 -- # es=1 00:07:51.375 15:12:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:51.375 15:12:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:51.375 15:12:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:51.375 00:07:51.375 real 0m0.028s 00:07:51.375 user 0m0.016s 00:07:51.375 sys 0m0.012s 00:07:51.375 15:12:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:51.375 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:07:51.375 15:12:00 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:51.375 15:12:00 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:51.375 15:12:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.375 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:07:51.633 ************************************ 00:07:51.633 START TEST accel_negative_buffers 00:07:51.633 ************************************ 00:07:51.633 15:12:00 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:51.633 15:12:00 -- common/autotest_common.sh@638 -- # local es=0 00:07:51.633 15:12:00 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:51.633 15:12:00 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:51.633 15:12:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:51.633 15:12:00 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:51.633 15:12:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:51.633 15:12:00 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:07:51.633 15:12:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:51.633 15:12:00 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:51.633 15:12:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.633 15:12:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.633 15:12:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.633 15:12:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.633 15:12:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.633 15:12:00 -- accel/accel.sh@40 -- # local IFS=, 00:07:51.633 15:12:00 -- accel/accel.sh@41 -- # jq -r . 00:07:51.633 -x option must be non-negative. 00:07:51.633 [2024-04-24 15:12:00.677703] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:51.633 accel_perf options: 00:07:51.633 [-h help message] 00:07:51.633 [-q queue depth per core] 00:07:51.633 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:51.633 [-T number of threads per core 00:07:51.633 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:51.633 [-t time in seconds] 00:07:51.633 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:51.633 [ dif_verify, , dif_generate, dif_generate_copy 00:07:51.633 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:51.633 [-l for compress/decompress workloads, name of uncompressed input file 00:07:51.633 [-S for crc32c workload, use this seed value (default 0) 00:07:51.633 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:51.633 [-f for fill workload, use this BYTE value (default 255) 00:07:51.633 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:51.633 [-y verify result if this switch is on] 00:07:51.633 [-a tasks to allocate per core (default: same value as -q)] 00:07:51.633 Can be used to spread operations across a wider range of memory. 
00:07:51.633 15:12:00 -- common/autotest_common.sh@641 -- # es=1 00:07:51.633 15:12:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:51.633 15:12:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:51.633 15:12:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:51.633 00:07:51.633 real 0m0.032s 00:07:51.633 user 0m0.020s 00:07:51.633 sys 0m0.012s 00:07:51.633 ************************************ 00:07:51.633 END TEST accel_negative_buffers 00:07:51.633 ************************************ 00:07:51.633 15:12:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:51.633 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:07:51.633 15:12:00 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:51.633 15:12:00 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:51.633 15:12:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.633 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:07:51.633 ************************************ 00:07:51.633 START TEST accel_crc32c 00:07:51.633 ************************************ 00:07:51.633 15:12:00 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:51.633 15:12:00 -- accel/accel.sh@16 -- # local accel_opc 00:07:51.633 15:12:00 -- accel/accel.sh@17 -- # local accel_module 00:07:51.633 15:12:00 -- accel/accel.sh@19 -- # IFS=: 00:07:51.633 15:12:00 -- accel/accel.sh@19 -- # read -r var val 00:07:51.633 15:12:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:51.633 15:12:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:51.633 15:12:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.633 15:12:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.633 15:12:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.633 15:12:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.633 15:12:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.633 15:12:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.633 15:12:00 -- accel/accel.sh@40 -- # local IFS=, 00:07:51.633 15:12:00 -- accel/accel.sh@41 -- # jq -r . 00:07:51.633 [2024-04-24 15:12:00.829045] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:07:51.633 [2024-04-24 15:12:00.829192] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61223 ] 00:07:51.891 [2024-04-24 15:12:00.979202] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.891 [2024-04-24 15:12:01.113838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.149 15:12:01 -- accel/accel.sh@20 -- # val= 00:07:52.149 15:12:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # IFS=: 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # read -r var val 00:07:52.149 15:12:01 -- accel/accel.sh@20 -- # val= 00:07:52.149 15:12:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # IFS=: 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # read -r var val 00:07:52.149 15:12:01 -- accel/accel.sh@20 -- # val=0x1 00:07:52.149 15:12:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # IFS=: 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # read -r var val 00:07:52.149 15:12:01 -- accel/accel.sh@20 -- # val= 00:07:52.149 15:12:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # IFS=: 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # read -r var val 00:07:52.149 15:12:01 -- accel/accel.sh@20 -- # val= 00:07:52.149 15:12:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # IFS=: 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # read -r var val 00:07:52.149 15:12:01 -- accel/accel.sh@20 -- # val=crc32c 00:07:52.149 15:12:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.149 15:12:01 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # IFS=: 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # read -r var val 00:07:52.149 15:12:01 -- accel/accel.sh@20 -- # val=32 00:07:52.149 15:12:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # IFS=: 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # read -r var val 00:07:52.149 15:12:01 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.149 15:12:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # IFS=: 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # read -r var val 00:07:52.149 15:12:01 -- accel/accel.sh@20 -- # val= 00:07:52.149 15:12:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # IFS=: 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # read -r var val 00:07:52.149 15:12:01 -- accel/accel.sh@20 -- # val=software 00:07:52.149 15:12:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.149 15:12:01 -- accel/accel.sh@22 -- # accel_module=software 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # IFS=: 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # read -r var val 00:07:52.149 15:12:01 -- accel/accel.sh@20 -- # val=32 00:07:52.149 15:12:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # IFS=: 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # read -r var val 00:07:52.149 15:12:01 -- accel/accel.sh@20 -- # val=32 00:07:52.149 15:12:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # IFS=: 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # read -r var val 00:07:52.149 15:12:01 -- accel/accel.sh@20 -- # val=1 00:07:52.149 15:12:01 
-- accel/accel.sh@21 -- # case "$var" in 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # IFS=: 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # read -r var val 00:07:52.149 15:12:01 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.149 15:12:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # IFS=: 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # read -r var val 00:07:52.149 15:12:01 -- accel/accel.sh@20 -- # val=Yes 00:07:52.149 15:12:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # IFS=: 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # read -r var val 00:07:52.149 15:12:01 -- accel/accel.sh@20 -- # val= 00:07:52.149 15:12:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # IFS=: 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # read -r var val 00:07:52.149 15:12:01 -- accel/accel.sh@20 -- # val= 00:07:52.149 15:12:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # IFS=: 00:07:52.149 15:12:01 -- accel/accel.sh@19 -- # read -r var val 00:07:53.524 15:12:02 -- accel/accel.sh@20 -- # val= 00:07:53.524 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.524 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.524 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.524 15:12:02 -- accel/accel.sh@20 -- # val= 00:07:53.524 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.524 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.524 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.524 15:12:02 -- accel/accel.sh@20 -- # val= 00:07:53.524 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.524 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.524 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.524 15:12:02 -- accel/accel.sh@20 -- # val= 00:07:53.524 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.524 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.524 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.524 15:12:02 -- accel/accel.sh@20 -- # val= 00:07:53.524 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.524 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.524 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.524 15:12:02 -- accel/accel.sh@20 -- # val= 00:07:53.524 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.524 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.524 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.524 15:12:02 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.524 15:12:02 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:53.524 15:12:02 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.524 00:07:53.524 real 0m1.573s 00:07:53.524 user 0m1.336s 00:07:53.524 sys 0m0.138s 00:07:53.524 15:12:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:53.524 ************************************ 00:07:53.524 END TEST accel_crc32c 00:07:53.524 ************************************ 00:07:53.524 15:12:02 -- common/autotest_common.sh@10 -- # set +x 00:07:53.524 15:12:02 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:53.524 15:12:02 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:53.524 15:12:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.524 15:12:02 -- common/autotest_common.sh@10 -- # set +x 00:07:53.524 ************************************ 00:07:53.524 START TEST accel_crc32c_C2 00:07:53.524 
************************************ 00:07:53.524 15:12:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:53.524 15:12:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:53.524 15:12:02 -- accel/accel.sh@17 -- # local accel_module 00:07:53.524 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.524 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.524 15:12:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:53.524 15:12:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:53.524 15:12:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.524 15:12:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.524 15:12:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.524 15:12:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.524 15:12:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.524 15:12:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.524 15:12:02 -- accel/accel.sh@40 -- # local IFS=, 00:07:53.524 15:12:02 -- accel/accel.sh@41 -- # jq -r . 00:07:53.524 [2024-04-24 15:12:02.503048] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:07:53.524 [2024-04-24 15:12:02.503144] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61268 ] 00:07:53.524 [2024-04-24 15:12:02.636066] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.524 [2024-04-24 15:12:02.754726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.782 15:12:02 -- accel/accel.sh@20 -- # val= 00:07:53.782 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.782 15:12:02 -- accel/accel.sh@20 -- # val= 00:07:53.782 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.782 15:12:02 -- accel/accel.sh@20 -- # val=0x1 00:07:53.782 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.782 15:12:02 -- accel/accel.sh@20 -- # val= 00:07:53.782 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.782 15:12:02 -- accel/accel.sh@20 -- # val= 00:07:53.782 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.782 15:12:02 -- accel/accel.sh@20 -- # val=crc32c 00:07:53.782 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.782 15:12:02 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.782 15:12:02 -- accel/accel.sh@20 -- # val=0 00:07:53.782 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.782 15:12:02 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:53.782 15:12:02 -- accel/accel.sh@21 -- # case "$var" 
in 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.782 15:12:02 -- accel/accel.sh@20 -- # val= 00:07:53.782 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.782 15:12:02 -- accel/accel.sh@20 -- # val=software 00:07:53.782 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.782 15:12:02 -- accel/accel.sh@22 -- # accel_module=software 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.782 15:12:02 -- accel/accel.sh@20 -- # val=32 00:07:53.782 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.782 15:12:02 -- accel/accel.sh@20 -- # val=32 00:07:53.782 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.782 15:12:02 -- accel/accel.sh@20 -- # val=1 00:07:53.782 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.782 15:12:02 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.782 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.782 15:12:02 -- accel/accel.sh@20 -- # val=Yes 00:07:53.782 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.782 15:12:02 -- accel/accel.sh@20 -- # val= 00:07:53.782 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:53.782 15:12:02 -- accel/accel.sh@20 -- # val= 00:07:53.782 15:12:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # IFS=: 00:07:53.782 15:12:02 -- accel/accel.sh@19 -- # read -r var val 00:07:55.157 15:12:04 -- accel/accel.sh@20 -- # val= 00:07:55.157 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.157 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.157 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.157 15:12:04 -- accel/accel.sh@20 -- # val= 00:07:55.157 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.157 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.157 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.157 15:12:04 -- accel/accel.sh@20 -- # val= 00:07:55.157 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.157 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.157 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.157 15:12:04 -- accel/accel.sh@20 -- # val= 00:07:55.157 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.157 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.157 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.157 15:12:04 -- accel/accel.sh@20 -- # val= 00:07:55.157 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.157 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.158 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.158 15:12:04 -- accel/accel.sh@20 -- # val= 
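Every "-- accel/accel.sh@NN -- #" fragment in these lines is bash xtrace output: the prefix carries a timestamp plus the source file and line number of the statement being traced. The exact prompt string is set by the test harness and is not shown in this log, so the following is only a guess at a PS4 that would produce this shape:

  # Hypothetical approximation of the xtrace prompt behind the "-- file@line -- #" prefixes above
  export PS4=' \t -- ${BASH_SOURCE#*/test/}@${LINENO} -- \$ '
  set -x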
00:07:55.158 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.158 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.158 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.158 15:12:04 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:55.158 15:12:04 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:55.158 15:12:04 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.158 00:07:55.158 real 0m1.521s 00:07:55.158 user 0m1.322s 00:07:55.158 sys 0m0.103s 00:07:55.158 ************************************ 00:07:55.158 END TEST accel_crc32c_C2 00:07:55.158 ************************************ 00:07:55.158 15:12:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:55.158 15:12:04 -- common/autotest_common.sh@10 -- # set +x 00:07:55.158 15:12:04 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:55.158 15:12:04 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:55.158 15:12:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.158 15:12:04 -- common/autotest_common.sh@10 -- # set +x 00:07:55.158 ************************************ 00:07:55.158 START TEST accel_copy 00:07:55.158 ************************************ 00:07:55.158 15:12:04 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:07:55.158 15:12:04 -- accel/accel.sh@16 -- # local accel_opc 00:07:55.158 15:12:04 -- accel/accel.sh@17 -- # local accel_module 00:07:55.158 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.158 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.158 15:12:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:55.158 15:12:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:55.158 15:12:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.158 15:12:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.158 15:12:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.158 15:12:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.158 15:12:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.158 15:12:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.158 15:12:04 -- accel/accel.sh@40 -- # local IFS=, 00:07:55.158 15:12:04 -- accel/accel.sh@41 -- # jq -r . 00:07:55.158 [2024-04-24 15:12:04.114962] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
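Each block in this section follows the harness shape just seen for accel_crc32c_C2: run_test prints the START banner, times the accel_test body (producing the real/user/sys figures), and prints the END banner, as in the run_test accel_copy call recorded above. A much-simplified sketch of that pattern, for orientation only (the real implementation lives in common/autotest_common.sh and does considerably more, including the xtrace handling visible here):

  # Simplified illustration of the run_test wrapper pattern seen in this log
  run_test() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"
    echo "************ END TEST $name ************"
  }
  run_test accel_copy accel_test -t 1 -w copy -y   # invocation as recorded above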
00:07:55.158 [2024-04-24 15:12:04.115053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61306 ] 00:07:55.158 [2024-04-24 15:12:04.252357] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.158 [2024-04-24 15:12:04.383074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.416 15:12:04 -- accel/accel.sh@20 -- # val= 00:07:55.416 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.416 15:12:04 -- accel/accel.sh@20 -- # val= 00:07:55.416 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.416 15:12:04 -- accel/accel.sh@20 -- # val=0x1 00:07:55.416 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.416 15:12:04 -- accel/accel.sh@20 -- # val= 00:07:55.416 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.416 15:12:04 -- accel/accel.sh@20 -- # val= 00:07:55.416 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.416 15:12:04 -- accel/accel.sh@20 -- # val=copy 00:07:55.416 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.416 15:12:04 -- accel/accel.sh@23 -- # accel_opc=copy 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.416 15:12:04 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:55.416 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.416 15:12:04 -- accel/accel.sh@20 -- # val= 00:07:55.416 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.416 15:12:04 -- accel/accel.sh@20 -- # val=software 00:07:55.416 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.416 15:12:04 -- accel/accel.sh@22 -- # accel_module=software 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.416 15:12:04 -- accel/accel.sh@20 -- # val=32 00:07:55.416 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.416 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.416 15:12:04 -- accel/accel.sh@20 -- # val=32 00:07:55.416 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.417 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.417 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.417 15:12:04 -- accel/accel.sh@20 -- # val=1 00:07:55.417 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.417 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.417 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.417 15:12:04 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.417 
15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.417 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.417 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.417 15:12:04 -- accel/accel.sh@20 -- # val=Yes 00:07:55.417 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.417 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.417 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.417 15:12:04 -- accel/accel.sh@20 -- # val= 00:07:55.417 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.417 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.417 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:55.417 15:12:04 -- accel/accel.sh@20 -- # val= 00:07:55.417 15:12:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.417 15:12:04 -- accel/accel.sh@19 -- # IFS=: 00:07:55.417 15:12:04 -- accel/accel.sh@19 -- # read -r var val 00:07:56.793 15:12:05 -- accel/accel.sh@20 -- # val= 00:07:56.793 15:12:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.793 15:12:05 -- accel/accel.sh@19 -- # IFS=: 00:07:56.793 15:12:05 -- accel/accel.sh@19 -- # read -r var val 00:07:56.793 15:12:05 -- accel/accel.sh@20 -- # val= 00:07:56.793 15:12:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.793 15:12:05 -- accel/accel.sh@19 -- # IFS=: 00:07:56.793 15:12:05 -- accel/accel.sh@19 -- # read -r var val 00:07:56.793 15:12:05 -- accel/accel.sh@20 -- # val= 00:07:56.793 15:12:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.793 15:12:05 -- accel/accel.sh@19 -- # IFS=: 00:07:56.793 15:12:05 -- accel/accel.sh@19 -- # read -r var val 00:07:56.793 15:12:05 -- accel/accel.sh@20 -- # val= 00:07:56.793 15:12:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.793 15:12:05 -- accel/accel.sh@19 -- # IFS=: 00:07:56.793 15:12:05 -- accel/accel.sh@19 -- # read -r var val 00:07:56.793 15:12:05 -- accel/accel.sh@20 -- # val= 00:07:56.793 15:12:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.793 15:12:05 -- accel/accel.sh@19 -- # IFS=: 00:07:56.793 15:12:05 -- accel/accel.sh@19 -- # read -r var val 00:07:56.793 15:12:05 -- accel/accel.sh@20 -- # val= 00:07:56.793 15:12:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.793 15:12:05 -- accel/accel.sh@19 -- # IFS=: 00:07:56.793 15:12:05 -- accel/accel.sh@19 -- # read -r var val 00:07:56.793 15:12:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:56.793 15:12:05 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:56.793 15:12:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.793 ************************************ 00:07:56.793 END TEST accel_copy 00:07:56.793 ************************************ 00:07:56.793 00:07:56.793 real 0m1.548s 00:07:56.793 user 0m1.328s 00:07:56.793 sys 0m0.117s 00:07:56.793 15:12:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:56.793 15:12:05 -- common/autotest_common.sh@10 -- # set +x 00:07:56.793 15:12:05 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:56.793 15:12:05 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:56.793 15:12:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.793 15:12:05 -- common/autotest_common.sh@10 -- # set +x 00:07:56.793 ************************************ 00:07:56.793 START TEST accel_fill 00:07:56.793 ************************************ 00:07:56.793 15:12:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:56.793 15:12:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:56.793 15:12:05 -- accel/accel.sh@17 -- # local 
accel_module 00:07:56.793 15:12:05 -- accel/accel.sh@19 -- # IFS=: 00:07:56.793 15:12:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:56.793 15:12:05 -- accel/accel.sh@19 -- # read -r var val 00:07:56.793 15:12:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:56.793 15:12:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.793 15:12:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.793 15:12:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.794 15:12:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.794 15:12:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.794 15:12:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.794 15:12:05 -- accel/accel.sh@40 -- # local IFS=, 00:07:56.794 15:12:05 -- accel/accel.sh@41 -- # jq -r . 00:07:56.794 [2024-04-24 15:12:05.780384] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:07:56.794 [2024-04-24 15:12:05.780510] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61345 ] 00:07:56.794 [2024-04-24 15:12:05.919896] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.052 [2024-04-24 15:12:06.036844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.052 15:12:06 -- accel/accel.sh@20 -- # val= 00:07:57.052 15:12:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.052 15:12:06 -- accel/accel.sh@19 -- # IFS=: 00:07:57.052 15:12:06 -- accel/accel.sh@19 -- # read -r var val 00:07:57.052 15:12:06 -- accel/accel.sh@20 -- # val= 00:07:57.053 15:12:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # IFS=: 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # read -r var val 00:07:57.053 15:12:06 -- accel/accel.sh@20 -- # val=0x1 00:07:57.053 15:12:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # IFS=: 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # read -r var val 00:07:57.053 15:12:06 -- accel/accel.sh@20 -- # val= 00:07:57.053 15:12:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # IFS=: 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # read -r var val 00:07:57.053 15:12:06 -- accel/accel.sh@20 -- # val= 00:07:57.053 15:12:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # IFS=: 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # read -r var val 00:07:57.053 15:12:06 -- accel/accel.sh@20 -- # val=fill 00:07:57.053 15:12:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.053 15:12:06 -- accel/accel.sh@23 -- # accel_opc=fill 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # IFS=: 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # read -r var val 00:07:57.053 15:12:06 -- accel/accel.sh@20 -- # val=0x80 00:07:57.053 15:12:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # IFS=: 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # read -r var val 00:07:57.053 15:12:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.053 15:12:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # IFS=: 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # read -r var val 00:07:57.053 15:12:06 -- accel/accel.sh@20 -- # val= 00:07:57.053 15:12:06 -- accel/accel.sh@21 -- # case 
"$var" in 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # IFS=: 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # read -r var val 00:07:57.053 15:12:06 -- accel/accel.sh@20 -- # val=software 00:07:57.053 15:12:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.053 15:12:06 -- accel/accel.sh@22 -- # accel_module=software 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # IFS=: 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # read -r var val 00:07:57.053 15:12:06 -- accel/accel.sh@20 -- # val=64 00:07:57.053 15:12:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # IFS=: 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # read -r var val 00:07:57.053 15:12:06 -- accel/accel.sh@20 -- # val=64 00:07:57.053 15:12:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # IFS=: 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # read -r var val 00:07:57.053 15:12:06 -- accel/accel.sh@20 -- # val=1 00:07:57.053 15:12:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # IFS=: 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # read -r var val 00:07:57.053 15:12:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:57.053 15:12:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # IFS=: 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # read -r var val 00:07:57.053 15:12:06 -- accel/accel.sh@20 -- # val=Yes 00:07:57.053 15:12:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # IFS=: 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # read -r var val 00:07:57.053 15:12:06 -- accel/accel.sh@20 -- # val= 00:07:57.053 15:12:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # IFS=: 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # read -r var val 00:07:57.053 15:12:06 -- accel/accel.sh@20 -- # val= 00:07:57.053 15:12:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # IFS=: 00:07:57.053 15:12:06 -- accel/accel.sh@19 -- # read -r var val 00:07:58.430 15:12:07 -- accel/accel.sh@20 -- # val= 00:07:58.430 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.430 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.430 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.430 15:12:07 -- accel/accel.sh@20 -- # val= 00:07:58.430 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.430 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.430 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.430 15:12:07 -- accel/accel.sh@20 -- # val= 00:07:58.430 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.430 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.430 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.430 15:12:07 -- accel/accel.sh@20 -- # val= 00:07:58.430 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.430 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.430 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.430 15:12:07 -- accel/accel.sh@20 -- # val= 00:07:58.430 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.430 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.430 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.430 15:12:07 -- accel/accel.sh@20 -- # val= 00:07:58.430 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.430 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.430 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.430 15:12:07 -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:07:58.430 15:12:07 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:58.430 15:12:07 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.430 00:07:58.430 real 0m1.548s 00:07:58.430 user 0m1.335s 00:07:58.430 sys 0m0.115s 00:07:58.430 15:12:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:58.430 15:12:07 -- common/autotest_common.sh@10 -- # set +x 00:07:58.430 ************************************ 00:07:58.430 END TEST accel_fill 00:07:58.430 ************************************ 00:07:58.430 15:12:07 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:58.430 15:12:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:58.430 15:12:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.430 15:12:07 -- common/autotest_common.sh@10 -- # set +x 00:07:58.430 ************************************ 00:07:58.430 START TEST accel_copy_crc32c 00:07:58.430 ************************************ 00:07:58.430 15:12:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:07:58.430 15:12:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:58.430 15:12:07 -- accel/accel.sh@17 -- # local accel_module 00:07:58.430 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.430 15:12:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:58.430 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.430 15:12:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:58.430 15:12:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:58.430 15:12:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.430 15:12:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.430 15:12:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.430 15:12:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.430 15:12:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.430 15:12:07 -- accel/accel.sh@40 -- # local IFS=, 00:07:58.430 15:12:07 -- accel/accel.sh@41 -- # jq -r . 00:07:58.430 [2024-04-24 15:12:07.441216] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
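The per-test wall-clock figures are embedded inline, as in the real 0m1.548s / user 0m1.335s / sys 0m0.115s block for accel_fill just above. If this console output is saved to a file, the banners and timings can be filtered out with a simple grep (the file name below is a placeholder):

  # Extract test banners and timing figures from a saved copy of this console log
  grep -oE '(START|END) TEST [A-Za-z0-9_]+|real[[:space:]]+[0-9]+m[0-9.]+s' console.log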
00:07:58.430 [2024-04-24 15:12:07.441301] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61390 ] 00:07:58.430 [2024-04-24 15:12:07.580979] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.689 [2024-04-24 15:12:07.718457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.689 15:12:07 -- accel/accel.sh@20 -- # val= 00:07:58.689 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.689 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.689 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.689 15:12:07 -- accel/accel.sh@20 -- # val= 00:07:58.689 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.689 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.689 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.689 15:12:07 -- accel/accel.sh@20 -- # val=0x1 00:07:58.689 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.689 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.689 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.689 15:12:07 -- accel/accel.sh@20 -- # val= 00:07:58.689 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.689 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.690 15:12:07 -- accel/accel.sh@20 -- # val= 00:07:58.690 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.690 15:12:07 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:58.690 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.690 15:12:07 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.690 15:12:07 -- accel/accel.sh@20 -- # val=0 00:07:58.690 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.690 15:12:07 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.690 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.690 15:12:07 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.690 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.690 15:12:07 -- accel/accel.sh@20 -- # val= 00:07:58.690 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.690 15:12:07 -- accel/accel.sh@20 -- # val=software 00:07:58.690 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.690 15:12:07 -- accel/accel.sh@22 -- # accel_module=software 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.690 15:12:07 -- accel/accel.sh@20 -- # val=32 00:07:58.690 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.690 15:12:07 -- accel/accel.sh@20 -- # val=32 
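As with the other workloads here, this copy_crc32c run boils down to a single accel_perf invocation. Since build_accel_config produced an empty accel_json_cfg above, the -c /dev/fd/62 argument carries no module overrides and can most likely be dropped when reproducing the run by hand (path and remaining flags copied from this log):

  # Hypothetical standalone reproduction of the copy_crc32c workload traced above
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y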
00:07:58.690 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.690 15:12:07 -- accel/accel.sh@20 -- # val=1 00:07:58.690 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.690 15:12:07 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.690 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.690 15:12:07 -- accel/accel.sh@20 -- # val=Yes 00:07:58.690 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.690 15:12:07 -- accel/accel.sh@20 -- # val= 00:07:58.690 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:07:58.690 15:12:07 -- accel/accel.sh@20 -- # val= 00:07:58.690 15:12:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # IFS=: 00:07:58.690 15:12:07 -- accel/accel.sh@19 -- # read -r var val 00:08:00.069 15:12:08 -- accel/accel.sh@20 -- # val= 00:08:00.069 15:12:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.069 15:12:08 -- accel/accel.sh@19 -- # IFS=: 00:08:00.069 15:12:08 -- accel/accel.sh@19 -- # read -r var val 00:08:00.069 15:12:08 -- accel/accel.sh@20 -- # val= 00:08:00.069 15:12:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.069 15:12:08 -- accel/accel.sh@19 -- # IFS=: 00:08:00.069 15:12:08 -- accel/accel.sh@19 -- # read -r var val 00:08:00.069 15:12:08 -- accel/accel.sh@20 -- # val= 00:08:00.069 15:12:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.069 15:12:08 -- accel/accel.sh@19 -- # IFS=: 00:08:00.069 15:12:08 -- accel/accel.sh@19 -- # read -r var val 00:08:00.069 15:12:08 -- accel/accel.sh@20 -- # val= 00:08:00.069 15:12:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.069 15:12:08 -- accel/accel.sh@19 -- # IFS=: 00:08:00.069 15:12:08 -- accel/accel.sh@19 -- # read -r var val 00:08:00.069 15:12:08 -- accel/accel.sh@20 -- # val= 00:08:00.069 15:12:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.069 15:12:08 -- accel/accel.sh@19 -- # IFS=: 00:08:00.069 15:12:08 -- accel/accel.sh@19 -- # read -r var val 00:08:00.069 15:12:08 -- accel/accel.sh@20 -- # val= 00:08:00.069 15:12:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.069 15:12:08 -- accel/accel.sh@19 -- # IFS=: 00:08:00.069 15:12:08 -- accel/accel.sh@19 -- # read -r var val 00:08:00.069 15:12:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.069 15:12:08 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:00.069 15:12:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.069 00:08:00.069 real 0m1.563s 00:08:00.069 user 0m1.344s 00:08:00.069 sys 0m0.123s 00:08:00.069 15:12:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:00.069 ************************************ 00:08:00.069 15:12:08 -- common/autotest_common.sh@10 -- # set +x 00:08:00.069 END TEST accel_copy_crc32c 00:08:00.069 ************************************ 00:08:00.069 15:12:09 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:00.069 15:12:09 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:08:00.069 15:12:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.069 15:12:09 -- common/autotest_common.sh@10 -- # set +x 00:08:00.069 ************************************ 00:08:00.069 START TEST accel_copy_crc32c_C2 00:08:00.069 ************************************ 00:08:00.069 15:12:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:00.069 15:12:09 -- accel/accel.sh@16 -- # local accel_opc 00:08:00.069 15:12:09 -- accel/accel.sh@17 -- # local accel_module 00:08:00.069 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.069 15:12:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:00.069 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.069 15:12:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:00.069 15:12:09 -- accel/accel.sh@12 -- # build_accel_config 00:08:00.069 15:12:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.069 15:12:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.069 15:12:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.069 15:12:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.069 15:12:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.069 15:12:09 -- accel/accel.sh@40 -- # local IFS=, 00:08:00.069 15:12:09 -- accel/accel.sh@41 -- # jq -r . 00:08:00.069 [2024-04-24 15:12:09.126934] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:00.069 [2024-04-24 15:12:09.127054] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61424 ] 00:08:00.069 [2024-04-24 15:12:09.264007] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.329 [2024-04-24 15:12:09.384383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.329 15:12:09 -- accel/accel.sh@20 -- # val= 00:08:00.329 15:12:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.329 15:12:09 -- accel/accel.sh@20 -- # val= 00:08:00.329 15:12:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.329 15:12:09 -- accel/accel.sh@20 -- # val=0x1 00:08:00.329 15:12:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.329 15:12:09 -- accel/accel.sh@20 -- # val= 00:08:00.329 15:12:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.329 15:12:09 -- accel/accel.sh@20 -- # val= 00:08:00.329 15:12:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.329 15:12:09 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:00.329 15:12:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.329 15:12:09 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.329 15:12:09 -- accel/accel.sh@20 -- # val=0 00:08:00.329 15:12:09 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.329 15:12:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.329 15:12:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.329 15:12:09 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:00.329 15:12:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.329 15:12:09 -- accel/accel.sh@20 -- # val= 00:08:00.329 15:12:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.329 15:12:09 -- accel/accel.sh@20 -- # val=software 00:08:00.329 15:12:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.329 15:12:09 -- accel/accel.sh@22 -- # accel_module=software 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.329 15:12:09 -- accel/accel.sh@20 -- # val=32 00:08:00.329 15:12:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.329 15:12:09 -- accel/accel.sh@20 -- # val=32 00:08:00.329 15:12:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.329 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.330 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.330 15:12:09 -- accel/accel.sh@20 -- # val=1 00:08:00.330 15:12:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.330 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.330 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.330 15:12:09 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:00.330 15:12:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.330 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.330 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.330 15:12:09 -- accel/accel.sh@20 -- # val=Yes 00:08:00.330 15:12:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.330 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.330 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.330 15:12:09 -- accel/accel.sh@20 -- # val= 00:08:00.330 15:12:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.330 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.330 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:00.330 15:12:09 -- accel/accel.sh@20 -- # val= 00:08:00.330 15:12:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.330 15:12:09 -- accel/accel.sh@19 -- # IFS=: 00:08:00.330 15:12:09 -- accel/accel.sh@19 -- # read -r var val 00:08:01.748 15:12:10 -- accel/accel.sh@20 -- # val= 00:08:01.748 15:12:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.748 15:12:10 -- accel/accel.sh@19 -- # IFS=: 00:08:01.748 15:12:10 -- accel/accel.sh@19 -- # read -r var val 00:08:01.748 15:12:10 -- accel/accel.sh@20 -- # val= 00:08:01.748 15:12:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.748 15:12:10 -- accel/accel.sh@19 -- # IFS=: 00:08:01.748 15:12:10 -- accel/accel.sh@19 -- # read -r var val 00:08:01.748 15:12:10 -- accel/accel.sh@20 -- # val= 00:08:01.748 15:12:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.748 15:12:10 -- accel/accel.sh@19 -- # IFS=: 00:08:01.748 15:12:10 -- accel/accel.sh@19 -- # read -r var val 
00:08:01.748 15:12:10 -- accel/accel.sh@20 -- # val= 00:08:01.749 15:12:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.749 15:12:10 -- accel/accel.sh@19 -- # IFS=: 00:08:01.749 15:12:10 -- accel/accel.sh@19 -- # read -r var val 00:08:01.749 15:12:10 -- accel/accel.sh@20 -- # val= 00:08:01.749 15:12:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.749 15:12:10 -- accel/accel.sh@19 -- # IFS=: 00:08:01.749 15:12:10 -- accel/accel.sh@19 -- # read -r var val 00:08:01.749 15:12:10 -- accel/accel.sh@20 -- # val= 00:08:01.749 15:12:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.749 15:12:10 -- accel/accel.sh@19 -- # IFS=: 00:08:01.749 15:12:10 -- accel/accel.sh@19 -- # read -r var val 00:08:01.749 15:12:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:01.749 15:12:10 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:01.749 15:12:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.749 00:08:01.749 real 0m1.541s 00:08:01.749 user 0m1.334s 00:08:01.749 sys 0m0.116s 00:08:01.749 15:12:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:01.749 15:12:10 -- common/autotest_common.sh@10 -- # set +x 00:08:01.749 ************************************ 00:08:01.749 END TEST accel_copy_crc32c_C2 00:08:01.749 ************************************ 00:08:01.749 15:12:10 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:01.749 15:12:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:01.749 15:12:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.749 15:12:10 -- common/autotest_common.sh@10 -- # set +x 00:08:01.749 ************************************ 00:08:01.749 START TEST accel_dualcast 00:08:01.749 ************************************ 00:08:01.749 15:12:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:08:01.749 15:12:10 -- accel/accel.sh@16 -- # local accel_opc 00:08:01.749 15:12:10 -- accel/accel.sh@17 -- # local accel_module 00:08:01.749 15:12:10 -- accel/accel.sh@19 -- # IFS=: 00:08:01.749 15:12:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:01.749 15:12:10 -- accel/accel.sh@19 -- # read -r var val 00:08:01.749 15:12:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:01.749 15:12:10 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.749 15:12:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:01.749 15:12:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:01.749 15:12:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.749 15:12:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.749 15:12:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:01.749 15:12:10 -- accel/accel.sh@40 -- # local IFS=, 00:08:01.749 15:12:10 -- accel/accel.sh@41 -- # jq -r . 00:08:01.749 [2024-04-24 15:12:10.778623] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:08:01.749 [2024-04-24 15:12:10.778706] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61468 ] 00:08:01.749 [2024-04-24 15:12:10.913087] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.008 [2024-04-24 15:12:11.027990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.008 15:12:11 -- accel/accel.sh@20 -- # val= 00:08:02.008 15:12:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 15:12:11 -- accel/accel.sh@20 -- # val= 00:08:02.008 15:12:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 15:12:11 -- accel/accel.sh@20 -- # val=0x1 00:08:02.008 15:12:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 15:12:11 -- accel/accel.sh@20 -- # val= 00:08:02.008 15:12:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 15:12:11 -- accel/accel.sh@20 -- # val= 00:08:02.008 15:12:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 15:12:11 -- accel/accel.sh@20 -- # val=dualcast 00:08:02.008 15:12:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 15:12:11 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 15:12:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:02.008 15:12:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 15:12:11 -- accel/accel.sh@20 -- # val= 00:08:02.008 15:12:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 15:12:11 -- accel/accel.sh@20 -- # val=software 00:08:02.008 15:12:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 15:12:11 -- accel/accel.sh@22 -- # accel_module=software 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 15:12:11 -- accel/accel.sh@20 -- # val=32 00:08:02.008 15:12:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 15:12:11 -- accel/accel.sh@20 -- # val=32 00:08:02.008 15:12:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 15:12:11 -- accel/accel.sh@20 -- # val=1 00:08:02.008 15:12:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 15:12:11 -- accel/accel.sh@20 -- # val='1 seconds' 
00:08:02.008 15:12:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 15:12:11 -- accel/accel.sh@20 -- # val=Yes 00:08:02.008 15:12:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 15:12:11 -- accel/accel.sh@20 -- # val= 00:08:02.008 15:12:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # read -r var val 00:08:02.008 15:12:11 -- accel/accel.sh@20 -- # val= 00:08:02.008 15:12:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # IFS=: 00:08:02.008 15:12:11 -- accel/accel.sh@19 -- # read -r var val 00:08:03.383 15:12:12 -- accel/accel.sh@20 -- # val= 00:08:03.383 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.383 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.383 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.383 15:12:12 -- accel/accel.sh@20 -- # val= 00:08:03.383 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.383 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.383 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.383 15:12:12 -- accel/accel.sh@20 -- # val= 00:08:03.383 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.383 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.383 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.383 15:12:12 -- accel/accel.sh@20 -- # val= 00:08:03.383 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.383 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.383 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.383 15:12:12 -- accel/accel.sh@20 -- # val= 00:08:03.383 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.383 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.383 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.383 15:12:12 -- accel/accel.sh@20 -- # val= 00:08:03.383 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.383 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.383 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.383 15:12:12 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.383 15:12:12 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:03.383 15:12:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.383 00:08:03.383 real 0m1.531s 00:08:03.383 user 0m1.327s 00:08:03.383 sys 0m0.107s 00:08:03.383 15:12:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:03.383 15:12:12 -- common/autotest_common.sh@10 -- # set +x 00:08:03.383 ************************************ 00:08:03.383 END TEST accel_dualcast 00:08:03.383 ************************************ 00:08:03.383 15:12:12 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:03.383 15:12:12 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:03.383 15:12:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.383 15:12:12 -- common/autotest_common.sh@10 -- # set +x 00:08:03.383 ************************************ 00:08:03.383 START TEST accel_compare 00:08:03.383 ************************************ 00:08:03.383 15:12:12 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:08:03.383 15:12:12 -- accel/accel.sh@16 -- # local accel_opc 00:08:03.383 15:12:12 -- accel/accel.sh@17 -- # local 
accel_module 00:08:03.383 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.383 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.383 15:12:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:03.383 15:12:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:03.383 15:12:12 -- accel/accel.sh@12 -- # build_accel_config 00:08:03.383 15:12:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.383 15:12:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.383 15:12:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.383 15:12:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.383 15:12:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.383 15:12:12 -- accel/accel.sh@40 -- # local IFS=, 00:08:03.383 15:12:12 -- accel/accel.sh@41 -- # jq -r . 00:08:03.383 [2024-04-24 15:12:12.425832] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:03.383 [2024-04-24 15:12:12.425955] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61507 ] 00:08:03.383 [2024-04-24 15:12:12.564963] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.641 [2024-04-24 15:12:12.683965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.641 15:12:12 -- accel/accel.sh@20 -- # val= 00:08:03.641 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.641 15:12:12 -- accel/accel.sh@20 -- # val= 00:08:03.641 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.641 15:12:12 -- accel/accel.sh@20 -- # val=0x1 00:08:03.641 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.641 15:12:12 -- accel/accel.sh@20 -- # val= 00:08:03.641 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.641 15:12:12 -- accel/accel.sh@20 -- # val= 00:08:03.641 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.641 15:12:12 -- accel/accel.sh@20 -- # val=compare 00:08:03.641 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.641 15:12:12 -- accel/accel.sh@23 -- # accel_opc=compare 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.641 15:12:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:03.641 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.641 15:12:12 -- accel/accel.sh@20 -- # val= 00:08:03.641 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.641 15:12:12 -- accel/accel.sh@20 -- # val=software 00:08:03.641 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 
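In the lines that follow, the script checks that an opcode and a module were actually recorded for the run and that the module is the software one; the \s\o\f\t\w\a\r\e escaping is simply how xtrace prints the quoted right-hand side of the == test. Reconstructed with the variable names visible in this trace (accel_module, accel_opc), the assertions amount to roughly:

  # Post-run sanity checks performed by accel.sh (reconstruction from the trace, not verbatim source)
  [[ -n $accel_module ]]            # a module was selected
  [[ -n $accel_opc ]]               # an opcode was exercised
  [[ $accel_module == software ]]   # and it ran on the software engine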
00:08:03.641 15:12:12 -- accel/accel.sh@22 -- # accel_module=software 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.641 15:12:12 -- accel/accel.sh@20 -- # val=32 00:08:03.641 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.641 15:12:12 -- accel/accel.sh@20 -- # val=32 00:08:03.641 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.641 15:12:12 -- accel/accel.sh@20 -- # val=1 00:08:03.641 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.641 15:12:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:03.641 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.641 15:12:12 -- accel/accel.sh@20 -- # val=Yes 00:08:03.641 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.641 15:12:12 -- accel/accel.sh@20 -- # val= 00:08:03.641 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:03.641 15:12:12 -- accel/accel.sh@20 -- # val= 00:08:03.641 15:12:12 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # IFS=: 00:08:03.641 15:12:12 -- accel/accel.sh@19 -- # read -r var val 00:08:05.016 15:12:13 -- accel/accel.sh@20 -- # val= 00:08:05.016 15:12:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.016 15:12:13 -- accel/accel.sh@19 -- # IFS=: 00:08:05.016 15:12:13 -- accel/accel.sh@19 -- # read -r var val 00:08:05.016 15:12:13 -- accel/accel.sh@20 -- # val= 00:08:05.016 15:12:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.016 15:12:13 -- accel/accel.sh@19 -- # IFS=: 00:08:05.016 15:12:13 -- accel/accel.sh@19 -- # read -r var val 00:08:05.016 15:12:13 -- accel/accel.sh@20 -- # val= 00:08:05.016 15:12:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.016 15:12:13 -- accel/accel.sh@19 -- # IFS=: 00:08:05.016 15:12:13 -- accel/accel.sh@19 -- # read -r var val 00:08:05.016 15:12:13 -- accel/accel.sh@20 -- # val= 00:08:05.016 15:12:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.016 15:12:13 -- accel/accel.sh@19 -- # IFS=: 00:08:05.016 15:12:13 -- accel/accel.sh@19 -- # read -r var val 00:08:05.016 15:12:13 -- accel/accel.sh@20 -- # val= 00:08:05.016 15:12:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.016 15:12:13 -- accel/accel.sh@19 -- # IFS=: 00:08:05.016 15:12:13 -- accel/accel.sh@19 -- # read -r var val 00:08:05.016 15:12:13 -- accel/accel.sh@20 -- # val= 00:08:05.016 15:12:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.016 15:12:13 -- accel/accel.sh@19 -- # IFS=: 00:08:05.016 15:12:13 -- accel/accel.sh@19 -- # read -r var val 00:08:05.016 15:12:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:05.016 15:12:13 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:05.016 15:12:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.016 00:08:05.016 real 0m1.539s 00:08:05.016 user 0m1.334s 00:08:05.016 sys 
0m0.111s 00:08:05.016 15:12:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:05.016 15:12:13 -- common/autotest_common.sh@10 -- # set +x 00:08:05.016 ************************************ 00:08:05.016 END TEST accel_compare 00:08:05.016 ************************************ 00:08:05.016 15:12:13 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:05.016 15:12:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:05.016 15:12:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.016 15:12:13 -- common/autotest_common.sh@10 -- # set +x 00:08:05.016 ************************************ 00:08:05.016 START TEST accel_xor 00:08:05.016 ************************************ 00:08:05.016 15:12:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:08:05.016 15:12:14 -- accel/accel.sh@16 -- # local accel_opc 00:08:05.016 15:12:14 -- accel/accel.sh@17 -- # local accel_module 00:08:05.016 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.016 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:05.016 15:12:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:05.016 15:12:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:05.016 15:12:14 -- accel/accel.sh@12 -- # build_accel_config 00:08:05.016 15:12:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:05.016 15:12:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:05.016 15:12:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.016 15:12:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.016 15:12:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:05.016 15:12:14 -- accel/accel.sh@40 -- # local IFS=, 00:08:05.016 15:12:14 -- accel/accel.sh@41 -- # jq -r . 00:08:05.016 [2024-04-24 15:12:14.083351] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
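Two xor runs appear in this section: the one starting above relies on the default number of source buffers (the val=2 entry in its trace below), and a second run later in this log adds -x 3 to raise the source count to three. Reproduced directly, with the path and flags as recorded:

  # The two xor variants exercised in this section
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3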
00:08:05.016 [2024-04-24 15:12:14.083495] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61545 ] 00:08:05.016 [2024-04-24 15:12:14.219715] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.275 [2024-04-24 15:12:14.336003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.275 15:12:14 -- accel/accel.sh@20 -- # val= 00:08:05.275 15:12:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:05.275 15:12:14 -- accel/accel.sh@20 -- # val= 00:08:05.275 15:12:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:05.275 15:12:14 -- accel/accel.sh@20 -- # val=0x1 00:08:05.275 15:12:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:05.275 15:12:14 -- accel/accel.sh@20 -- # val= 00:08:05.275 15:12:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:05.275 15:12:14 -- accel/accel.sh@20 -- # val= 00:08:05.275 15:12:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:05.275 15:12:14 -- accel/accel.sh@20 -- # val=xor 00:08:05.275 15:12:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.275 15:12:14 -- accel/accel.sh@23 -- # accel_opc=xor 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:05.275 15:12:14 -- accel/accel.sh@20 -- # val=2 00:08:05.275 15:12:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:05.275 15:12:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:05.275 15:12:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:05.275 15:12:14 -- accel/accel.sh@20 -- # val= 00:08:05.275 15:12:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:05.275 15:12:14 -- accel/accel.sh@20 -- # val=software 00:08:05.275 15:12:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.275 15:12:14 -- accel/accel.sh@22 -- # accel_module=software 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:05.275 15:12:14 -- accel/accel.sh@20 -- # val=32 00:08:05.275 15:12:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:05.275 15:12:14 -- accel/accel.sh@20 -- # val=32 00:08:05.275 15:12:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:05.275 15:12:14 -- accel/accel.sh@20 -- # val=1 00:08:05.275 15:12:14 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:05.275 15:12:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.275 15:12:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:05.275 15:12:14 -- accel/accel.sh@20 -- # val=Yes 00:08:05.275 15:12:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:05.275 15:12:14 -- accel/accel.sh@20 -- # val= 00:08:05.275 15:12:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:05.275 15:12:14 -- accel/accel.sh@20 -- # val= 00:08:05.275 15:12:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # IFS=: 00:08:05.275 15:12:14 -- accel/accel.sh@19 -- # read -r var val 00:08:06.651 15:12:15 -- accel/accel.sh@20 -- # val= 00:08:06.651 15:12:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.651 15:12:15 -- accel/accel.sh@19 -- # IFS=: 00:08:06.651 15:12:15 -- accel/accel.sh@19 -- # read -r var val 00:08:06.651 15:12:15 -- accel/accel.sh@20 -- # val= 00:08:06.651 15:12:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.651 15:12:15 -- accel/accel.sh@19 -- # IFS=: 00:08:06.651 15:12:15 -- accel/accel.sh@19 -- # read -r var val 00:08:06.651 15:12:15 -- accel/accel.sh@20 -- # val= 00:08:06.651 15:12:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.651 15:12:15 -- accel/accel.sh@19 -- # IFS=: 00:08:06.651 15:12:15 -- accel/accel.sh@19 -- # read -r var val 00:08:06.651 15:12:15 -- accel/accel.sh@20 -- # val= 00:08:06.651 15:12:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.651 15:12:15 -- accel/accel.sh@19 -- # IFS=: 00:08:06.651 15:12:15 -- accel/accel.sh@19 -- # read -r var val 00:08:06.651 15:12:15 -- accel/accel.sh@20 -- # val= 00:08:06.651 15:12:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.651 15:12:15 -- accel/accel.sh@19 -- # IFS=: 00:08:06.651 15:12:15 -- accel/accel.sh@19 -- # read -r var val 00:08:06.651 15:12:15 -- accel/accel.sh@20 -- # val= 00:08:06.651 15:12:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.651 15:12:15 -- accel/accel.sh@19 -- # IFS=: 00:08:06.651 15:12:15 -- accel/accel.sh@19 -- # read -r var val 00:08:06.651 15:12:15 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:06.651 15:12:15 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:06.651 15:12:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.651 00:08:06.652 real 0m1.554s 00:08:06.652 user 0m1.337s 00:08:06.652 sys 0m0.116s 00:08:06.652 15:12:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:06.652 15:12:15 -- common/autotest_common.sh@10 -- # set +x 00:08:06.652 ************************************ 00:08:06.652 END TEST accel_xor 00:08:06.652 ************************************ 00:08:06.652 15:12:15 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:06.652 15:12:15 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:08:06.652 15:12:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:06.652 15:12:15 -- common/autotest_common.sh@10 -- # set +x 00:08:06.652 ************************************ 00:08:06.652 START TEST accel_xor 00:08:06.652 ************************************ 00:08:06.652 
15:12:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:08:06.652 15:12:15 -- accel/accel.sh@16 -- # local accel_opc 00:08:06.652 15:12:15 -- accel/accel.sh@17 -- # local accel_module 00:08:06.652 15:12:15 -- accel/accel.sh@19 -- # IFS=: 00:08:06.652 15:12:15 -- accel/accel.sh@19 -- # read -r var val 00:08:06.652 15:12:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:06.652 15:12:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:06.652 15:12:15 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.652 15:12:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.652 15:12:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.652 15:12:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.652 15:12:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.652 15:12:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.652 15:12:15 -- accel/accel.sh@40 -- # local IFS=, 00:08:06.652 15:12:15 -- accel/accel.sh@41 -- # jq -r . 00:08:06.652 [2024-04-24 15:12:15.744289] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:06.652 [2024-04-24 15:12:15.744369] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61591 ] 00:08:06.652 [2024-04-24 15:12:15.883902] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.911 [2024-04-24 15:12:16.007805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.911 15:12:16 -- accel/accel.sh@20 -- # val= 00:08:06.911 15:12:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # IFS=: 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # read -r var val 00:08:06.911 15:12:16 -- accel/accel.sh@20 -- # val= 00:08:06.911 15:12:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # IFS=: 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # read -r var val 00:08:06.911 15:12:16 -- accel/accel.sh@20 -- # val=0x1 00:08:06.911 15:12:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # IFS=: 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # read -r var val 00:08:06.911 15:12:16 -- accel/accel.sh@20 -- # val= 00:08:06.911 15:12:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # IFS=: 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # read -r var val 00:08:06.911 15:12:16 -- accel/accel.sh@20 -- # val= 00:08:06.911 15:12:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # IFS=: 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # read -r var val 00:08:06.911 15:12:16 -- accel/accel.sh@20 -- # val=xor 00:08:06.911 15:12:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.911 15:12:16 -- accel/accel.sh@23 -- # accel_opc=xor 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # IFS=: 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # read -r var val 00:08:06.911 15:12:16 -- accel/accel.sh@20 -- # val=3 00:08:06.911 15:12:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # IFS=: 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # read -r var val 00:08:06.911 15:12:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:06.911 15:12:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # IFS=: 
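The second xor pass differs from the first only in its source count: -x 3 asks accel_perf to xor three buffers instead of two, which is what the val=3 entry in the config dump above reflects. A standalone sketch with the same flags:

  # 1-second xor across three source buffers, result verified
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3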
00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # read -r var val 00:08:06.911 15:12:16 -- accel/accel.sh@20 -- # val= 00:08:06.911 15:12:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # IFS=: 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # read -r var val 00:08:06.911 15:12:16 -- accel/accel.sh@20 -- # val=software 00:08:06.911 15:12:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.911 15:12:16 -- accel/accel.sh@22 -- # accel_module=software 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # IFS=: 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # read -r var val 00:08:06.911 15:12:16 -- accel/accel.sh@20 -- # val=32 00:08:06.911 15:12:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # IFS=: 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # read -r var val 00:08:06.911 15:12:16 -- accel/accel.sh@20 -- # val=32 00:08:06.911 15:12:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # IFS=: 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # read -r var val 00:08:06.911 15:12:16 -- accel/accel.sh@20 -- # val=1 00:08:06.911 15:12:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # IFS=: 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # read -r var val 00:08:06.911 15:12:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:06.911 15:12:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # IFS=: 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # read -r var val 00:08:06.911 15:12:16 -- accel/accel.sh@20 -- # val=Yes 00:08:06.911 15:12:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # IFS=: 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # read -r var val 00:08:06.911 15:12:16 -- accel/accel.sh@20 -- # val= 00:08:06.911 15:12:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # IFS=: 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # read -r var val 00:08:06.911 15:12:16 -- accel/accel.sh@20 -- # val= 00:08:06.911 15:12:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # IFS=: 00:08:06.911 15:12:16 -- accel/accel.sh@19 -- # read -r var val 00:08:08.289 15:12:17 -- accel/accel.sh@20 -- # val= 00:08:08.289 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.289 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.289 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.289 15:12:17 -- accel/accel.sh@20 -- # val= 00:08:08.289 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.289 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.289 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.289 15:12:17 -- accel/accel.sh@20 -- # val= 00:08:08.289 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.289 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.289 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.289 15:12:17 -- accel/accel.sh@20 -- # val= 00:08:08.289 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.289 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.289 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.289 15:12:17 -- accel/accel.sh@20 -- # val= 00:08:08.289 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.289 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.289 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.289 15:12:17 -- accel/accel.sh@20 -- # val= 00:08:08.289 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 
00:08:08.289 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.289 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.289 15:12:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.289 15:12:17 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:08.289 15:12:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.289 00:08:08.289 real 0m1.545s 00:08:08.289 user 0m1.332s 00:08:08.289 sys 0m0.117s 00:08:08.289 15:12:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:08.289 15:12:17 -- common/autotest_common.sh@10 -- # set +x 00:08:08.289 ************************************ 00:08:08.289 END TEST accel_xor 00:08:08.289 ************************************ 00:08:08.289 15:12:17 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:08.289 15:12:17 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:08.289 15:12:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.289 15:12:17 -- common/autotest_common.sh@10 -- # set +x 00:08:08.289 ************************************ 00:08:08.289 START TEST accel_dif_verify 00:08:08.289 ************************************ 00:08:08.289 15:12:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:08:08.289 15:12:17 -- accel/accel.sh@16 -- # local accel_opc 00:08:08.289 15:12:17 -- accel/accel.sh@17 -- # local accel_module 00:08:08.289 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.289 15:12:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:08.289 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.289 15:12:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:08.289 15:12:17 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.289 15:12:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.289 15:12:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.289 15:12:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.289 15:12:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.289 15:12:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.289 15:12:17 -- accel/accel.sh@40 -- # local IFS=, 00:08:08.289 15:12:17 -- accel/accel.sh@41 -- # jq -r . 00:08:08.289 [2024-04-24 15:12:17.398977] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
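As with the xor cases, the dif_verify workload started above reduces to a plain accel_perf call (sketch only, module config omitted):

  # 1-second DIF verify pass on the software module
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify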
00:08:08.289 [2024-04-24 15:12:17.399549] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61629 ] 00:08:08.548 [2024-04-24 15:12:17.537953] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.548 [2024-04-24 15:12:17.644326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val= 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val= 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val=0x1 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val= 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val= 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val=dif_verify 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val='512 bytes' 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val='8 bytes' 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val= 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val=software 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@22 -- # accel_module=software 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 
-- # val=32 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val=32 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val=1 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val=No 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val= 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:08.548 15:12:17 -- accel/accel.sh@20 -- # val= 00:08:08.548 15:12:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # IFS=: 00:08:08.548 15:12:17 -- accel/accel.sh@19 -- # read -r var val 00:08:09.924 15:12:18 -- accel/accel.sh@20 -- # val= 00:08:09.924 15:12:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.924 15:12:18 -- accel/accel.sh@19 -- # IFS=: 00:08:09.924 15:12:18 -- accel/accel.sh@19 -- # read -r var val 00:08:09.924 15:12:18 -- accel/accel.sh@20 -- # val= 00:08:09.924 15:12:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.924 15:12:18 -- accel/accel.sh@19 -- # IFS=: 00:08:09.924 15:12:18 -- accel/accel.sh@19 -- # read -r var val 00:08:09.924 15:12:18 -- accel/accel.sh@20 -- # val= 00:08:09.924 15:12:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.924 15:12:18 -- accel/accel.sh@19 -- # IFS=: 00:08:09.924 15:12:18 -- accel/accel.sh@19 -- # read -r var val 00:08:09.924 15:12:18 -- accel/accel.sh@20 -- # val= 00:08:09.924 15:12:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.924 15:12:18 -- accel/accel.sh@19 -- # IFS=: 00:08:09.924 15:12:18 -- accel/accel.sh@19 -- # read -r var val 00:08:09.924 15:12:18 -- accel/accel.sh@20 -- # val= 00:08:09.924 15:12:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.924 15:12:18 -- accel/accel.sh@19 -- # IFS=: 00:08:09.924 15:12:18 -- accel/accel.sh@19 -- # read -r var val 00:08:09.924 15:12:18 -- accel/accel.sh@20 -- # val= 00:08:09.924 15:12:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.924 15:12:18 -- accel/accel.sh@19 -- # IFS=: 00:08:09.924 15:12:18 -- accel/accel.sh@19 -- # read -r var val 00:08:09.924 15:12:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:09.924 15:12:18 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:09.924 15:12:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.924 00:08:09.924 real 0m1.520s 00:08:09.924 user 0m1.323s 00:08:09.924 sys 0m0.106s 00:08:09.924 15:12:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:09.924 ************************************ 00:08:09.924 END TEST accel_dif_verify 00:08:09.924 15:12:18 -- common/autotest_common.sh@10 -- # set 
+x 00:08:09.924 ************************************ 00:08:09.924 15:12:18 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:09.924 15:12:18 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:09.924 15:12:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.924 15:12:18 -- common/autotest_common.sh@10 -- # set +x 00:08:09.924 ************************************ 00:08:09.924 START TEST accel_dif_generate 00:08:09.924 ************************************ 00:08:09.924 15:12:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:08:09.924 15:12:19 -- accel/accel.sh@16 -- # local accel_opc 00:08:09.924 15:12:19 -- accel/accel.sh@17 -- # local accel_module 00:08:09.924 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:09.924 15:12:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:09.924 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:09.924 15:12:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:09.924 15:12:19 -- accel/accel.sh@12 -- # build_accel_config 00:08:09.924 15:12:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.924 15:12:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.924 15:12:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.924 15:12:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.924 15:12:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.924 15:12:19 -- accel/accel.sh@40 -- # local IFS=, 00:08:09.924 15:12:19 -- accel/accel.sh@41 -- # jq -r . 00:08:09.924 [2024-04-24 15:12:19.031560] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:09.924 [2024-04-24 15:12:19.031646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61668 ] 00:08:10.183 [2024-04-24 15:12:19.167810] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.183 [2024-04-24 15:12:19.295920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val= 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val= 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val=0x1 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val= 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val= 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val=dif_generate 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@23 -- # 
accel_opc=dif_generate 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val='512 bytes' 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val='8 bytes' 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val= 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val=software 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@22 -- # accel_module=software 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val=32 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val=32 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val=1 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val=No 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val= 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.183 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:10.183 15:12:19 -- accel/accel.sh@20 -- # val= 00:08:10.183 15:12:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.184 15:12:19 -- accel/accel.sh@19 -- # IFS=: 00:08:10.184 15:12:19 -- accel/accel.sh@19 -- # read -r var val 00:08:11.557 15:12:20 -- accel/accel.sh@20 -- # val= 00:08:11.557 15:12:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.557 15:12:20 -- accel/accel.sh@19 -- # IFS=: 00:08:11.557 15:12:20 -- accel/accel.sh@19 -- # read -r var 
val 00:08:11.557 15:12:20 -- accel/accel.sh@20 -- # val= 00:08:11.557 15:12:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.557 15:12:20 -- accel/accel.sh@19 -- # IFS=: 00:08:11.557 15:12:20 -- accel/accel.sh@19 -- # read -r var val 00:08:11.557 15:12:20 -- accel/accel.sh@20 -- # val= 00:08:11.557 15:12:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.557 15:12:20 -- accel/accel.sh@19 -- # IFS=: 00:08:11.557 15:12:20 -- accel/accel.sh@19 -- # read -r var val 00:08:11.557 15:12:20 -- accel/accel.sh@20 -- # val= 00:08:11.557 15:12:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.557 15:12:20 -- accel/accel.sh@19 -- # IFS=: 00:08:11.557 15:12:20 -- accel/accel.sh@19 -- # read -r var val 00:08:11.557 15:12:20 -- accel/accel.sh@20 -- # val= 00:08:11.558 15:12:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.558 15:12:20 -- accel/accel.sh@19 -- # IFS=: 00:08:11.558 15:12:20 -- accel/accel.sh@19 -- # read -r var val 00:08:11.558 15:12:20 -- accel/accel.sh@20 -- # val= 00:08:11.558 15:12:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.558 15:12:20 -- accel/accel.sh@19 -- # IFS=: 00:08:11.558 15:12:20 -- accel/accel.sh@19 -- # read -r var val 00:08:11.558 15:12:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.558 15:12:20 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:11.558 15:12:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.558 00:08:11.558 real 0m1.538s 00:08:11.558 user 0m1.342s 00:08:11.558 sys 0m0.104s 00:08:11.558 15:12:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:11.558 15:12:20 -- common/autotest_common.sh@10 -- # set +x 00:08:11.558 ************************************ 00:08:11.558 END TEST accel_dif_generate 00:08:11.558 ************************************ 00:08:11.558 15:12:20 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:11.558 15:12:20 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:11.558 15:12:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.558 15:12:20 -- common/autotest_common.sh@10 -- # set +x 00:08:11.558 ************************************ 00:08:11.558 START TEST accel_dif_generate_copy 00:08:11.558 ************************************ 00:08:11.558 15:12:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:08:11.558 15:12:20 -- accel/accel.sh@16 -- # local accel_opc 00:08:11.558 15:12:20 -- accel/accel.sh@17 -- # local accel_module 00:08:11.558 15:12:20 -- accel/accel.sh@19 -- # IFS=: 00:08:11.558 15:12:20 -- accel/accel.sh@19 -- # read -r var val 00:08:11.558 15:12:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:11.558 15:12:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:11.558 15:12:20 -- accel/accel.sh@12 -- # build_accel_config 00:08:11.558 15:12:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.558 15:12:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.558 15:12:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.558 15:12:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.558 15:12:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.558 15:12:20 -- accel/accel.sh@40 -- # local IFS=, 00:08:11.558 15:12:20 -- accel/accel.sh@41 -- # jq -r . 00:08:11.558 [2024-04-24 15:12:20.670476] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
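The dif_generate and dif_generate_copy cases follow the same pattern and differ only in the workload name handed to accel_perf; sketches of the two invocations captured in this trace:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy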
00:08:11.558 [2024-04-24 15:12:20.670577] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61708 ] 00:08:11.816 [2024-04-24 15:12:20.805109] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.816 [2024-04-24 15:12:20.936934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.816 15:12:20 -- accel/accel.sh@20 -- # val= 00:08:11.816 15:12:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # IFS=: 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # read -r var val 00:08:11.816 15:12:20 -- accel/accel.sh@20 -- # val= 00:08:11.816 15:12:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # IFS=: 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # read -r var val 00:08:11.816 15:12:20 -- accel/accel.sh@20 -- # val=0x1 00:08:11.816 15:12:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # IFS=: 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # read -r var val 00:08:11.816 15:12:20 -- accel/accel.sh@20 -- # val= 00:08:11.816 15:12:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # IFS=: 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # read -r var val 00:08:11.816 15:12:20 -- accel/accel.sh@20 -- # val= 00:08:11.816 15:12:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # IFS=: 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # read -r var val 00:08:11.816 15:12:20 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:11.816 15:12:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.816 15:12:20 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # IFS=: 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # read -r var val 00:08:11.816 15:12:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.816 15:12:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # IFS=: 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # read -r var val 00:08:11.816 15:12:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.816 15:12:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # IFS=: 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # read -r var val 00:08:11.816 15:12:20 -- accel/accel.sh@20 -- # val= 00:08:11.816 15:12:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # IFS=: 00:08:11.816 15:12:20 -- accel/accel.sh@19 -- # read -r var val 00:08:11.816 15:12:21 -- accel/accel.sh@20 -- # val=software 00:08:11.816 15:12:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.816 15:12:21 -- accel/accel.sh@22 -- # accel_module=software 00:08:11.816 15:12:21 -- accel/accel.sh@19 -- # IFS=: 00:08:11.816 15:12:21 -- accel/accel.sh@19 -- # read -r var val 00:08:11.816 15:12:21 -- accel/accel.sh@20 -- # val=32 00:08:11.816 15:12:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.816 15:12:21 -- accel/accel.sh@19 -- # IFS=: 00:08:11.816 15:12:21 -- accel/accel.sh@19 -- # read -r var val 00:08:11.816 15:12:21 -- accel/accel.sh@20 -- # val=32 00:08:11.816 15:12:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.816 15:12:21 -- accel/accel.sh@19 -- # IFS=: 00:08:11.816 15:12:21 -- accel/accel.sh@19 -- # read -r var val 00:08:11.816 15:12:21 -- accel/accel.sh@20 
-- # val=1 00:08:11.816 15:12:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.816 15:12:21 -- accel/accel.sh@19 -- # IFS=: 00:08:11.816 15:12:21 -- accel/accel.sh@19 -- # read -r var val 00:08:11.816 15:12:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.816 15:12:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.816 15:12:21 -- accel/accel.sh@19 -- # IFS=: 00:08:11.816 15:12:21 -- accel/accel.sh@19 -- # read -r var val 00:08:11.816 15:12:21 -- accel/accel.sh@20 -- # val=No 00:08:11.816 15:12:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.816 15:12:21 -- accel/accel.sh@19 -- # IFS=: 00:08:11.816 15:12:21 -- accel/accel.sh@19 -- # read -r var val 00:08:11.816 15:12:21 -- accel/accel.sh@20 -- # val= 00:08:11.816 15:12:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.816 15:12:21 -- accel/accel.sh@19 -- # IFS=: 00:08:11.816 15:12:21 -- accel/accel.sh@19 -- # read -r var val 00:08:11.816 15:12:21 -- accel/accel.sh@20 -- # val= 00:08:11.816 15:12:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.816 15:12:21 -- accel/accel.sh@19 -- # IFS=: 00:08:11.816 15:12:21 -- accel/accel.sh@19 -- # read -r var val 00:08:13.191 15:12:22 -- accel/accel.sh@20 -- # val= 00:08:13.191 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.191 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.191 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.191 15:12:22 -- accel/accel.sh@20 -- # val= 00:08:13.191 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.191 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.191 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.191 15:12:22 -- accel/accel.sh@20 -- # val= 00:08:13.191 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.191 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.191 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.191 15:12:22 -- accel/accel.sh@20 -- # val= 00:08:13.191 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.191 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.191 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.191 15:12:22 -- accel/accel.sh@20 -- # val= 00:08:13.191 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.191 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.191 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.191 15:12:22 -- accel/accel.sh@20 -- # val= 00:08:13.191 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.191 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.191 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.191 15:12:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.191 15:12:22 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:13.191 15:12:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.191 00:08:13.191 real 0m1.543s 00:08:13.191 user 0m1.334s 00:08:13.191 sys 0m0.116s 00:08:13.191 15:12:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:13.191 15:12:22 -- common/autotest_common.sh@10 -- # set +x 00:08:13.191 ************************************ 00:08:13.191 END TEST accel_dif_generate_copy 00:08:13.191 ************************************ 00:08:13.191 15:12:22 -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:13.191 15:12:22 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:13.191 15:12:22 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:08:13.191 15:12:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.191 15:12:22 -- 
common/autotest_common.sh@10 -- # set +x 00:08:13.191 ************************************ 00:08:13.191 START TEST accel_comp 00:08:13.191 ************************************ 00:08:13.191 15:12:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:13.191 15:12:22 -- accel/accel.sh@16 -- # local accel_opc 00:08:13.191 15:12:22 -- accel/accel.sh@17 -- # local accel_module 00:08:13.191 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.191 15:12:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:13.191 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.191 15:12:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:13.191 15:12:22 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.191 15:12:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.191 15:12:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.191 15:12:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.191 15:12:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.191 15:12:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.191 15:12:22 -- accel/accel.sh@40 -- # local IFS=, 00:08:13.191 15:12:22 -- accel/accel.sh@41 -- # jq -r . 00:08:13.191 [2024-04-24 15:12:22.326861] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:13.191 [2024-04-24 15:12:22.326938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61746 ] 00:08:13.467 [2024-04-24 15:12:22.461370] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.467 [2024-04-24 15:12:22.579997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val= 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val= 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val= 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val=0x1 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val= 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val= 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val=compress 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@23 
-- # accel_opc=compress 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val= 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val=software 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@22 -- # accel_module=software 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val=32 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val=32 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val=1 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val=No 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val= 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:13.467 15:12:22 -- accel/accel.sh@20 -- # val= 00:08:13.467 15:12:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # IFS=: 00:08:13.467 15:12:22 -- accel/accel.sh@19 -- # read -r var val 00:08:14.841 15:12:23 -- accel/accel.sh@20 -- # val= 00:08:14.841 15:12:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.841 15:12:23 -- accel/accel.sh@19 -- # IFS=: 00:08:14.841 15:12:23 -- accel/accel.sh@19 -- # read -r var val 00:08:14.841 15:12:23 -- accel/accel.sh@20 -- # val= 00:08:14.841 15:12:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.841 15:12:23 -- accel/accel.sh@19 -- # IFS=: 00:08:14.841 15:12:23 -- accel/accel.sh@19 -- # read -r var val 00:08:14.841 15:12:23 -- accel/accel.sh@20 -- # val= 00:08:14.841 15:12:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.841 15:12:23 -- accel/accel.sh@19 -- # IFS=: 00:08:14.841 15:12:23 -- accel/accel.sh@19 -- # 
read -r var val 00:08:14.841 15:12:23 -- accel/accel.sh@20 -- # val= 00:08:14.841 15:12:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.841 15:12:23 -- accel/accel.sh@19 -- # IFS=: 00:08:14.841 15:12:23 -- accel/accel.sh@19 -- # read -r var val 00:08:14.841 15:12:23 -- accel/accel.sh@20 -- # val= 00:08:14.841 15:12:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.841 15:12:23 -- accel/accel.sh@19 -- # IFS=: 00:08:14.841 15:12:23 -- accel/accel.sh@19 -- # read -r var val 00:08:14.841 15:12:23 -- accel/accel.sh@20 -- # val= 00:08:14.841 15:12:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.841 15:12:23 -- accel/accel.sh@19 -- # IFS=: 00:08:14.841 15:12:23 -- accel/accel.sh@19 -- # read -r var val 00:08:14.841 15:12:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:14.841 15:12:23 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:14.841 15:12:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:14.841 00:08:14.841 real 0m1.536s 00:08:14.841 user 0m1.333s 00:08:14.841 sys 0m0.110s 00:08:14.841 15:12:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:14.841 15:12:23 -- common/autotest_common.sh@10 -- # set +x 00:08:14.841 ************************************ 00:08:14.841 END TEST accel_comp 00:08:14.841 ************************************ 00:08:14.841 15:12:23 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:14.841 15:12:23 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:08:14.841 15:12:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.841 15:12:23 -- common/autotest_common.sh@10 -- # set +x 00:08:14.841 ************************************ 00:08:14.841 START TEST accel_decomp 00:08:14.841 ************************************ 00:08:14.841 15:12:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:14.841 15:12:23 -- accel/accel.sh@16 -- # local accel_opc 00:08:14.841 15:12:23 -- accel/accel.sh@17 -- # local accel_module 00:08:14.841 15:12:23 -- accel/accel.sh@19 -- # IFS=: 00:08:14.841 15:12:23 -- accel/accel.sh@19 -- # read -r var val 00:08:14.841 15:12:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:14.841 15:12:23 -- accel/accel.sh@12 -- # build_accel_config 00:08:14.841 15:12:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:14.841 15:12:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.841 15:12:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.841 15:12:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.841 15:12:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.841 15:12:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.841 15:12:23 -- accel/accel.sh@40 -- # local IFS=, 00:08:14.841 15:12:23 -- accel/accel.sh@41 -- # jq -r . 00:08:14.841 [2024-04-24 15:12:23.977005] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
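The compression pair uses test/accel/bib from the repository as its input: accel_comp runs the compress workload against it, and accel_decomp (started above) runs the decompress workload with -y result verification. Sketches of the two invocations from the trace:

  # 1-second compress workload over the bundled test file
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
  # 1-second decompress workload over the same file, result verified
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y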
00:08:14.841 [2024-04-24 15:12:23.977112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61785 ] 00:08:15.100 [2024-04-24 15:12:24.110169] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.100 [2024-04-24 15:12:24.258662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.100 15:12:24 -- accel/accel.sh@20 -- # val= 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:15.100 15:12:24 -- accel/accel.sh@20 -- # val= 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:15.100 15:12:24 -- accel/accel.sh@20 -- # val= 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:15.100 15:12:24 -- accel/accel.sh@20 -- # val=0x1 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:15.100 15:12:24 -- accel/accel.sh@20 -- # val= 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:15.100 15:12:24 -- accel/accel.sh@20 -- # val= 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:15.100 15:12:24 -- accel/accel.sh@20 -- # val=decompress 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:15.100 15:12:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:15.100 15:12:24 -- accel/accel.sh@20 -- # val= 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:15.100 15:12:24 -- accel/accel.sh@20 -- # val=software 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@22 -- # accel_module=software 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:15.100 15:12:24 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:15.100 15:12:24 -- accel/accel.sh@20 -- # val=32 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:15.100 15:12:24 -- 
accel/accel.sh@20 -- # val=32 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:15.100 15:12:24 -- accel/accel.sh@20 -- # val=1 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:15.100 15:12:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:15.100 15:12:24 -- accel/accel.sh@20 -- # val=Yes 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:15.100 15:12:24 -- accel/accel.sh@20 -- # val= 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:15.100 15:12:24 -- accel/accel.sh@20 -- # val= 00:08:15.100 15:12:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # IFS=: 00:08:15.100 15:12:24 -- accel/accel.sh@19 -- # read -r var val 00:08:16.496 15:12:25 -- accel/accel.sh@20 -- # val= 00:08:16.496 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.496 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.496 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.496 15:12:25 -- accel/accel.sh@20 -- # val= 00:08:16.496 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.496 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.496 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.496 15:12:25 -- accel/accel.sh@20 -- # val= 00:08:16.496 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.496 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.496 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.496 15:12:25 -- accel/accel.sh@20 -- # val= 00:08:16.496 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.496 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.496 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.496 15:12:25 -- accel/accel.sh@20 -- # val= 00:08:16.496 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.496 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.496 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.496 15:12:25 -- accel/accel.sh@20 -- # val= 00:08:16.496 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.496 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.496 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.496 15:12:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.496 15:12:25 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:16.496 15:12:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.496 00:08:16.496 real 0m1.564s 00:08:16.496 user 0m1.342s 00:08:16.496 sys 0m0.127s 00:08:16.496 15:12:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:16.496 ************************************ 00:08:16.496 END TEST accel_decomp 00:08:16.496 15:12:25 -- common/autotest_common.sh@10 -- # set +x 00:08:16.496 ************************************ 00:08:16.496 15:12:25 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
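accel_decmop_full repeats the decompress workload with -o 0, presumably why the config dump below reports a '111250 bytes' size rather than the 4096-byte value seen in the other cases. Sketch of the invocation:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0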
00:08:16.496 15:12:25 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:16.496 15:12:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.496 15:12:25 -- common/autotest_common.sh@10 -- # set +x 00:08:16.496 ************************************ 00:08:16.496 START TEST accel_decmop_full 00:08:16.496 ************************************ 00:08:16.496 15:12:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:16.496 15:12:25 -- accel/accel.sh@16 -- # local accel_opc 00:08:16.496 15:12:25 -- accel/accel.sh@17 -- # local accel_module 00:08:16.496 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.496 15:12:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:16.496 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.496 15:12:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:16.496 15:12:25 -- accel/accel.sh@12 -- # build_accel_config 00:08:16.496 15:12:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.496 15:12:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.496 15:12:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.496 15:12:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.496 15:12:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.496 15:12:25 -- accel/accel.sh@40 -- # local IFS=, 00:08:16.496 15:12:25 -- accel/accel.sh@41 -- # jq -r . 00:08:16.496 [2024-04-24 15:12:25.648852] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:16.496 [2024-04-24 15:12:25.648984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61830 ] 00:08:16.754 [2024-04-24 15:12:25.792681] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.754 [2024-04-24 15:12:25.922462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.754 15:12:25 -- accel/accel.sh@20 -- # val= 00:08:16.754 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.754 15:12:25 -- accel/accel.sh@20 -- # val= 00:08:16.754 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.754 15:12:25 -- accel/accel.sh@20 -- # val= 00:08:16.754 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.754 15:12:25 -- accel/accel.sh@20 -- # val=0x1 00:08:16.754 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.754 15:12:25 -- accel/accel.sh@20 -- # val= 00:08:16.754 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.754 15:12:25 -- accel/accel.sh@20 -- # val= 00:08:16.754 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.754 
15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.754 15:12:25 -- accel/accel.sh@20 -- # val=decompress 00:08:16.754 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.754 15:12:25 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.754 15:12:25 -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:16.754 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.754 15:12:25 -- accel/accel.sh@20 -- # val= 00:08:16.754 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.754 15:12:25 -- accel/accel.sh@20 -- # val=software 00:08:16.754 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.754 15:12:25 -- accel/accel.sh@22 -- # accel_module=software 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.754 15:12:25 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:16.754 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.754 15:12:25 -- accel/accel.sh@20 -- # val=32 00:08:16.754 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.754 15:12:25 -- accel/accel.sh@20 -- # val=32 00:08:16.754 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.754 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.755 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.755 15:12:25 -- accel/accel.sh@20 -- # val=1 00:08:16.755 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.755 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.755 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.755 15:12:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.755 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.755 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.755 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.755 15:12:25 -- accel/accel.sh@20 -- # val=Yes 00:08:16.755 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.755 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.755 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.755 15:12:25 -- accel/accel.sh@20 -- # val= 00:08:16.755 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.755 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.755 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:16.755 15:12:25 -- accel/accel.sh@20 -- # val= 00:08:16.755 15:12:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.755 15:12:25 -- accel/accel.sh@19 -- # IFS=: 00:08:16.755 15:12:25 -- accel/accel.sh@19 -- # read -r var val 00:08:18.128 15:12:27 -- accel/accel.sh@20 -- # val= 00:08:18.128 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.128 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.128 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.128 15:12:27 -- accel/accel.sh@20 -- # val= 00:08:18.128 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.128 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.128 15:12:27 -- accel/accel.sh@19 -- # read -r 
var val 00:08:18.128 15:12:27 -- accel/accel.sh@20 -- # val= 00:08:18.128 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.128 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.128 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.128 15:12:27 -- accel/accel.sh@20 -- # val= 00:08:18.128 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.128 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.128 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.128 15:12:27 -- accel/accel.sh@20 -- # val= 00:08:18.128 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.128 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.128 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.128 15:12:27 -- accel/accel.sh@20 -- # val= 00:08:18.128 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.128 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.128 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.128 15:12:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.128 15:12:27 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:18.128 15:12:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.128 00:08:18.128 real 0m1.567s 00:08:18.128 user 0m1.344s 00:08:18.128 sys 0m0.126s 00:08:18.128 ************************************ 00:08:18.128 END TEST accel_decmop_full 00:08:18.128 15:12:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:18.128 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:08:18.128 ************************************ 00:08:18.128 15:12:27 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:18.128 15:12:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:18.128 15:12:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.128 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:08:18.128 ************************************ 00:08:18.128 START TEST accel_decomp_mcore 00:08:18.128 ************************************ 00:08:18.128 15:12:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:18.128 15:12:27 -- accel/accel.sh@16 -- # local accel_opc 00:08:18.128 15:12:27 -- accel/accel.sh@17 -- # local accel_module 00:08:18.128 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.128 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.128 15:12:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:18.128 15:12:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:18.128 15:12:27 -- accel/accel.sh@12 -- # build_accel_config 00:08:18.128 15:12:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.128 15:12:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.128 15:12:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.128 15:12:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.128 15:12:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.128 15:12:27 -- accel/accel.sh@40 -- # local IFS=, 00:08:18.128 15:12:27 -- accel/accel.sh@41 -- # jq -r . 00:08:18.128 [2024-04-24 15:12:27.305843] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:08:18.128 [2024-04-24 15:12:27.305949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61869 ] 00:08:18.386 [2024-04-24 15:12:27.443126] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.386 [2024-04-24 15:12:27.564166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.386 [2024-04-24 15:12:27.564272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.386 [2024-04-24 15:12:27.564376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.386 [2024-04-24 15:12:27.564382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.386 15:12:27 -- accel/accel.sh@20 -- # val= 00:08:18.386 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.386 15:12:27 -- accel/accel.sh@20 -- # val= 00:08:18.386 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.386 15:12:27 -- accel/accel.sh@20 -- # val= 00:08:18.386 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.386 15:12:27 -- accel/accel.sh@20 -- # val=0xf 00:08:18.386 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.386 15:12:27 -- accel/accel.sh@20 -- # val= 00:08:18.386 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.386 15:12:27 -- accel/accel.sh@20 -- # val= 00:08:18.386 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.386 15:12:27 -- accel/accel.sh@20 -- # val=decompress 00:08:18.386 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.386 15:12:27 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.386 15:12:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.386 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.386 15:12:27 -- accel/accel.sh@20 -- # val= 00:08:18.386 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.386 15:12:27 -- accel/accel.sh@20 -- # val=software 00:08:18.386 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.386 15:12:27 -- accel/accel.sh@22 -- # accel_module=software 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.386 15:12:27 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:18.386 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # IFS=: 
00:08:18.386 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.386 15:12:27 -- accel/accel.sh@20 -- # val=32 00:08:18.645 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.645 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.645 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.645 15:12:27 -- accel/accel.sh@20 -- # val=32 00:08:18.645 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.645 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.645 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.645 15:12:27 -- accel/accel.sh@20 -- # val=1 00:08:18.645 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.645 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.645 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.645 15:12:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.645 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.645 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.645 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.645 15:12:27 -- accel/accel.sh@20 -- # val=Yes 00:08:18.645 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.645 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.645 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.645 15:12:27 -- accel/accel.sh@20 -- # val= 00:08:18.645 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.645 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.645 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:18.645 15:12:27 -- accel/accel.sh@20 -- # val= 00:08:18.645 15:12:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.645 15:12:27 -- accel/accel.sh@19 -- # IFS=: 00:08:18.645 15:12:27 -- accel/accel.sh@19 -- # read -r var val 00:08:19.645 15:12:28 -- accel/accel.sh@20 -- # val= 00:08:19.645 15:12:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.645 15:12:28 -- accel/accel.sh@19 -- # IFS=: 00:08:19.645 15:12:28 -- accel/accel.sh@19 -- # read -r var val 00:08:19.645 15:12:28 -- accel/accel.sh@20 -- # val= 00:08:19.645 15:12:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.645 15:12:28 -- accel/accel.sh@19 -- # IFS=: 00:08:19.645 15:12:28 -- accel/accel.sh@19 -- # read -r var val 00:08:19.645 15:12:28 -- accel/accel.sh@20 -- # val= 00:08:19.645 15:12:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.645 15:12:28 -- accel/accel.sh@19 -- # IFS=: 00:08:19.645 15:12:28 -- accel/accel.sh@19 -- # read -r var val 00:08:19.645 15:12:28 -- accel/accel.sh@20 -- # val= 00:08:19.645 15:12:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.645 15:12:28 -- accel/accel.sh@19 -- # IFS=: 00:08:19.645 15:12:28 -- accel/accel.sh@19 -- # read -r var val 00:08:19.645 15:12:28 -- accel/accel.sh@20 -- # val= 00:08:19.645 15:12:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.645 15:12:28 -- accel/accel.sh@19 -- # IFS=: 00:08:19.645 15:12:28 -- accel/accel.sh@19 -- # read -r var val 00:08:19.645 15:12:28 -- accel/accel.sh@20 -- # val= 00:08:19.645 15:12:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.645 15:12:28 -- accel/accel.sh@19 -- # IFS=: 00:08:19.645 15:12:28 -- accel/accel.sh@19 -- # read -r var val 00:08:19.645 15:12:28 -- accel/accel.sh@20 -- # val= 00:08:19.645 15:12:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.645 15:12:28 -- accel/accel.sh@19 -- # IFS=: 00:08:19.645 15:12:28 -- accel/accel.sh@19 -- # read -r var val 00:08:19.645 15:12:28 -- accel/accel.sh@20 -- # val= 00:08:19.645 15:12:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.645 15:12:28 -- accel/accel.sh@19 -- # IFS=: 00:08:19.645 15:12:28 -- 
accel/accel.sh@19 -- # read -r var val 00:08:19.645 15:12:28 -- accel/accel.sh@20 -- # val= 00:08:19.645 15:12:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.645 15:12:28 -- accel/accel.sh@19 -- # IFS=: 00:08:19.645 15:12:28 -- accel/accel.sh@19 -- # read -r var val 00:08:19.645 15:12:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.645 ************************************ 00:08:19.645 END TEST accel_decomp_mcore 00:08:19.645 ************************************ 00:08:19.645 15:12:28 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:19.645 15:12:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.645 00:08:19.645 real 0m1.544s 00:08:19.645 user 0m4.723s 00:08:19.645 sys 0m0.121s 00:08:19.645 15:12:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:19.645 15:12:28 -- common/autotest_common.sh@10 -- # set +x 00:08:19.645 15:12:28 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:19.645 15:12:28 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:19.645 15:12:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.645 15:12:28 -- common/autotest_common.sh@10 -- # set +x 00:08:19.937 ************************************ 00:08:19.937 START TEST accel_decomp_full_mcore 00:08:19.937 ************************************ 00:08:19.937 15:12:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:19.937 15:12:28 -- accel/accel.sh@16 -- # local accel_opc 00:08:19.937 15:12:28 -- accel/accel.sh@17 -- # local accel_module 00:08:19.937 15:12:28 -- accel/accel.sh@19 -- # IFS=: 00:08:19.937 15:12:28 -- accel/accel.sh@19 -- # read -r var val 00:08:19.937 15:12:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:19.937 15:12:28 -- accel/accel.sh@12 -- # build_accel_config 00:08:19.937 15:12:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:19.937 15:12:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:19.937 15:12:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:19.937 15:12:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.937 15:12:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.937 15:12:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:19.937 15:12:28 -- accel/accel.sh@40 -- # local IFS=, 00:08:19.937 15:12:28 -- accel/accel.sh@41 -- # jq -r . 00:08:19.937 [2024-04-24 15:12:28.964665] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:08:19.937 [2024-04-24 15:12:28.964746] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61911 ] 00:08:19.937 [2024-04-24 15:12:29.103168] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.196 [2024-04-24 15:12:29.224129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.196 [2024-04-24 15:12:29.224250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.196 [2024-04-24 15:12:29.224381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.196 [2024-04-24 15:12:29.224381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val= 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val= 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val= 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val=0xf 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val= 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val= 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val=decompress 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val= 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val=software 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@22 -- # accel_module=software 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 
00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val=32 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val=32 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val=1 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val=Yes 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val= 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:20.196 15:12:29 -- accel/accel.sh@20 -- # val= 00:08:20.196 15:12:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # IFS=: 00:08:20.196 15:12:29 -- accel/accel.sh@19 -- # read -r var val 00:08:21.571 15:12:30 -- accel/accel.sh@20 -- # val= 00:08:21.571 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.571 15:12:30 -- accel/accel.sh@20 -- # val= 00:08:21.571 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.571 15:12:30 -- accel/accel.sh@20 -- # val= 00:08:21.571 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.571 15:12:30 -- accel/accel.sh@20 -- # val= 00:08:21.571 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.571 15:12:30 -- accel/accel.sh@20 -- # val= 00:08:21.571 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.571 15:12:30 -- accel/accel.sh@20 -- # val= 00:08:21.571 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.571 15:12:30 -- accel/accel.sh@20 -- # val= 00:08:21.571 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.571 15:12:30 -- accel/accel.sh@20 -- # val= 00:08:21.571 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.571 15:12:30 -- 
accel/accel.sh@19 -- # read -r var val 00:08:21.571 15:12:30 -- accel/accel.sh@20 -- # val= 00:08:21.571 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.571 15:12:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:21.571 15:12:30 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:21.571 15:12:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:21.571 00:08:21.571 real 0m1.553s 00:08:21.571 user 0m4.756s 00:08:21.571 sys 0m0.128s 00:08:21.571 15:12:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:21.571 15:12:30 -- common/autotest_common.sh@10 -- # set +x 00:08:21.571 ************************************ 00:08:21.571 END TEST accel_decomp_full_mcore 00:08:21.571 ************************************ 00:08:21.571 15:12:30 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:21.571 15:12:30 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:21.571 15:12:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.571 15:12:30 -- common/autotest_common.sh@10 -- # set +x 00:08:21.571 ************************************ 00:08:21.571 START TEST accel_decomp_mthread 00:08:21.571 ************************************ 00:08:21.571 15:12:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:21.571 15:12:30 -- accel/accel.sh@16 -- # local accel_opc 00:08:21.571 15:12:30 -- accel/accel.sh@17 -- # local accel_module 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.571 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.571 15:12:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:21.571 15:12:30 -- accel/accel.sh@12 -- # build_accel_config 00:08:21.571 15:12:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:21.571 15:12:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.571 15:12:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:21.571 15:12:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.571 15:12:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.571 15:12:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.571 15:12:30 -- accel/accel.sh@40 -- # local IFS=, 00:08:21.571 15:12:30 -- accel/accel.sh@41 -- # jq -r . 00:08:21.571 [2024-04-24 15:12:30.628221] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:08:21.571 [2024-04-24 15:12:30.628314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61952 ] 00:08:21.571 [2024-04-24 15:12:30.765285] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.830 [2024-04-24 15:12:30.883202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.830 15:12:30 -- accel/accel.sh@20 -- # val= 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.830 15:12:30 -- accel/accel.sh@20 -- # val= 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.830 15:12:30 -- accel/accel.sh@20 -- # val= 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.830 15:12:30 -- accel/accel.sh@20 -- # val=0x1 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.830 15:12:30 -- accel/accel.sh@20 -- # val= 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.830 15:12:30 -- accel/accel.sh@20 -- # val= 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.830 15:12:30 -- accel/accel.sh@20 -- # val=decompress 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.830 15:12:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.830 15:12:30 -- accel/accel.sh@20 -- # val= 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.830 15:12:30 -- accel/accel.sh@20 -- # val=software 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@22 -- # accel_module=software 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.830 15:12:30 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.830 15:12:30 -- accel/accel.sh@20 -- # val=32 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.830 15:12:30 -- 
accel/accel.sh@20 -- # val=32 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.830 15:12:30 -- accel/accel.sh@20 -- # val=2 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.830 15:12:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.830 15:12:30 -- accel/accel.sh@20 -- # val=Yes 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.830 15:12:30 -- accel/accel.sh@20 -- # val= 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:21.830 15:12:30 -- accel/accel.sh@20 -- # val= 00:08:21.830 15:12:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # IFS=: 00:08:21.830 15:12:30 -- accel/accel.sh@19 -- # read -r var val 00:08:23.206 15:12:32 -- accel/accel.sh@20 -- # val= 00:08:23.206 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.206 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.206 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.206 15:12:32 -- accel/accel.sh@20 -- # val= 00:08:23.206 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.206 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.206 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.206 15:12:32 -- accel/accel.sh@20 -- # val= 00:08:23.206 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.206 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.206 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.206 15:12:32 -- accel/accel.sh@20 -- # val= 00:08:23.206 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.206 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.206 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.206 15:12:32 -- accel/accel.sh@20 -- # val= 00:08:23.206 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.206 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.206 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.206 15:12:32 -- accel/accel.sh@20 -- # val= 00:08:23.206 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.206 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.206 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.206 15:12:32 -- accel/accel.sh@20 -- # val= 00:08:23.206 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.206 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.206 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.206 ************************************ 00:08:23.206 END TEST accel_decomp_mthread 00:08:23.206 ************************************ 00:08:23.206 15:12:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:23.206 15:12:32 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:23.206 15:12:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.206 00:08:23.206 real 0m1.547s 00:08:23.206 user 0m1.339s 00:08:23.206 sys 0m0.110s 00:08:23.206 15:12:32 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:08:23.206 15:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:23.206 15:12:32 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:23.206 15:12:32 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:23.206 15:12:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.206 15:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:23.206 ************************************ 00:08:23.206 START TEST accel_deomp_full_mthread 00:08:23.206 ************************************ 00:08:23.206 15:12:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:23.206 15:12:32 -- accel/accel.sh@16 -- # local accel_opc 00:08:23.206 15:12:32 -- accel/accel.sh@17 -- # local accel_module 00:08:23.206 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.206 15:12:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:23.206 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.206 15:12:32 -- accel/accel.sh@12 -- # build_accel_config 00:08:23.206 15:12:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:23.206 15:12:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:23.206 15:12:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:23.206 15:12:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.206 15:12:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.206 15:12:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:23.206 15:12:32 -- accel/accel.sh@40 -- # local IFS=, 00:08:23.206 15:12:32 -- accel/accel.sh@41 -- # jq -r . 00:08:23.206 [2024-04-24 15:12:32.290256] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:08:23.206 [2024-04-24 15:12:32.290597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61991 ] 00:08:23.206 [2024-04-24 15:12:32.419561] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.465 [2024-04-24 15:12:32.566608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.465 15:12:32 -- accel/accel.sh@20 -- # val= 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.465 15:12:32 -- accel/accel.sh@20 -- # val= 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.465 15:12:32 -- accel/accel.sh@20 -- # val= 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.465 15:12:32 -- accel/accel.sh@20 -- # val=0x1 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.465 15:12:32 -- accel/accel.sh@20 -- # val= 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.465 15:12:32 -- accel/accel.sh@20 -- # val= 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.465 15:12:32 -- accel/accel.sh@20 -- # val=decompress 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.465 15:12:32 -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.465 15:12:32 -- accel/accel.sh@20 -- # val= 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.465 15:12:32 -- accel/accel.sh@20 -- # val=software 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@22 -- # accel_module=software 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.465 15:12:32 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.465 15:12:32 -- accel/accel.sh@20 -- # val=32 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.465 15:12:32 -- 
accel/accel.sh@20 -- # val=32 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.465 15:12:32 -- accel/accel.sh@20 -- # val=2 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.465 15:12:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.465 15:12:32 -- accel/accel.sh@20 -- # val=Yes 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.465 15:12:32 -- accel/accel.sh@20 -- # val= 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:23.465 15:12:32 -- accel/accel.sh@20 -- # val= 00:08:23.465 15:12:32 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # IFS=: 00:08:23.465 15:12:32 -- accel/accel.sh@19 -- # read -r var val 00:08:24.852 15:12:33 -- accel/accel.sh@20 -- # val= 00:08:24.852 15:12:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.852 15:12:33 -- accel/accel.sh@19 -- # IFS=: 00:08:24.852 15:12:33 -- accel/accel.sh@19 -- # read -r var val 00:08:24.852 15:12:33 -- accel/accel.sh@20 -- # val= 00:08:24.852 15:12:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.852 15:12:33 -- accel/accel.sh@19 -- # IFS=: 00:08:24.852 15:12:33 -- accel/accel.sh@19 -- # read -r var val 00:08:24.852 15:12:33 -- accel/accel.sh@20 -- # val= 00:08:24.852 15:12:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.852 15:12:33 -- accel/accel.sh@19 -- # IFS=: 00:08:24.852 15:12:33 -- accel/accel.sh@19 -- # read -r var val 00:08:24.852 15:12:33 -- accel/accel.sh@20 -- # val= 00:08:24.852 15:12:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.852 15:12:33 -- accel/accel.sh@19 -- # IFS=: 00:08:24.852 15:12:33 -- accel/accel.sh@19 -- # read -r var val 00:08:24.852 15:12:33 -- accel/accel.sh@20 -- # val= 00:08:24.852 15:12:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.852 15:12:33 -- accel/accel.sh@19 -- # IFS=: 00:08:24.852 15:12:33 -- accel/accel.sh@19 -- # read -r var val 00:08:24.852 15:12:33 -- accel/accel.sh@20 -- # val= 00:08:24.852 15:12:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.852 15:12:33 -- accel/accel.sh@19 -- # IFS=: 00:08:24.852 15:12:33 -- accel/accel.sh@19 -- # read -r var val 00:08:24.852 15:12:33 -- accel/accel.sh@20 -- # val= 00:08:24.852 15:12:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.852 15:12:33 -- accel/accel.sh@19 -- # IFS=: 00:08:24.852 15:12:33 -- accel/accel.sh@19 -- # read -r var val 00:08:24.852 15:12:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:24.852 15:12:33 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:24.852 15:12:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:24.852 00:08:24.852 real 0m1.582s 00:08:24.852 user 0m1.374s 00:08:24.852 sys 0m0.113s 00:08:24.852 15:12:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:24.852 15:12:33 -- common/autotest_common.sh@10 -- # set +x 00:08:24.852 ************************************ 00:08:24.852 END 
TEST accel_deomp_full_mthread 00:08:24.852 ************************************ 00:08:24.852 15:12:33 -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:24.852 15:12:33 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:24.852 15:12:33 -- accel/accel.sh@137 -- # build_accel_config 00:08:24.852 15:12:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:24.852 15:12:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.852 15:12:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:24.852 15:12:33 -- common/autotest_common.sh@10 -- # set +x 00:08:24.852 15:12:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:24.852 15:12:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:24.852 15:12:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:24.852 15:12:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:24.852 15:12:33 -- accel/accel.sh@40 -- # local IFS=, 00:08:24.852 15:12:33 -- accel/accel.sh@41 -- # jq -r . 00:08:24.852 ************************************ 00:08:24.852 START TEST accel_dif_functional_tests 00:08:24.852 ************************************ 00:08:24.852 15:12:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:24.852 [2024-04-24 15:12:34.005623] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:24.852 [2024-04-24 15:12:34.005740] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62036 ] 00:08:25.146 [2024-04-24 15:12:34.148369] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:25.146 [2024-04-24 15:12:34.266346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.146 [2024-04-24 15:12:34.266476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.146 [2024-04-24 15:12:34.266476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.146 00:08:25.146 00:08:25.146 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.146 http://cunit.sourceforge.net/ 00:08:25.146 00:08:25.146 00:08:25.146 Suite: accel_dif 00:08:25.146 Test: verify: DIF generated, GUARD check ...passed 00:08:25.146 Test: verify: DIF generated, APPTAG check ...passed 00:08:25.146 Test: verify: DIF generated, REFTAG check ...passed 00:08:25.146 Test: verify: DIF not generated, GUARD check ...[2024-04-24 15:12:34.359382] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:25.146 [2024-04-24 15:12:34.359506] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:25.146 passed 00:08:25.146 Test: verify: DIF not generated, APPTAG check ...passed 00:08:25.146 Test: verify: DIF not generated, REFTAG check ...[2024-04-24 15:12:34.359624] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:25.146 [2024-04-24 15:12:34.359673] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:25.146 [2024-04-24 15:12:34.359703] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:25.146 passed 00:08:25.146 Test: verify: APPTAG correct, APPTAG check ...[2024-04-24 15:12:34.359816] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, 
Actual=5a5a5a5a 00:08:25.146 passed 00:08:25.146 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:08:25.146 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:25.146 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-04-24 15:12:34.359897] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:25.146 passed 00:08:25.146 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:25.146 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-24 15:12:34.360270] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:25.146 passed 00:08:25.146 Test: generate copy: DIF generated, GUARD check ...passed 00:08:25.146 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:25.146 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:25.146 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:25.146 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:25.146 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:25.146 Test: generate copy: iovecs-len validate ...[2024-04-24 15:12:34.360722] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:08:25.146 passed 00:08:25.146 Test: generate copy: buffer alignment validate ...passed 00:08:25.146 00:08:25.146 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.146 suites 1 1 n/a 0 0 00:08:25.146 tests 20 20 20 0 0 00:08:25.146 asserts 204 204 204 0 n/a 00:08:25.146 00:08:25.146 Elapsed time = 0.004 seconds 00:08:25.405 00:08:25.405 real 0m0.654s 00:08:25.405 user 0m0.814s 00:08:25.405 sys 0m0.147s 00:08:25.405 15:12:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:25.405 ************************************ 00:08:25.405 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:08:25.405 END TEST accel_dif_functional_tests 00:08:25.405 ************************************ 00:08:25.405 00:08:25.405 real 0m37.196s 00:08:25.405 user 0m37.965s 00:08:25.405 sys 0m4.555s 00:08:25.405 15:12:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:25.405 ************************************ 00:08:25.405 END TEST accel 00:08:25.405 ************************************ 00:08:25.405 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:08:25.664 15:12:34 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:25.664 15:12:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:25.664 15:12:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.664 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:08:25.664 ************************************ 00:08:25.664 START TEST accel_rpc 00:08:25.664 ************************************ 00:08:25.664 15:12:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:25.664 * Looking for test storage... 00:08:25.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:25.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
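The accel_dif_functional_tests block whose CUnit output appears just above drives test/accel/dif/dif the same way accel_perf was driven: the JSON accel config built by build_accel_config is handed over on fd 62 (effectively minimal here, since accel_json_cfg=() and no hardware modules are enabled, which is an inference from the xtrace rather than something the log states). The *ERROR* lines from dif.c are expected output, not failures: each 'verify: DIF not generated' case feeds data whose Guard, App Tag or Ref Tag does not match and asserts that the library reports the mismatch, which is why the summary still shows 20/20 tests and 204/204 asserts passing in about 0.004 seconds. The invocation as the harness issues it:

  # run the DIF functional CUnit suite against the software accel module
  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62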
00:08:25.664 15:12:34 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:25.664 15:12:34 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62111 00:08:25.664 15:12:34 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:25.664 15:12:34 -- accel/accel_rpc.sh@15 -- # waitforlisten 62111 00:08:25.664 15:12:34 -- common/autotest_common.sh@817 -- # '[' -z 62111 ']' 00:08:25.664 15:12:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.664 15:12:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:25.664 15:12:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.664 15:12:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:25.664 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:08:25.664 [2024-04-24 15:12:34.894826] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:25.664 [2024-04-24 15:12:34.895497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62111 ] 00:08:25.927 [2024-04-24 15:12:35.041297] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.927 [2024-04-24 15:12:35.158910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.864 15:12:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:26.864 15:12:35 -- common/autotest_common.sh@850 -- # return 0 00:08:26.864 15:12:35 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:26.864 15:12:35 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:26.864 15:12:35 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:26.864 15:12:35 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:26.864 15:12:35 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:26.864 15:12:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:26.864 15:12:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.864 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:08:26.864 ************************************ 00:08:26.864 START TEST accel_assign_opcode 00:08:26.864 ************************************ 00:08:26.864 15:12:35 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:08:26.864 15:12:35 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:26.864 15:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.864 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:08:26.864 [2024-04-24 15:12:35.979759] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:26.864 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.864 15:12:35 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:26.864 15:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:26.864 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:08:26.864 [2024-04-24 15:12:35.987765] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:26.864 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:26.864 15:12:35 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:26.864 15:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:08:26.864 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:08:27.124 15:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.124 15:12:36 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:27.124 15:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.124 15:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:27.124 15:12:36 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:27.124 15:12:36 -- accel/accel_rpc.sh@42 -- # grep software 00:08:27.124 15:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.124 software 00:08:27.124 00:08:27.124 real 0m0.300s 00:08:27.124 user 0m0.055s 00:08:27.124 sys 0m0.009s 00:08:27.124 ************************************ 00:08:27.124 END TEST accel_assign_opcode 00:08:27.124 ************************************ 00:08:27.124 15:12:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:27.124 15:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:27.124 15:12:36 -- accel/accel_rpc.sh@55 -- # killprocess 62111 00:08:27.124 15:12:36 -- common/autotest_common.sh@936 -- # '[' -z 62111 ']' 00:08:27.124 15:12:36 -- common/autotest_common.sh@940 -- # kill -0 62111 00:08:27.124 15:12:36 -- common/autotest_common.sh@941 -- # uname 00:08:27.124 15:12:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:27.124 15:12:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62111 00:08:27.124 killing process with pid 62111 00:08:27.124 15:12:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:27.124 15:12:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:27.124 15:12:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62111' 00:08:27.124 15:12:36 -- common/autotest_common.sh@955 -- # kill 62111 00:08:27.124 15:12:36 -- common/autotest_common.sh@960 -- # wait 62111 00:08:27.691 ************************************ 00:08:27.691 END TEST accel_rpc 00:08:27.691 00:08:27.691 real 0m2.014s 00:08:27.691 user 0m2.189s 00:08:27.691 sys 0m0.454s 00:08:27.691 15:12:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:27.691 15:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:27.691 ************************************ 00:08:27.691 15:12:36 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:27.691 15:12:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:27.691 15:12:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.691 15:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:27.691 ************************************ 00:08:27.691 START TEST app_cmdline 00:08:27.691 ************************************ 00:08:27.691 15:12:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:27.951 * Looking for test storage... 00:08:27.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:27.951 15:12:36 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:27.951 15:12:36 -- app/cmdline.sh@17 -- # spdk_tgt_pid=62214 00:08:27.951 15:12:36 -- app/cmdline.sh@18 -- # waitforlisten 62214 00:08:27.951 15:12:36 -- common/autotest_common.sh@817 -- # '[' -z 62214 ']' 00:08:27.951 15:12:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
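The accel_rpc block that just closed is a short RPC conversation with a dedicated spdk_tgt instance (pid 62111) started with --wait-for-rpc so that opcode assignment can happen before framework init. Restated as the equivalent rpc.py calls, all of which appear verbatim in the log; only the shell framing and the pid variable are added here:

  ./build/bin/spdk_tgt --wait-for-rpc &
  tgt_pid=$!                                                  # 62111 in this run; the harness waits on /var/tmp/spdk.sock
  ./scripts/rpc.py accel_assign_opc -o copy -m incorrect      # accepted pre-init: 'Operation copy will be assigned to module incorrect'
  ./scripts/rpc.py accel_assign_opc -o copy -m software       # reassigned to the software module
  ./scripts/rpc.py framework_start_init                       # finish subsystem initialization
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # prints 'software', which the test's grep expects
  kill "$tgt_pid"                                             # killprocess 62111

The app_cmdline test now starting does the inverse check: its spdk_tgt (pid 62214) runs with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods succeed while env_dpdk_get_mem_stats comes back with JSON-RPC error -32601, 'Method not found'.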
00:08:27.951 15:12:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:27.951 15:12:36 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:27.951 15:12:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.951 15:12:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:27.951 15:12:36 -- common/autotest_common.sh@10 -- # set +x 00:08:27.951 [2024-04-24 15:12:36.998686] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:27.951 [2024-04-24 15:12:36.998776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62214 ] 00:08:27.951 [2024-04-24 15:12:37.132477] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.211 [2024-04-24 15:12:37.248026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.780 15:12:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:28.780 15:12:37 -- common/autotest_common.sh@850 -- # return 0 00:08:28.780 15:12:37 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:29.039 { 00:08:29.039 "version": "SPDK v24.05-pre git sha1 0d1f30fbf", 00:08:29.039 "fields": { 00:08:29.039 "major": 24, 00:08:29.039 "minor": 5, 00:08:29.039 "patch": 0, 00:08:29.039 "suffix": "-pre", 00:08:29.039 "commit": "0d1f30fbf" 00:08:29.039 } 00:08:29.039 } 00:08:29.039 15:12:38 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:29.039 15:12:38 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:29.039 15:12:38 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:29.039 15:12:38 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:29.320 15:12:38 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:29.321 15:12:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.321 15:12:38 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:29.321 15:12:38 -- common/autotest_common.sh@10 -- # set +x 00:08:29.321 15:12:38 -- app/cmdline.sh@26 -- # sort 00:08:29.321 15:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.321 15:12:38 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:29.321 15:12:38 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:29.321 15:12:38 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:29.321 15:12:38 -- common/autotest_common.sh@638 -- # local es=0 00:08:29.321 15:12:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:29.321 15:12:38 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.321 15:12:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:29.321 15:12:38 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.321 15:12:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:29.321 15:12:38 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.321 15:12:38 -- common/autotest_common.sh@630 -- # case "$(type -t 
"$arg")" in 00:08:29.321 15:12:38 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.321 15:12:38 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:29.321 15:12:38 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:29.579 request: 00:08:29.579 { 00:08:29.579 "method": "env_dpdk_get_mem_stats", 00:08:29.579 "req_id": 1 00:08:29.579 } 00:08:29.579 Got JSON-RPC error response 00:08:29.579 response: 00:08:29.579 { 00:08:29.579 "code": -32601, 00:08:29.579 "message": "Method not found" 00:08:29.579 } 00:08:29.579 15:12:38 -- common/autotest_common.sh@641 -- # es=1 00:08:29.579 15:12:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:29.579 15:12:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:29.579 15:12:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:29.579 15:12:38 -- app/cmdline.sh@1 -- # killprocess 62214 00:08:29.579 15:12:38 -- common/autotest_common.sh@936 -- # '[' -z 62214 ']' 00:08:29.579 15:12:38 -- common/autotest_common.sh@940 -- # kill -0 62214 00:08:29.579 15:12:38 -- common/autotest_common.sh@941 -- # uname 00:08:29.579 15:12:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:29.579 15:12:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62214 00:08:29.579 killing process with pid 62214 00:08:29.579 15:12:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:29.579 15:12:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:29.579 15:12:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62214' 00:08:29.579 15:12:38 -- common/autotest_common.sh@955 -- # kill 62214 00:08:29.579 15:12:38 -- common/autotest_common.sh@960 -- # wait 62214 00:08:30.147 00:08:30.147 real 0m2.230s 00:08:30.147 user 0m2.849s 00:08:30.147 sys 0m0.466s 00:08:30.147 15:12:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:30.147 15:12:39 -- common/autotest_common.sh@10 -- # set +x 00:08:30.147 ************************************ 00:08:30.147 END TEST app_cmdline 00:08:30.147 ************************************ 00:08:30.147 15:12:39 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:30.147 15:12:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:30.147 15:12:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.147 15:12:39 -- common/autotest_common.sh@10 -- # set +x 00:08:30.147 ************************************ 00:08:30.147 START TEST version 00:08:30.147 ************************************ 00:08:30.147 15:12:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:30.147 * Looking for test storage... 
00:08:30.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:30.147 15:12:39 -- app/version.sh@17 -- # get_header_version major 00:08:30.147 15:12:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:30.147 15:12:39 -- app/version.sh@14 -- # cut -f2 00:08:30.147 15:12:39 -- app/version.sh@14 -- # tr -d '"' 00:08:30.147 15:12:39 -- app/version.sh@17 -- # major=24 00:08:30.147 15:12:39 -- app/version.sh@18 -- # get_header_version minor 00:08:30.147 15:12:39 -- app/version.sh@14 -- # cut -f2 00:08:30.147 15:12:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:30.147 15:12:39 -- app/version.sh@14 -- # tr -d '"' 00:08:30.147 15:12:39 -- app/version.sh@18 -- # minor=5 00:08:30.147 15:12:39 -- app/version.sh@19 -- # get_header_version patch 00:08:30.147 15:12:39 -- app/version.sh@14 -- # cut -f2 00:08:30.147 15:12:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:30.147 15:12:39 -- app/version.sh@14 -- # tr -d '"' 00:08:30.147 15:12:39 -- app/version.sh@19 -- # patch=0 00:08:30.147 15:12:39 -- app/version.sh@20 -- # get_header_version suffix 00:08:30.147 15:12:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:30.147 15:12:39 -- app/version.sh@14 -- # cut -f2 00:08:30.147 15:12:39 -- app/version.sh@14 -- # tr -d '"' 00:08:30.147 15:12:39 -- app/version.sh@20 -- # suffix=-pre 00:08:30.147 15:12:39 -- app/version.sh@22 -- # version=24.5 00:08:30.147 15:12:39 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:30.147 15:12:39 -- app/version.sh@28 -- # version=24.5rc0 00:08:30.147 15:12:39 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:30.147 15:12:39 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:30.147 15:12:39 -- app/version.sh@30 -- # py_version=24.5rc0 00:08:30.147 15:12:39 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:08:30.147 00:08:30.147 real 0m0.146s 00:08:30.147 user 0m0.095s 00:08:30.147 sys 0m0.081s 00:08:30.147 ************************************ 00:08:30.147 END TEST version 00:08:30.147 ************************************ 00:08:30.147 15:12:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:30.147 15:12:39 -- common/autotest_common.sh@10 -- # set +x 00:08:30.405 15:12:39 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:08:30.405 15:12:39 -- spdk/autotest.sh@194 -- # uname -s 00:08:30.405 15:12:39 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:30.405 15:12:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:30.405 15:12:39 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:08:30.405 15:12:39 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:08:30.405 15:12:39 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:30.405 15:12:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:30.405 15:12:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.405 15:12:39 -- common/autotest_common.sh@10 -- # set +x 00:08:30.405 ************************************ 00:08:30.405 START TEST spdk_dd 00:08:30.405 
************************************ 00:08:30.405 15:12:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:30.405 * Looking for test storage... 00:08:30.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:30.405 15:12:39 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:30.405 15:12:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.405 15:12:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.405 15:12:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.405 15:12:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.405 15:12:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.405 15:12:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.405 15:12:39 -- paths/export.sh@5 -- # export PATH 00:08:30.405 15:12:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.405 15:12:39 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:30.664 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:30.924 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:30.924 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:30.924 15:12:39 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:08:30.924 15:12:39 -- dd/dd.sh@11 -- # nvme_in_userspace 00:08:30.924 15:12:39 -- scripts/common.sh@309 -- # local bdf bdfs 00:08:30.924 15:12:39 -- scripts/common.sh@310 -- # local nvmes 00:08:30.924 15:12:39 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:08:30.924 15:12:39 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:30.924 15:12:39 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:08:30.924 15:12:39 -- scripts/common.sh@295 -- # local bdf= 00:08:30.924 15:12:39 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:08:30.924 15:12:39 -- scripts/common.sh@230 -- # local class 
00:08:30.924 15:12:39 -- scripts/common.sh@231 -- # local subclass 00:08:30.924 15:12:39 -- scripts/common.sh@232 -- # local progif 00:08:30.924 15:12:39 -- scripts/common.sh@233 -- # printf %02x 1 00:08:30.924 15:12:39 -- scripts/common.sh@233 -- # class=01 00:08:30.924 15:12:39 -- scripts/common.sh@234 -- # printf %02x 8 00:08:30.924 15:12:39 -- scripts/common.sh@234 -- # subclass=08 00:08:30.924 15:12:39 -- scripts/common.sh@235 -- # printf %02x 2 00:08:30.924 15:12:39 -- scripts/common.sh@235 -- # progif=02 00:08:30.924 15:12:39 -- scripts/common.sh@237 -- # hash lspci 00:08:30.924 15:12:39 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:08:30.924 15:12:39 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:08:30.924 15:12:39 -- scripts/common.sh@240 -- # grep -i -- -p02 00:08:30.924 15:12:39 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:30.924 15:12:39 -- scripts/common.sh@242 -- # tr -d '"' 00:08:30.924 15:12:39 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:30.925 15:12:39 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:08:30.925 15:12:39 -- scripts/common.sh@15 -- # local i 00:08:30.925 15:12:39 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:08:30.925 15:12:39 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:30.925 15:12:39 -- scripts/common.sh@24 -- # return 0 00:08:30.925 15:12:39 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:08:30.925 15:12:39 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:30.925 15:12:39 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:08:30.925 15:12:39 -- scripts/common.sh@15 -- # local i 00:08:30.925 15:12:39 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:08:30.925 15:12:39 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:30.925 15:12:39 -- scripts/common.sh@24 -- # return 0 00:08:30.925 15:12:39 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:08:30.925 15:12:39 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:08:30.925 15:12:39 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:08:30.925 15:12:39 -- scripts/common.sh@320 -- # uname -s 00:08:30.925 15:12:39 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:08:30.925 15:12:39 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:08:30.925 15:12:39 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:08:30.925 15:12:39 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:08:30.925 15:12:39 -- scripts/common.sh@320 -- # uname -s 00:08:30.925 15:12:39 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:08:30.925 15:12:39 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:08:30.925 15:12:39 -- scripts/common.sh@325 -- # (( 2 )) 00:08:30.925 15:12:39 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:30.925 15:12:39 -- dd/dd.sh@13 -- # check_liburing 00:08:30.925 15:12:39 -- dd/common.sh@139 -- # local lib so 00:08:30.925 15:12:39 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:08:30.925 15:12:39 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:39 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:08:30.925 15:12:39 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 
-- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # 
read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_event.so.13.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_sock.so.9.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:08:30.925 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.925 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_util.so.9.0 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # 
[[ librte_kvargs.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:30.926 15:12:40 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:08:30.926 15:12:40 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:08:30.926 * spdk_dd linked to liburing 00:08:30.926 15:12:40 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:30.926 15:12:40 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:30.926 15:12:40 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:30.926 15:12:40 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:30.926 15:12:40 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:30.926 15:12:40 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:30.926 15:12:40 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:30.926 15:12:40 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:30.926 15:12:40 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:30.926 15:12:40 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:30.926 15:12:40 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:30.926 15:12:40 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:30.926 15:12:40 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:30.926 15:12:40 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:30.926 15:12:40 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:30.926 15:12:40 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:30.926 15:12:40 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:30.926 15:12:40 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:30.926 15:12:40 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:30.926 15:12:40 -- 
common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:30.926 15:12:40 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:30.926 15:12:40 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:30.926 15:12:40 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:30.926 15:12:40 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:30.926 15:12:40 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:30.926 15:12:40 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:30.926 15:12:40 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:30.926 15:12:40 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:30.926 15:12:40 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:30.926 15:12:40 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:30.926 15:12:40 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:30.926 15:12:40 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:30.926 15:12:40 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:30.926 15:12:40 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:30.926 15:12:40 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:30.926 15:12:40 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:30.926 15:12:40 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:30.926 15:12:40 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:08:30.926 15:12:40 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:30.926 15:12:40 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:30.926 15:12:40 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:30.926 15:12:40 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:30.926 15:12:40 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:30.926 15:12:40 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:30.926 15:12:40 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:30.926 15:12:40 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:30.926 15:12:40 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:30.926 15:12:40 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:30.926 15:12:40 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:30.926 15:12:40 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:30.926 15:12:40 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:30.926 15:12:40 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:30.926 15:12:40 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:30.926 15:12:40 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:30.926 15:12:40 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:08:30.926 15:12:40 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=y 00:08:30.926 15:12:40 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:08:30.926 15:12:40 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:08:30.926 15:12:40 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:08:30.926 15:12:40 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:08:30.926 15:12:40 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:08:30.926 15:12:40 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:08:30.926 15:12:40 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:08:30.926 15:12:40 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:08:30.926 15:12:40 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:08:30.926 15:12:40 -- 
common/build_config.sh@64 -- # CONFIG_APPS=y 00:08:30.926 15:12:40 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:08:30.926 15:12:40 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:08:30.926 15:12:40 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:08:30.926 15:12:40 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:30.926 15:12:40 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:08:30.926 15:12:40 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:08:30.926 15:12:40 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:08:30.926 15:12:40 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:08:30.926 15:12:40 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:08:30.926 15:12:40 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:08:30.926 15:12:40 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:08:30.926 15:12:40 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:08:30.926 15:12:40 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:08:30.926 15:12:40 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:08:30.926 15:12:40 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:08:30.926 15:12:40 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:30.926 15:12:40 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:08:30.926 15:12:40 -- common/build_config.sh@82 -- # CONFIG_URING=y 00:08:30.926 15:12:40 -- dd/common.sh@149 -- # [[ y != y ]] 00:08:30.926 15:12:40 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:08:30.926 15:12:40 -- dd/common.sh@156 -- # export liburing_in_use=1 00:08:30.926 15:12:40 -- dd/common.sh@156 -- # liburing_in_use=1 00:08:30.926 15:12:40 -- dd/common.sh@157 -- # return 0 00:08:30.926 15:12:40 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:08:30.926 15:12:40 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:30.926 15:12:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:30.926 15:12:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.926 15:12:40 -- common/autotest_common.sh@10 -- # set +x 00:08:30.926 ************************************ 00:08:30.926 START TEST spdk_dd_basic_rw 00:08:30.926 ************************************ 00:08:30.926 15:12:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:31.187 * Looking for test storage... 
00:08:31.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:31.187 15:12:40 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.187 15:12:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.187 15:12:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.187 15:12:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.187 15:12:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.187 15:12:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.187 15:12:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.187 15:12:40 -- paths/export.sh@5 -- # export PATH 00:08:31.187 15:12:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.187 15:12:40 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:08:31.187 15:12:40 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:08:31.187 15:12:40 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:08:31.187 15:12:40 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:08:31.187 15:12:40 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:08:31.187 15:12:40 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:31.187 15:12:40 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:31.187 15:12:40 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:31.187 15:12:40 -- dd/basic_rw.sh@92 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:31.187 15:12:40 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:08:31.187 15:12:40 -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:08:31.187 15:12:40 -- dd/common.sh@126 -- # mapfile -t id 00:08:31.187 15:12:40 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:08:31.188 15:12:40 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric 
Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On 
Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:08:31.188 15:12:40 -- dd/common.sh@130 -- # lbaf=04 00:08:31.189 15:12:40 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID 
List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write 
Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:08:31.189 15:12:40 -- dd/common.sh@132 -- # lbaf=4096 00:08:31.189 15:12:40 -- dd/common.sh@134 -- # echo 4096 00:08:31.189 15:12:40 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:08:31.189 15:12:40 -- dd/basic_rw.sh@96 -- # : 00:08:31.189 15:12:40 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:31.189 15:12:40 -- dd/basic_rw.sh@96 -- # gen_conf 00:08:31.189 15:12:40 -- dd/common.sh@31 -- # xtrace_disable 00:08:31.189 15:12:40 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:08:31.189 15:12:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.189 15:12:40 -- common/autotest_common.sh@10 -- # set +x 00:08:31.189 15:12:40 -- common/autotest_common.sh@10 -- # set +x 00:08:31.447 { 
00:08:31.447 "subsystems": [ 00:08:31.447 { 00:08:31.447 "subsystem": "bdev", 00:08:31.447 "config": [ 00:08:31.447 { 00:08:31.447 "params": { 00:08:31.447 "trtype": "pcie", 00:08:31.447 "traddr": "0000:00:10.0", 00:08:31.447 "name": "Nvme0" 00:08:31.447 }, 00:08:31.447 "method": "bdev_nvme_attach_controller" 00:08:31.447 }, 00:08:31.447 { 00:08:31.447 "method": "bdev_wait_for_examine" 00:08:31.447 } 00:08:31.447 ] 00:08:31.447 } 00:08:31.447 ] 00:08:31.447 } 00:08:31.447 ************************************ 00:08:31.447 START TEST dd_bs_lt_native_bs 00:08:31.447 ************************************ 00:08:31.447 15:12:40 -- common/autotest_common.sh@1111 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:31.447 15:12:40 -- common/autotest_common.sh@638 -- # local es=0 00:08:31.447 15:12:40 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:31.447 15:12:40 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.447 15:12:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:31.447 15:12:40 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.447 15:12:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:31.447 15:12:40 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.447 15:12:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:31.447 15:12:40 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.447 15:12:40 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.447 15:12:40 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:31.447 [2024-04-24 15:12:40.533641] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:08:31.447 [2024-04-24 15:12:40.533744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62555 ] 00:08:31.447 [2024-04-24 15:12:40.674485] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.706 [2024-04-24 15:12:40.803819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.965 [2024-04-24 15:12:40.972054] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:31.965 [2024-04-24 15:12:40.972129] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.965 [2024-04-24 15:12:41.101190] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:32.224 15:12:41 -- common/autotest_common.sh@641 -- # es=234 00:08:32.224 15:12:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:32.224 15:12:41 -- common/autotest_common.sh@650 -- # es=106 00:08:32.224 ************************************ 00:08:32.224 END TEST dd_bs_lt_native_bs 00:08:32.224 ************************************ 00:08:32.224 15:12:41 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:32.224 15:12:41 -- common/autotest_common.sh@658 -- # es=1 00:08:32.224 15:12:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:32.224 00:08:32.224 real 0m0.749s 00:08:32.224 user 0m0.491s 00:08:32.224 sys 0m0.153s 00:08:32.224 15:12:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:32.224 15:12:41 -- common/autotest_common.sh@10 -- # set +x 00:08:32.224 15:12:41 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:32.224 15:12:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:32.224 15:12:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.224 15:12:41 -- common/autotest_common.sh@10 -- # set +x 00:08:32.224 ************************************ 00:08:32.224 START TEST dd_rw 00:08:32.224 ************************************ 00:08:32.224 15:12:41 -- common/autotest_common.sh@1111 -- # basic_rw 4096 00:08:32.224 15:12:41 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:32.224 15:12:41 -- dd/basic_rw.sh@12 -- # local count size 00:08:32.224 15:12:41 -- dd/basic_rw.sh@13 -- # local qds bss 00:08:32.224 15:12:41 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:08:32.224 15:12:41 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:32.224 15:12:41 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:32.224 15:12:41 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:32.224 15:12:41 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:32.224 15:12:41 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:32.224 15:12:41 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:32.224 15:12:41 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:32.224 15:12:41 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:32.224 15:12:41 -- dd/basic_rw.sh@23 -- # count=15 00:08:32.224 15:12:41 -- dd/basic_rw.sh@24 -- # count=15 00:08:32.224 15:12:41 -- dd/basic_rw.sh@25 -- # size=61440 00:08:32.224 15:12:41 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:32.224 15:12:41 -- dd/common.sh@98 -- # xtrace_disable 00:08:32.224 15:12:41 -- common/autotest_common.sh@10 -- # set +x 00:08:33.158 15:12:42 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
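Two things happen in the block above. First, dd_bs_lt_native_bs deliberately passes --bs=2048, smaller than the 4096-byte native block, and treats the resulting spdk_dd error as a pass because the NOT helper inverts the exit status. Second, dd_rw begins sweeping its block-size/queue-depth matrix, whose setup is traced immediately before the first spdk_dd write. A sketch of that matrix, assuming the loop structure matches the trace (the authoritative source is test/dd/basic_rw.sh):

    native_bs=4096
    qds=(1 64)
    bss=()
    for bs in {0..2}; do
        bss+=($((native_bs << bs)))     # 4096 8192 16384
    done
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            :   # write dd.dump0 -> Nvme0n1 at --bs=$bs --qd=$qd, read back, diff, clear
        done
    done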
00:08:33.158 15:12:42 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:33.158 15:12:42 -- dd/common.sh@31 -- # xtrace_disable 00:08:33.158 15:12:42 -- common/autotest_common.sh@10 -- # set +x 00:08:33.158 [2024-04-24 15:12:42.091448] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:33.158 [2024-04-24 15:12:42.091583] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62601 ] 00:08:33.158 { 00:08:33.158 "subsystems": [ 00:08:33.158 { 00:08:33.158 "subsystem": "bdev", 00:08:33.158 "config": [ 00:08:33.158 { 00:08:33.158 "params": { 00:08:33.158 "trtype": "pcie", 00:08:33.158 "traddr": "0000:00:10.0", 00:08:33.158 "name": "Nvme0" 00:08:33.158 }, 00:08:33.158 "method": "bdev_nvme_attach_controller" 00:08:33.158 }, 00:08:33.158 { 00:08:33.158 "method": "bdev_wait_for_examine" 00:08:33.158 } 00:08:33.158 ] 00:08:33.158 } 00:08:33.158 ] 00:08:33.158 } 00:08:33.158 [2024-04-24 15:12:42.233514] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.158 [2024-04-24 15:12:42.377267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.678  Copying: 60/60 [kB] (average 29 MBps) 00:08:33.678 00:08:33.678 15:12:42 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:33.678 15:12:42 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:33.678 15:12:42 -- dd/common.sh@31 -- # xtrace_disable 00:08:33.678 15:12:42 -- common/autotest_common.sh@10 -- # set +x 00:08:33.678 [2024-04-24 15:12:42.843333] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
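For reference, the "Copying: 60/60 [kB]" summaries here line up with the transfer size the test computes (count blocks at the current block size); the later passes shrink the count as the block size doubles:

    echo $((15 * 4096))    # 61440 bytes = 60 kB  (bs=4096,  count=15)
    echo $((7  * 8192))    # 57344 bytes = 56 kB  (bs=8192,  count=7)
    echo $((3  * 16384))   # 49152 bytes = 48 kB  (bs=16384, count=3)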
00:08:33.678 [2024-04-24 15:12:42.843443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62609 ] 00:08:33.678 { 00:08:33.678 "subsystems": [ 00:08:33.678 { 00:08:33.678 "subsystem": "bdev", 00:08:33.678 "config": [ 00:08:33.678 { 00:08:33.678 "params": { 00:08:33.678 "trtype": "pcie", 00:08:33.678 "traddr": "0000:00:10.0", 00:08:33.678 "name": "Nvme0" 00:08:33.678 }, 00:08:33.678 "method": "bdev_nvme_attach_controller" 00:08:33.678 }, 00:08:33.678 { 00:08:33.678 "method": "bdev_wait_for_examine" 00:08:33.678 } 00:08:33.678 ] 00:08:33.678 } 00:08:33.678 ] 00:08:33.678 } 00:08:33.936 [2024-04-24 15:12:42.976964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.936 [2024-04-24 15:12:43.093578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.453  Copying: 60/60 [kB] (average 29 MBps) 00:08:34.453 00:08:34.453 15:12:43 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:34.453 15:12:43 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:34.453 15:12:43 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:34.453 15:12:43 -- dd/common.sh@11 -- # local nvme_ref= 00:08:34.453 15:12:43 -- dd/common.sh@12 -- # local size=61440 00:08:34.453 15:12:43 -- dd/common.sh@14 -- # local bs=1048576 00:08:34.453 15:12:43 -- dd/common.sh@15 -- # local count=1 00:08:34.453 15:12:43 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:34.453 15:12:43 -- dd/common.sh@18 -- # gen_conf 00:08:34.453 15:12:43 -- dd/common.sh@31 -- # xtrace_disable 00:08:34.453 15:12:43 -- common/autotest_common.sh@10 -- # set +x 00:08:34.453 [2024-04-24 15:12:43.556615] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:08:34.453 [2024-04-24 15:12:43.556713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62630 ] 00:08:34.453 { 00:08:34.453 "subsystems": [ 00:08:34.453 { 00:08:34.453 "subsystem": "bdev", 00:08:34.453 "config": [ 00:08:34.453 { 00:08:34.453 "params": { 00:08:34.453 "trtype": "pcie", 00:08:34.453 "traddr": "0000:00:10.0", 00:08:34.453 "name": "Nvme0" 00:08:34.453 }, 00:08:34.453 "method": "bdev_nvme_attach_controller" 00:08:34.453 }, 00:08:34.453 { 00:08:34.453 "method": "bdev_wait_for_examine" 00:08:34.453 } 00:08:34.453 ] 00:08:34.453 } 00:08:34.453 ] 00:08:34.453 } 00:08:34.453 [2024-04-24 15:12:43.692551] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.713 [2024-04-24 15:12:43.809883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.232  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:35.232 00:08:35.232 15:12:44 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:35.232 15:12:44 -- dd/basic_rw.sh@23 -- # count=15 00:08:35.232 15:12:44 -- dd/basic_rw.sh@24 -- # count=15 00:08:35.232 15:12:44 -- dd/basic_rw.sh@25 -- # size=61440 00:08:35.232 15:12:44 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:35.232 15:12:44 -- dd/common.sh@98 -- # xtrace_disable 00:08:35.232 15:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:35.799 15:12:44 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:35.799 15:12:44 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:35.799 15:12:44 -- dd/common.sh@31 -- # xtrace_disable 00:08:35.799 15:12:44 -- common/autotest_common.sh@10 -- # set +x 00:08:35.799 [2024-04-24 15:12:44.863979] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:35.799 [2024-04-24 15:12:44.864075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62649 ] 00:08:35.799 { 00:08:35.799 "subsystems": [ 00:08:35.799 { 00:08:35.799 "subsystem": "bdev", 00:08:35.799 "config": [ 00:08:35.799 { 00:08:35.799 "params": { 00:08:35.799 "trtype": "pcie", 00:08:35.799 "traddr": "0000:00:10.0", 00:08:35.799 "name": "Nvme0" 00:08:35.799 }, 00:08:35.799 "method": "bdev_nvme_attach_controller" 00:08:35.799 }, 00:08:35.799 { 00:08:35.799 "method": "bdev_wait_for_examine" 00:08:35.799 } 00:08:35.799 ] 00:08:35.799 } 00:08:35.799 ] 00:08:35.799 } 00:08:35.799 [2024-04-24 15:12:45.001168] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.057 [2024-04-24 15:12:45.119151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.315  Copying: 60/60 [kB] (average 58 MBps) 00:08:36.316 00:08:36.316 15:12:45 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:36.316 15:12:45 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:36.316 15:12:45 -- dd/common.sh@31 -- # xtrace_disable 00:08:36.316 15:12:45 -- common/autotest_common.sh@10 -- # set +x 00:08:36.575 [2024-04-24 15:12:45.578517] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
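Each block-size/queue-depth combination follows the same four-step cycle seen above: write the generated dump file to the bdev, read it back into a second file, byte-compare the two, then zero the first megabyte of the namespace (clear_nvme) before the next pass. Condensed from the commands in the trace, with SPDK_DD and CONF standing in for the full spdk_dd path and the generated JSON config:

    "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$CONF"
    "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json "$CONF"
    diff -q dd.dump0 dd.dump1
    "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json "$CONF"   # clear_nvme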
00:08:36.575 [2024-04-24 15:12:45.578626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62668 ] 00:08:36.575 { 00:08:36.575 "subsystems": [ 00:08:36.575 { 00:08:36.575 "subsystem": "bdev", 00:08:36.575 "config": [ 00:08:36.575 { 00:08:36.575 "params": { 00:08:36.575 "trtype": "pcie", 00:08:36.575 "traddr": "0000:00:10.0", 00:08:36.575 "name": "Nvme0" 00:08:36.575 }, 00:08:36.575 "method": "bdev_nvme_attach_controller" 00:08:36.575 }, 00:08:36.575 { 00:08:36.575 "method": "bdev_wait_for_examine" 00:08:36.575 } 00:08:36.575 ] 00:08:36.575 } 00:08:36.575 ] 00:08:36.575 } 00:08:36.575 [2024-04-24 15:12:45.712470] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.833 [2024-04-24 15:12:45.832742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.115  Copying: 60/60 [kB] (average 58 MBps) 00:08:37.115 00:08:37.115 15:12:46 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:37.115 15:12:46 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:37.115 15:12:46 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:37.115 15:12:46 -- dd/common.sh@11 -- # local nvme_ref= 00:08:37.115 15:12:46 -- dd/common.sh@12 -- # local size=61440 00:08:37.115 15:12:46 -- dd/common.sh@14 -- # local bs=1048576 00:08:37.115 15:12:46 -- dd/common.sh@15 -- # local count=1 00:08:37.115 15:12:46 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:37.115 15:12:46 -- dd/common.sh@18 -- # gen_conf 00:08:37.115 15:12:46 -- dd/common.sh@31 -- # xtrace_disable 00:08:37.115 15:12:46 -- common/autotest_common.sh@10 -- # set +x 00:08:37.115 [2024-04-24 15:12:46.300303] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:08:37.115 [2024-04-24 15:12:46.300402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62688 ] 00:08:37.115 { 00:08:37.115 "subsystems": [ 00:08:37.115 { 00:08:37.115 "subsystem": "bdev", 00:08:37.115 "config": [ 00:08:37.115 { 00:08:37.115 "params": { 00:08:37.115 "trtype": "pcie", 00:08:37.115 "traddr": "0000:00:10.0", 00:08:37.115 "name": "Nvme0" 00:08:37.115 }, 00:08:37.115 "method": "bdev_nvme_attach_controller" 00:08:37.115 }, 00:08:37.115 { 00:08:37.115 "method": "bdev_wait_for_examine" 00:08:37.115 } 00:08:37.115 ] 00:08:37.115 } 00:08:37.115 ] 00:08:37.115 } 00:08:37.391 [2024-04-24 15:12:46.431956] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.391 [2024-04-24 15:12:46.552341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.909  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:37.909 00:08:37.909 15:12:46 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:37.909 15:12:46 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:37.909 15:12:46 -- dd/basic_rw.sh@23 -- # count=7 00:08:37.909 15:12:46 -- dd/basic_rw.sh@24 -- # count=7 00:08:37.909 15:12:46 -- dd/basic_rw.sh@25 -- # size=57344 00:08:37.909 15:12:46 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:37.909 15:12:46 -- dd/common.sh@98 -- # xtrace_disable 00:08:37.909 15:12:46 -- common/autotest_common.sh@10 -- # set +x 00:08:38.476 15:12:47 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:38.476 15:12:47 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:38.476 15:12:47 -- dd/common.sh@31 -- # xtrace_disable 00:08:38.476 15:12:47 -- common/autotest_common.sh@10 -- # set +x 00:08:38.476 [2024-04-24 15:12:47.570730] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:08:38.476 [2024-04-24 15:12:47.570837] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62708 ] 00:08:38.476 { 00:08:38.476 "subsystems": [ 00:08:38.476 { 00:08:38.476 "subsystem": "bdev", 00:08:38.476 "config": [ 00:08:38.476 { 00:08:38.477 "params": { 00:08:38.477 "trtype": "pcie", 00:08:38.477 "traddr": "0000:00:10.0", 00:08:38.477 "name": "Nvme0" 00:08:38.477 }, 00:08:38.477 "method": "bdev_nvme_attach_controller" 00:08:38.477 }, 00:08:38.477 { 00:08:38.477 "method": "bdev_wait_for_examine" 00:08:38.477 } 00:08:38.477 ] 00:08:38.477 } 00:08:38.477 ] 00:08:38.477 } 00:08:38.477 [2024-04-24 15:12:47.709463] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.735 [2024-04-24 15:12:47.827071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.253  Copying: 56/56 [kB] (average 54 MBps) 00:08:39.253 00:08:39.253 15:12:48 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:39.253 15:12:48 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:39.253 15:12:48 -- dd/common.sh@31 -- # xtrace_disable 00:08:39.253 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:08:39.253 [2024-04-24 15:12:48.298452] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:39.253 [2024-04-24 15:12:48.298559] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62722 ] 00:08:39.253 { 00:08:39.253 "subsystems": [ 00:08:39.253 { 00:08:39.253 "subsystem": "bdev", 00:08:39.253 "config": [ 00:08:39.253 { 00:08:39.253 "params": { 00:08:39.253 "trtype": "pcie", 00:08:39.253 "traddr": "0000:00:10.0", 00:08:39.253 "name": "Nvme0" 00:08:39.253 }, 00:08:39.253 "method": "bdev_nvme_attach_controller" 00:08:39.253 }, 00:08:39.253 { 00:08:39.253 "method": "bdev_wait_for_examine" 00:08:39.253 } 00:08:39.253 ] 00:08:39.253 } 00:08:39.253 ] 00:08:39.253 } 00:08:39.253 [2024-04-24 15:12:48.431077] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.511 [2024-04-24 15:12:48.549260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.768  Copying: 56/56 [kB] (average 18 MBps) 00:08:39.768 00:08:39.768 15:12:48 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:39.768 15:12:48 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:39.768 15:12:48 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:39.768 15:12:48 -- dd/common.sh@11 -- # local nvme_ref= 00:08:39.768 15:12:48 -- dd/common.sh@12 -- # local size=57344 00:08:39.768 15:12:48 -- dd/common.sh@14 -- # local bs=1048576 00:08:39.768 15:12:48 -- dd/common.sh@15 -- # local count=1 00:08:39.768 15:12:48 -- dd/common.sh@18 -- # gen_conf 00:08:39.768 15:12:48 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:39.768 15:12:48 -- dd/common.sh@31 -- # xtrace_disable 00:08:39.768 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:08:40.026 [2024-04-24 15:12:49.022656] Starting SPDK v24.05-pre git sha1 
0d1f30fbf / DPDK 23.11.0 initialization... 00:08:40.026 [2024-04-24 15:12:49.022796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62743 ] 00:08:40.026 { 00:08:40.026 "subsystems": [ 00:08:40.026 { 00:08:40.026 "subsystem": "bdev", 00:08:40.026 "config": [ 00:08:40.026 { 00:08:40.026 "params": { 00:08:40.026 "trtype": "pcie", 00:08:40.026 "traddr": "0000:00:10.0", 00:08:40.026 "name": "Nvme0" 00:08:40.026 }, 00:08:40.026 "method": "bdev_nvme_attach_controller" 00:08:40.026 }, 00:08:40.026 { 00:08:40.026 "method": "bdev_wait_for_examine" 00:08:40.026 } 00:08:40.026 ] 00:08:40.026 } 00:08:40.026 ] 00:08:40.026 } 00:08:40.026 [2024-04-24 15:12:49.164033] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.285 [2024-04-24 15:12:49.281453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.542  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:40.542 00:08:40.542 15:12:49 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:40.542 15:12:49 -- dd/basic_rw.sh@23 -- # count=7 00:08:40.542 15:12:49 -- dd/basic_rw.sh@24 -- # count=7 00:08:40.542 15:12:49 -- dd/basic_rw.sh@25 -- # size=57344 00:08:40.542 15:12:49 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:40.542 15:12:49 -- dd/common.sh@98 -- # xtrace_disable 00:08:40.542 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:08:41.110 15:12:50 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:41.110 15:12:50 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:41.110 15:12:50 -- dd/common.sh@31 -- # xtrace_disable 00:08:41.110 15:12:50 -- common/autotest_common.sh@10 -- # set +x 00:08:41.368 [2024-04-24 15:12:50.354025] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
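Every spdk_dd invocation in this test receives the same generated bdev configuration over an anonymous file descriptor (--json /dev/fd/62 or /dev/fd/61): attach the PCIe controller at 0000:00:10.0 as Nvme0, then wait for bdev examination. Written out as an ordinary file it would look like the sketch below (assuming --json also accepts a regular file path, as it does the /dev/fd path here):

    cat > nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF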
00:08:41.368 [2024-04-24 15:12:50.354121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62762 ] 00:08:41.368 { 00:08:41.368 "subsystems": [ 00:08:41.368 { 00:08:41.368 "subsystem": "bdev", 00:08:41.368 "config": [ 00:08:41.368 { 00:08:41.368 "params": { 00:08:41.368 "trtype": "pcie", 00:08:41.368 "traddr": "0000:00:10.0", 00:08:41.368 "name": "Nvme0" 00:08:41.368 }, 00:08:41.368 "method": "bdev_nvme_attach_controller" 00:08:41.368 }, 00:08:41.368 { 00:08:41.368 "method": "bdev_wait_for_examine" 00:08:41.368 } 00:08:41.368 ] 00:08:41.368 } 00:08:41.368 ] 00:08:41.368 } 00:08:41.368 [2024-04-24 15:12:50.487462] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.368 [2024-04-24 15:12:50.603812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.885  Copying: 56/56 [kB] (average 54 MBps) 00:08:41.885 00:08:41.885 15:12:51 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:41.885 15:12:51 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:41.885 15:12:51 -- dd/common.sh@31 -- # xtrace_disable 00:08:41.885 15:12:51 -- common/autotest_common.sh@10 -- # set +x 00:08:41.885 [2024-04-24 15:12:51.060297] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:41.885 [2024-04-24 15:12:51.060861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62781 ] 00:08:41.885 { 00:08:41.885 "subsystems": [ 00:08:41.885 { 00:08:41.885 "subsystem": "bdev", 00:08:41.885 "config": [ 00:08:41.885 { 00:08:41.885 "params": { 00:08:41.885 "trtype": "pcie", 00:08:41.885 "traddr": "0000:00:10.0", 00:08:41.885 "name": "Nvme0" 00:08:41.885 }, 00:08:41.885 "method": "bdev_nvme_attach_controller" 00:08:41.885 }, 00:08:41.885 { 00:08:41.885 "method": "bdev_wait_for_examine" 00:08:41.885 } 00:08:41.885 ] 00:08:41.885 } 00:08:41.885 ] 00:08:41.885 } 00:08:42.144 [2024-04-24 15:12:51.197561] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.144 [2024-04-24 15:12:51.316559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.660  Copying: 56/56 [kB] (average 54 MBps) 00:08:42.660 00:08:42.660 15:12:51 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:42.660 15:12:51 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:42.660 15:12:51 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:42.660 15:12:51 -- dd/common.sh@11 -- # local nvme_ref= 00:08:42.660 15:12:51 -- dd/common.sh@12 -- # local size=57344 00:08:42.660 15:12:51 -- dd/common.sh@14 -- # local bs=1048576 00:08:42.660 15:12:51 -- dd/common.sh@15 -- # local count=1 00:08:42.660 15:12:51 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:42.660 15:12:51 -- dd/common.sh@18 -- # gen_conf 00:08:42.660 15:12:51 -- dd/common.sh@31 -- # xtrace_disable 00:08:42.660 15:12:51 -- common/autotest_common.sh@10 -- # set +x 00:08:42.660 [2024-04-24 15:12:51.797268] Starting SPDK v24.05-pre git sha1 
0d1f30fbf / DPDK 23.11.0 initialization... 00:08:42.660 [2024-04-24 15:12:51.797410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62796 ] 00:08:42.660 { 00:08:42.660 "subsystems": [ 00:08:42.660 { 00:08:42.660 "subsystem": "bdev", 00:08:42.660 "config": [ 00:08:42.660 { 00:08:42.660 "params": { 00:08:42.660 "trtype": "pcie", 00:08:42.660 "traddr": "0000:00:10.0", 00:08:42.660 "name": "Nvme0" 00:08:42.660 }, 00:08:42.660 "method": "bdev_nvme_attach_controller" 00:08:42.660 }, 00:08:42.660 { 00:08:42.660 "method": "bdev_wait_for_examine" 00:08:42.660 } 00:08:42.660 ] 00:08:42.660 } 00:08:42.660 ] 00:08:42.660 } 00:08:42.918 [2024-04-24 15:12:51.941748] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.918 [2024-04-24 15:12:52.066131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.435  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:43.435 00:08:43.435 15:12:52 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:43.435 15:12:52 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:43.435 15:12:52 -- dd/basic_rw.sh@23 -- # count=3 00:08:43.435 15:12:52 -- dd/basic_rw.sh@24 -- # count=3 00:08:43.435 15:12:52 -- dd/basic_rw.sh@25 -- # size=49152 00:08:43.435 15:12:52 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:43.435 15:12:52 -- dd/common.sh@98 -- # xtrace_disable 00:08:43.435 15:12:52 -- common/autotest_common.sh@10 -- # set +x 00:08:44.012 15:12:53 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:44.012 15:12:53 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:44.012 15:12:53 -- dd/common.sh@31 -- # xtrace_disable 00:08:44.012 15:12:53 -- common/autotest_common.sh@10 -- # set +x 00:08:44.012 [2024-04-24 15:12:53.138341] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:08:44.012 [2024-04-24 15:12:53.138492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62821 ] 00:08:44.012 { 00:08:44.012 "subsystems": [ 00:08:44.012 { 00:08:44.012 "subsystem": "bdev", 00:08:44.012 "config": [ 00:08:44.012 { 00:08:44.012 "params": { 00:08:44.012 "trtype": "pcie", 00:08:44.012 "traddr": "0000:00:10.0", 00:08:44.012 "name": "Nvme0" 00:08:44.012 }, 00:08:44.012 "method": "bdev_nvme_attach_controller" 00:08:44.012 }, 00:08:44.012 { 00:08:44.012 "method": "bdev_wait_for_examine" 00:08:44.012 } 00:08:44.012 ] 00:08:44.012 } 00:08:44.012 ] 00:08:44.012 } 00:08:44.270 [2024-04-24 15:12:53.269746] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.270 [2024-04-24 15:12:53.413525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.786  Copying: 48/48 [kB] (average 46 MBps) 00:08:44.786 00:08:44.786 15:12:53 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:44.786 15:12:53 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:44.786 15:12:53 -- dd/common.sh@31 -- # xtrace_disable 00:08:44.786 15:12:53 -- common/autotest_common.sh@10 -- # set +x 00:08:44.786 [2024-04-24 15:12:53.879885] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:44.786 [2024-04-24 15:12:53.880016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62834 ] 00:08:44.786 { 00:08:44.786 "subsystems": [ 00:08:44.786 { 00:08:44.786 "subsystem": "bdev", 00:08:44.786 "config": [ 00:08:44.786 { 00:08:44.786 "params": { 00:08:44.786 "trtype": "pcie", 00:08:44.786 "traddr": "0000:00:10.0", 00:08:44.786 "name": "Nvme0" 00:08:44.786 }, 00:08:44.786 "method": "bdev_nvme_attach_controller" 00:08:44.786 }, 00:08:44.786 { 00:08:44.786 "method": "bdev_wait_for_examine" 00:08:44.786 } 00:08:44.786 ] 00:08:44.786 } 00:08:44.786 ] 00:08:44.786 } 00:08:44.786 [2024-04-24 15:12:54.024505] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.045 [2024-04-24 15:12:54.141810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.563  Copying: 48/48 [kB] (average 46 MBps) 00:08:45.563 00:08:45.563 15:12:54 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:45.563 15:12:54 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:45.563 15:12:54 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:45.563 15:12:54 -- dd/common.sh@11 -- # local nvme_ref= 00:08:45.563 15:12:54 -- dd/common.sh@12 -- # local size=49152 00:08:45.563 15:12:54 -- dd/common.sh@14 -- # local bs=1048576 00:08:45.563 15:12:54 -- dd/common.sh@15 -- # local count=1 00:08:45.563 15:12:54 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:45.563 15:12:54 -- dd/common.sh@18 -- # gen_conf 00:08:45.563 15:12:54 -- dd/common.sh@31 -- # xtrace_disable 00:08:45.563 15:12:54 -- common/autotest_common.sh@10 -- # set +x 00:08:45.563 { 00:08:45.563 "subsystems": [ 00:08:45.563 { 00:08:45.563 
"subsystem": "bdev", 00:08:45.563 "config": [ 00:08:45.563 { 00:08:45.563 "params": { 00:08:45.563 "trtype": "pcie", 00:08:45.563 "traddr": "0000:00:10.0", 00:08:45.563 "name": "Nvme0" 00:08:45.563 }, 00:08:45.563 "method": "bdev_nvme_attach_controller" 00:08:45.563 }, 00:08:45.563 { 00:08:45.563 "method": "bdev_wait_for_examine" 00:08:45.563 } 00:08:45.563 ] 00:08:45.563 } 00:08:45.563 ] 00:08:45.563 } 00:08:45.563 [2024-04-24 15:12:54.618330] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:45.563 [2024-04-24 15:12:54.618478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62851 ] 00:08:45.563 [2024-04-24 15:12:54.762356] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.877 [2024-04-24 15:12:54.890796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.134  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:46.135 00:08:46.135 15:12:55 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:46.135 15:12:55 -- dd/basic_rw.sh@23 -- # count=3 00:08:46.135 15:12:55 -- dd/basic_rw.sh@24 -- # count=3 00:08:46.135 15:12:55 -- dd/basic_rw.sh@25 -- # size=49152 00:08:46.135 15:12:55 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:46.135 15:12:55 -- dd/common.sh@98 -- # xtrace_disable 00:08:46.135 15:12:55 -- common/autotest_common.sh@10 -- # set +x 00:08:46.701 15:12:55 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:46.701 15:12:55 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:46.701 15:12:55 -- dd/common.sh@31 -- # xtrace_disable 00:08:46.701 15:12:55 -- common/autotest_common.sh@10 -- # set +x 00:08:46.701 { 00:08:46.701 "subsystems": [ 00:08:46.701 { 00:08:46.701 "subsystem": "bdev", 00:08:46.701 "config": [ 00:08:46.701 { 00:08:46.702 "params": { 00:08:46.702 "trtype": "pcie", 00:08:46.702 "traddr": "0000:00:10.0", 00:08:46.702 "name": "Nvme0" 00:08:46.702 }, 00:08:46.702 "method": "bdev_nvme_attach_controller" 00:08:46.702 }, 00:08:46.702 { 00:08:46.702 "method": "bdev_wait_for_examine" 00:08:46.702 } 00:08:46.702 ] 00:08:46.702 } 00:08:46.702 ] 00:08:46.702 } 00:08:46.702 [2024-04-24 15:12:55.841131] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:08:46.702 [2024-04-24 15:12:55.841243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62870 ] 00:08:46.960 [2024-04-24 15:12:55.985537] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.960 [2024-04-24 15:12:56.104555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.478  Copying: 48/48 [kB] (average 46 MBps) 00:08:47.478 00:08:47.478 15:12:56 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:47.478 15:12:56 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:47.478 15:12:56 -- dd/common.sh@31 -- # xtrace_disable 00:08:47.478 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:08:47.478 [2024-04-24 15:12:56.589724] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:47.478 [2024-04-24 15:12:56.589853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62889 ] 00:08:47.478 { 00:08:47.478 "subsystems": [ 00:08:47.478 { 00:08:47.478 "subsystem": "bdev", 00:08:47.478 "config": [ 00:08:47.478 { 00:08:47.478 "params": { 00:08:47.478 "trtype": "pcie", 00:08:47.478 "traddr": "0000:00:10.0", 00:08:47.478 "name": "Nvme0" 00:08:47.478 }, 00:08:47.478 "method": "bdev_nvme_attach_controller" 00:08:47.478 }, 00:08:47.478 { 00:08:47.478 "method": "bdev_wait_for_examine" 00:08:47.478 } 00:08:47.479 ] 00:08:47.479 } 00:08:47.479 ] 00:08:47.479 } 00:08:47.737 [2024-04-24 15:12:56.733508] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.737 [2024-04-24 15:12:56.854525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.255  Copying: 48/48 [kB] (average 46 MBps) 00:08:48.255 00:08:48.255 15:12:57 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:48.255 15:12:57 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:48.255 15:12:57 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:48.255 15:12:57 -- dd/common.sh@11 -- # local nvme_ref= 00:08:48.255 15:12:57 -- dd/common.sh@12 -- # local size=49152 00:08:48.255 15:12:57 -- dd/common.sh@14 -- # local bs=1048576 00:08:48.255 15:12:57 -- dd/common.sh@15 -- # local count=1 00:08:48.255 15:12:57 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:48.255 15:12:57 -- dd/common.sh@18 -- # gen_conf 00:08:48.255 15:12:57 -- dd/common.sh@31 -- # xtrace_disable 00:08:48.255 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:08:48.255 [2024-04-24 15:12:57.307044] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:08:48.255 [2024-04-24 15:12:57.307137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62910 ] 00:08:48.255 { 00:08:48.255 "subsystems": [ 00:08:48.255 { 00:08:48.255 "subsystem": "bdev", 00:08:48.255 "config": [ 00:08:48.255 { 00:08:48.255 "params": { 00:08:48.255 "trtype": "pcie", 00:08:48.255 "traddr": "0000:00:10.0", 00:08:48.255 "name": "Nvme0" 00:08:48.255 }, 00:08:48.255 "method": "bdev_nvme_attach_controller" 00:08:48.255 }, 00:08:48.255 { 00:08:48.255 "method": "bdev_wait_for_examine" 00:08:48.255 } 00:08:48.255 ] 00:08:48.255 } 00:08:48.255 ] 00:08:48.255 } 00:08:48.255 [2024-04-24 15:12:57.441835] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.513 [2024-04-24 15:12:57.570246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.829  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:48.829 00:08:48.829 00:08:48.829 real 0m16.649s 00:08:48.829 user 0m12.669s 00:08:48.829 sys 0m5.506s 00:08:48.829 15:12:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:48.829 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:08:48.829 ************************************ 00:08:48.829 END TEST dd_rw 00:08:48.829 ************************************ 00:08:48.829 15:12:58 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:48.829 15:12:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:48.829 15:12:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.829 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:08:49.089 ************************************ 00:08:49.089 START TEST dd_rw_offset 00:08:49.089 ************************************ 00:08:49.089 15:12:58 -- common/autotest_common.sh@1111 -- # basic_offset 00:08:49.089 15:12:58 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:49.089 15:12:58 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:49.089 15:12:58 -- dd/common.sh@98 -- # xtrace_disable 00:08:49.089 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:08:49.089 15:12:58 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:49.089 15:12:58 -- dd/basic_rw.sh@56 -- # 
data=4zyjuu0zyywinow0rzlzl4qn3593u62jj6rfk018wfajoqlq4zpjvcui28ctj8vb42p8qfn7jal6c3u6nhwvcyfwwa2slg7x77sdjy2q2cwupbkt1dvlhy62w06ciuqd9gpljh7lzw7iiijm9vsgo2v5qs205oklj2zqvyrgjgpvzoxqjxe5wph1szobzhyj6ucprwlsoaobar2459ocblu22g4inruyn5jj0f20cr78qmszg04c6cowwfciggvpc7s9lskmhram8zr5ssxj5vbwi52w6fs61h0updy70h41aeq1w9mxr0mo4efuvwjyt7v64wl1hlq2ikych4hqgk5dy7711zie3fe4hxmjqspkwmjrghxq4hzbycal38lgd8t9chv5lwtfa6r96yohjdbym3ig03cscgmd83nqkzozk17rarms9f9nat5czkw8dl1mi9195el27ahshbtnr84fkrqsz3wt768176j577sau1fpcao5ba545a803rus9iqoxuxp1yukrrsg9k041grmp63yg8x55mxbdz5474n4tqlnmgom4p2v1nq03erli3aiaandm3vewo9wgs200jq7v2dp4l8t4ynkakop7jcshdzffc95i12yeyw96xim4kg1iegvdfpa49qhxmt4telhnxmmukbuu9rfa61ca0kdj40txchba3zsuo0bxlk83wn467doxttpb9j9phzcjhela7caag2cedv5n5kvhmo2fjp9c4isu9ttf46zsj6clitz9ci3o7phic822pltguvli5lxq3hew8dg43hisfiqtb0ei4nui3jcm2m65q9n1z2ol9ou9znhlwk1ut00h8o0bk8gjkp1v40yxcvl1vwvb8pdrmohvogenz6i5n16lxs3ip73y1yws4ck9nyw6lntsb94evgplgcczhmyqh7h012ujimtm1jg3dbhucd86vcnm2eldi742uqkwcmr6kfuirtugwgi6ltfwub7ukxhx3evb485hid20zbld8udj4i66yusx2tvizea5crfc4xg7yawtjfq71mjajsgymcu7wcl2xqk992wq68pxhf91ob1mwqyv5ppn4s78shbpjcvozjs6drnt8b8kywpdfnr1gvka3c977dfi8x1l5ufe1omznvqi5takb2ejbz7ahd9dj9oa0dxossbmbo1ko8c16km35sy8g6d3ms31m97h2mk4a2w3qjrp5ywj3k4i1zn65f0dmv6zp7nauy4qta29l5yc9z6g0jw7zxeotvk220nn09yfpumcwappl379r90p1pu19vfjlgephnlwpivfaa7kjq8dxezdjq7treadveasp37k0k5wl0qd2m3wxc9pg1vfbms8fcff9yukxmqvy3lqxlgmwqnl6hs8m0hmkv305zovr9ejwx0wqv947ivqa550ghq1kwb0lfsucd8p3mbme748564v0lh09trscy90um6g74cvhnvtdcvp46mmk03z87c9y3rkxnzj20jdu6whjokmty3besl9cqefoqd5asdndwaeji5cz1uzfbwq0k22z2981892ooijqdchpr3lv8lawoh5mh6j3snk5g5bv2vfbci3mbqihvc11lq7yvn3ytttzhxjhicbwmhinohu330rd51qz0ezbbwjq6ps3rt8g0s0udslgka5xt684nir9ei0tmvdlgflr617hvew3qj62yrn5186e2qy1ziue2u9ulh58xpey8aj6xucaqeikwh72t70jecpvtqw0ol51j2hsqagkk4r00vpehstmej9wscg8m7ythu0zd1qybn5s51j3vyju37rios2pd5o44whgbgdt3tzcz0i7tuapzw3g2f12209erb1j1tjy1xo0w8v97sjj0sn7f3mhah25b2v53hn17d5m4uq5n612aq5vgecwhqyefuamiaqm7tv1phzgpbnq0cxfa6gfdtayx18o6l4rpk31nfzps17kjl47te4pw9vze6ujvyp56pxcdlgcz9a58mx8szy0zs8z0mzv7bozgtvipriasmwy4c60dj7dtzsmz4b9quhi8wdwix9vi7r7op403ak0vfdgd9zhp38keh9tchlal33rthijh0z7dnv0vsbay1mwdedx48c3zq5g0ksugkf2rvlhddnglilw9k35utpswh2kvlhewbw0g90nga9npwq81905igutthp4vwj0wiaalkfk93p6mtm9fg2c7dc40fxy0vuipgp0byek86m9of5c035elcwftjvf3v8ctceqn8z7qf3rbvn1dre0df7hlt9n53pcdo7t2wl76xt2e6l7luf51ntfwrpvmnno4mpln7c7jvjt4nxhqrcxj3oq813wq72t4x1z9xavh8ywh2zyfwq1envdgx1bgawtgffojm6blcbcx71mya6f2ii7ppgsl3tpm2qctye9h1g7yyakgct4q6xg453l19m5e1bkdkqwc2tq4bex0oplwpfm1pdujut5kycpx0szmgof5x5cmbugf4r26gwgl1690idao2qetjyxoc2mei0yuaoswlbu7c8sbgxu9gythx1mrqsn1tst5a5yukm3dz47mhmprb1r7hfwbc3xcfxwanfl2jipdl0bxzp9ttlr3z7x1p7tz85tngwrtft2l1eux7dd95wvb2haabiwhfjf6t2y5ekmepvc851eydpgxn3iluyaca7en86zvd3b32l0lwkcscr88t6zdp29moup31eilir0r36nfivswybczfna6gsoy7ipas4m4mrkeux2co0wciydcrk96rxq12u1bcdhhi0sxw1wdeabb17f18xazrrqhrxuiwuabopztyf7ms0pxr7ov8ub0d8gxrgin7yfwmaqxc44fogtgyt4lcjlu49ui2kpk9cbht7merwgg7hyxsblrmnuv790d89fubkdteuakkbt8ztyyuixmkkklug93l5y0ps2vl1x6r9km4vu9qvgu8z1dh8fbbrxfovfo8lv2q3uxw2up6zihtxqfqt7cwpw8co7sh08e4pmqjnlbwxjqezyzizwlwtjz0zgn38pu5dnvwswnqr62nw3ky81t8ksocirb26snmryu19siroa921jr9hdbzhcp5jqpu8nn38uow871ep1064rsuh9t6xo17opp12qh1eibcqd616xgguymtxl751upfdiik7or631lqt0od1s2i0vua983mhph0ngtoanhiq7azod6713rbpjgj48ja0nw5ncljp93r2j9h7asdvvbh25vd925rmyv9oai1yf9nqjv8z5c3bo1y35andkz45hagemycxg2vu77usdfoau2nuwpw8xfewboc6zxuyguuv248necv9fovm5n1h42i8x60bhxjdh25xe0c95vkr9pvgs485x45y9y1fiwouk77e995f1v5xeymu19erttwpaw8kn1rz78k992j0tng25gbl459pn3xx69mk4pzrkawf9bgqq0sfeisg0iynqb1vvwhat0rpglr3nfq9h4wvt3opivkp21wrn
t26xpx71tqekbx5y98g6c6gvvirregjqxcdovnhws7e0jraahein7ihci2bhdrxz3c0an6kxghccl5uc7gkaku0rm1ec5xz38tskndtu5xs8qirtgzdyiyzkpbdr70z0kh85vz8uve6s8fc97iuzumlmj69jcxbwmalqgvp72tyg79irhqpxn3ep7b0vc3tv98ud6tj9m1b6ivjb9pw1ajauuf8sl56a5vlolk0l9te0qep7odxfa7oj3byc7aebqfolelxtirhnc6n2zzmp0t5l34eyv0dsw204mz2zto6i5bmz9uteecn3nr7hj0rr2ojjjq5uk42yj7g3h2bvyo7wv99825tx1j3qnbgmsjd2xqxgolqf1lm48ecbc5gsveoffdmjvwk29hhmapukp4ckvcapl0bs91lmykog9zso1kkkalkaf3d7rk450kolv7jrgg7rhccgwxk1esr7wcyq181vdfg6twheob2408vekuboqi1yckgn1wbael79xbro7xbj3776u69s78szkv1ejp4meg2xmo 00:08:49.089 15:12:58 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:49.089 15:12:58 -- dd/basic_rw.sh@59 -- # gen_conf 00:08:49.089 15:12:58 -- dd/common.sh@31 -- # xtrace_disable 00:08:49.089 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:08:49.089 [2024-04-24 15:12:58.208502] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:49.089 [2024-04-24 15:12:58.208639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62950 ] 00:08:49.089 { 00:08:49.089 "subsystems": [ 00:08:49.089 { 00:08:49.089 "subsystem": "bdev", 00:08:49.089 "config": [ 00:08:49.089 { 00:08:49.089 "params": { 00:08:49.089 "trtype": "pcie", 00:08:49.089 "traddr": "0000:00:10.0", 00:08:49.089 "name": "Nvme0" 00:08:49.089 }, 00:08:49.089 "method": "bdev_nvme_attach_controller" 00:08:49.089 }, 00:08:49.089 { 00:08:49.089 "method": "bdev_wait_for_examine" 00:08:49.089 } 00:08:49.089 ] 00:08:49.089 } 00:08:49.089 ] 00:08:49.089 } 00:08:49.348 [2024-04-24 15:12:58.343806] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.348 [2024-04-24 15:12:58.462560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.865  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:49.865 00:08:49.865 15:12:58 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:49.865 15:12:58 -- dd/basic_rw.sh@65 -- # gen_conf 00:08:49.865 15:12:58 -- dd/common.sh@31 -- # xtrace_disable 00:08:49.865 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:08:49.865 { 00:08:49.865 "subsystems": [ 00:08:49.865 { 00:08:49.865 "subsystem": "bdev", 00:08:49.865 "config": [ 00:08:49.865 { 00:08:49.865 "params": { 00:08:49.865 "trtype": "pcie", 00:08:49.865 "traddr": "0000:00:10.0", 00:08:49.865 "name": "Nvme0" 00:08:49.865 }, 00:08:49.865 "method": "bdev_nvme_attach_controller" 00:08:49.865 }, 00:08:49.865 { 00:08:49.865 "method": "bdev_wait_for_examine" 00:08:49.865 } 00:08:49.865 ] 00:08:49.865 } 00:08:49.865 ] 00:08:49.865 } 00:08:49.865 [2024-04-24 15:12:58.931019] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
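The offset test above writes the 4 kB random pattern one block into the bdev (--seek=1), reads the same block back (--skip=1 --count=1), and then string-compares the read-back bytes with the original pattern; the long escaped expression that follows is simply bash's xtrace rendering of that comparison. As a rough analogy only, the equivalent round trip with plain dd against a regular file (file names here are made up for illustration):

    printf '%s' "$data" > dd.dump0                               # the 4096-byte pattern
    dd if=dd.dump0 of=backing.bin bs=4096 seek=1 conv=notrunc    # write at block offset 1
    dd if=backing.bin of=readback.bin bs=4096 skip=1 count=1     # read it back from offset 1
    cmp dd.dump0 readback.bin                                    # must match byte for byte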
00:08:49.865 [2024-04-24 15:12:58.931142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62958 ] 00:08:49.865 [2024-04-24 15:12:59.074549] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.124 [2024-04-24 15:12:59.193963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.384  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:50.384 00:08:50.384 15:12:59 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:50.384 15:12:59 -- dd/basic_rw.sh@72 -- # [[ 4zyjuu0zyywinow0rzlzl4qn3593u62jj6rfk018wfajoqlq4zpjvcui28ctj8vb42p8qfn7jal6c3u6nhwvcyfwwa2slg7x77sdjy2q2cwupbkt1dvlhy62w06ciuqd9gpljh7lzw7iiijm9vsgo2v5qs205oklj2zqvyrgjgpvzoxqjxe5wph1szobzhyj6ucprwlsoaobar2459ocblu22g4inruyn5jj0f20cr78qmszg04c6cowwfciggvpc7s9lskmhram8zr5ssxj5vbwi52w6fs61h0updy70h41aeq1w9mxr0mo4efuvwjyt7v64wl1hlq2ikych4hqgk5dy7711zie3fe4hxmjqspkwmjrghxq4hzbycal38lgd8t9chv5lwtfa6r96yohjdbym3ig03cscgmd83nqkzozk17rarms9f9nat5czkw8dl1mi9195el27ahshbtnr84fkrqsz3wt768176j577sau1fpcao5ba545a803rus9iqoxuxp1yukrrsg9k041grmp63yg8x55mxbdz5474n4tqlnmgom4p2v1nq03erli3aiaandm3vewo9wgs200jq7v2dp4l8t4ynkakop7jcshdzffc95i12yeyw96xim4kg1iegvdfpa49qhxmt4telhnxmmukbuu9rfa61ca0kdj40txchba3zsuo0bxlk83wn467doxttpb9j9phzcjhela7caag2cedv5n5kvhmo2fjp9c4isu9ttf46zsj6clitz9ci3o7phic822pltguvli5lxq3hew8dg43hisfiqtb0ei4nui3jcm2m65q9n1z2ol9ou9znhlwk1ut00h8o0bk8gjkp1v40yxcvl1vwvb8pdrmohvogenz6i5n16lxs3ip73y1yws4ck9nyw6lntsb94evgplgcczhmyqh7h012ujimtm1jg3dbhucd86vcnm2eldi742uqkwcmr6kfuirtugwgi6ltfwub7ukxhx3evb485hid20zbld8udj4i66yusx2tvizea5crfc4xg7yawtjfq71mjajsgymcu7wcl2xqk992wq68pxhf91ob1mwqyv5ppn4s78shbpjcvozjs6drnt8b8kywpdfnr1gvka3c977dfi8x1l5ufe1omznvqi5takb2ejbz7ahd9dj9oa0dxossbmbo1ko8c16km35sy8g6d3ms31m97h2mk4a2w3qjrp5ywj3k4i1zn65f0dmv6zp7nauy4qta29l5yc9z6g0jw7zxeotvk220nn09yfpumcwappl379r90p1pu19vfjlgephnlwpivfaa7kjq8dxezdjq7treadveasp37k0k5wl0qd2m3wxc9pg1vfbms8fcff9yukxmqvy3lqxlgmwqnl6hs8m0hmkv305zovr9ejwx0wqv947ivqa550ghq1kwb0lfsucd8p3mbme748564v0lh09trscy90um6g74cvhnvtdcvp46mmk03z87c9y3rkxnzj20jdu6whjokmty3besl9cqefoqd5asdndwaeji5cz1uzfbwq0k22z2981892ooijqdchpr3lv8lawoh5mh6j3snk5g5bv2vfbci3mbqihvc11lq7yvn3ytttzhxjhicbwmhinohu330rd51qz0ezbbwjq6ps3rt8g0s0udslgka5xt684nir9ei0tmvdlgflr617hvew3qj62yrn5186e2qy1ziue2u9ulh58xpey8aj6xucaqeikwh72t70jecpvtqw0ol51j2hsqagkk4r00vpehstmej9wscg8m7ythu0zd1qybn5s51j3vyju37rios2pd5o44whgbgdt3tzcz0i7tuapzw3g2f12209erb1j1tjy1xo0w8v97sjj0sn7f3mhah25b2v53hn17d5m4uq5n612aq5vgecwhqyefuamiaqm7tv1phzgpbnq0cxfa6gfdtayx18o6l4rpk31nfzps17kjl47te4pw9vze6ujvyp56pxcdlgcz9a58mx8szy0zs8z0mzv7bozgtvipriasmwy4c60dj7dtzsmz4b9quhi8wdwix9vi7r7op403ak0vfdgd9zhp38keh9tchlal33rthijh0z7dnv0vsbay1mwdedx48c3zq5g0ksugkf2rvlhddnglilw9k35utpswh2kvlhewbw0g90nga9npwq81905igutthp4vwj0wiaalkfk93p6mtm9fg2c7dc40fxy0vuipgp0byek86m9of5c035elcwftjvf3v8ctceqn8z7qf3rbvn1dre0df7hlt9n53pcdo7t2wl76xt2e6l7luf51ntfwrpvmnno4mpln7c7jvjt4nxhqrcxj3oq813wq72t4x1z9xavh8ywh2zyfwq1envdgx1bgawtgffojm6blcbcx71mya6f2ii7ppgsl3tpm2qctye9h1g7yyakgct4q6xg453l19m5e1bkdkqwc2tq4bex0oplwpfm1pdujut5kycpx0szmgof5x5cmbugf4r26gwgl1690idao2qetjyxoc2mei0yuaoswlbu7c8sbgxu9gythx1mrqsn1tst5a5yukm3dz47mhmprb1r7hfwbc3xcfxwanfl2jipdl0bxzp9ttlr3z7x1p7tz85tngwrtft2l1eux7dd95wvb2haabiwhfjf6t2y5ekmepvc851eydpgxn3iluyaca7en86zvd3b32l0lwkcscr88t6zdp29moup31eilir0r36nfivswybczfna6gsoy7ipas4m4mrkeux2co0wciydcrk96rxq12u1bcdhhi0sxw1wdeabb17f18xazrrqhrxuiwuabopztyf7ms0pxr7
ov8ub0d8gxrgin7yfwmaqxc44fogtgyt4lcjlu49ui2kpk9cbht7merwgg7hyxsblrmnuv790d89fubkdteuakkbt8ztyyuixmkkklug93l5y0ps2vl1x6r9km4vu9qvgu8z1dh8fbbrxfovfo8lv2q3uxw2up6zihtxqfqt7cwpw8co7sh08e4pmqjnlbwxjqezyzizwlwtjz0zgn38pu5dnvwswnqr62nw3ky81t8ksocirb26snmryu19siroa921jr9hdbzhcp5jqpu8nn38uow871ep1064rsuh9t6xo17opp12qh1eibcqd616xgguymtxl751upfdiik7or631lqt0od1s2i0vua983mhph0ngtoanhiq7azod6713rbpjgj48ja0nw5ncljp93r2j9h7asdvvbh25vd925rmyv9oai1yf9nqjv8z5c3bo1y35andkz45hagemycxg2vu77usdfoau2nuwpw8xfewboc6zxuyguuv248necv9fovm5n1h42i8x60bhxjdh25xe0c95vkr9pvgs485x45y9y1fiwouk77e995f1v5xeymu19erttwpaw8kn1rz78k992j0tng25gbl459pn3xx69mk4pzrkawf9bgqq0sfeisg0iynqb1vvwhat0rpglr3nfq9h4wvt3opivkp21wrnt26xpx71tqekbx5y98g6c6gvvirregjqxcdovnhws7e0jraahein7ihci2bhdrxz3c0an6kxghccl5uc7gkaku0rm1ec5xz38tskndtu5xs8qirtgzdyiyzkpbdr70z0kh85vz8uve6s8fc97iuzumlmj69jcxbwmalqgvp72tyg79irhqpxn3ep7b0vc3tv98ud6tj9m1b6ivjb9pw1ajauuf8sl56a5vlolk0l9te0qep7odxfa7oj3byc7aebqfolelxtirhnc6n2zzmp0t5l34eyv0dsw204mz2zto6i5bmz9uteecn3nr7hj0rr2ojjjq5uk42yj7g3h2bvyo7wv99825tx1j3qnbgmsjd2xqxgolqf1lm48ecbc5gsveoffdmjvwk29hhmapukp4ckvcapl0bs91lmykog9zso1kkkalkaf3d7rk450kolv7jrgg7rhccgwxk1esr7wcyq181vdfg6twheob2408vekuboqi1yckgn1wbael79xbro7xbj3776u69s78szkv1ejp4meg2xmo == \4\z\y\j\u\u\0\z\y\y\w\i\n\o\w\0\r\z\l\z\l\4\q\n\3\5\9\3\u\6\2\j\j\6\r\f\k\0\1\8\w\f\a\j\o\q\l\q\4\z\p\j\v\c\u\i\2\8\c\t\j\8\v\b\4\2\p\8\q\f\n\7\j\a\l\6\c\3\u\6\n\h\w\v\c\y\f\w\w\a\2\s\l\g\7\x\7\7\s\d\j\y\2\q\2\c\w\u\p\b\k\t\1\d\v\l\h\y\6\2\w\0\6\c\i\u\q\d\9\g\p\l\j\h\7\l\z\w\7\i\i\i\j\m\9\v\s\g\o\2\v\5\q\s\2\0\5\o\k\l\j\2\z\q\v\y\r\g\j\g\p\v\z\o\x\q\j\x\e\5\w\p\h\1\s\z\o\b\z\h\y\j\6\u\c\p\r\w\l\s\o\a\o\b\a\r\2\4\5\9\o\c\b\l\u\2\2\g\4\i\n\r\u\y\n\5\j\j\0\f\2\0\c\r\7\8\q\m\s\z\g\0\4\c\6\c\o\w\w\f\c\i\g\g\v\p\c\7\s\9\l\s\k\m\h\r\a\m\8\z\r\5\s\s\x\j\5\v\b\w\i\5\2\w\6\f\s\6\1\h\0\u\p\d\y\7\0\h\4\1\a\e\q\1\w\9\m\x\r\0\m\o\4\e\f\u\v\w\j\y\t\7\v\6\4\w\l\1\h\l\q\2\i\k\y\c\h\4\h\q\g\k\5\d\y\7\7\1\1\z\i\e\3\f\e\4\h\x\m\j\q\s\p\k\w\m\j\r\g\h\x\q\4\h\z\b\y\c\a\l\3\8\l\g\d\8\t\9\c\h\v\5\l\w\t\f\a\6\r\9\6\y\o\h\j\d\b\y\m\3\i\g\0\3\c\s\c\g\m\d\8\3\n\q\k\z\o\z\k\1\7\r\a\r\m\s\9\f\9\n\a\t\5\c\z\k\w\8\d\l\1\m\i\9\1\9\5\e\l\2\7\a\h\s\h\b\t\n\r\8\4\f\k\r\q\s\z\3\w\t\7\6\8\1\7\6\j\5\7\7\s\a\u\1\f\p\c\a\o\5\b\a\5\4\5\a\8\0\3\r\u\s\9\i\q\o\x\u\x\p\1\y\u\k\r\r\s\g\9\k\0\4\1\g\r\m\p\6\3\y\g\8\x\5\5\m\x\b\d\z\5\4\7\4\n\4\t\q\l\n\m\g\o\m\4\p\2\v\1\n\q\0\3\e\r\l\i\3\a\i\a\a\n\d\m\3\v\e\w\o\9\w\g\s\2\0\0\j\q\7\v\2\d\p\4\l\8\t\4\y\n\k\a\k\o\p\7\j\c\s\h\d\z\f\f\c\9\5\i\1\2\y\e\y\w\9\6\x\i\m\4\k\g\1\i\e\g\v\d\f\p\a\4\9\q\h\x\m\t\4\t\e\l\h\n\x\m\m\u\k\b\u\u\9\r\f\a\6\1\c\a\0\k\d\j\4\0\t\x\c\h\b\a\3\z\s\u\o\0\b\x\l\k\8\3\w\n\4\6\7\d\o\x\t\t\p\b\9\j\9\p\h\z\c\j\h\e\l\a\7\c\a\a\g\2\c\e\d\v\5\n\5\k\v\h\m\o\2\f\j\p\9\c\4\i\s\u\9\t\t\f\4\6\z\s\j\6\c\l\i\t\z\9\c\i\3\o\7\p\h\i\c\8\2\2\p\l\t\g\u\v\l\i\5\l\x\q\3\h\e\w\8\d\g\4\3\h\i\s\f\i\q\t\b\0\e\i\4\n\u\i\3\j\c\m\2\m\6\5\q\9\n\1\z\2\o\l\9\o\u\9\z\n\h\l\w\k\1\u\t\0\0\h\8\o\0\b\k\8\g\j\k\p\1\v\4\0\y\x\c\v\l\1\v\w\v\b\8\p\d\r\m\o\h\v\o\g\e\n\z\6\i\5\n\1\6\l\x\s\3\i\p\7\3\y\1\y\w\s\4\c\k\9\n\y\w\6\l\n\t\s\b\9\4\e\v\g\p\l\g\c\c\z\h\m\y\q\h\7\h\0\1\2\u\j\i\m\t\m\1\j\g\3\d\b\h\u\c\d\8\6\v\c\n\m\2\e\l\d\i\7\4\2\u\q\k\w\c\m\r\6\k\f\u\i\r\t\u\g\w\g\i\6\l\t\f\w\u\b\7\u\k\x\h\x\3\e\v\b\4\8\5\h\i\d\2\0\z\b\l\d\8\u\d\j\4\i\6\6\y\u\s\x\2\t\v\i\z\e\a\5\c\r\f\c\4\x\g\7\y\a\w\t\j\f\q\7\1\m\j\a\j\s\g\y\m\c\u\7\w\c\l\2\x\q\k\9\9\2\w\q\6\8\p\x\h\f\9\1\o\b\1\m\w\q\y\v\5\p\p\n\4\s\7\8\s\h\b\p\j\c\v\o\z\j\s\6\d\r\n\t\8\b\8\k\y\w\p\d\f\n\r\1\g\v\k\a\3\c\9\7\7\d\f\i\8\x\1\l\5\u\f\e\1\o\m\z\n\v\q
\i\5\t\a\k\b\2\e\j\b\z\7\a\h\d\9\d\j\9\o\a\0\d\x\o\s\s\b\m\b\o\1\k\o\8\c\1\6\k\m\3\5\s\y\8\g\6\d\3\m\s\3\1\m\9\7\h\2\m\k\4\a\2\w\3\q\j\r\p\5\y\w\j\3\k\4\i\1\z\n\6\5\f\0\d\m\v\6\z\p\7\n\a\u\y\4\q\t\a\2\9\l\5\y\c\9\z\6\g\0\j\w\7\z\x\e\o\t\v\k\2\2\0\n\n\0\9\y\f\p\u\m\c\w\a\p\p\l\3\7\9\r\9\0\p\1\p\u\1\9\v\f\j\l\g\e\p\h\n\l\w\p\i\v\f\a\a\7\k\j\q\8\d\x\e\z\d\j\q\7\t\r\e\a\d\v\e\a\s\p\3\7\k\0\k\5\w\l\0\q\d\2\m\3\w\x\c\9\p\g\1\v\f\b\m\s\8\f\c\f\f\9\y\u\k\x\m\q\v\y\3\l\q\x\l\g\m\w\q\n\l\6\h\s\8\m\0\h\m\k\v\3\0\5\z\o\v\r\9\e\j\w\x\0\w\q\v\9\4\7\i\v\q\a\5\5\0\g\h\q\1\k\w\b\0\l\f\s\u\c\d\8\p\3\m\b\m\e\7\4\8\5\6\4\v\0\l\h\0\9\t\r\s\c\y\9\0\u\m\6\g\7\4\c\v\h\n\v\t\d\c\v\p\4\6\m\m\k\0\3\z\8\7\c\9\y\3\r\k\x\n\z\j\2\0\j\d\u\6\w\h\j\o\k\m\t\y\3\b\e\s\l\9\c\q\e\f\o\q\d\5\a\s\d\n\d\w\a\e\j\i\5\c\z\1\u\z\f\b\w\q\0\k\2\2\z\2\9\8\1\8\9\2\o\o\i\j\q\d\c\h\p\r\3\l\v\8\l\a\w\o\h\5\m\h\6\j\3\s\n\k\5\g\5\b\v\2\v\f\b\c\i\3\m\b\q\i\h\v\c\1\1\l\q\7\y\v\n\3\y\t\t\t\z\h\x\j\h\i\c\b\w\m\h\i\n\o\h\u\3\3\0\r\d\5\1\q\z\0\e\z\b\b\w\j\q\6\p\s\3\r\t\8\g\0\s\0\u\d\s\l\g\k\a\5\x\t\6\8\4\n\i\r\9\e\i\0\t\m\v\d\l\g\f\l\r\6\1\7\h\v\e\w\3\q\j\6\2\y\r\n\5\1\8\6\e\2\q\y\1\z\i\u\e\2\u\9\u\l\h\5\8\x\p\e\y\8\a\j\6\x\u\c\a\q\e\i\k\w\h\7\2\t\7\0\j\e\c\p\v\t\q\w\0\o\l\5\1\j\2\h\s\q\a\g\k\k\4\r\0\0\v\p\e\h\s\t\m\e\j\9\w\s\c\g\8\m\7\y\t\h\u\0\z\d\1\q\y\b\n\5\s\5\1\j\3\v\y\j\u\3\7\r\i\o\s\2\p\d\5\o\4\4\w\h\g\b\g\d\t\3\t\z\c\z\0\i\7\t\u\a\p\z\w\3\g\2\f\1\2\2\0\9\e\r\b\1\j\1\t\j\y\1\x\o\0\w\8\v\9\7\s\j\j\0\s\n\7\f\3\m\h\a\h\2\5\b\2\v\5\3\h\n\1\7\d\5\m\4\u\q\5\n\6\1\2\a\q\5\v\g\e\c\w\h\q\y\e\f\u\a\m\i\a\q\m\7\t\v\1\p\h\z\g\p\b\n\q\0\c\x\f\a\6\g\f\d\t\a\y\x\1\8\o\6\l\4\r\p\k\3\1\n\f\z\p\s\1\7\k\j\l\4\7\t\e\4\p\w\9\v\z\e\6\u\j\v\y\p\5\6\p\x\c\d\l\g\c\z\9\a\5\8\m\x\8\s\z\y\0\z\s\8\z\0\m\z\v\7\b\o\z\g\t\v\i\p\r\i\a\s\m\w\y\4\c\6\0\d\j\7\d\t\z\s\m\z\4\b\9\q\u\h\i\8\w\d\w\i\x\9\v\i\7\r\7\o\p\4\0\3\a\k\0\v\f\d\g\d\9\z\h\p\3\8\k\e\h\9\t\c\h\l\a\l\3\3\r\t\h\i\j\h\0\z\7\d\n\v\0\v\s\b\a\y\1\m\w\d\e\d\x\4\8\c\3\z\q\5\g\0\k\s\u\g\k\f\2\r\v\l\h\d\d\n\g\l\i\l\w\9\k\3\5\u\t\p\s\w\h\2\k\v\l\h\e\w\b\w\0\g\9\0\n\g\a\9\n\p\w\q\8\1\9\0\5\i\g\u\t\t\h\p\4\v\w\j\0\w\i\a\a\l\k\f\k\9\3\p\6\m\t\m\9\f\g\2\c\7\d\c\4\0\f\x\y\0\v\u\i\p\g\p\0\b\y\e\k\8\6\m\9\o\f\5\c\0\3\5\e\l\c\w\f\t\j\v\f\3\v\8\c\t\c\e\q\n\8\z\7\q\f\3\r\b\v\n\1\d\r\e\0\d\f\7\h\l\t\9\n\5\3\p\c\d\o\7\t\2\w\l\7\6\x\t\2\e\6\l\7\l\u\f\5\1\n\t\f\w\r\p\v\m\n\n\o\4\m\p\l\n\7\c\7\j\v\j\t\4\n\x\h\q\r\c\x\j\3\o\q\8\1\3\w\q\7\2\t\4\x\1\z\9\x\a\v\h\8\y\w\h\2\z\y\f\w\q\1\e\n\v\d\g\x\1\b\g\a\w\t\g\f\f\o\j\m\6\b\l\c\b\c\x\7\1\m\y\a\6\f\2\i\i\7\p\p\g\s\l\3\t\p\m\2\q\c\t\y\e\9\h\1\g\7\y\y\a\k\g\c\t\4\q\6\x\g\4\5\3\l\1\9\m\5\e\1\b\k\d\k\q\w\c\2\t\q\4\b\e\x\0\o\p\l\w\p\f\m\1\p\d\u\j\u\t\5\k\y\c\p\x\0\s\z\m\g\o\f\5\x\5\c\m\b\u\g\f\4\r\2\6\g\w\g\l\1\6\9\0\i\d\a\o\2\q\e\t\j\y\x\o\c\2\m\e\i\0\y\u\a\o\s\w\l\b\u\7\c\8\s\b\g\x\u\9\g\y\t\h\x\1\m\r\q\s\n\1\t\s\t\5\a\5\y\u\k\m\3\d\z\4\7\m\h\m\p\r\b\1\r\7\h\f\w\b\c\3\x\c\f\x\w\a\n\f\l\2\j\i\p\d\l\0\b\x\z\p\9\t\t\l\r\3\z\7\x\1\p\7\t\z\8\5\t\n\g\w\r\t\f\t\2\l\1\e\u\x\7\d\d\9\5\w\v\b\2\h\a\a\b\i\w\h\f\j\f\6\t\2\y\5\e\k\m\e\p\v\c\8\5\1\e\y\d\p\g\x\n\3\i\l\u\y\a\c\a\7\e\n\8\6\z\v\d\3\b\3\2\l\0\l\w\k\c\s\c\r\8\8\t\6\z\d\p\2\9\m\o\u\p\3\1\e\i\l\i\r\0\r\3\6\n\f\i\v\s\w\y\b\c\z\f\n\a\6\g\s\o\y\7\i\p\a\s\4\m\4\m\r\k\e\u\x\2\c\o\0\w\c\i\y\d\c\r\k\9\6\r\x\q\1\2\u\1\b\c\d\h\h\i\0\s\x\w\1\w\d\e\a\b\b\1\7\f\1\8\x\a\z\r\r\q\h\r\x\u\i\w\u\a\b\o\p\z\t\y\f\7\m\s\0\p\x\r\7\o\v\8\u\b\0\d\8\g\x\r\g\i\n\7\y\f\w\m\a\q\x\c\4\4\f\o\g\t\g\y\t\4\l\c\j\l\u\4\9\u\i\2\k\p\k\9\c\b\h\t\7\m\e\r\w\g\g\7\h\y\x\s\b\l\r\m\n\u\v\7\9\
0\d\8\9\f\u\b\k\d\t\e\u\a\k\k\b\t\8\z\t\y\y\u\i\x\m\k\k\k\l\u\g\9\3\l\5\y\0\p\s\2\v\l\1\x\6\r\9\k\m\4\v\u\9\q\v\g\u\8\z\1\d\h\8\f\b\b\r\x\f\o\v\f\o\8\l\v\2\q\3\u\x\w\2\u\p\6\z\i\h\t\x\q\f\q\t\7\c\w\p\w\8\c\o\7\s\h\0\8\e\4\p\m\q\j\n\l\b\w\x\j\q\e\z\y\z\i\z\w\l\w\t\j\z\0\z\g\n\3\8\p\u\5\d\n\v\w\s\w\n\q\r\6\2\n\w\3\k\y\8\1\t\8\k\s\o\c\i\r\b\2\6\s\n\m\r\y\u\1\9\s\i\r\o\a\9\2\1\j\r\9\h\d\b\z\h\c\p\5\j\q\p\u\8\n\n\3\8\u\o\w\8\7\1\e\p\1\0\6\4\r\s\u\h\9\t\6\x\o\1\7\o\p\p\1\2\q\h\1\e\i\b\c\q\d\6\1\6\x\g\g\u\y\m\t\x\l\7\5\1\u\p\f\d\i\i\k\7\o\r\6\3\1\l\q\t\0\o\d\1\s\2\i\0\v\u\a\9\8\3\m\h\p\h\0\n\g\t\o\a\n\h\i\q\7\a\z\o\d\6\7\1\3\r\b\p\j\g\j\4\8\j\a\0\n\w\5\n\c\l\j\p\9\3\r\2\j\9\h\7\a\s\d\v\v\b\h\2\5\v\d\9\2\5\r\m\y\v\9\o\a\i\1\y\f\9\n\q\j\v\8\z\5\c\3\b\o\1\y\3\5\a\n\d\k\z\4\5\h\a\g\e\m\y\c\x\g\2\v\u\7\7\u\s\d\f\o\a\u\2\n\u\w\p\w\8\x\f\e\w\b\o\c\6\z\x\u\y\g\u\u\v\2\4\8\n\e\c\v\9\f\o\v\m\5\n\1\h\4\2\i\8\x\6\0\b\h\x\j\d\h\2\5\x\e\0\c\9\5\v\k\r\9\p\v\g\s\4\8\5\x\4\5\y\9\y\1\f\i\w\o\u\k\7\7\e\9\9\5\f\1\v\5\x\e\y\m\u\1\9\e\r\t\t\w\p\a\w\8\k\n\1\r\z\7\8\k\9\9\2\j\0\t\n\g\2\5\g\b\l\4\5\9\p\n\3\x\x\6\9\m\k\4\p\z\r\k\a\w\f\9\b\g\q\q\0\s\f\e\i\s\g\0\i\y\n\q\b\1\v\v\w\h\a\t\0\r\p\g\l\r\3\n\f\q\9\h\4\w\v\t\3\o\p\i\v\k\p\2\1\w\r\n\t\2\6\x\p\x\7\1\t\q\e\k\b\x\5\y\9\8\g\6\c\6\g\v\v\i\r\r\e\g\j\q\x\c\d\o\v\n\h\w\s\7\e\0\j\r\a\a\h\e\i\n\7\i\h\c\i\2\b\h\d\r\x\z\3\c\0\a\n\6\k\x\g\h\c\c\l\5\u\c\7\g\k\a\k\u\0\r\m\1\e\c\5\x\z\3\8\t\s\k\n\d\t\u\5\x\s\8\q\i\r\t\g\z\d\y\i\y\z\k\p\b\d\r\7\0\z\0\k\h\8\5\v\z\8\u\v\e\6\s\8\f\c\9\7\i\u\z\u\m\l\m\j\6\9\j\c\x\b\w\m\a\l\q\g\v\p\7\2\t\y\g\7\9\i\r\h\q\p\x\n\3\e\p\7\b\0\v\c\3\t\v\9\8\u\d\6\t\j\9\m\1\b\6\i\v\j\b\9\p\w\1\a\j\a\u\u\f\8\s\l\5\6\a\5\v\l\o\l\k\0\l\9\t\e\0\q\e\p\7\o\d\x\f\a\7\o\j\3\b\y\c\7\a\e\b\q\f\o\l\e\l\x\t\i\r\h\n\c\6\n\2\z\z\m\p\0\t\5\l\3\4\e\y\v\0\d\s\w\2\0\4\m\z\2\z\t\o\6\i\5\b\m\z\9\u\t\e\e\c\n\3\n\r\7\h\j\0\r\r\2\o\j\j\j\q\5\u\k\4\2\y\j\7\g\3\h\2\b\v\y\o\7\w\v\9\9\8\2\5\t\x\1\j\3\q\n\b\g\m\s\j\d\2\x\q\x\g\o\l\q\f\1\l\m\4\8\e\c\b\c\5\g\s\v\e\o\f\f\d\m\j\v\w\k\2\9\h\h\m\a\p\u\k\p\4\c\k\v\c\a\p\l\0\b\s\9\1\l\m\y\k\o\g\9\z\s\o\1\k\k\k\a\l\k\a\f\3\d\7\r\k\4\5\0\k\o\l\v\7\j\r\g\g\7\r\h\c\c\g\w\x\k\1\e\s\r\7\w\c\y\q\1\8\1\v\d\f\g\6\t\w\h\e\o\b\2\4\0\8\v\e\k\u\b\o\q\i\1\y\c\k\g\n\1\w\b\a\e\l\7\9\x\b\r\o\7\x\b\j\3\7\7\6\u\6\9\s\7\8\s\z\k\v\1\e\j\p\4\m\e\g\2\x\m\o ]] 00:08:50.384 00:08:50.384 real 0m1.513s 00:08:50.384 user 0m1.085s 00:08:50.385 sys 0m0.606s 00:08:50.385 15:12:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:50.385 ************************************ 00:08:50.385 END TEST dd_rw_offset 00:08:50.385 ************************************ 00:08:50.385 15:12:59 -- common/autotest_common.sh@10 -- # set +x 00:08:50.643 15:12:59 -- dd/basic_rw.sh@1 -- # cleanup 00:08:50.643 15:12:59 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:50.643 15:12:59 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:50.643 15:12:59 -- dd/common.sh@11 -- # local nvme_ref= 00:08:50.644 15:12:59 -- dd/common.sh@12 -- # local size=0xffff 00:08:50.644 15:12:59 -- dd/common.sh@14 -- # local bs=1048576 00:08:50.644 15:12:59 -- dd/common.sh@15 -- # local count=1 00:08:50.644 15:12:59 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:50.644 15:12:59 -- dd/common.sh@18 -- # gen_conf 00:08:50.644 15:12:59 -- dd/common.sh@31 -- # xtrace_disable 00:08:50.644 15:12:59 -- common/autotest_common.sh@10 -- # set +x 00:08:50.644 [2024-04-24 15:12:59.703492] Starting SPDK v24.05-pre git sha1 
0d1f30fbf / DPDK 23.11.0 initialization... 00:08:50.644 [2024-04-24 15:12:59.703624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62993 ] 00:08:50.644 { 00:08:50.644 "subsystems": [ 00:08:50.644 { 00:08:50.644 "subsystem": "bdev", 00:08:50.644 "config": [ 00:08:50.644 { 00:08:50.644 "params": { 00:08:50.644 "trtype": "pcie", 00:08:50.644 "traddr": "0000:00:10.0", 00:08:50.644 "name": "Nvme0" 00:08:50.644 }, 00:08:50.644 "method": "bdev_nvme_attach_controller" 00:08:50.644 }, 00:08:50.644 { 00:08:50.644 "method": "bdev_wait_for_examine" 00:08:50.644 } 00:08:50.644 ] 00:08:50.644 } 00:08:50.644 ] 00:08:50.644 } 00:08:50.644 [2024-04-24 15:12:59.839605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.902 [2024-04-24 15:12:59.980854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.160  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:51.160 00:08:51.161 15:13:00 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:51.161 ************************************ 00:08:51.161 END TEST spdk_dd_basic_rw 00:08:51.161 ************************************ 00:08:51.161 00:08:51.161 real 0m20.269s 00:08:51.161 user 0m15.050s 00:08:51.161 sys 0m6.825s 00:08:51.161 15:13:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:51.161 15:13:00 -- common/autotest_common.sh@10 -- # set +x 00:08:51.420 15:13:00 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:51.420 15:13:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:51.420 15:13:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.420 15:13:00 -- common/autotest_common.sh@10 -- # set +x 00:08:51.420 ************************************ 00:08:51.420 START TEST spdk_dd_posix 00:08:51.420 ************************************ 00:08:51.420 15:13:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:51.420 * Looking for test storage... 
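Note: the cleanup step above (clear_nvme Nvme0n1) writes a single 1 MiB block of zeroes to the NVMe bdev by handing spdk_dd a bdev configuration on an anonymous descriptor (--json /dev/fd/62). A minimal standalone sketch of that invocation pattern, with the bdev name, block size and PCI address taken from the log but the inline heredoc written here for illustration rather than copied from dd/common.sh (spdk_dd stands for the full build/bin/spdk_dd path shown above):

  # attach the controller at 0000:00:10.0 as Nvme0, wait for examine, then write 1 MiB of zeroes
  spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(cat <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller" },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  )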
00:08:51.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:51.420 15:13:00 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.420 15:13:00 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.420 15:13:00 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.420 15:13:00 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.420 15:13:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.420 15:13:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.420 15:13:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.420 15:13:00 -- paths/export.sh@5 -- # export PATH 00:08:51.420 15:13:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.420 15:13:00 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:51.420 15:13:00 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:51.420 15:13:00 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:51.420 15:13:00 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:51.420 15:13:00 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:51.420 15:13:00 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:51.420 15:13:00 -- dd/posix.sh@130 -- # tests 00:08:51.420 15:13:00 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:51.420 * First test run, liburing in use 00:08:51.420 15:13:00 -- dd/posix.sh@102 -- # run_test 
dd_flag_append append 00:08:51.420 15:13:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:51.420 15:13:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.420 15:13:00 -- common/autotest_common.sh@10 -- # set +x 00:08:51.420 ************************************ 00:08:51.420 START TEST dd_flag_append 00:08:51.420 ************************************ 00:08:51.420 15:13:00 -- common/autotest_common.sh@1111 -- # append 00:08:51.420 15:13:00 -- dd/posix.sh@16 -- # local dump0 00:08:51.420 15:13:00 -- dd/posix.sh@17 -- # local dump1 00:08:51.420 15:13:00 -- dd/posix.sh@19 -- # gen_bytes 32 00:08:51.420 15:13:00 -- dd/common.sh@98 -- # xtrace_disable 00:08:51.420 15:13:00 -- common/autotest_common.sh@10 -- # set +x 00:08:51.420 15:13:00 -- dd/posix.sh@19 -- # dump0=f17gqbtjub4n97xwfxk0ixrch8edcoz6 00:08:51.678 15:13:00 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:51.678 15:13:00 -- dd/common.sh@98 -- # xtrace_disable 00:08:51.678 15:13:00 -- common/autotest_common.sh@10 -- # set +x 00:08:51.678 15:13:00 -- dd/posix.sh@20 -- # dump1=r2zhgz9fu3tng8cq9yx8j61i5p6xhb8u 00:08:51.678 15:13:00 -- dd/posix.sh@22 -- # printf %s f17gqbtjub4n97xwfxk0ixrch8edcoz6 00:08:51.678 15:13:00 -- dd/posix.sh@23 -- # printf %s r2zhgz9fu3tng8cq9yx8j61i5p6xhb8u 00:08:51.678 15:13:00 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:51.678 [2024-04-24 15:13:00.711570] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:51.678 [2024-04-24 15:13:00.711670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63066 ] 00:08:51.678 [2024-04-24 15:13:00.843227] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.938 [2024-04-24 15:13:00.959525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.197  Copying: 32/32 [B] (average 31 kBps) 00:08:52.197 00:08:52.197 15:13:01 -- dd/posix.sh@27 -- # [[ r2zhgz9fu3tng8cq9yx8j61i5p6xhb8uf17gqbtjub4n97xwfxk0ixrch8edcoz6 == \r\2\z\h\g\z\9\f\u\3\t\n\g\8\c\q\9\y\x\8\j\6\1\i\5\p\6\x\h\b\8\u\f\1\7\g\q\b\t\j\u\b\4\n\9\7\x\w\f\x\k\0\i\x\r\c\h\8\e\d\c\o\z\6 ]] 00:08:52.197 00:08:52.197 real 0m0.641s 00:08:52.197 user 0m0.384s 00:08:52.197 sys 0m0.289s 00:08:52.197 15:13:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:52.197 15:13:01 -- common/autotest_common.sh@10 -- # set +x 00:08:52.197 ************************************ 00:08:52.197 END TEST dd_flag_append 00:08:52.197 ************************************ 00:08:52.197 15:13:01 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:52.197 15:13:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:52.197 15:13:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:52.197 15:13:01 -- common/autotest_common.sh@10 -- # set +x 00:08:52.197 ************************************ 00:08:52.197 START TEST dd_flag_directory 00:08:52.197 ************************************ 00:08:52.197 15:13:01 -- common/autotest_common.sh@1111 -- # directory 00:08:52.197 15:13:01 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:52.197 15:13:01 -- 
common/autotest_common.sh@638 -- # local es=0 00:08:52.197 15:13:01 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:52.197 15:13:01 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.197 15:13:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:52.197 15:13:01 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.197 15:13:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:52.197 15:13:01 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.197 15:13:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:52.197 15:13:01 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.198 15:13:01 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:52.198 15:13:01 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:52.457 [2024-04-24 15:13:01.467758] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:52.457 [2024-04-24 15:13:01.467861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63104 ] 00:08:52.457 [2024-04-24 15:13:01.606474] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.716 [2024-04-24 15:13:01.722725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.716 [2024-04-24 15:13:01.814202] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:52.716 [2024-04-24 15:13:01.814263] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:52.716 [2024-04-24 15:13:01.814298] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:52.716 [2024-04-24 15:13:01.930982] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:52.975 15:13:02 -- common/autotest_common.sh@641 -- # es=236 00:08:52.975 15:13:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:52.975 15:13:02 -- common/autotest_common.sh@650 -- # es=108 00:08:52.975 15:13:02 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:52.975 15:13:02 -- common/autotest_common.sh@658 -- # es=1 00:08:52.975 15:13:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:52.975 15:13:02 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:52.975 15:13:02 -- common/autotest_common.sh@638 -- # local es=0 00:08:52.975 15:13:02 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:52.975 15:13:02 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
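Note: the dd_flag_append run above is a plain functional check of append handling: two 32-character random strings are written to dd.dump0 and dd.dump1, dd.dump0 is then copied onto dd.dump1 with --oflag=append, and the destination must end up as its original contents followed by the appended data. A condensed sketch of that flow (the redirections are assumed here; posix.sh produces the strings with its gen_bytes helper):

  dump0=f17gqbtjub4n97xwfxk0ixrch8edcoz6   # 32 random word characters, values taken from the log
  dump1=r2zhgz9fu3tng8cq9yx8j61i5p6xhb8u
  printf %s "$dump0" > dd.dump0
  printf %s "$dump1" > dd.dump1
  spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append
  # append must preserve the existing contents and add the new data after them
  [[ "$(<dd.dump1)" == "${dump1}${dump0}" ]]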
00:08:52.975 15:13:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:52.975 15:13:02 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.975 15:13:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:52.975 15:13:02 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.975 15:13:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:52.975 15:13:02 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.975 15:13:02 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:52.975 15:13:02 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:52.975 [2024-04-24 15:13:02.105254] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:52.975 [2024-04-24 15:13:02.105346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63108 ] 00:08:53.234 [2024-04-24 15:13:02.236330] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.234 [2024-04-24 15:13:02.356943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.234 [2024-04-24 15:13:02.448949] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:53.234 [2024-04-24 15:13:02.449009] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:53.234 [2024-04-24 15:13:02.449028] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:53.494 [2024-04-24 15:13:02.570667] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:53.494 15:13:02 -- common/autotest_common.sh@641 -- # es=236 00:08:53.494 15:13:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:53.494 15:13:02 -- common/autotest_common.sh@650 -- # es=108 00:08:53.494 15:13:02 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:53.494 15:13:02 -- common/autotest_common.sh@658 -- # es=1 00:08:53.494 15:13:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:53.494 00:08:53.494 real 0m1.288s 00:08:53.494 user 0m0.773s 00:08:53.494 sys 0m0.303s 00:08:53.494 15:13:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:53.494 ************************************ 00:08:53.494 END TEST dd_flag_directory 00:08:53.494 ************************************ 00:08:53.494 15:13:02 -- common/autotest_common.sh@10 -- # set +x 00:08:53.753 15:13:02 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:53.753 15:13:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:53.753 15:13:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.753 15:13:02 -- common/autotest_common.sh@10 -- # set +x 00:08:53.753 ************************************ 00:08:53.753 START TEST dd_flag_nofollow 00:08:53.753 ************************************ 00:08:53.753 15:13:02 -- common/autotest_common.sh@1111 -- # nofollow 00:08:53.753 15:13:02 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:53.753 15:13:02 -- dd/posix.sh@37 -- # 
local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:53.753 15:13:02 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:53.753 15:13:02 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:53.753 15:13:02 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:53.753 15:13:02 -- common/autotest_common.sh@638 -- # local es=0 00:08:53.753 15:13:02 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:53.753 15:13:02 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.753 15:13:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:53.753 15:13:02 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.753 15:13:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:53.753 15:13:02 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.753 15:13:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:53.753 15:13:02 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.753 15:13:02 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:53.753 15:13:02 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:53.753 [2024-04-24 15:13:02.881973] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
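Note: dd_flag_directory, which finished just before the nofollow setup above, is a negative test: opening a regular file with --iflag=directory (and then --oflag=directory) has to fail with "Not a directory", and the NOT wrapper from autotest_common.sh inverts the exit status so the test only passes when spdk_dd refuses the copy (the es=236 / es=108 / es=1 lines are that wrapper normalizing the exit code). A rough standalone equivalent of one of those checks, without the wrapper:

  # the copy is expected to fail; succeeding would be the test failure
  if spdk_dd --if=dd.dump0 --iflag=directory --of=dd.dump0; then
      echo "expected 'Not a directory' from O_DIRECTORY on a regular file" >&2
      exit 1
  fi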
00:08:53.753 [2024-04-24 15:13:02.882078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63146 ] 00:08:54.011 [2024-04-24 15:13:03.021892] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.011 [2024-04-24 15:13:03.157666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.269 [2024-04-24 15:13:03.255348] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:54.269 [2024-04-24 15:13:03.255439] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:54.269 [2024-04-24 15:13:03.255463] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:54.269 [2024-04-24 15:13:03.378140] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:54.528 15:13:03 -- common/autotest_common.sh@641 -- # es=216 00:08:54.528 15:13:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:54.528 15:13:03 -- common/autotest_common.sh@650 -- # es=88 00:08:54.528 15:13:03 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:54.528 15:13:03 -- common/autotest_common.sh@658 -- # es=1 00:08:54.528 15:13:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:54.528 15:13:03 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:54.528 15:13:03 -- common/autotest_common.sh@638 -- # local es=0 00:08:54.528 15:13:03 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:54.528 15:13:03 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.528 15:13:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:54.528 15:13:03 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.528 15:13:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:54.528 15:13:03 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.528 15:13:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:54.528 15:13:03 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.528 15:13:03 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:54.528 15:13:03 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:54.528 [2024-04-24 15:13:03.559389] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:08:54.528 [2024-04-24 15:13:03.559520] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63161 ] 00:08:54.528 [2024-04-24 15:13:03.696884] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.880 [2024-04-24 15:13:03.817693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.880 [2024-04-24 15:13:03.911976] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:54.880 [2024-04-24 15:13:03.912040] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:54.880 [2024-04-24 15:13:03.912061] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:54.880 [2024-04-24 15:13:04.036071] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:55.142 15:13:04 -- common/autotest_common.sh@641 -- # es=216 00:08:55.142 15:13:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:55.142 15:13:04 -- common/autotest_common.sh@650 -- # es=88 00:08:55.142 15:13:04 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:55.142 15:13:04 -- common/autotest_common.sh@658 -- # es=1 00:08:55.142 15:13:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:55.142 15:13:04 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:55.142 15:13:04 -- dd/common.sh@98 -- # xtrace_disable 00:08:55.142 15:13:04 -- common/autotest_common.sh@10 -- # set +x 00:08:55.142 15:13:04 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:55.142 [2024-04-24 15:13:04.212089] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
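Note: dd_flag_nofollow builds symlinks to the two dump files and checks both sides of the nofollow flag: opening either link with --iflag=nofollow or --oflag=nofollow must fail with "Too many levels of symbolic links" (ELOOP), while the final copy starting above goes through the same link without the flag and succeeds. In outline, reusing the suite's NOT wrapper and the spdk_dd shorthand from the earlier notes:

  ln -fs dd.dump0 dd.dump0.link
  ln -fs dd.dump1 dd.dump1.link
  NOT spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1    # read side must reject the link
  NOT spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow    # write side must reject it too
  spdk_dd --if=dd.dump0.link --of=dd.dump1                         # without the flag the link is followed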
00:08:55.142 [2024-04-24 15:13:04.212229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63163 ] 00:08:55.142 [2024-04-24 15:13:04.346616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.401 [2024-04-24 15:13:04.501653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.660  Copying: 512/512 [B] (average 500 kBps) 00:08:55.660 00:08:55.660 15:13:04 -- dd/posix.sh@49 -- # [[ k5tlopaeqsw92hhwszgfjcyg96mqeni9c5qzefda8x587teloli5kzdzyuxlgb6y9dag9teu58ie7f5gb0jyunhyabbktw2qhvnnqlx0njphf8kwrt370i67y0ixwq8qp2h77ak5j8rlpuw24p8obj0njknhcj4sl89j85lnm4kxxekog4bkei6dn5sqrvwfq4j49rp6vfq4m6pcu08ycboihb279txsn1vsxbipe9gb06mbhfcaxd3djonyn9oriulc48y94sk9hlglfzfw5vzj8kzvutliixxbljlalva6hgs9mi22friof4y1sbpci3d4abukgctzezlupaoccsb57juaj12dadpgn0io0rzir89ato7v0i9bdgcwgio2r565gxk187i1uxrcv1omzbq84eqb7tewgdh87dyxcy5rlyrntkst3r77pvvb54z9fb5731l3hyic4ut0kvig3j59i221pfy2l9ijpwjajnsmbt4esnackx5svz9l4wuq == \k\5\t\l\o\p\a\e\q\s\w\9\2\h\h\w\s\z\g\f\j\c\y\g\9\6\m\q\e\n\i\9\c\5\q\z\e\f\d\a\8\x\5\8\7\t\e\l\o\l\i\5\k\z\d\z\y\u\x\l\g\b\6\y\9\d\a\g\9\t\e\u\5\8\i\e\7\f\5\g\b\0\j\y\u\n\h\y\a\b\b\k\t\w\2\q\h\v\n\n\q\l\x\0\n\j\p\h\f\8\k\w\r\t\3\7\0\i\6\7\y\0\i\x\w\q\8\q\p\2\h\7\7\a\k\5\j\8\r\l\p\u\w\2\4\p\8\o\b\j\0\n\j\k\n\h\c\j\4\s\l\8\9\j\8\5\l\n\m\4\k\x\x\e\k\o\g\4\b\k\e\i\6\d\n\5\s\q\r\v\w\f\q\4\j\4\9\r\p\6\v\f\q\4\m\6\p\c\u\0\8\y\c\b\o\i\h\b\2\7\9\t\x\s\n\1\v\s\x\b\i\p\e\9\g\b\0\6\m\b\h\f\c\a\x\d\3\d\j\o\n\y\n\9\o\r\i\u\l\c\4\8\y\9\4\s\k\9\h\l\g\l\f\z\f\w\5\v\z\j\8\k\z\v\u\t\l\i\i\x\x\b\l\j\l\a\l\v\a\6\h\g\s\9\m\i\2\2\f\r\i\o\f\4\y\1\s\b\p\c\i\3\d\4\a\b\u\k\g\c\t\z\e\z\l\u\p\a\o\c\c\s\b\5\7\j\u\a\j\1\2\d\a\d\p\g\n\0\i\o\0\r\z\i\r\8\9\a\t\o\7\v\0\i\9\b\d\g\c\w\g\i\o\2\r\5\6\5\g\x\k\1\8\7\i\1\u\x\r\c\v\1\o\m\z\b\q\8\4\e\q\b\7\t\e\w\g\d\h\8\7\d\y\x\c\y\5\r\l\y\r\n\t\k\s\t\3\r\7\7\p\v\v\b\5\4\z\9\f\b\5\7\3\1\l\3\h\y\i\c\4\u\t\0\k\v\i\g\3\j\5\9\i\2\2\1\p\f\y\2\l\9\i\j\p\w\j\a\j\n\s\m\b\t\4\e\s\n\a\c\k\x\5\s\v\z\9\l\4\w\u\q ]] 00:08:55.660 00:08:55.660 real 0m2.031s 00:08:55.660 user 0m1.252s 00:08:55.660 sys 0m0.586s 00:08:55.660 15:13:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:55.660 15:13:04 -- common/autotest_common.sh@10 -- # set +x 00:08:55.660 ************************************ 00:08:55.660 END TEST dd_flag_nofollow 00:08:55.660 ************************************ 00:08:55.660 15:13:04 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:55.660 15:13:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:55.660 15:13:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.660 15:13:04 -- common/autotest_common.sh@10 -- # set +x 00:08:55.918 ************************************ 00:08:55.918 START TEST dd_flag_noatime 00:08:55.918 ************************************ 00:08:55.918 15:13:04 -- common/autotest_common.sh@1111 -- # noatime 00:08:55.918 15:13:04 -- dd/posix.sh@53 -- # local atime_if 00:08:55.919 15:13:04 -- dd/posix.sh@54 -- # local atime_of 00:08:55.919 15:13:04 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:55.919 15:13:04 -- dd/common.sh@98 -- # xtrace_disable 00:08:55.919 15:13:04 -- common/autotest_common.sh@10 -- # set +x 00:08:55.919 15:13:04 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:55.919 15:13:04 -- dd/posix.sh@60 -- # atime_if=1713971584 00:08:55.919 15:13:04 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:55.919 15:13:04 -- dd/posix.sh@61 -- # atime_of=1713971584 00:08:55.919 15:13:04 -- dd/posix.sh@66 -- # sleep 1 00:08:56.853 15:13:05 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:56.853 [2024-04-24 15:13:06.038068] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:56.853 [2024-04-24 15:13:06.038175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63218 ] 00:08:57.112 [2024-04-24 15:13:06.175783] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.112 [2024-04-24 15:13:06.311551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.629  Copying: 512/512 [B] (average 500 kBps) 00:08:57.629 00:08:57.629 15:13:06 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:57.629 15:13:06 -- dd/posix.sh@69 -- # (( atime_if == 1713971584 )) 00:08:57.629 15:13:06 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:57.629 15:13:06 -- dd/posix.sh@70 -- # (( atime_of == 1713971584 )) 00:08:57.629 15:13:06 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:57.629 [2024-04-24 15:13:06.723887] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:57.629 [2024-04-24 15:13:06.724005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63237 ] 00:08:57.629 [2024-04-24 15:13:06.862332] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.899 [2024-04-24 15:13:06.982613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.156  Copying: 512/512 [B] (average 500 kBps) 00:08:58.156 00:08:58.156 15:13:07 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.156 15:13:07 -- dd/posix.sh@73 -- # (( atime_if < 1713971587 )) 00:08:58.156 00:08:58.156 real 0m2.373s 00:08:58.156 user 0m0.823s 00:08:58.156 sys 0m0.591s 00:08:58.156 15:13:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:58.156 ************************************ 00:08:58.156 15:13:07 -- common/autotest_common.sh@10 -- # set +x 00:08:58.156 END TEST dd_flag_noatime 00:08:58.156 ************************************ 00:08:58.156 15:13:07 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:58.156 15:13:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:58.156 15:13:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.156 15:13:07 -- common/autotest_common.sh@10 -- # set +x 00:08:58.415 ************************************ 00:08:58.415 START TEST dd_flags_misc 00:08:58.415 ************************************ 00:08:58.415 15:13:07 -- common/autotest_common.sh@1111 -- # io 00:08:58.415 15:13:07 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:58.415 15:13:07 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:58.415 
15:13:07 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:58.415 15:13:07 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:58.415 15:13:07 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:58.415 15:13:07 -- dd/common.sh@98 -- # xtrace_disable 00:08:58.415 15:13:07 -- common/autotest_common.sh@10 -- # set +x 00:08:58.415 15:13:07 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:58.415 15:13:07 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:58.415 [2024-04-24 15:13:07.506039] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:08:58.415 [2024-04-24 15:13:07.506133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63271 ] 00:08:58.415 [2024-04-24 15:13:07.641644] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.673 [2024-04-24 15:13:07.758607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.932  Copying: 512/512 [B] (average 500 kBps) 00:08:58.932 00:08:58.932 15:13:08 -- dd/posix.sh@93 -- # [[ nt0dyotigbybstasdmhoaydg4vfukjazmcnh6zs0z2sse7s8mq033ys3smlt3xl1sef0cw97no9xokqfzljabsgfrmaqk59smhr2e8o6pntmovhl7utr8c4sf3rlq24d4dn9mbtpuddlc25b2oui9v3qd2eyvl2ujvi4s52oix3rm9vnz0i5yz1jqd2ortv0q3bbyw65cu0hrrmyib2t6n8iawa8os13nbjy4qiafdxnscg78w8idfr3lged0pypq7ihdo61pm47osmagp407t3vlbwh6qz23jodc66czezv9xq2rtq4vxqkr80s1rbt7udxe5hsg6gejhk2fmos9prvk6n8vbas4v10j6iyt70b67kf4itzerhzegzzdl6sgvf73rtv6i1ba11cp07umohteutragzp06fgs2d9cd1kursud504q5f0ei2s477oz9rpv5h9ovs0t72i1kcpdm7cswz2qi7ya5n3ogcwydizjbc075lj0qjdxf0dw1py == \n\t\0\d\y\o\t\i\g\b\y\b\s\t\a\s\d\m\h\o\a\y\d\g\4\v\f\u\k\j\a\z\m\c\n\h\6\z\s\0\z\2\s\s\e\7\s\8\m\q\0\3\3\y\s\3\s\m\l\t\3\x\l\1\s\e\f\0\c\w\9\7\n\o\9\x\o\k\q\f\z\l\j\a\b\s\g\f\r\m\a\q\k\5\9\s\m\h\r\2\e\8\o\6\p\n\t\m\o\v\h\l\7\u\t\r\8\c\4\s\f\3\r\l\q\2\4\d\4\d\n\9\m\b\t\p\u\d\d\l\c\2\5\b\2\o\u\i\9\v\3\q\d\2\e\y\v\l\2\u\j\v\i\4\s\5\2\o\i\x\3\r\m\9\v\n\z\0\i\5\y\z\1\j\q\d\2\o\r\t\v\0\q\3\b\b\y\w\6\5\c\u\0\h\r\r\m\y\i\b\2\t\6\n\8\i\a\w\a\8\o\s\1\3\n\b\j\y\4\q\i\a\f\d\x\n\s\c\g\7\8\w\8\i\d\f\r\3\l\g\e\d\0\p\y\p\q\7\i\h\d\o\6\1\p\m\4\7\o\s\m\a\g\p\4\0\7\t\3\v\l\b\w\h\6\q\z\2\3\j\o\d\c\6\6\c\z\e\z\v\9\x\q\2\r\t\q\4\v\x\q\k\r\8\0\s\1\r\b\t\7\u\d\x\e\5\h\s\g\6\g\e\j\h\k\2\f\m\o\s\9\p\r\v\k\6\n\8\v\b\a\s\4\v\1\0\j\6\i\y\t\7\0\b\6\7\k\f\4\i\t\z\e\r\h\z\e\g\z\z\d\l\6\s\g\v\f\7\3\r\t\v\6\i\1\b\a\1\1\c\p\0\7\u\m\o\h\t\e\u\t\r\a\g\z\p\0\6\f\g\s\2\d\9\c\d\1\k\u\r\s\u\d\5\0\4\q\5\f\0\e\i\2\s\4\7\7\o\z\9\r\p\v\5\h\9\o\v\s\0\t\7\2\i\1\k\c\p\d\m\7\c\s\w\z\2\q\i\7\y\a\5\n\3\o\g\c\w\y\d\i\z\j\b\c\0\7\5\l\j\0\q\j\d\x\f\0\d\w\1\p\y ]] 00:08:58.932 15:13:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:58.932 15:13:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:58.932 [2024-04-24 15:13:08.143178] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
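Note: the dd_flag_noatime run that completed above checks the noatime flag by sampling the access time with stat --printf=%X, sleeping one second so a normal read would be observable, copying with --iflag=noatime and requiring the atime to stay put, then copying again without the flag and requiring it to move forward. A condensed sketch of that check, assuming the filesystem updates atime on ordinary reads:

  atime_before=$(stat --printf=%X dd.dump0)
  sleep 1
  spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
  (( $(stat --printf=%X dd.dump0) == atime_before ))   # noatime read: atime unchanged
  spdk_dd --if=dd.dump0 --of=dd.dump1
  (( atime_before < $(stat --printf=%X dd.dump0) ))    # normal read: atime advanced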
00:08:58.932 [2024-04-24 15:13:08.143276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63279 ] 00:08:59.190 [2024-04-24 15:13:08.279719] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.190 [2024-04-24 15:13:08.396400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.707  Copying: 512/512 [B] (average 500 kBps) 00:08:59.707 00:08:59.707 15:13:08 -- dd/posix.sh@93 -- # [[ nt0dyotigbybstasdmhoaydg4vfukjazmcnh6zs0z2sse7s8mq033ys3smlt3xl1sef0cw97no9xokqfzljabsgfrmaqk59smhr2e8o6pntmovhl7utr8c4sf3rlq24d4dn9mbtpuddlc25b2oui9v3qd2eyvl2ujvi4s52oix3rm9vnz0i5yz1jqd2ortv0q3bbyw65cu0hrrmyib2t6n8iawa8os13nbjy4qiafdxnscg78w8idfr3lged0pypq7ihdo61pm47osmagp407t3vlbwh6qz23jodc66czezv9xq2rtq4vxqkr80s1rbt7udxe5hsg6gejhk2fmos9prvk6n8vbas4v10j6iyt70b67kf4itzerhzegzzdl6sgvf73rtv6i1ba11cp07umohteutragzp06fgs2d9cd1kursud504q5f0ei2s477oz9rpv5h9ovs0t72i1kcpdm7cswz2qi7ya5n3ogcwydizjbc075lj0qjdxf0dw1py == \n\t\0\d\y\o\t\i\g\b\y\b\s\t\a\s\d\m\h\o\a\y\d\g\4\v\f\u\k\j\a\z\m\c\n\h\6\z\s\0\z\2\s\s\e\7\s\8\m\q\0\3\3\y\s\3\s\m\l\t\3\x\l\1\s\e\f\0\c\w\9\7\n\o\9\x\o\k\q\f\z\l\j\a\b\s\g\f\r\m\a\q\k\5\9\s\m\h\r\2\e\8\o\6\p\n\t\m\o\v\h\l\7\u\t\r\8\c\4\s\f\3\r\l\q\2\4\d\4\d\n\9\m\b\t\p\u\d\d\l\c\2\5\b\2\o\u\i\9\v\3\q\d\2\e\y\v\l\2\u\j\v\i\4\s\5\2\o\i\x\3\r\m\9\v\n\z\0\i\5\y\z\1\j\q\d\2\o\r\t\v\0\q\3\b\b\y\w\6\5\c\u\0\h\r\r\m\y\i\b\2\t\6\n\8\i\a\w\a\8\o\s\1\3\n\b\j\y\4\q\i\a\f\d\x\n\s\c\g\7\8\w\8\i\d\f\r\3\l\g\e\d\0\p\y\p\q\7\i\h\d\o\6\1\p\m\4\7\o\s\m\a\g\p\4\0\7\t\3\v\l\b\w\h\6\q\z\2\3\j\o\d\c\6\6\c\z\e\z\v\9\x\q\2\r\t\q\4\v\x\q\k\r\8\0\s\1\r\b\t\7\u\d\x\e\5\h\s\g\6\g\e\j\h\k\2\f\m\o\s\9\p\r\v\k\6\n\8\v\b\a\s\4\v\1\0\j\6\i\y\t\7\0\b\6\7\k\f\4\i\t\z\e\r\h\z\e\g\z\z\d\l\6\s\g\v\f\7\3\r\t\v\6\i\1\b\a\1\1\c\p\0\7\u\m\o\h\t\e\u\t\r\a\g\z\p\0\6\f\g\s\2\d\9\c\d\1\k\u\r\s\u\d\5\0\4\q\5\f\0\e\i\2\s\4\7\7\o\z\9\r\p\v\5\h\9\o\v\s\0\t\7\2\i\1\k\c\p\d\m\7\c\s\w\z\2\q\i\7\y\a\5\n\3\o\g\c\w\y\d\i\z\j\b\c\0\7\5\l\j\0\q\j\d\x\f\0\d\w\1\p\y ]] 00:08:59.707 15:13:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:59.707 15:13:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:59.707 [2024-04-24 15:13:08.781655] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:08:59.707 [2024-04-24 15:13:08.781785] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63294 ] 00:08:59.707 [2024-04-24 15:13:08.920048] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.966 [2024-04-24 15:13:09.040346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.224  Copying: 512/512 [B] (average 166 kBps) 00:09:00.224 00:09:00.224 15:13:09 -- dd/posix.sh@93 -- # [[ nt0dyotigbybstasdmhoaydg4vfukjazmcnh6zs0z2sse7s8mq033ys3smlt3xl1sef0cw97no9xokqfzljabsgfrmaqk59smhr2e8o6pntmovhl7utr8c4sf3rlq24d4dn9mbtpuddlc25b2oui9v3qd2eyvl2ujvi4s52oix3rm9vnz0i5yz1jqd2ortv0q3bbyw65cu0hrrmyib2t6n8iawa8os13nbjy4qiafdxnscg78w8idfr3lged0pypq7ihdo61pm47osmagp407t3vlbwh6qz23jodc66czezv9xq2rtq4vxqkr80s1rbt7udxe5hsg6gejhk2fmos9prvk6n8vbas4v10j6iyt70b67kf4itzerhzegzzdl6sgvf73rtv6i1ba11cp07umohteutragzp06fgs2d9cd1kursud504q5f0ei2s477oz9rpv5h9ovs0t72i1kcpdm7cswz2qi7ya5n3ogcwydizjbc075lj0qjdxf0dw1py == \n\t\0\d\y\o\t\i\g\b\y\b\s\t\a\s\d\m\h\o\a\y\d\g\4\v\f\u\k\j\a\z\m\c\n\h\6\z\s\0\z\2\s\s\e\7\s\8\m\q\0\3\3\y\s\3\s\m\l\t\3\x\l\1\s\e\f\0\c\w\9\7\n\o\9\x\o\k\q\f\z\l\j\a\b\s\g\f\r\m\a\q\k\5\9\s\m\h\r\2\e\8\o\6\p\n\t\m\o\v\h\l\7\u\t\r\8\c\4\s\f\3\r\l\q\2\4\d\4\d\n\9\m\b\t\p\u\d\d\l\c\2\5\b\2\o\u\i\9\v\3\q\d\2\e\y\v\l\2\u\j\v\i\4\s\5\2\o\i\x\3\r\m\9\v\n\z\0\i\5\y\z\1\j\q\d\2\o\r\t\v\0\q\3\b\b\y\w\6\5\c\u\0\h\r\r\m\y\i\b\2\t\6\n\8\i\a\w\a\8\o\s\1\3\n\b\j\y\4\q\i\a\f\d\x\n\s\c\g\7\8\w\8\i\d\f\r\3\l\g\e\d\0\p\y\p\q\7\i\h\d\o\6\1\p\m\4\7\o\s\m\a\g\p\4\0\7\t\3\v\l\b\w\h\6\q\z\2\3\j\o\d\c\6\6\c\z\e\z\v\9\x\q\2\r\t\q\4\v\x\q\k\r\8\0\s\1\r\b\t\7\u\d\x\e\5\h\s\g\6\g\e\j\h\k\2\f\m\o\s\9\p\r\v\k\6\n\8\v\b\a\s\4\v\1\0\j\6\i\y\t\7\0\b\6\7\k\f\4\i\t\z\e\r\h\z\e\g\z\z\d\l\6\s\g\v\f\7\3\r\t\v\6\i\1\b\a\1\1\c\p\0\7\u\m\o\h\t\e\u\t\r\a\g\z\p\0\6\f\g\s\2\d\9\c\d\1\k\u\r\s\u\d\5\0\4\q\5\f\0\e\i\2\s\4\7\7\o\z\9\r\p\v\5\h\9\o\v\s\0\t\7\2\i\1\k\c\p\d\m\7\c\s\w\z\2\q\i\7\y\a\5\n\3\o\g\c\w\y\d\i\z\j\b\c\0\7\5\l\j\0\q\j\d\x\f\0\d\w\1\p\y ]] 00:09:00.224 15:13:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:00.224 15:13:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:00.224 [2024-04-24 15:13:09.446452] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:00.224 [2024-04-24 15:13:09.446569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63304 ] 00:09:00.483 [2024-04-24 15:13:09.584808] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.483 [2024-04-24 15:13:09.706484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.000  Copying: 512/512 [B] (average 250 kBps) 00:09:01.000 00:09:01.000 15:13:10 -- dd/posix.sh@93 -- # [[ nt0dyotigbybstasdmhoaydg4vfukjazmcnh6zs0z2sse7s8mq033ys3smlt3xl1sef0cw97no9xokqfzljabsgfrmaqk59smhr2e8o6pntmovhl7utr8c4sf3rlq24d4dn9mbtpuddlc25b2oui9v3qd2eyvl2ujvi4s52oix3rm9vnz0i5yz1jqd2ortv0q3bbyw65cu0hrrmyib2t6n8iawa8os13nbjy4qiafdxnscg78w8idfr3lged0pypq7ihdo61pm47osmagp407t3vlbwh6qz23jodc66czezv9xq2rtq4vxqkr80s1rbt7udxe5hsg6gejhk2fmos9prvk6n8vbas4v10j6iyt70b67kf4itzerhzegzzdl6sgvf73rtv6i1ba11cp07umohteutragzp06fgs2d9cd1kursud504q5f0ei2s477oz9rpv5h9ovs0t72i1kcpdm7cswz2qi7ya5n3ogcwydizjbc075lj0qjdxf0dw1py == \n\t\0\d\y\o\t\i\g\b\y\b\s\t\a\s\d\m\h\o\a\y\d\g\4\v\f\u\k\j\a\z\m\c\n\h\6\z\s\0\z\2\s\s\e\7\s\8\m\q\0\3\3\y\s\3\s\m\l\t\3\x\l\1\s\e\f\0\c\w\9\7\n\o\9\x\o\k\q\f\z\l\j\a\b\s\g\f\r\m\a\q\k\5\9\s\m\h\r\2\e\8\o\6\p\n\t\m\o\v\h\l\7\u\t\r\8\c\4\s\f\3\r\l\q\2\4\d\4\d\n\9\m\b\t\p\u\d\d\l\c\2\5\b\2\o\u\i\9\v\3\q\d\2\e\y\v\l\2\u\j\v\i\4\s\5\2\o\i\x\3\r\m\9\v\n\z\0\i\5\y\z\1\j\q\d\2\o\r\t\v\0\q\3\b\b\y\w\6\5\c\u\0\h\r\r\m\y\i\b\2\t\6\n\8\i\a\w\a\8\o\s\1\3\n\b\j\y\4\q\i\a\f\d\x\n\s\c\g\7\8\w\8\i\d\f\r\3\l\g\e\d\0\p\y\p\q\7\i\h\d\o\6\1\p\m\4\7\o\s\m\a\g\p\4\0\7\t\3\v\l\b\w\h\6\q\z\2\3\j\o\d\c\6\6\c\z\e\z\v\9\x\q\2\r\t\q\4\v\x\q\k\r\8\0\s\1\r\b\t\7\u\d\x\e\5\h\s\g\6\g\e\j\h\k\2\f\m\o\s\9\p\r\v\k\6\n\8\v\b\a\s\4\v\1\0\j\6\i\y\t\7\0\b\6\7\k\f\4\i\t\z\e\r\h\z\e\g\z\z\d\l\6\s\g\v\f\7\3\r\t\v\6\i\1\b\a\1\1\c\p\0\7\u\m\o\h\t\e\u\t\r\a\g\z\p\0\6\f\g\s\2\d\9\c\d\1\k\u\r\s\u\d\5\0\4\q\5\f\0\e\i\2\s\4\7\7\o\z\9\r\p\v\5\h\9\o\v\s\0\t\7\2\i\1\k\c\p\d\m\7\c\s\w\z\2\q\i\7\y\a\5\n\3\o\g\c\w\y\d\i\z\j\b\c\0\7\5\l\j\0\q\j\d\x\f\0\d\w\1\p\y ]] 00:09:01.000 15:13:10 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:01.000 15:13:10 -- dd/posix.sh@86 -- # gen_bytes 512 00:09:01.000 15:13:10 -- dd/common.sh@98 -- # xtrace_disable 00:09:01.000 15:13:10 -- common/autotest_common.sh@10 -- # set +x 00:09:01.000 15:13:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:01.000 15:13:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:01.000 [2024-04-24 15:13:10.101975] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
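Note: dd_flags_misc, running here, sweeps the cartesian product of input flags (direct, nonblock) and output flags (direct, nonblock, sync, dsync): for each input flag it generates a fresh 512-byte payload and requires every input/output flag combination to produce a byte-identical copy, which is what the long [[ ... == ... ]] comparisons above verify. In outline (writing the gen_bytes output straight to dd.dump0 is an assumption of this sketch; the suite routes it through its own helpers):

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
      gen_bytes 512 > dd.dump0                      # fresh random payload per input flag
      for flag_rw in "${flags_rw[@]}"; do
          spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
          [[ "$(<dd.dump1)" == "$(<dd.dump0)" ]]    # the data must survive every combination
      done
  done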
00:09:01.000 [2024-04-24 15:13:10.102106] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63313 ] 00:09:01.000 [2024-04-24 15:13:10.232578] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.263 [2024-04-24 15:13:10.351432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.545  Copying: 512/512 [B] (average 500 kBps) 00:09:01.545 00:09:01.545 15:13:10 -- dd/posix.sh@93 -- # [[ 1qvv916ubi6ao2po91b2zmlblzlg4odj4pi6vllyhzfvk7rthsrs7rqq5y0zonrudntcpbdchii5qz5357uqvap6qaf2tzaqch3a7tks58iwznawy7fey76wnz4kx79peyqd9evllto38ehnuri0c20rbik6m0eq6zdpr40minrenm09cb3chuw6246tuy39n8sh8ipzeejzwbnr141em46nswbfis3ol8xy3f5nlqypwf06ouezalmfow6ja0gdn3v2ppcgyeajr8fs03s23lvz580s3xxyka7xk1jfkhxgkoakorlmgdogm8lnbtifl0d7ldpcysm5i07gbagesu02lygecmtl6rlx2725t3tz3imik4rysmjd292knn4kzhbr729jfe92vo1kmvdrzexk4wcyo3dv6q04safa787kvh0vlmc86ttyx5cs1idpfksys4eetnkdcyqs1th0kvxcty9ue8dzikrlnbwrleih0k44wq6g8vti5s1hydim == \1\q\v\v\9\1\6\u\b\i\6\a\o\2\p\o\9\1\b\2\z\m\l\b\l\z\l\g\4\o\d\j\4\p\i\6\v\l\l\y\h\z\f\v\k\7\r\t\h\s\r\s\7\r\q\q\5\y\0\z\o\n\r\u\d\n\t\c\p\b\d\c\h\i\i\5\q\z\5\3\5\7\u\q\v\a\p\6\q\a\f\2\t\z\a\q\c\h\3\a\7\t\k\s\5\8\i\w\z\n\a\w\y\7\f\e\y\7\6\w\n\z\4\k\x\7\9\p\e\y\q\d\9\e\v\l\l\t\o\3\8\e\h\n\u\r\i\0\c\2\0\r\b\i\k\6\m\0\e\q\6\z\d\p\r\4\0\m\i\n\r\e\n\m\0\9\c\b\3\c\h\u\w\6\2\4\6\t\u\y\3\9\n\8\s\h\8\i\p\z\e\e\j\z\w\b\n\r\1\4\1\e\m\4\6\n\s\w\b\f\i\s\3\o\l\8\x\y\3\f\5\n\l\q\y\p\w\f\0\6\o\u\e\z\a\l\m\f\o\w\6\j\a\0\g\d\n\3\v\2\p\p\c\g\y\e\a\j\r\8\f\s\0\3\s\2\3\l\v\z\5\8\0\s\3\x\x\y\k\a\7\x\k\1\j\f\k\h\x\g\k\o\a\k\o\r\l\m\g\d\o\g\m\8\l\n\b\t\i\f\l\0\d\7\l\d\p\c\y\s\m\5\i\0\7\g\b\a\g\e\s\u\0\2\l\y\g\e\c\m\t\l\6\r\l\x\2\7\2\5\t\3\t\z\3\i\m\i\k\4\r\y\s\m\j\d\2\9\2\k\n\n\4\k\z\h\b\r\7\2\9\j\f\e\9\2\v\o\1\k\m\v\d\r\z\e\x\k\4\w\c\y\o\3\d\v\6\q\0\4\s\a\f\a\7\8\7\k\v\h\0\v\l\m\c\8\6\t\t\y\x\5\c\s\1\i\d\p\f\k\s\y\s\4\e\e\t\n\k\d\c\y\q\s\1\t\h\0\k\v\x\c\t\y\9\u\e\8\d\z\i\k\r\l\n\b\w\r\l\e\i\h\0\k\4\4\w\q\6\g\8\v\t\i\5\s\1\h\y\d\i\m ]] 00:09:01.545 15:13:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:01.545 15:13:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:01.545 [2024-04-24 15:13:10.735351] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:01.545 [2024-04-24 15:13:10.735484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63327 ] 00:09:01.803 [2024-04-24 15:13:10.867467] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.803 [2024-04-24 15:13:10.994335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.319  Copying: 512/512 [B] (average 500 kBps) 00:09:02.319 00:09:02.319 15:13:11 -- dd/posix.sh@93 -- # [[ 1qvv916ubi6ao2po91b2zmlblzlg4odj4pi6vllyhzfvk7rthsrs7rqq5y0zonrudntcpbdchii5qz5357uqvap6qaf2tzaqch3a7tks58iwznawy7fey76wnz4kx79peyqd9evllto38ehnuri0c20rbik6m0eq6zdpr40minrenm09cb3chuw6246tuy39n8sh8ipzeejzwbnr141em46nswbfis3ol8xy3f5nlqypwf06ouezalmfow6ja0gdn3v2ppcgyeajr8fs03s23lvz580s3xxyka7xk1jfkhxgkoakorlmgdogm8lnbtifl0d7ldpcysm5i07gbagesu02lygecmtl6rlx2725t3tz3imik4rysmjd292knn4kzhbr729jfe92vo1kmvdrzexk4wcyo3dv6q04safa787kvh0vlmc86ttyx5cs1idpfksys4eetnkdcyqs1th0kvxcty9ue8dzikrlnbwrleih0k44wq6g8vti5s1hydim == \1\q\v\v\9\1\6\u\b\i\6\a\o\2\p\o\9\1\b\2\z\m\l\b\l\z\l\g\4\o\d\j\4\p\i\6\v\l\l\y\h\z\f\v\k\7\r\t\h\s\r\s\7\r\q\q\5\y\0\z\o\n\r\u\d\n\t\c\p\b\d\c\h\i\i\5\q\z\5\3\5\7\u\q\v\a\p\6\q\a\f\2\t\z\a\q\c\h\3\a\7\t\k\s\5\8\i\w\z\n\a\w\y\7\f\e\y\7\6\w\n\z\4\k\x\7\9\p\e\y\q\d\9\e\v\l\l\t\o\3\8\e\h\n\u\r\i\0\c\2\0\r\b\i\k\6\m\0\e\q\6\z\d\p\r\4\0\m\i\n\r\e\n\m\0\9\c\b\3\c\h\u\w\6\2\4\6\t\u\y\3\9\n\8\s\h\8\i\p\z\e\e\j\z\w\b\n\r\1\4\1\e\m\4\6\n\s\w\b\f\i\s\3\o\l\8\x\y\3\f\5\n\l\q\y\p\w\f\0\6\o\u\e\z\a\l\m\f\o\w\6\j\a\0\g\d\n\3\v\2\p\p\c\g\y\e\a\j\r\8\f\s\0\3\s\2\3\l\v\z\5\8\0\s\3\x\x\y\k\a\7\x\k\1\j\f\k\h\x\g\k\o\a\k\o\r\l\m\g\d\o\g\m\8\l\n\b\t\i\f\l\0\d\7\l\d\p\c\y\s\m\5\i\0\7\g\b\a\g\e\s\u\0\2\l\y\g\e\c\m\t\l\6\r\l\x\2\7\2\5\t\3\t\z\3\i\m\i\k\4\r\y\s\m\j\d\2\9\2\k\n\n\4\k\z\h\b\r\7\2\9\j\f\e\9\2\v\o\1\k\m\v\d\r\z\e\x\k\4\w\c\y\o\3\d\v\6\q\0\4\s\a\f\a\7\8\7\k\v\h\0\v\l\m\c\8\6\t\t\y\x\5\c\s\1\i\d\p\f\k\s\y\s\4\e\e\t\n\k\d\c\y\q\s\1\t\h\0\k\v\x\c\t\y\9\u\e\8\d\z\i\k\r\l\n\b\w\r\l\e\i\h\0\k\4\4\w\q\6\g\8\v\t\i\5\s\1\h\y\d\i\m ]] 00:09:02.319 15:13:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:02.319 15:13:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:02.319 [2024-04-24 15:13:11.384043] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:02.319 [2024-04-24 15:13:11.384134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63332 ] 00:09:02.319 [2024-04-24 15:13:11.520360] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.578 [2024-04-24 15:13:11.641093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.837  Copying: 512/512 [B] (average 166 kBps) 00:09:02.837 00:09:02.837 15:13:11 -- dd/posix.sh@93 -- # [[ 1qvv916ubi6ao2po91b2zmlblzlg4odj4pi6vllyhzfvk7rthsrs7rqq5y0zonrudntcpbdchii5qz5357uqvap6qaf2tzaqch3a7tks58iwznawy7fey76wnz4kx79peyqd9evllto38ehnuri0c20rbik6m0eq6zdpr40minrenm09cb3chuw6246tuy39n8sh8ipzeejzwbnr141em46nswbfis3ol8xy3f5nlqypwf06ouezalmfow6ja0gdn3v2ppcgyeajr8fs03s23lvz580s3xxyka7xk1jfkhxgkoakorlmgdogm8lnbtifl0d7ldpcysm5i07gbagesu02lygecmtl6rlx2725t3tz3imik4rysmjd292knn4kzhbr729jfe92vo1kmvdrzexk4wcyo3dv6q04safa787kvh0vlmc86ttyx5cs1idpfksys4eetnkdcyqs1th0kvxcty9ue8dzikrlnbwrleih0k44wq6g8vti5s1hydim == \1\q\v\v\9\1\6\u\b\i\6\a\o\2\p\o\9\1\b\2\z\m\l\b\l\z\l\g\4\o\d\j\4\p\i\6\v\l\l\y\h\z\f\v\k\7\r\t\h\s\r\s\7\r\q\q\5\y\0\z\o\n\r\u\d\n\t\c\p\b\d\c\h\i\i\5\q\z\5\3\5\7\u\q\v\a\p\6\q\a\f\2\t\z\a\q\c\h\3\a\7\t\k\s\5\8\i\w\z\n\a\w\y\7\f\e\y\7\6\w\n\z\4\k\x\7\9\p\e\y\q\d\9\e\v\l\l\t\o\3\8\e\h\n\u\r\i\0\c\2\0\r\b\i\k\6\m\0\e\q\6\z\d\p\r\4\0\m\i\n\r\e\n\m\0\9\c\b\3\c\h\u\w\6\2\4\6\t\u\y\3\9\n\8\s\h\8\i\p\z\e\e\j\z\w\b\n\r\1\4\1\e\m\4\6\n\s\w\b\f\i\s\3\o\l\8\x\y\3\f\5\n\l\q\y\p\w\f\0\6\o\u\e\z\a\l\m\f\o\w\6\j\a\0\g\d\n\3\v\2\p\p\c\g\y\e\a\j\r\8\f\s\0\3\s\2\3\l\v\z\5\8\0\s\3\x\x\y\k\a\7\x\k\1\j\f\k\h\x\g\k\o\a\k\o\r\l\m\g\d\o\g\m\8\l\n\b\t\i\f\l\0\d\7\l\d\p\c\y\s\m\5\i\0\7\g\b\a\g\e\s\u\0\2\l\y\g\e\c\m\t\l\6\r\l\x\2\7\2\5\t\3\t\z\3\i\m\i\k\4\r\y\s\m\j\d\2\9\2\k\n\n\4\k\z\h\b\r\7\2\9\j\f\e\9\2\v\o\1\k\m\v\d\r\z\e\x\k\4\w\c\y\o\3\d\v\6\q\0\4\s\a\f\a\7\8\7\k\v\h\0\v\l\m\c\8\6\t\t\y\x\5\c\s\1\i\d\p\f\k\s\y\s\4\e\e\t\n\k\d\c\y\q\s\1\t\h\0\k\v\x\c\t\y\9\u\e\8\d\z\i\k\r\l\n\b\w\r\l\e\i\h\0\k\4\4\w\q\6\g\8\v\t\i\5\s\1\h\y\d\i\m ]] 00:09:02.837 15:13:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:02.837 15:13:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:02.837 [2024-04-24 15:13:12.046601] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:02.837 [2024-04-24 15:13:12.046699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63347 ] 00:09:03.096 [2024-04-24 15:13:12.184353] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.096 [2024-04-24 15:13:12.305208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.613  Copying: 512/512 [B] (average 500 kBps) 00:09:03.613 00:09:03.613 ************************************ 00:09:03.613 END TEST dd_flags_misc 00:09:03.613 ************************************ 00:09:03.613 15:13:12 -- dd/posix.sh@93 -- # [[ 1qvv916ubi6ao2po91b2zmlblzlg4odj4pi6vllyhzfvk7rthsrs7rqq5y0zonrudntcpbdchii5qz5357uqvap6qaf2tzaqch3a7tks58iwznawy7fey76wnz4kx79peyqd9evllto38ehnuri0c20rbik6m0eq6zdpr40minrenm09cb3chuw6246tuy39n8sh8ipzeejzwbnr141em46nswbfis3ol8xy3f5nlqypwf06ouezalmfow6ja0gdn3v2ppcgyeajr8fs03s23lvz580s3xxyka7xk1jfkhxgkoakorlmgdogm8lnbtifl0d7ldpcysm5i07gbagesu02lygecmtl6rlx2725t3tz3imik4rysmjd292knn4kzhbr729jfe92vo1kmvdrzexk4wcyo3dv6q04safa787kvh0vlmc86ttyx5cs1idpfksys4eetnkdcyqs1th0kvxcty9ue8dzikrlnbwrleih0k44wq6g8vti5s1hydim == \1\q\v\v\9\1\6\u\b\i\6\a\o\2\p\o\9\1\b\2\z\m\l\b\l\z\l\g\4\o\d\j\4\p\i\6\v\l\l\y\h\z\f\v\k\7\r\t\h\s\r\s\7\r\q\q\5\y\0\z\o\n\r\u\d\n\t\c\p\b\d\c\h\i\i\5\q\z\5\3\5\7\u\q\v\a\p\6\q\a\f\2\t\z\a\q\c\h\3\a\7\t\k\s\5\8\i\w\z\n\a\w\y\7\f\e\y\7\6\w\n\z\4\k\x\7\9\p\e\y\q\d\9\e\v\l\l\t\o\3\8\e\h\n\u\r\i\0\c\2\0\r\b\i\k\6\m\0\e\q\6\z\d\p\r\4\0\m\i\n\r\e\n\m\0\9\c\b\3\c\h\u\w\6\2\4\6\t\u\y\3\9\n\8\s\h\8\i\p\z\e\e\j\z\w\b\n\r\1\4\1\e\m\4\6\n\s\w\b\f\i\s\3\o\l\8\x\y\3\f\5\n\l\q\y\p\w\f\0\6\o\u\e\z\a\l\m\f\o\w\6\j\a\0\g\d\n\3\v\2\p\p\c\g\y\e\a\j\r\8\f\s\0\3\s\2\3\l\v\z\5\8\0\s\3\x\x\y\k\a\7\x\k\1\j\f\k\h\x\g\k\o\a\k\o\r\l\m\g\d\o\g\m\8\l\n\b\t\i\f\l\0\d\7\l\d\p\c\y\s\m\5\i\0\7\g\b\a\g\e\s\u\0\2\l\y\g\e\c\m\t\l\6\r\l\x\2\7\2\5\t\3\t\z\3\i\m\i\k\4\r\y\s\m\j\d\2\9\2\k\n\n\4\k\z\h\b\r\7\2\9\j\f\e\9\2\v\o\1\k\m\v\d\r\z\e\x\k\4\w\c\y\o\3\d\v\6\q\0\4\s\a\f\a\7\8\7\k\v\h\0\v\l\m\c\8\6\t\t\y\x\5\c\s\1\i\d\p\f\k\s\y\s\4\e\e\t\n\k\d\c\y\q\s\1\t\h\0\k\v\x\c\t\y\9\u\e\8\d\z\i\k\r\l\n\b\w\r\l\e\i\h\0\k\4\4\w\q\6\g\8\v\t\i\5\s\1\h\y\d\i\m ]] 00:09:03.613 00:09:03.613 real 0m5.208s 00:09:03.613 user 0m3.155s 00:09:03.613 sys 0m2.289s 00:09:03.613 15:13:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:03.613 15:13:12 -- common/autotest_common.sh@10 -- # set +x 00:09:03.613 15:13:12 -- dd/posix.sh@131 -- # tests_forced_aio 00:09:03.613 15:13:12 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:09:03.613 * Second test run, disabling liburing, forcing AIO 00:09:03.613 15:13:12 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:09:03.613 15:13:12 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:09:03.613 15:13:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:03.613 15:13:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.613 15:13:12 -- common/autotest_common.sh@10 -- # set +x 00:09:03.613 ************************************ 00:09:03.613 START TEST dd_flag_append_forced_aio 00:09:03.613 ************************************ 00:09:03.613 15:13:12 -- common/autotest_common.sh@1111 -- # append 00:09:03.613 15:13:12 -- dd/posix.sh@16 -- # local dump0 00:09:03.613 15:13:12 -- dd/posix.sh@17 -- # local dump1 00:09:03.613 15:13:12 -- dd/posix.sh@19 -- # gen_bytes 32 00:09:03.614 15:13:12 -- 
dd/common.sh@98 -- # xtrace_disable 00:09:03.614 15:13:12 -- common/autotest_common.sh@10 -- # set +x 00:09:03.614 15:13:12 -- dd/posix.sh@19 -- # dump0=p26hxgr661aogu1uk7cqewsjy2xrg2fy 00:09:03.614 15:13:12 -- dd/posix.sh@20 -- # gen_bytes 32 00:09:03.614 15:13:12 -- dd/common.sh@98 -- # xtrace_disable 00:09:03.614 15:13:12 -- common/autotest_common.sh@10 -- # set +x 00:09:03.614 15:13:12 -- dd/posix.sh@20 -- # dump1=xv6qq6umf7tqbhzfjnbsy8dv1b1uzjgh 00:09:03.614 15:13:12 -- dd/posix.sh@22 -- # printf %s p26hxgr661aogu1uk7cqewsjy2xrg2fy 00:09:03.614 15:13:12 -- dd/posix.sh@23 -- # printf %s xv6qq6umf7tqbhzfjnbsy8dv1b1uzjgh 00:09:03.614 15:13:12 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:09:03.614 [2024-04-24 15:13:12.850613] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:03.614 [2024-04-24 15:13:12.850721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63385 ] 00:09:03.872 [2024-04-24 15:13:12.995127] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.131 [2024-04-24 15:13:13.143161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.390  Copying: 32/32 [B] (average 31 kBps) 00:09:04.390 00:09:04.390 ************************************ 00:09:04.390 END TEST dd_flag_append_forced_aio 00:09:04.390 ************************************ 00:09:04.390 15:13:13 -- dd/posix.sh@27 -- # [[ xv6qq6umf7tqbhzfjnbsy8dv1b1uzjghp26hxgr661aogu1uk7cqewsjy2xrg2fy == \x\v\6\q\q\6\u\m\f\7\t\q\b\h\z\f\j\n\b\s\y\8\d\v\1\b\1\u\z\j\g\h\p\2\6\h\x\g\r\6\6\1\a\o\g\u\1\u\k\7\c\q\e\w\s\j\y\2\x\r\g\2\f\y ]] 00:09:04.390 00:09:04.390 real 0m0.748s 00:09:04.390 user 0m0.441s 00:09:04.390 sys 0m0.178s 00:09:04.390 15:13:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:04.390 15:13:13 -- common/autotest_common.sh@10 -- # set +x 00:09:04.390 15:13:13 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:09:04.390 15:13:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:04.390 15:13:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:04.390 15:13:13 -- common/autotest_common.sh@10 -- # set +x 00:09:04.648 ************************************ 00:09:04.648 START TEST dd_flag_directory_forced_aio 00:09:04.648 ************************************ 00:09:04.648 15:13:13 -- common/autotest_common.sh@1111 -- # directory 00:09:04.648 15:13:13 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:04.648 15:13:13 -- common/autotest_common.sh@638 -- # local es=0 00:09:04.649 15:13:13 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:04.649 15:13:13 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.649 15:13:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:04.649 15:13:13 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:09:04.649 15:13:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:04.649 15:13:13 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.649 15:13:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:04.649 15:13:13 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.649 15:13:13 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:04.649 15:13:13 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:04.649 [2024-04-24 15:13:13.699738] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:04.649 [2024-04-24 15:13:13.699859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63417 ] 00:09:04.649 [2024-04-24 15:13:13.837223] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.907 [2024-04-24 15:13:13.968456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.907 [2024-04-24 15:13:14.062392] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:04.907 [2024-04-24 15:13:14.062469] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:04.907 [2024-04-24 15:13:14.062490] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:05.165 [2024-04-24 15:13:14.178144] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:05.165 15:13:14 -- common/autotest_common.sh@641 -- # es=236 00:09:05.165 15:13:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:05.165 15:13:14 -- common/autotest_common.sh@650 -- # es=108 00:09:05.165 15:13:14 -- common/autotest_common.sh@651 -- # case "$es" in 00:09:05.165 15:13:14 -- common/autotest_common.sh@658 -- # es=1 00:09:05.165 15:13:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:05.165 15:13:14 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:05.165 15:13:14 -- common/autotest_common.sh@638 -- # local es=0 00:09:05.165 15:13:14 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:05.165 15:13:14 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.165 15:13:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:05.165 15:13:14 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.165 15:13:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:05.165 15:13:14 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.165 15:13:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:05.165 15:13:14 -- common/autotest_common.sh@632 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.165 15:13:14 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:05.165 15:13:14 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:05.165 [2024-04-24 15:13:14.355990] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:05.165 [2024-04-24 15:13:14.356087] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63426 ] 00:09:05.423 [2024-04-24 15:13:14.491802] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.423 [2024-04-24 15:13:14.611468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.682 [2024-04-24 15:13:14.703629] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:05.682 [2024-04-24 15:13:14.703682] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:05.682 [2024-04-24 15:13:14.703701] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:05.682 [2024-04-24 15:13:14.823114] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:05.940 15:13:14 -- common/autotest_common.sh@641 -- # es=236 00:09:05.940 15:13:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:05.940 15:13:14 -- common/autotest_common.sh@650 -- # es=108 00:09:05.940 15:13:14 -- common/autotest_common.sh@651 -- # case "$es" in 00:09:05.940 15:13:14 -- common/autotest_common.sh@658 -- # es=1 00:09:05.940 15:13:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:05.940 00:09:05.940 real 0m1.309s 00:09:05.940 user 0m0.800s 00:09:05.940 sys 0m0.298s 00:09:05.940 ************************************ 00:09:05.940 END TEST dd_flag_directory_forced_aio 00:09:05.940 ************************************ 00:09:05.940 15:13:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:05.940 15:13:14 -- common/autotest_common.sh@10 -- # set +x 00:09:05.940 15:13:14 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:09:05.940 15:13:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:05.940 15:13:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.941 15:13:14 -- common/autotest_common.sh@10 -- # set +x 00:09:05.941 ************************************ 00:09:05.941 START TEST dd_flag_nofollow_forced_aio 00:09:05.941 ************************************ 00:09:05.941 15:13:15 -- common/autotest_common.sh@1111 -- # nofollow 00:09:05.941 15:13:15 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:05.941 15:13:15 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:05.941 15:13:15 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:05.941 15:13:15 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:05.941 15:13:15 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:05.941 15:13:15 -- common/autotest_common.sh@638 -- # local es=0 00:09:05.941 15:13:15 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:05.941 15:13:15 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.941 15:13:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:05.941 15:13:15 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.941 15:13:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:05.941 15:13:15 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.941 15:13:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:05.941 15:13:15 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.941 15:13:15 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:05.941 15:13:15 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:05.941 [2024-04-24 15:13:15.112006] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:05.941 [2024-04-24 15:13:15.112116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63465 ] 00:09:06.199 [2024-04-24 15:13:15.250314] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.199 [2024-04-24 15:13:15.385177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.457 [2024-04-24 15:13:15.487949] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:06.457 [2024-04-24 15:13:15.488044] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:06.457 [2024-04-24 15:13:15.488083] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:06.457 [2024-04-24 15:13:15.612330] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:06.716 15:13:15 -- common/autotest_common.sh@641 -- # es=216 00:09:06.716 15:13:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:06.716 15:13:15 -- common/autotest_common.sh@650 -- # es=88 00:09:06.716 15:13:15 -- common/autotest_common.sh@651 -- # case "$es" in 00:09:06.716 15:13:15 -- common/autotest_common.sh@658 -- # es=1 00:09:06.716 15:13:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:06.716 15:13:15 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:06.716 15:13:15 -- common/autotest_common.sh@638 -- # local es=0 00:09:06.716 15:13:15 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:06.716 15:13:15 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.716 15:13:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:06.716 15:13:15 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.716 15:13:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:06.716 15:13:15 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.716 15:13:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:06.716 15:13:15 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.716 15:13:15 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:06.716 15:13:15 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:06.716 [2024-04-24 15:13:15.793016] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:06.716 [2024-04-24 15:13:15.793113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63475 ] 00:09:06.716 [2024-04-24 15:13:15.925905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.975 [2024-04-24 15:13:16.050846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.975 [2024-04-24 15:13:16.148247] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:06.975 [2024-04-24 15:13:16.148309] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:06.975 [2024-04-24 15:13:16.148329] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:07.232 [2024-04-24 15:13:16.274002] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:07.232 15:13:16 -- common/autotest_common.sh@641 -- # es=216 00:09:07.232 15:13:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:07.232 15:13:16 -- common/autotest_common.sh@650 -- # es=88 00:09:07.232 15:13:16 -- common/autotest_common.sh@651 -- # case "$es" in 00:09:07.232 15:13:16 -- common/autotest_common.sh@658 -- # es=1 00:09:07.232 15:13:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:07.232 15:13:16 -- dd/posix.sh@46 -- # gen_bytes 512 00:09:07.232 15:13:16 -- dd/common.sh@98 -- # xtrace_disable 00:09:07.232 15:13:16 -- common/autotest_common.sh@10 -- # set +x 00:09:07.232 15:13:16 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:07.232 [2024-04-24 15:13:16.452512] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:07.232 [2024-04-24 15:13:16.452594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63482 ] 00:09:07.489 [2024-04-24 15:13:16.585758] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.489 [2024-04-24 15:13:16.706037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.037  Copying: 512/512 [B] (average 500 kBps) 00:09:08.037 00:09:08.037 15:13:17 -- dd/posix.sh@49 -- # [[ 8lfd3es3uc2cwvh0lwonz4opsztvjolau7swrcxiscc82ucuo0lzixoqdf7fw25q1q9qqm0e68pv0kwjow0hf9tetzt7at6p8fojpqnn71ijh2d1w8wvvqv7z7kx36u72f799j7qw7s0rwjtio4xbp8nxmqfpq9tiwyfaxug45f73ek7zroaobh35m2hx0ogx3xcwusf0xxhw4e99kkblmnu4b6554h6zoq9b4fc04r22iu02fvum0ycz4swxg0gwt0cnvypseplzyt38da02ubynwiggohhg6mgyb95u127dxm507shd3tb9sand86uak7mey1td7o6oqc4h0wqvlzo6yzvvqc4nak3ur1dcteh13gs4angnyukylafu1xrjc397y3jpht2cnt7s8qmkr09qwanun6vctoxl9hhgba427v3mgdj9rbl8bvwwybfqyop79yim1xtfdppg1u8x2brkhl21b4pqefha8u3ip30zzpem2762ckffxy7li6r == \8\l\f\d\3\e\s\3\u\c\2\c\w\v\h\0\l\w\o\n\z\4\o\p\s\z\t\v\j\o\l\a\u\7\s\w\r\c\x\i\s\c\c\8\2\u\c\u\o\0\l\z\i\x\o\q\d\f\7\f\w\2\5\q\1\q\9\q\q\m\0\e\6\8\p\v\0\k\w\j\o\w\0\h\f\9\t\e\t\z\t\7\a\t\6\p\8\f\o\j\p\q\n\n\7\1\i\j\h\2\d\1\w\8\w\v\v\q\v\7\z\7\k\x\3\6\u\7\2\f\7\9\9\j\7\q\w\7\s\0\r\w\j\t\i\o\4\x\b\p\8\n\x\m\q\f\p\q\9\t\i\w\y\f\a\x\u\g\4\5\f\7\3\e\k\7\z\r\o\a\o\b\h\3\5\m\2\h\x\0\o\g\x\3\x\c\w\u\s\f\0\x\x\h\w\4\e\9\9\k\k\b\l\m\n\u\4\b\6\5\5\4\h\6\z\o\q\9\b\4\f\c\0\4\r\2\2\i\u\0\2\f\v\u\m\0\y\c\z\4\s\w\x\g\0\g\w\t\0\c\n\v\y\p\s\e\p\l\z\y\t\3\8\d\a\0\2\u\b\y\n\w\i\g\g\o\h\h\g\6\m\g\y\b\9\5\u\1\2\7\d\x\m\5\0\7\s\h\d\3\t\b\9\s\a\n\d\8\6\u\a\k\7\m\e\y\1\t\d\7\o\6\o\q\c\4\h\0\w\q\v\l\z\o\6\y\z\v\v\q\c\4\n\a\k\3\u\r\1\d\c\t\e\h\1\3\g\s\4\a\n\g\n\y\u\k\y\l\a\f\u\1\x\r\j\c\3\9\7\y\3\j\p\h\t\2\c\n\t\7\s\8\q\m\k\r\0\9\q\w\a\n\u\n\6\v\c\t\o\x\l\9\h\h\g\b\a\4\2\7\v\3\m\g\d\j\9\r\b\l\8\b\v\w\w\y\b\f\q\y\o\p\7\9\y\i\m\1\x\t\f\d\p\p\g\1\u\8\x\2\b\r\k\h\l\2\1\b\4\p\q\e\f\h\a\8\u\3\i\p\3\0\z\z\p\e\m\2\7\6\2\c\k\f\f\x\y\7\l\i\6\r ]] 00:09:08.037 00:09:08.037 real 0m2.027s 00:09:08.037 user 0m1.253s 00:09:08.037 sys 0m0.435s 00:09:08.037 15:13:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:08.037 ************************************ 00:09:08.037 15:13:17 -- common/autotest_common.sh@10 -- # set +x 00:09:08.037 END TEST dd_flag_nofollow_forced_aio 00:09:08.037 ************************************ 00:09:08.037 15:13:17 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:09:08.037 15:13:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:08.037 15:13:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:08.037 15:13:17 -- common/autotest_common.sh@10 -- # set +x 00:09:08.037 ************************************ 00:09:08.037 START TEST dd_flag_noatime_forced_aio 00:09:08.037 ************************************ 00:09:08.037 15:13:17 -- common/autotest_common.sh@1111 -- # noatime 00:09:08.037 15:13:17 -- dd/posix.sh@53 -- # local atime_if 00:09:08.037 15:13:17 -- dd/posix.sh@54 -- # local atime_of 00:09:08.037 15:13:17 -- dd/posix.sh@58 -- # gen_bytes 512 00:09:08.037 15:13:17 -- dd/common.sh@98 -- # xtrace_disable 00:09:08.037 15:13:17 -- common/autotest_common.sh@10 -- # set +x 00:09:08.037 15:13:17 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:08.037 15:13:17 -- dd/posix.sh@60 -- # atime_if=1713971596 
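The two negative tests traced above (dd_flag_directory_forced_aio and dd_flag_nofollow_forced_aio) both verify that spdk_dd refuses the open outright instead of silently ignoring the flag. A hedged stand-alone sketch of the same checks, with local placeholder file names:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
printf %s 'data' > dd.dump0
: > dd.dump1                                              # make sure the output file exists
# --iflag=directory on a regular file must fail with "Not a directory"
! "$SPDK_DD" --aio --if=dd.dump0 --iflag=directory --of=dd.dump0 && echo 'directory flag rejected a regular file'
# --iflag=nofollow on a symlink must fail with "Too many levels of symbolic links"
ln -fs dd.dump0 dd.dump0.link
! "$SPDK_DD" --aio --if=dd.dump0.link --iflag=nofollow --of=dd.dump1 && echo 'nofollow refused the symlink'
# without nofollow the same copy through the link goes through
"$SPDK_DD" --aio --if=dd.dump0.link --of=dd.dump1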
00:09:08.037 15:13:17 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:08.037 15:13:17 -- dd/posix.sh@61 -- # atime_of=1713971597 00:09:08.037 15:13:17 -- dd/posix.sh@66 -- # sleep 1 00:09:09.415 15:13:18 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:09.415 [2024-04-24 15:13:18.293787] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:09.415 [2024-04-24 15:13:18.293934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63532 ] 00:09:09.415 [2024-04-24 15:13:18.434354] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.415 [2024-04-24 15:13:18.572139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.739  Copying: 512/512 [B] (average 500 kBps) 00:09:09.739 00:09:09.739 15:13:18 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:09.739 15:13:18 -- dd/posix.sh@69 -- # (( atime_if == 1713971596 )) 00:09:09.739 15:13:18 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:09.739 15:13:18 -- dd/posix.sh@70 -- # (( atime_of == 1713971597 )) 00:09:09.739 15:13:18 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:09.997 [2024-04-24 15:13:18.995730] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
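The dd_flag_noatime_forced_aio steps above capture the source file's access time with stat before and after the copy. A minimal sketch of the same assertion follows (placeholder file names; whether a copy without noatime actually bumps atime can also depend on mount options such as relatime, which is why the real test re-checks after a second, non-noatime copy):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
printf %s 'data' > dd.dump0
: > dd.dump1
atime_before=$(stat --printf=%X dd.dump0)
sleep 1                                                   # make sure a bumped atime would be visible
"$SPDK_DD" --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
atime_after=$(stat --printf=%X dd.dump0)
(( atime_before == atime_after )) && echo 'noatime left the source access time untouched'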
00:09:09.997 [2024-04-24 15:13:18.995875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63544 ] 00:09:09.997 [2024-04-24 15:13:19.137887] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.256 [2024-04-24 15:13:19.260170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.515  Copying: 512/512 [B] (average 500 kBps) 00:09:10.515 00:09:10.515 15:13:19 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:10.515 15:13:19 -- dd/posix.sh@73 -- # (( atime_if < 1713971599 )) 00:09:10.515 00:09:10.515 real 0m2.412s 00:09:10.515 user 0m0.828s 00:09:10.515 sys 0m0.335s 00:09:10.515 15:13:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:10.515 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:09:10.515 ************************************ 00:09:10.515 END TEST dd_flag_noatime_forced_aio 00:09:10.515 ************************************ 00:09:10.515 15:13:19 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:09:10.515 15:13:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:10.515 15:13:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.515 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:09:10.515 ************************************ 00:09:10.515 START TEST dd_flags_misc_forced_aio 00:09:10.515 ************************************ 00:09:10.515 15:13:19 -- common/autotest_common.sh@1111 -- # io 00:09:10.515 15:13:19 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:09:10.515 15:13:19 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:09:10.515 15:13:19 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:09:10.515 15:13:19 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:10.515 15:13:19 -- dd/posix.sh@86 -- # gen_bytes 512 00:09:10.515 15:13:19 -- dd/common.sh@98 -- # xtrace_disable 00:09:10.515 15:13:19 -- common/autotest_common.sh@10 -- # set +x 00:09:10.515 15:13:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:10.515 15:13:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:10.774 [2024-04-24 15:13:19.780847] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
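dd_flags_misc_forced_aio, whose iterations are traced below, drives every read-flag/write-flag combination from the two arrays declared at posix.sh@81 and @82. The pattern, as a hedged stand-alone sketch (a random payload and cmp stand in for the test's gen_bytes/string comparison):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
head -c 512 /dev/urandom > dd.dump0        # stand-in for gen_bytes 512
: > dd.dump1
flags_ro=(direct nonblock)                 # flags exercised on the input side
flags_rw=("${flags_ro[@]}" sync dsync)     # output side additionally gets sync and dsync
for flag_ro in "${flags_ro[@]}"; do
  for flag_rw in "${flags_rw[@]}"; do
    "$SPDK_DD" --aio --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
    cmp -s dd.dump0 dd.dump1 || echo "mismatch for $flag_ro -> $flag_rw"
  done
done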
00:09:10.774 [2024-04-24 15:13:19.780932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63580 ] 00:09:10.774 [2024-04-24 15:13:19.910961] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.034 [2024-04-24 15:13:20.036532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.292  Copying: 512/512 [B] (average 500 kBps) 00:09:11.292 00:09:11.292 15:13:20 -- dd/posix.sh@93 -- # [[ u3egg7832j1ud8gtguuys4tcll8wfozawqsmx3s7j7rlbvuna65rgu1ehy7kxhuacyqrmnlmjz55kld0uh7m8oop3g4j4sbjl9zlvbprmfs0sr325akk6naxnfxnqadvu19uql3u3igpmznmce3hfctxjyhpalcz4kju9f92i01a8tr232szfx7ub4sdtzviee5bhcip2q1z2br10ajutkqso1h7ojoi818cbz5v08xqr8zedmzs9abonkgan42asc8oubhghbq29zmro6hqz7e3j4xwuwvelznzaecwoq1p3lx52jk0z4adkizfl9q7x5dnypz4s3ryvcu9m3fe0shj8nxl0wovv7bmkauc3vbxa9b876uyelf9f4axcwfcx4q992ega9zos4wb9uvzgsu6osxjo41xbnbjnbvak201vlb4dxth5s70334p6189az6utxypvq3hsaef96rfq4o5x17wqal8cjv8ihesapyjlekkls63g3yfkamvgxkq == \u\3\e\g\g\7\8\3\2\j\1\u\d\8\g\t\g\u\u\y\s\4\t\c\l\l\8\w\f\o\z\a\w\q\s\m\x\3\s\7\j\7\r\l\b\v\u\n\a\6\5\r\g\u\1\e\h\y\7\k\x\h\u\a\c\y\q\r\m\n\l\m\j\z\5\5\k\l\d\0\u\h\7\m\8\o\o\p\3\g\4\j\4\s\b\j\l\9\z\l\v\b\p\r\m\f\s\0\s\r\3\2\5\a\k\k\6\n\a\x\n\f\x\n\q\a\d\v\u\1\9\u\q\l\3\u\3\i\g\p\m\z\n\m\c\e\3\h\f\c\t\x\j\y\h\p\a\l\c\z\4\k\j\u\9\f\9\2\i\0\1\a\8\t\r\2\3\2\s\z\f\x\7\u\b\4\s\d\t\z\v\i\e\e\5\b\h\c\i\p\2\q\1\z\2\b\r\1\0\a\j\u\t\k\q\s\o\1\h\7\o\j\o\i\8\1\8\c\b\z\5\v\0\8\x\q\r\8\z\e\d\m\z\s\9\a\b\o\n\k\g\a\n\4\2\a\s\c\8\o\u\b\h\g\h\b\q\2\9\z\m\r\o\6\h\q\z\7\e\3\j\4\x\w\u\w\v\e\l\z\n\z\a\e\c\w\o\q\1\p\3\l\x\5\2\j\k\0\z\4\a\d\k\i\z\f\l\9\q\7\x\5\d\n\y\p\z\4\s\3\r\y\v\c\u\9\m\3\f\e\0\s\h\j\8\n\x\l\0\w\o\v\v\7\b\m\k\a\u\c\3\v\b\x\a\9\b\8\7\6\u\y\e\l\f\9\f\4\a\x\c\w\f\c\x\4\q\9\9\2\e\g\a\9\z\o\s\4\w\b\9\u\v\z\g\s\u\6\o\s\x\j\o\4\1\x\b\n\b\j\n\b\v\a\k\2\0\1\v\l\b\4\d\x\t\h\5\s\7\0\3\3\4\p\6\1\8\9\a\z\6\u\t\x\y\p\v\q\3\h\s\a\e\f\9\6\r\f\q\4\o\5\x\1\7\w\q\a\l\8\c\j\v\8\i\h\e\s\a\p\y\j\l\e\k\k\l\s\6\3\g\3\y\f\k\a\m\v\g\x\k\q ]] 00:09:11.292 15:13:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:11.292 15:13:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:11.292 [2024-04-24 15:13:20.440847] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:11.292 [2024-04-24 15:13:20.440936] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63587 ] 00:09:11.551 [2024-04-24 15:13:20.574505] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.551 [2024-04-24 15:13:20.691045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.810  Copying: 512/512 [B] (average 500 kBps) 00:09:11.810 00:09:11.810 15:13:21 -- dd/posix.sh@93 -- # [[ u3egg7832j1ud8gtguuys4tcll8wfozawqsmx3s7j7rlbvuna65rgu1ehy7kxhuacyqrmnlmjz55kld0uh7m8oop3g4j4sbjl9zlvbprmfs0sr325akk6naxnfxnqadvu19uql3u3igpmznmce3hfctxjyhpalcz4kju9f92i01a8tr232szfx7ub4sdtzviee5bhcip2q1z2br10ajutkqso1h7ojoi818cbz5v08xqr8zedmzs9abonkgan42asc8oubhghbq29zmro6hqz7e3j4xwuwvelznzaecwoq1p3lx52jk0z4adkizfl9q7x5dnypz4s3ryvcu9m3fe0shj8nxl0wovv7bmkauc3vbxa9b876uyelf9f4axcwfcx4q992ega9zos4wb9uvzgsu6osxjo41xbnbjnbvak201vlb4dxth5s70334p6189az6utxypvq3hsaef96rfq4o5x17wqal8cjv8ihesapyjlekkls63g3yfkamvgxkq == \u\3\e\g\g\7\8\3\2\j\1\u\d\8\g\t\g\u\u\y\s\4\t\c\l\l\8\w\f\o\z\a\w\q\s\m\x\3\s\7\j\7\r\l\b\v\u\n\a\6\5\r\g\u\1\e\h\y\7\k\x\h\u\a\c\y\q\r\m\n\l\m\j\z\5\5\k\l\d\0\u\h\7\m\8\o\o\p\3\g\4\j\4\s\b\j\l\9\z\l\v\b\p\r\m\f\s\0\s\r\3\2\5\a\k\k\6\n\a\x\n\f\x\n\q\a\d\v\u\1\9\u\q\l\3\u\3\i\g\p\m\z\n\m\c\e\3\h\f\c\t\x\j\y\h\p\a\l\c\z\4\k\j\u\9\f\9\2\i\0\1\a\8\t\r\2\3\2\s\z\f\x\7\u\b\4\s\d\t\z\v\i\e\e\5\b\h\c\i\p\2\q\1\z\2\b\r\1\0\a\j\u\t\k\q\s\o\1\h\7\o\j\o\i\8\1\8\c\b\z\5\v\0\8\x\q\r\8\z\e\d\m\z\s\9\a\b\o\n\k\g\a\n\4\2\a\s\c\8\o\u\b\h\g\h\b\q\2\9\z\m\r\o\6\h\q\z\7\e\3\j\4\x\w\u\w\v\e\l\z\n\z\a\e\c\w\o\q\1\p\3\l\x\5\2\j\k\0\z\4\a\d\k\i\z\f\l\9\q\7\x\5\d\n\y\p\z\4\s\3\r\y\v\c\u\9\m\3\f\e\0\s\h\j\8\n\x\l\0\w\o\v\v\7\b\m\k\a\u\c\3\v\b\x\a\9\b\8\7\6\u\y\e\l\f\9\f\4\a\x\c\w\f\c\x\4\q\9\9\2\e\g\a\9\z\o\s\4\w\b\9\u\v\z\g\s\u\6\o\s\x\j\o\4\1\x\b\n\b\j\n\b\v\a\k\2\0\1\v\l\b\4\d\x\t\h\5\s\7\0\3\3\4\p\6\1\8\9\a\z\6\u\t\x\y\p\v\q\3\h\s\a\e\f\9\6\r\f\q\4\o\5\x\1\7\w\q\a\l\8\c\j\v\8\i\h\e\s\a\p\y\j\l\e\k\k\l\s\6\3\g\3\y\f\k\a\m\v\g\x\k\q ]] 00:09:11.810 15:13:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:11.810 15:13:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:12.068 [2024-04-24 15:13:21.105119] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:12.068 [2024-04-24 15:13:21.105246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63595 ] 00:09:12.068 [2024-04-24 15:13:21.245362] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.326 [2024-04-24 15:13:21.359937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.585  Copying: 512/512 [B] (average 166 kBps) 00:09:12.585 00:09:12.585 15:13:21 -- dd/posix.sh@93 -- # [[ u3egg7832j1ud8gtguuys4tcll8wfozawqsmx3s7j7rlbvuna65rgu1ehy7kxhuacyqrmnlmjz55kld0uh7m8oop3g4j4sbjl9zlvbprmfs0sr325akk6naxnfxnqadvu19uql3u3igpmznmce3hfctxjyhpalcz4kju9f92i01a8tr232szfx7ub4sdtzviee5bhcip2q1z2br10ajutkqso1h7ojoi818cbz5v08xqr8zedmzs9abonkgan42asc8oubhghbq29zmro6hqz7e3j4xwuwvelznzaecwoq1p3lx52jk0z4adkizfl9q7x5dnypz4s3ryvcu9m3fe0shj8nxl0wovv7bmkauc3vbxa9b876uyelf9f4axcwfcx4q992ega9zos4wb9uvzgsu6osxjo41xbnbjnbvak201vlb4dxth5s70334p6189az6utxypvq3hsaef96rfq4o5x17wqal8cjv8ihesapyjlekkls63g3yfkamvgxkq == \u\3\e\g\g\7\8\3\2\j\1\u\d\8\g\t\g\u\u\y\s\4\t\c\l\l\8\w\f\o\z\a\w\q\s\m\x\3\s\7\j\7\r\l\b\v\u\n\a\6\5\r\g\u\1\e\h\y\7\k\x\h\u\a\c\y\q\r\m\n\l\m\j\z\5\5\k\l\d\0\u\h\7\m\8\o\o\p\3\g\4\j\4\s\b\j\l\9\z\l\v\b\p\r\m\f\s\0\s\r\3\2\5\a\k\k\6\n\a\x\n\f\x\n\q\a\d\v\u\1\9\u\q\l\3\u\3\i\g\p\m\z\n\m\c\e\3\h\f\c\t\x\j\y\h\p\a\l\c\z\4\k\j\u\9\f\9\2\i\0\1\a\8\t\r\2\3\2\s\z\f\x\7\u\b\4\s\d\t\z\v\i\e\e\5\b\h\c\i\p\2\q\1\z\2\b\r\1\0\a\j\u\t\k\q\s\o\1\h\7\o\j\o\i\8\1\8\c\b\z\5\v\0\8\x\q\r\8\z\e\d\m\z\s\9\a\b\o\n\k\g\a\n\4\2\a\s\c\8\o\u\b\h\g\h\b\q\2\9\z\m\r\o\6\h\q\z\7\e\3\j\4\x\w\u\w\v\e\l\z\n\z\a\e\c\w\o\q\1\p\3\l\x\5\2\j\k\0\z\4\a\d\k\i\z\f\l\9\q\7\x\5\d\n\y\p\z\4\s\3\r\y\v\c\u\9\m\3\f\e\0\s\h\j\8\n\x\l\0\w\o\v\v\7\b\m\k\a\u\c\3\v\b\x\a\9\b\8\7\6\u\y\e\l\f\9\f\4\a\x\c\w\f\c\x\4\q\9\9\2\e\g\a\9\z\o\s\4\w\b\9\u\v\z\g\s\u\6\o\s\x\j\o\4\1\x\b\n\b\j\n\b\v\a\k\2\0\1\v\l\b\4\d\x\t\h\5\s\7\0\3\3\4\p\6\1\8\9\a\z\6\u\t\x\y\p\v\q\3\h\s\a\e\f\9\6\r\f\q\4\o\5\x\1\7\w\q\a\l\8\c\j\v\8\i\h\e\s\a\p\y\j\l\e\k\k\l\s\6\3\g\3\y\f\k\a\m\v\g\x\k\q ]] 00:09:12.585 15:13:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:12.585 15:13:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:12.585 [2024-04-24 15:13:21.763546] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:12.585 [2024-04-24 15:13:21.763643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63608 ] 00:09:12.853 [2024-04-24 15:13:21.896812] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.853 [2024-04-24 15:13:22.012004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.392  Copying: 512/512 [B] (average 250 kBps) 00:09:13.392 00:09:13.392 15:13:22 -- dd/posix.sh@93 -- # [[ u3egg7832j1ud8gtguuys4tcll8wfozawqsmx3s7j7rlbvuna65rgu1ehy7kxhuacyqrmnlmjz55kld0uh7m8oop3g4j4sbjl9zlvbprmfs0sr325akk6naxnfxnqadvu19uql3u3igpmznmce3hfctxjyhpalcz4kju9f92i01a8tr232szfx7ub4sdtzviee5bhcip2q1z2br10ajutkqso1h7ojoi818cbz5v08xqr8zedmzs9abonkgan42asc8oubhghbq29zmro6hqz7e3j4xwuwvelznzaecwoq1p3lx52jk0z4adkizfl9q7x5dnypz4s3ryvcu9m3fe0shj8nxl0wovv7bmkauc3vbxa9b876uyelf9f4axcwfcx4q992ega9zos4wb9uvzgsu6osxjo41xbnbjnbvak201vlb4dxth5s70334p6189az6utxypvq3hsaef96rfq4o5x17wqal8cjv8ihesapyjlekkls63g3yfkamvgxkq == \u\3\e\g\g\7\8\3\2\j\1\u\d\8\g\t\g\u\u\y\s\4\t\c\l\l\8\w\f\o\z\a\w\q\s\m\x\3\s\7\j\7\r\l\b\v\u\n\a\6\5\r\g\u\1\e\h\y\7\k\x\h\u\a\c\y\q\r\m\n\l\m\j\z\5\5\k\l\d\0\u\h\7\m\8\o\o\p\3\g\4\j\4\s\b\j\l\9\z\l\v\b\p\r\m\f\s\0\s\r\3\2\5\a\k\k\6\n\a\x\n\f\x\n\q\a\d\v\u\1\9\u\q\l\3\u\3\i\g\p\m\z\n\m\c\e\3\h\f\c\t\x\j\y\h\p\a\l\c\z\4\k\j\u\9\f\9\2\i\0\1\a\8\t\r\2\3\2\s\z\f\x\7\u\b\4\s\d\t\z\v\i\e\e\5\b\h\c\i\p\2\q\1\z\2\b\r\1\0\a\j\u\t\k\q\s\o\1\h\7\o\j\o\i\8\1\8\c\b\z\5\v\0\8\x\q\r\8\z\e\d\m\z\s\9\a\b\o\n\k\g\a\n\4\2\a\s\c\8\o\u\b\h\g\h\b\q\2\9\z\m\r\o\6\h\q\z\7\e\3\j\4\x\w\u\w\v\e\l\z\n\z\a\e\c\w\o\q\1\p\3\l\x\5\2\j\k\0\z\4\a\d\k\i\z\f\l\9\q\7\x\5\d\n\y\p\z\4\s\3\r\y\v\c\u\9\m\3\f\e\0\s\h\j\8\n\x\l\0\w\o\v\v\7\b\m\k\a\u\c\3\v\b\x\a\9\b\8\7\6\u\y\e\l\f\9\f\4\a\x\c\w\f\c\x\4\q\9\9\2\e\g\a\9\z\o\s\4\w\b\9\u\v\z\g\s\u\6\o\s\x\j\o\4\1\x\b\n\b\j\n\b\v\a\k\2\0\1\v\l\b\4\d\x\t\h\5\s\7\0\3\3\4\p\6\1\8\9\a\z\6\u\t\x\y\p\v\q\3\h\s\a\e\f\9\6\r\f\q\4\o\5\x\1\7\w\q\a\l\8\c\j\v\8\i\h\e\s\a\p\y\j\l\e\k\k\l\s\6\3\g\3\y\f\k\a\m\v\g\x\k\q ]] 00:09:13.392 15:13:22 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:13.392 15:13:22 -- dd/posix.sh@86 -- # gen_bytes 512 00:09:13.392 15:13:22 -- dd/common.sh@98 -- # xtrace_disable 00:09:13.392 15:13:22 -- common/autotest_common.sh@10 -- # set +x 00:09:13.392 15:13:22 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:13.392 15:13:22 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:13.392 [2024-04-24 15:13:22.429242] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:13.392 [2024-04-24 15:13:22.429345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63614 ] 00:09:13.392 [2024-04-24 15:13:22.565113] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.654 [2024-04-24 15:13:22.687388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.916  Copying: 512/512 [B] (average 500 kBps) 00:09:13.916 00:09:13.916 15:13:23 -- dd/posix.sh@93 -- # [[ i5g0ijh9z385uatr24hy2xuuhj3q7flwadbx52sr3oqspte42fivyg9lhas8kbx3mgw2484s8gxg1zhoqfhdqmg8p7pqhrsh8nd45qui993on1jq0agmkvy4bkjbaf3lmn9jmbd2adzuz1xc0vva31990vhw0mw157kjt5dvjm7oidx61s5j4nhz1z7pqcvuo3u8c9d4jwm62vgpj54sexf35qx5arvxhldp3izwiw3xjci9wkdtpnqrmc133oba7hw6y055gw45tkc8wcu21bu16xpuh695zaob6gy7qhtql16fiaf5f2e6ex3k0zomu4nhb0cj5n5rksenxmwknls39uz5ttepbjon5lxvyarzdd3dx2fosj48ujr46l44oiqyubskbrktvbnmwxj2qkcj3fic6wejlcfw9eg3l2b32lg36wjp5syurdmbrywbjpv1qjrc71vndsfad2outooouv47w0f9txh5d65k4lmuhw0v20qz9jyyjucbj5x2 == \i\5\g\0\i\j\h\9\z\3\8\5\u\a\t\r\2\4\h\y\2\x\u\u\h\j\3\q\7\f\l\w\a\d\b\x\5\2\s\r\3\o\q\s\p\t\e\4\2\f\i\v\y\g\9\l\h\a\s\8\k\b\x\3\m\g\w\2\4\8\4\s\8\g\x\g\1\z\h\o\q\f\h\d\q\m\g\8\p\7\p\q\h\r\s\h\8\n\d\4\5\q\u\i\9\9\3\o\n\1\j\q\0\a\g\m\k\v\y\4\b\k\j\b\a\f\3\l\m\n\9\j\m\b\d\2\a\d\z\u\z\1\x\c\0\v\v\a\3\1\9\9\0\v\h\w\0\m\w\1\5\7\k\j\t\5\d\v\j\m\7\o\i\d\x\6\1\s\5\j\4\n\h\z\1\z\7\p\q\c\v\u\o\3\u\8\c\9\d\4\j\w\m\6\2\v\g\p\j\5\4\s\e\x\f\3\5\q\x\5\a\r\v\x\h\l\d\p\3\i\z\w\i\w\3\x\j\c\i\9\w\k\d\t\p\n\q\r\m\c\1\3\3\o\b\a\7\h\w\6\y\0\5\5\g\w\4\5\t\k\c\8\w\c\u\2\1\b\u\1\6\x\p\u\h\6\9\5\z\a\o\b\6\g\y\7\q\h\t\q\l\1\6\f\i\a\f\5\f\2\e\6\e\x\3\k\0\z\o\m\u\4\n\h\b\0\c\j\5\n\5\r\k\s\e\n\x\m\w\k\n\l\s\3\9\u\z\5\t\t\e\p\b\j\o\n\5\l\x\v\y\a\r\z\d\d\3\d\x\2\f\o\s\j\4\8\u\j\r\4\6\l\4\4\o\i\q\y\u\b\s\k\b\r\k\t\v\b\n\m\w\x\j\2\q\k\c\j\3\f\i\c\6\w\e\j\l\c\f\w\9\e\g\3\l\2\b\3\2\l\g\3\6\w\j\p\5\s\y\u\r\d\m\b\r\y\w\b\j\p\v\1\q\j\r\c\7\1\v\n\d\s\f\a\d\2\o\u\t\o\o\o\u\v\4\7\w\0\f\9\t\x\h\5\d\6\5\k\4\l\m\u\h\w\0\v\2\0\q\z\9\j\y\y\j\u\c\b\j\5\x\2 ]] 00:09:13.916 15:13:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:13.916 15:13:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:13.916 [2024-04-24 15:13:23.109219] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:13.916 [2024-04-24 15:13:23.109387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63623 ] 00:09:14.175 [2024-04-24 15:13:23.248637] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.175 [2024-04-24 15:13:23.368919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.693  Copying: 512/512 [B] (average 500 kBps) 00:09:14.693 00:09:14.693 15:13:23 -- dd/posix.sh@93 -- # [[ i5g0ijh9z385uatr24hy2xuuhj3q7flwadbx52sr3oqspte42fivyg9lhas8kbx3mgw2484s8gxg1zhoqfhdqmg8p7pqhrsh8nd45qui993on1jq0agmkvy4bkjbaf3lmn9jmbd2adzuz1xc0vva31990vhw0mw157kjt5dvjm7oidx61s5j4nhz1z7pqcvuo3u8c9d4jwm62vgpj54sexf35qx5arvxhldp3izwiw3xjci9wkdtpnqrmc133oba7hw6y055gw45tkc8wcu21bu16xpuh695zaob6gy7qhtql16fiaf5f2e6ex3k0zomu4nhb0cj5n5rksenxmwknls39uz5ttepbjon5lxvyarzdd3dx2fosj48ujr46l44oiqyubskbrktvbnmwxj2qkcj3fic6wejlcfw9eg3l2b32lg36wjp5syurdmbrywbjpv1qjrc71vndsfad2outooouv47w0f9txh5d65k4lmuhw0v20qz9jyyjucbj5x2 == \i\5\g\0\i\j\h\9\z\3\8\5\u\a\t\r\2\4\h\y\2\x\u\u\h\j\3\q\7\f\l\w\a\d\b\x\5\2\s\r\3\o\q\s\p\t\e\4\2\f\i\v\y\g\9\l\h\a\s\8\k\b\x\3\m\g\w\2\4\8\4\s\8\g\x\g\1\z\h\o\q\f\h\d\q\m\g\8\p\7\p\q\h\r\s\h\8\n\d\4\5\q\u\i\9\9\3\o\n\1\j\q\0\a\g\m\k\v\y\4\b\k\j\b\a\f\3\l\m\n\9\j\m\b\d\2\a\d\z\u\z\1\x\c\0\v\v\a\3\1\9\9\0\v\h\w\0\m\w\1\5\7\k\j\t\5\d\v\j\m\7\o\i\d\x\6\1\s\5\j\4\n\h\z\1\z\7\p\q\c\v\u\o\3\u\8\c\9\d\4\j\w\m\6\2\v\g\p\j\5\4\s\e\x\f\3\5\q\x\5\a\r\v\x\h\l\d\p\3\i\z\w\i\w\3\x\j\c\i\9\w\k\d\t\p\n\q\r\m\c\1\3\3\o\b\a\7\h\w\6\y\0\5\5\g\w\4\5\t\k\c\8\w\c\u\2\1\b\u\1\6\x\p\u\h\6\9\5\z\a\o\b\6\g\y\7\q\h\t\q\l\1\6\f\i\a\f\5\f\2\e\6\e\x\3\k\0\z\o\m\u\4\n\h\b\0\c\j\5\n\5\r\k\s\e\n\x\m\w\k\n\l\s\3\9\u\z\5\t\t\e\p\b\j\o\n\5\l\x\v\y\a\r\z\d\d\3\d\x\2\f\o\s\j\4\8\u\j\r\4\6\l\4\4\o\i\q\y\u\b\s\k\b\r\k\t\v\b\n\m\w\x\j\2\q\k\c\j\3\f\i\c\6\w\e\j\l\c\f\w\9\e\g\3\l\2\b\3\2\l\g\3\6\w\j\p\5\s\y\u\r\d\m\b\r\y\w\b\j\p\v\1\q\j\r\c\7\1\v\n\d\s\f\a\d\2\o\u\t\o\o\o\u\v\4\7\w\0\f\9\t\x\h\5\d\6\5\k\4\l\m\u\h\w\0\v\2\0\q\z\9\j\y\y\j\u\c\b\j\5\x\2 ]] 00:09:14.693 15:13:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:14.693 15:13:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:14.693 [2024-04-24 15:13:23.756753] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:14.693 [2024-04-24 15:13:23.756851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63636 ] 00:09:14.693 [2024-04-24 15:13:23.889107] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.952 [2024-04-24 15:13:23.999646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.210  Copying: 512/512 [B] (average 250 kBps) 00:09:15.210 00:09:15.210 15:13:24 -- dd/posix.sh@93 -- # [[ i5g0ijh9z385uatr24hy2xuuhj3q7flwadbx52sr3oqspte42fivyg9lhas8kbx3mgw2484s8gxg1zhoqfhdqmg8p7pqhrsh8nd45qui993on1jq0agmkvy4bkjbaf3lmn9jmbd2adzuz1xc0vva31990vhw0mw157kjt5dvjm7oidx61s5j4nhz1z7pqcvuo3u8c9d4jwm62vgpj54sexf35qx5arvxhldp3izwiw3xjci9wkdtpnqrmc133oba7hw6y055gw45tkc8wcu21bu16xpuh695zaob6gy7qhtql16fiaf5f2e6ex3k0zomu4nhb0cj5n5rksenxmwknls39uz5ttepbjon5lxvyarzdd3dx2fosj48ujr46l44oiqyubskbrktvbnmwxj2qkcj3fic6wejlcfw9eg3l2b32lg36wjp5syurdmbrywbjpv1qjrc71vndsfad2outooouv47w0f9txh5d65k4lmuhw0v20qz9jyyjucbj5x2 == \i\5\g\0\i\j\h\9\z\3\8\5\u\a\t\r\2\4\h\y\2\x\u\u\h\j\3\q\7\f\l\w\a\d\b\x\5\2\s\r\3\o\q\s\p\t\e\4\2\f\i\v\y\g\9\l\h\a\s\8\k\b\x\3\m\g\w\2\4\8\4\s\8\g\x\g\1\z\h\o\q\f\h\d\q\m\g\8\p\7\p\q\h\r\s\h\8\n\d\4\5\q\u\i\9\9\3\o\n\1\j\q\0\a\g\m\k\v\y\4\b\k\j\b\a\f\3\l\m\n\9\j\m\b\d\2\a\d\z\u\z\1\x\c\0\v\v\a\3\1\9\9\0\v\h\w\0\m\w\1\5\7\k\j\t\5\d\v\j\m\7\o\i\d\x\6\1\s\5\j\4\n\h\z\1\z\7\p\q\c\v\u\o\3\u\8\c\9\d\4\j\w\m\6\2\v\g\p\j\5\4\s\e\x\f\3\5\q\x\5\a\r\v\x\h\l\d\p\3\i\z\w\i\w\3\x\j\c\i\9\w\k\d\t\p\n\q\r\m\c\1\3\3\o\b\a\7\h\w\6\y\0\5\5\g\w\4\5\t\k\c\8\w\c\u\2\1\b\u\1\6\x\p\u\h\6\9\5\z\a\o\b\6\g\y\7\q\h\t\q\l\1\6\f\i\a\f\5\f\2\e\6\e\x\3\k\0\z\o\m\u\4\n\h\b\0\c\j\5\n\5\r\k\s\e\n\x\m\w\k\n\l\s\3\9\u\z\5\t\t\e\p\b\j\o\n\5\l\x\v\y\a\r\z\d\d\3\d\x\2\f\o\s\j\4\8\u\j\r\4\6\l\4\4\o\i\q\y\u\b\s\k\b\r\k\t\v\b\n\m\w\x\j\2\q\k\c\j\3\f\i\c\6\w\e\j\l\c\f\w\9\e\g\3\l\2\b\3\2\l\g\3\6\w\j\p\5\s\y\u\r\d\m\b\r\y\w\b\j\p\v\1\q\j\r\c\7\1\v\n\d\s\f\a\d\2\o\u\t\o\o\o\u\v\4\7\w\0\f\9\t\x\h\5\d\6\5\k\4\l\m\u\h\w\0\v\2\0\q\z\9\j\y\y\j\u\c\b\j\5\x\2 ]] 00:09:15.210 15:13:24 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:15.210 15:13:24 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:15.210 [2024-04-24 15:13:24.406760] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:15.210 [2024-04-24 15:13:24.406853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63638 ] 00:09:15.469 [2024-04-24 15:13:24.544188] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.469 [2024-04-24 15:13:24.663857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.986  Copying: 512/512 [B] (average 250 kBps) 00:09:15.986 00:09:15.986 15:13:25 -- dd/posix.sh@93 -- # [[ i5g0ijh9z385uatr24hy2xuuhj3q7flwadbx52sr3oqspte42fivyg9lhas8kbx3mgw2484s8gxg1zhoqfhdqmg8p7pqhrsh8nd45qui993on1jq0agmkvy4bkjbaf3lmn9jmbd2adzuz1xc0vva31990vhw0mw157kjt5dvjm7oidx61s5j4nhz1z7pqcvuo3u8c9d4jwm62vgpj54sexf35qx5arvxhldp3izwiw3xjci9wkdtpnqrmc133oba7hw6y055gw45tkc8wcu21bu16xpuh695zaob6gy7qhtql16fiaf5f2e6ex3k0zomu4nhb0cj5n5rksenxmwknls39uz5ttepbjon5lxvyarzdd3dx2fosj48ujr46l44oiqyubskbrktvbnmwxj2qkcj3fic6wejlcfw9eg3l2b32lg36wjp5syurdmbrywbjpv1qjrc71vndsfad2outooouv47w0f9txh5d65k4lmuhw0v20qz9jyyjucbj5x2 == \i\5\g\0\i\j\h\9\z\3\8\5\u\a\t\r\2\4\h\y\2\x\u\u\h\j\3\q\7\f\l\w\a\d\b\x\5\2\s\r\3\o\q\s\p\t\e\4\2\f\i\v\y\g\9\l\h\a\s\8\k\b\x\3\m\g\w\2\4\8\4\s\8\g\x\g\1\z\h\o\q\f\h\d\q\m\g\8\p\7\p\q\h\r\s\h\8\n\d\4\5\q\u\i\9\9\3\o\n\1\j\q\0\a\g\m\k\v\y\4\b\k\j\b\a\f\3\l\m\n\9\j\m\b\d\2\a\d\z\u\z\1\x\c\0\v\v\a\3\1\9\9\0\v\h\w\0\m\w\1\5\7\k\j\t\5\d\v\j\m\7\o\i\d\x\6\1\s\5\j\4\n\h\z\1\z\7\p\q\c\v\u\o\3\u\8\c\9\d\4\j\w\m\6\2\v\g\p\j\5\4\s\e\x\f\3\5\q\x\5\a\r\v\x\h\l\d\p\3\i\z\w\i\w\3\x\j\c\i\9\w\k\d\t\p\n\q\r\m\c\1\3\3\o\b\a\7\h\w\6\y\0\5\5\g\w\4\5\t\k\c\8\w\c\u\2\1\b\u\1\6\x\p\u\h\6\9\5\z\a\o\b\6\g\y\7\q\h\t\q\l\1\6\f\i\a\f\5\f\2\e\6\e\x\3\k\0\z\o\m\u\4\n\h\b\0\c\j\5\n\5\r\k\s\e\n\x\m\w\k\n\l\s\3\9\u\z\5\t\t\e\p\b\j\o\n\5\l\x\v\y\a\r\z\d\d\3\d\x\2\f\o\s\j\4\8\u\j\r\4\6\l\4\4\o\i\q\y\u\b\s\k\b\r\k\t\v\b\n\m\w\x\j\2\q\k\c\j\3\f\i\c\6\w\e\j\l\c\f\w\9\e\g\3\l\2\b\3\2\l\g\3\6\w\j\p\5\s\y\u\r\d\m\b\r\y\w\b\j\p\v\1\q\j\r\c\7\1\v\n\d\s\f\a\d\2\o\u\t\o\o\o\u\v\4\7\w\0\f\9\t\x\h\5\d\6\5\k\4\l\m\u\h\w\0\v\2\0\q\z\9\j\y\y\j\u\c\b\j\5\x\2 ]] 00:09:15.986 00:09:15.986 real 0m5.323s 00:09:15.986 user 0m3.179s 00:09:15.986 sys 0m1.135s 00:09:15.986 ************************************ 00:09:15.986 END TEST dd_flags_misc_forced_aio 00:09:15.986 ************************************ 00:09:15.986 15:13:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:15.986 15:13:25 -- common/autotest_common.sh@10 -- # set +x 00:09:15.986 15:13:25 -- dd/posix.sh@1 -- # cleanup 00:09:15.986 15:13:25 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:15.986 15:13:25 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:15.986 00:09:15.986 real 0m24.607s 00:09:15.986 user 0m13.287s 00:09:15.986 sys 0m7.123s 00:09:15.986 15:13:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:15.986 ************************************ 00:09:15.986 END TEST spdk_dd_posix 00:09:15.986 ************************************ 00:09:15.986 15:13:25 -- common/autotest_common.sh@10 -- # set +x 00:09:15.986 15:13:25 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:15.986 15:13:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:15.986 15:13:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:15.986 15:13:25 -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.986 ************************************ 00:09:15.986 START TEST spdk_dd_malloc 00:09:15.986 ************************************ 00:09:15.986 15:13:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:16.245 * Looking for test storage... 00:09:16.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:16.245 15:13:25 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:16.245 15:13:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.245 15:13:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.245 15:13:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.245 15:13:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.245 15:13:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.245 15:13:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.245 15:13:25 -- paths/export.sh@5 -- # export PATH 00:09:16.245 15:13:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.245 15:13:25 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:09:16.245 15:13:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:16.245 15:13:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:16.245 15:13:25 -- common/autotest_common.sh@10 -- # set +x 00:09:16.245 ************************************ 00:09:16.245 START TEST dd_malloc_copy 00:09:16.245 
************************************ 00:09:16.245 15:13:25 -- common/autotest_common.sh@1111 -- # malloc_copy 00:09:16.245 15:13:25 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:09:16.245 15:13:25 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:09:16.245 15:13:25 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:16.245 15:13:25 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:09:16.245 15:13:25 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:09:16.245 15:13:25 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:09:16.245 15:13:25 -- dd/malloc.sh@28 -- # gen_conf 00:09:16.245 15:13:25 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:09:16.245 15:13:25 -- dd/common.sh@31 -- # xtrace_disable 00:09:16.245 15:13:25 -- common/autotest_common.sh@10 -- # set +x 00:09:16.245 [2024-04-24 15:13:25.431269] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:16.245 [2024-04-24 15:13:25.431389] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63722 ] 00:09:16.245 { 00:09:16.245 "subsystems": [ 00:09:16.245 { 00:09:16.245 "subsystem": "bdev", 00:09:16.245 "config": [ 00:09:16.245 { 00:09:16.245 "params": { 00:09:16.245 "block_size": 512, 00:09:16.245 "num_blocks": 1048576, 00:09:16.245 "name": "malloc0" 00:09:16.245 }, 00:09:16.245 "method": "bdev_malloc_create" 00:09:16.245 }, 00:09:16.245 { 00:09:16.245 "params": { 00:09:16.245 "block_size": 512, 00:09:16.245 "num_blocks": 1048576, 00:09:16.245 "name": "malloc1" 00:09:16.245 }, 00:09:16.245 "method": "bdev_malloc_create" 00:09:16.245 }, 00:09:16.245 { 00:09:16.245 "method": "bdev_wait_for_examine" 00:09:16.245 } 00:09:16.245 ] 00:09:16.245 } 00:09:16.245 ] 00:09:16.245 } 00:09:16.504 [2024-04-24 15:13:25.567164] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.504 [2024-04-24 15:13:25.689973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.453  Copying: 191/512 [MB] (191 MBps) Copying: 394/512 [MB] (202 MBps) Copying: 512/512 [MB] (average 198 MBps) 00:09:20.453 00:09:20.453 15:13:29 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:09:20.453 15:13:29 -- dd/malloc.sh@33 -- # gen_conf 00:09:20.453 15:13:29 -- dd/common.sh@31 -- # xtrace_disable 00:09:20.453 15:13:29 -- common/autotest_common.sh@10 -- # set +x 00:09:20.453 [2024-04-24 15:13:29.433274] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
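The dd_malloc_copy run above builds its two RAM-backed malloc bdevs (1048576 blocks of 512 bytes, i.e. 512 MB each) from the JSON shown in the trace and then copies one into the other. Written out as a self-contained sketch with the config in a regular file instead of /dev/fd/62, still assuming a built spdk_dd and a host prepared for SPDK (hugepages reserved):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cat > malloc.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --json malloc.json   # malloc0 -> malloc1
"$SPDK_DD" --ib=malloc1 --ob=malloc0 --json malloc.json   # and back again, as in the second half above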
00:09:20.453 [2024-04-24 15:13:29.433415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63775 ] 00:09:20.453 { 00:09:20.453 "subsystems": [ 00:09:20.453 { 00:09:20.453 "subsystem": "bdev", 00:09:20.453 "config": [ 00:09:20.453 { 00:09:20.453 "params": { 00:09:20.453 "block_size": 512, 00:09:20.453 "num_blocks": 1048576, 00:09:20.453 "name": "malloc0" 00:09:20.453 }, 00:09:20.453 "method": "bdev_malloc_create" 00:09:20.453 }, 00:09:20.453 { 00:09:20.453 "params": { 00:09:20.453 "block_size": 512, 00:09:20.453 "num_blocks": 1048576, 00:09:20.453 "name": "malloc1" 00:09:20.453 }, 00:09:20.453 "method": "bdev_malloc_create" 00:09:20.453 }, 00:09:20.453 { 00:09:20.453 "method": "bdev_wait_for_examine" 00:09:20.453 } 00:09:20.453 ] 00:09:20.453 } 00:09:20.453 ] 00:09:20.453 } 00:09:20.453 [2024-04-24 15:13:29.574622] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.453 [2024-04-24 15:13:29.693312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.420  Copying: 196/512 [MB] (196 MBps) Copying: 396/512 [MB] (200 MBps) Copying: 512/512 [MB] (average 198 MBps) 00:09:24.420 00:09:24.420 00:09:24.420 real 0m7.959s 00:09:24.420 user 0m6.908s 00:09:24.420 sys 0m0.889s 00:09:24.420 15:13:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:24.420 15:13:33 -- common/autotest_common.sh@10 -- # set +x 00:09:24.420 ************************************ 00:09:24.420 END TEST dd_malloc_copy 00:09:24.420 ************************************ 00:09:24.420 00:09:24.420 real 0m8.162s 00:09:24.420 user 0m6.983s 00:09:24.420 sys 0m1.008s 00:09:24.420 15:13:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:24.420 15:13:33 -- common/autotest_common.sh@10 -- # set +x 00:09:24.420 ************************************ 00:09:24.420 END TEST spdk_dd_malloc 00:09:24.420 ************************************ 00:09:24.420 15:13:33 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:24.420 15:13:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:24.420 15:13:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.420 15:13:33 -- common/autotest_common.sh@10 -- # set +x 00:09:24.420 ************************************ 00:09:24.420 START TEST spdk_dd_bdev_to_bdev 00:09:24.420 ************************************ 00:09:24.420 15:13:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:24.420 * Looking for test storage... 
00:09:24.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:24.420 15:13:33 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.420 15:13:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.420 15:13:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.420 15:13:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.420 15:13:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.420 15:13:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.420 15:13:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.420 15:13:33 -- paths/export.sh@5 -- # export PATH 00:09:24.421 15:13:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@55 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:09:24.421 15:13:33 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:24.421 15:13:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:24.421 15:13:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.421 15:13:33 -- common/autotest_common.sh@10 -- # set +x 00:09:24.421 ************************************ 00:09:24.421 START TEST dd_inflate_file 00:09:24.421 ************************************ 00:09:24.421 15:13:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:24.680 [2024-04-24 15:13:33.699701] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:24.680 [2024-04-24 15:13:33.699806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63895 ] 00:09:24.680 [2024-04-24 15:13:33.838969] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.939 [2024-04-24 15:13:33.959598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.197  Copying: 64/64 [MB] (average 1422 MBps) 00:09:25.197 00:09:25.197 00:09:25.197 real 0m0.703s 00:09:25.197 user 0m0.451s 00:09:25.197 sys 0m0.312s 00:09:25.197 15:13:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:25.197 ************************************ 00:09:25.197 END TEST dd_inflate_file 00:09:25.198 ************************************ 00:09:25.198 15:13:34 -- common/autotest_common.sh@10 -- # set +x 00:09:25.198 15:13:34 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:09:25.198 15:13:34 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:09:25.198 15:13:34 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:25.198 15:13:34 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:09:25.198 15:13:34 -- dd/common.sh@31 -- # xtrace_disable 00:09:25.198 15:13:34 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:25.198 15:13:34 -- common/autotest_common.sh@10 -- # set +x 00:09:25.198 15:13:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:25.198 15:13:34 -- common/autotest_common.sh@10 -- # set +x 00:09:25.456 { 00:09:25.456 "subsystems": [ 00:09:25.456 { 00:09:25.456 "subsystem": "bdev", 
00:09:25.456 "config": [ 00:09:25.456 { 00:09:25.456 "params": { 00:09:25.456 "trtype": "pcie", 00:09:25.456 "traddr": "0000:00:10.0", 00:09:25.456 "name": "Nvme0" 00:09:25.456 }, 00:09:25.456 "method": "bdev_nvme_attach_controller" 00:09:25.456 }, 00:09:25.456 { 00:09:25.456 "params": { 00:09:25.456 "trtype": "pcie", 00:09:25.456 "traddr": "0000:00:11.0", 00:09:25.456 "name": "Nvme1" 00:09:25.456 }, 00:09:25.456 "method": "bdev_nvme_attach_controller" 00:09:25.456 }, 00:09:25.456 { 00:09:25.456 "method": "bdev_wait_for_examine" 00:09:25.456 } 00:09:25.456 ] 00:09:25.457 } 00:09:25.457 ] 00:09:25.457 } 00:09:25.457 ************************************ 00:09:25.457 START TEST dd_copy_to_out_bdev 00:09:25.457 ************************************ 00:09:25.457 15:13:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:25.457 [2024-04-24 15:13:34.522639] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:25.457 [2024-04-24 15:13:34.522784] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63940 ] 00:09:25.457 [2024-04-24 15:13:34.662877] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.715 [2024-04-24 15:13:34.784410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.350  Copying: 56/64 [MB] (56 MBps) Copying: 64/64 [MB] (average 57 MBps) 00:09:27.350 00:09:27.350 00:09:27.350 real 0m1.948s 00:09:27.350 user 0m1.715s 00:09:27.350 sys 0m1.490s 00:09:27.350 15:13:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:27.350 ************************************ 00:09:27.350 END TEST dd_copy_to_out_bdev 00:09:27.350 ************************************ 00:09:27.350 15:13:36 -- common/autotest_common.sh@10 -- # set +x 00:09:27.350 15:13:36 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:09:27.350 15:13:36 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:09:27.350 15:13:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:27.350 15:13:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:27.350 15:13:36 -- common/autotest_common.sh@10 -- # set +x 00:09:27.350 ************************************ 00:09:27.350 START TEST dd_offset_magic 00:09:27.350 ************************************ 00:09:27.350 15:13:36 -- common/autotest_common.sh@1111 -- # offset_magic 00:09:27.350 15:13:36 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:09:27.350 15:13:36 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:09:27.350 15:13:36 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:09:27.350 15:13:36 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:27.350 15:13:36 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:09:27.350 15:13:36 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:27.350 15:13:36 -- dd/common.sh@31 -- # xtrace_disable 00:09:27.350 15:13:36 -- common/autotest_common.sh@10 -- # set +x 00:09:27.608 [2024-04-24 15:13:36.599134] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:27.608 [2024-04-24 15:13:36.599240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63987 ] 00:09:27.608 { 00:09:27.608 "subsystems": [ 00:09:27.608 { 00:09:27.608 "subsystem": "bdev", 00:09:27.608 "config": [ 00:09:27.608 { 00:09:27.608 "params": { 00:09:27.608 "trtype": "pcie", 00:09:27.608 "traddr": "0000:00:10.0", 00:09:27.608 "name": "Nvme0" 00:09:27.608 }, 00:09:27.608 "method": "bdev_nvme_attach_controller" 00:09:27.608 }, 00:09:27.608 { 00:09:27.608 "params": { 00:09:27.608 "trtype": "pcie", 00:09:27.608 "traddr": "0000:00:11.0", 00:09:27.608 "name": "Nvme1" 00:09:27.608 }, 00:09:27.608 "method": "bdev_nvme_attach_controller" 00:09:27.608 }, 00:09:27.608 { 00:09:27.608 "method": "bdev_wait_for_examine" 00:09:27.608 } 00:09:27.608 ] 00:09:27.608 } 00:09:27.608 ] 00:09:27.608 } 00:09:27.608 [2024-04-24 15:13:36.738273] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.866 [2024-04-24 15:13:36.863078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.383  Copying: 65/65 [MB] (average 984 MBps) 00:09:28.383 00:09:28.383 15:13:37 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:09:28.383 15:13:37 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:28.383 15:13:37 -- dd/common.sh@31 -- # xtrace_disable 00:09:28.384 15:13:37 -- common/autotest_common.sh@10 -- # set +x 00:09:28.384 [2024-04-24 15:13:37.516628] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:28.384 [2024-04-24 15:13:37.517650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64007 ] 00:09:28.384 { 00:09:28.384 "subsystems": [ 00:09:28.384 { 00:09:28.384 "subsystem": "bdev", 00:09:28.384 "config": [ 00:09:28.384 { 00:09:28.384 "params": { 00:09:28.384 "trtype": "pcie", 00:09:28.384 "traddr": "0000:00:10.0", 00:09:28.384 "name": "Nvme0" 00:09:28.384 }, 00:09:28.384 "method": "bdev_nvme_attach_controller" 00:09:28.384 }, 00:09:28.384 { 00:09:28.384 "params": { 00:09:28.384 "trtype": "pcie", 00:09:28.384 "traddr": "0000:00:11.0", 00:09:28.384 "name": "Nvme1" 00:09:28.384 }, 00:09:28.384 "method": "bdev_nvme_attach_controller" 00:09:28.384 }, 00:09:28.384 { 00:09:28.384 "method": "bdev_wait_for_examine" 00:09:28.384 } 00:09:28.384 ] 00:09:28.384 } 00:09:28.384 ] 00:09:28.384 } 00:09:28.642 [2024-04-24 15:13:37.661810] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.642 [2024-04-24 15:13:37.777090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.158  Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:29.158 00:09:29.158 15:13:38 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:29.158 15:13:38 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:29.158 15:13:38 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:29.158 15:13:38 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:09:29.158 15:13:38 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:29.158 15:13:38 -- dd/common.sh@31 -- # xtrace_disable 00:09:29.158 15:13:38 -- common/autotest_common.sh@10 -- # set +x 00:09:29.158 [2024-04-24 15:13:38.289705] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:29.158 [2024-04-24 15:13:38.290038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64024 ] 00:09:29.158 { 00:09:29.158 "subsystems": [ 00:09:29.158 { 00:09:29.158 "subsystem": "bdev", 00:09:29.158 "config": [ 00:09:29.158 { 00:09:29.158 "params": { 00:09:29.158 "trtype": "pcie", 00:09:29.158 "traddr": "0000:00:10.0", 00:09:29.158 "name": "Nvme0" 00:09:29.158 }, 00:09:29.158 "method": "bdev_nvme_attach_controller" 00:09:29.158 }, 00:09:29.158 { 00:09:29.158 "params": { 00:09:29.158 "trtype": "pcie", 00:09:29.158 "traddr": "0000:00:11.0", 00:09:29.158 "name": "Nvme1" 00:09:29.158 }, 00:09:29.158 "method": "bdev_nvme_attach_controller" 00:09:29.158 }, 00:09:29.158 { 00:09:29.158 "method": "bdev_wait_for_examine" 00:09:29.158 } 00:09:29.158 ] 00:09:29.158 } 00:09:29.158 ] 00:09:29.158 } 00:09:29.416 [2024-04-24 15:13:38.422175] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.416 [2024-04-24 15:13:38.540100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.932  Copying: 65/65 [MB] (average 1065 MBps) 00:09:29.932 00:09:29.932 15:13:39 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:09:29.932 15:13:39 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:29.932 15:13:39 -- dd/common.sh@31 -- # xtrace_disable 00:09:29.932 15:13:39 -- common/autotest_common.sh@10 -- # set +x 00:09:29.932 [2024-04-24 15:13:39.164973] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:29.932 [2024-04-24 15:13:39.165059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64044 ] 00:09:29.932 { 00:09:29.932 "subsystems": [ 00:09:29.932 { 00:09:29.932 "subsystem": "bdev", 00:09:29.932 "config": [ 00:09:29.932 { 00:09:29.932 "params": { 00:09:29.932 "trtype": "pcie", 00:09:29.932 "traddr": "0000:00:10.0", 00:09:29.932 "name": "Nvme0" 00:09:29.932 }, 00:09:29.932 "method": "bdev_nvme_attach_controller" 00:09:29.932 }, 00:09:29.932 { 00:09:29.932 "params": { 00:09:29.932 "trtype": "pcie", 00:09:29.932 "traddr": "0000:00:11.0", 00:09:29.932 "name": "Nvme1" 00:09:29.932 }, 00:09:29.932 "method": "bdev_nvme_attach_controller" 00:09:29.932 }, 00:09:29.932 { 00:09:29.932 "method": "bdev_wait_for_examine" 00:09:29.932 } 00:09:29.932 ] 00:09:29.932 } 00:09:29.932 ] 00:09:29.932 } 00:09:30.190 [2024-04-24 15:13:39.301010] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.190 [2024-04-24 15:13:39.418641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.733  Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:30.733 00:09:30.733 ************************************ 00:09:30.733 END TEST dd_offset_magic 00:09:30.733 ************************************ 00:09:30.733 15:13:39 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:30.733 15:13:39 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:30.733 00:09:30.733 real 0m3.365s 00:09:30.733 user 0m2.504s 00:09:30.733 sys 0m0.948s 00:09:30.733 15:13:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:30.733 15:13:39 -- common/autotest_common.sh@10 -- # set +x 00:09:30.733 15:13:39 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:09:30.733 15:13:39 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:09:30.733 15:13:39 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:30.733 15:13:39 -- dd/common.sh@11 -- # local nvme_ref= 00:09:30.733 15:13:39 -- dd/common.sh@12 -- # local size=4194330 00:09:30.733 15:13:39 -- dd/common.sh@14 -- # local bs=1048576 00:09:30.733 15:13:39 -- dd/common.sh@15 -- # local count=5 00:09:30.733 15:13:39 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:09:30.733 15:13:39 -- dd/common.sh@18 -- # gen_conf 00:09:30.733 15:13:39 -- dd/common.sh@31 -- # xtrace_disable 00:09:30.733 15:13:39 -- common/autotest_common.sh@10 -- # set +x 00:09:30.992 [2024-04-24 15:13:40.003424] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:30.992 [2024-04-24 15:13:40.003546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64081 ] 00:09:30.992 { 00:09:30.992 "subsystems": [ 00:09:30.992 { 00:09:30.992 "subsystem": "bdev", 00:09:30.992 "config": [ 00:09:30.992 { 00:09:30.992 "params": { 00:09:30.992 "trtype": "pcie", 00:09:30.992 "traddr": "0000:00:10.0", 00:09:30.992 "name": "Nvme0" 00:09:30.992 }, 00:09:30.992 "method": "bdev_nvme_attach_controller" 00:09:30.992 }, 00:09:30.992 { 00:09:30.992 "params": { 00:09:30.992 "trtype": "pcie", 00:09:30.992 "traddr": "0000:00:11.0", 00:09:30.992 "name": "Nvme1" 00:09:30.992 }, 00:09:30.992 "method": "bdev_nvme_attach_controller" 00:09:30.992 }, 00:09:30.992 { 00:09:30.992 "method": "bdev_wait_for_examine" 00:09:30.992 } 00:09:30.992 ] 00:09:30.992 } 00:09:30.992 ] 00:09:30.992 } 00:09:30.992 [2024-04-24 15:13:40.139522] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.250 [2024-04-24 15:13:40.267594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.768  Copying: 5120/5120 [kB] (average 1250 MBps) 00:09:31.768 00:09:31.768 15:13:40 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:09:31.768 15:13:40 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:09:31.768 15:13:40 -- dd/common.sh@11 -- # local nvme_ref= 00:09:31.768 15:13:40 -- dd/common.sh@12 -- # local size=4194330 00:09:31.768 15:13:40 -- dd/common.sh@14 -- # local bs=1048576 00:09:31.768 15:13:40 -- dd/common.sh@15 -- # local count=5 00:09:31.768 15:13:40 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:09:31.768 15:13:40 -- dd/common.sh@18 -- # gen_conf 00:09:31.768 15:13:40 -- dd/common.sh@31 -- # xtrace_disable 00:09:31.768 15:13:40 -- common/autotest_common.sh@10 -- # set +x 00:09:31.768 [2024-04-24 15:13:40.829610] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:31.768 [2024-04-24 15:13:40.830012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64096 ] 00:09:31.768 { 00:09:31.768 "subsystems": [ 00:09:31.768 { 00:09:31.768 "subsystem": "bdev", 00:09:31.768 "config": [ 00:09:31.768 { 00:09:31.768 "params": { 00:09:31.768 "trtype": "pcie", 00:09:31.768 "traddr": "0000:00:10.0", 00:09:31.768 "name": "Nvme0" 00:09:31.768 }, 00:09:31.768 "method": "bdev_nvme_attach_controller" 00:09:31.768 }, 00:09:31.768 { 00:09:31.768 "params": { 00:09:31.768 "trtype": "pcie", 00:09:31.768 "traddr": "0000:00:11.0", 00:09:31.768 "name": "Nvme1" 00:09:31.768 }, 00:09:31.768 "method": "bdev_nvme_attach_controller" 00:09:31.768 }, 00:09:31.768 { 00:09:31.768 "method": "bdev_wait_for_examine" 00:09:31.768 } 00:09:31.768 ] 00:09:31.768 } 00:09:31.768 ] 00:09:31.768 } 00:09:31.768 [2024-04-24 15:13:40.972209] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.027 [2024-04-24 15:13:41.111548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.543  Copying: 5120/5120 [kB] (average 714 MBps) 00:09:32.543 00:09:32.543 15:13:41 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:09:32.543 ************************************ 00:09:32.543 END TEST spdk_dd_bdev_to_bdev 00:09:32.543 ************************************ 00:09:32.543 00:09:32.543 real 0m8.150s 00:09:32.543 user 0m6.051s 00:09:32.543 sys 0m3.589s 00:09:32.543 15:13:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:32.543 15:13:41 -- common/autotest_common.sh@10 -- # set +x 00:09:32.543 15:13:41 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:09:32.543 15:13:41 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:32.543 15:13:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:32.543 15:13:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.543 15:13:41 -- common/autotest_common.sh@10 -- # set +x 00:09:32.543 ************************************ 00:09:32.543 START TEST spdk_dd_uring 00:09:32.543 ************************************ 00:09:32.543 15:13:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:32.801 * Looking for test storage... 
00:09:32.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:32.801 15:13:41 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:32.801 15:13:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.801 15:13:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.801 15:13:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.801 15:13:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.801 15:13:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.801 15:13:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.801 15:13:41 -- paths/export.sh@5 -- # export PATH 00:09:32.801 15:13:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.801 15:13:41 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:09:32.801 15:13:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:32.801 15:13:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.801 15:13:41 -- common/autotest_common.sh@10 -- # set +x 00:09:32.801 ************************************ 00:09:32.801 START TEST dd_uring_copy 00:09:32.801 ************************************ 00:09:32.801 15:13:41 -- common/autotest_common.sh@1111 -- # uring_zram_copy 00:09:32.801 15:13:41 -- dd/uring.sh@15 -- # local zram_dev_id 00:09:32.801 15:13:41 -- dd/uring.sh@16 -- # local magic 00:09:32.801 15:13:41 -- dd/uring.sh@17 -- # local 
magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:09:32.801 15:13:41 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:32.801 15:13:41 -- dd/uring.sh@19 -- # local verify_magic 00:09:32.801 15:13:41 -- dd/uring.sh@21 -- # init_zram 00:09:32.801 15:13:41 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:09:32.801 15:13:41 -- dd/common.sh@164 -- # return 00:09:32.801 15:13:41 -- dd/uring.sh@22 -- # create_zram_dev 00:09:32.801 15:13:41 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:09:32.801 15:13:41 -- dd/uring.sh@22 -- # zram_dev_id=1 00:09:32.801 15:13:41 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:09:32.801 15:13:41 -- dd/common.sh@181 -- # local id=1 00:09:32.801 15:13:41 -- dd/common.sh@182 -- # local size=512M 00:09:32.801 15:13:41 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:09:32.801 15:13:41 -- dd/common.sh@186 -- # echo 512M 00:09:32.801 15:13:41 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:09:32.801 15:13:41 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:09:32.801 15:13:41 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:09:32.801 15:13:41 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:09:32.801 15:13:41 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:32.801 15:13:41 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:09:32.801 15:13:41 -- dd/uring.sh@41 -- # gen_bytes 1024 00:09:32.801 15:13:41 -- dd/common.sh@98 -- # xtrace_disable 00:09:32.801 15:13:41 -- common/autotest_common.sh@10 -- # set +x 00:09:32.801 15:13:41 -- dd/uring.sh@41 -- # magic=9lmy8h5dtunahzgx9jl1dprkwk59immheb23mxvv4hmpivwq8xxxvmtx51y5vosassgzdymw9tuozwq3rgnl8hgen56ozym2605z1405wevv3li43pk7spdslbvy6iz86qwoo70b5fpemsgl7j5u66jy6zds9a01ff53mwwmlq8fenkxhl0xb9zbn3fqf4e8hkbc9ocjgau6tynkfzjj9z6ubb427hrtur30l9ut0van7crwllrouir9m3ylzz3xfq0ufnsj1ijh641v2bv5a5ujcjogzqtyigpte1gfsn1gpqyaebizp7epix3ia28ddpb6c9b3jahpjs3l460r616s1io05xlfqku78008us0hahdtciwlpts36ovz89nrbupm2axyvubspfi5fybcx9i5aqj97jcn615owsrt1r4ol45ych4ggebxokbgcnx3bb52aq98jpnrpexri9vr8yyz9cavm0o8hp5rcibt0q0r5sxut03msz51jn7bd0tnlal7ptadf3zprwbodd72z94g45vb3ecas230fk8yke50rj61x13amtr10edd6iavp2ykatuhmy99f4l8qbysagrghlqfivz1m0hqjonsjrvj8ms991gxpwjscgw4jb4t2qhbfnixd3junft8gjyiqospbt9oubg1k6w5wjgm6i8evbzy42xarcf44wuab0tw169f49pyo36q2w49u984dzqzvt0de7gsjzxnz5ecgchnvuv9z16fjcrovu3ma58q66xvldtz10fon2d0xoetyiwo794865fwa25798kfxxu2jnd4puzdozcakugich1hycv1m2qwhmdg4b5hdysbwcpbltbbk69d99677ape8c99edu8jq6vulzd1ony7z8w4dtej4lb8zm6rpnm0ux7hf2hjjrjksja35o623k0brldgpg2d4et3465lnzuvrtgwpfgipqnyczzrryg61znigqztlibnv89tttzu0dlni2en2ghxithojj802v2jmlq 00:09:32.801 15:13:41 -- dd/uring.sh@42 -- # echo 
9lmy8h5dtunahzgx9jl1dprkwk59immheb23mxvv4hmpivwq8xxxvmtx51y5vosassgzdymw9tuozwq3rgnl8hgen56ozym2605z1405wevv3li43pk7spdslbvy6iz86qwoo70b5fpemsgl7j5u66jy6zds9a01ff53mwwmlq8fenkxhl0xb9zbn3fqf4e8hkbc9ocjgau6tynkfzjj9z6ubb427hrtur30l9ut0van7crwllrouir9m3ylzz3xfq0ufnsj1ijh641v2bv5a5ujcjogzqtyigpte1gfsn1gpqyaebizp7epix3ia28ddpb6c9b3jahpjs3l460r616s1io05xlfqku78008us0hahdtciwlpts36ovz89nrbupm2axyvubspfi5fybcx9i5aqj97jcn615owsrt1r4ol45ych4ggebxokbgcnx3bb52aq98jpnrpexri9vr8yyz9cavm0o8hp5rcibt0q0r5sxut03msz51jn7bd0tnlal7ptadf3zprwbodd72z94g45vb3ecas230fk8yke50rj61x13amtr10edd6iavp2ykatuhmy99f4l8qbysagrghlqfivz1m0hqjonsjrvj8ms991gxpwjscgw4jb4t2qhbfnixd3junft8gjyiqospbt9oubg1k6w5wjgm6i8evbzy42xarcf44wuab0tw169f49pyo36q2w49u984dzqzvt0de7gsjzxnz5ecgchnvuv9z16fjcrovu3ma58q66xvldtz10fon2d0xoetyiwo794865fwa25798kfxxu2jnd4puzdozcakugich1hycv1m2qwhmdg4b5hdysbwcpbltbbk69d99677ape8c99edu8jq6vulzd1ony7z8w4dtej4lb8zm6rpnm0ux7hf2hjjrjksja35o623k0brldgpg2d4et3465lnzuvrtgwpfgipqnyczzrryg61znigqztlibnv89tttzu0dlni2en2ghxithojj802v2jmlq 00:09:32.801 15:13:41 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:09:32.801 [2024-04-24 15:13:42.014087] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:32.801 [2024-04-24 15:13:42.014195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64182 ] 00:09:33.059 [2024-04-24 15:13:42.158752] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.059 [2024-04-24 15:13:42.285660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.263  Copying: 511/511 [MB] (average 1312 MBps) 00:09:34.264 00:09:34.264 15:13:43 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:09:34.264 15:13:43 -- dd/uring.sh@54 -- # gen_conf 00:09:34.264 15:13:43 -- dd/common.sh@31 -- # xtrace_disable 00:09:34.264 15:13:43 -- common/autotest_common.sh@10 -- # set +x 00:09:34.522 [2024-04-24 15:13:43.509714] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:34.522 [2024-04-24 15:13:43.509799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64198 ] 00:09:34.522 { 00:09:34.522 "subsystems": [ 00:09:34.522 { 00:09:34.522 "subsystem": "bdev", 00:09:34.522 "config": [ 00:09:34.522 { 00:09:34.522 "params": { 00:09:34.522 "block_size": 512, 00:09:34.522 "num_blocks": 1048576, 00:09:34.522 "name": "malloc0" 00:09:34.522 }, 00:09:34.522 "method": "bdev_malloc_create" 00:09:34.522 }, 00:09:34.522 { 00:09:34.522 "params": { 00:09:34.522 "filename": "/dev/zram1", 00:09:34.522 "name": "uring0" 00:09:34.522 }, 00:09:34.522 "method": "bdev_uring_create" 00:09:34.522 }, 00:09:34.522 { 00:09:34.522 "method": "bdev_wait_for_examine" 00:09:34.522 } 00:09:34.522 ] 00:09:34.522 } 00:09:34.522 ] 00:09:34.522 } 00:09:34.522 [2024-04-24 15:13:43.644381] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.780 [2024-04-24 15:13:43.776539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.917  Copying: 213/512 [MB] (213 MBps) Copying: 426/512 [MB] (213 MBps) Copying: 512/512 [MB] (average 213 MBps) 00:09:37.917 00:09:37.917 15:13:46 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:37.917 15:13:46 -- dd/uring.sh@60 -- # gen_conf 00:09:37.917 15:13:46 -- dd/common.sh@31 -- # xtrace_disable 00:09:37.917 15:13:46 -- common/autotest_common.sh@10 -- # set +x 00:09:37.917 [2024-04-24 15:13:46.986882] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:37.917 [2024-04-24 15:13:46.987009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64246 ] 00:09:37.917 { 00:09:37.917 "subsystems": [ 00:09:37.917 { 00:09:37.917 "subsystem": "bdev", 00:09:37.917 "config": [ 00:09:37.917 { 00:09:37.917 "params": { 00:09:37.917 "block_size": 512, 00:09:37.917 "num_blocks": 1048576, 00:09:37.917 "name": "malloc0" 00:09:37.917 }, 00:09:37.917 "method": "bdev_malloc_create" 00:09:37.917 }, 00:09:37.917 { 00:09:37.917 "params": { 00:09:37.917 "filename": "/dev/zram1", 00:09:37.917 "name": "uring0" 00:09:37.917 }, 00:09:37.917 "method": "bdev_uring_create" 00:09:37.917 }, 00:09:37.917 { 00:09:37.917 "method": "bdev_wait_for_examine" 00:09:37.917 } 00:09:37.917 ] 00:09:37.917 } 00:09:37.917 ] 00:09:37.917 } 00:09:37.917 [2024-04-24 15:13:47.122060] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.175 [2024-04-24 15:13:47.247471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.709  Copying: 179/512 [MB] (179 MBps) Copying: 348/512 [MB] (169 MBps) Copying: 512/512 [MB] (average 174 MBps) 00:09:41.709 00:09:41.709 15:13:50 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:41.709 15:13:50 -- dd/uring.sh@66 -- # [[ 
9lmy8h5dtunahzgx9jl1dprkwk59immheb23mxvv4hmpivwq8xxxvmtx51y5vosassgzdymw9tuozwq3rgnl8hgen56ozym2605z1405wevv3li43pk7spdslbvy6iz86qwoo70b5fpemsgl7j5u66jy6zds9a01ff53mwwmlq8fenkxhl0xb9zbn3fqf4e8hkbc9ocjgau6tynkfzjj9z6ubb427hrtur30l9ut0van7crwllrouir9m3ylzz3xfq0ufnsj1ijh641v2bv5a5ujcjogzqtyigpte1gfsn1gpqyaebizp7epix3ia28ddpb6c9b3jahpjs3l460r616s1io05xlfqku78008us0hahdtciwlpts36ovz89nrbupm2axyvubspfi5fybcx9i5aqj97jcn615owsrt1r4ol45ych4ggebxokbgcnx3bb52aq98jpnrpexri9vr8yyz9cavm0o8hp5rcibt0q0r5sxut03msz51jn7bd0tnlal7ptadf3zprwbodd72z94g45vb3ecas230fk8yke50rj61x13amtr10edd6iavp2ykatuhmy99f4l8qbysagrghlqfivz1m0hqjonsjrvj8ms991gxpwjscgw4jb4t2qhbfnixd3junft8gjyiqospbt9oubg1k6w5wjgm6i8evbzy42xarcf44wuab0tw169f49pyo36q2w49u984dzqzvt0de7gsjzxnz5ecgchnvuv9z16fjcrovu3ma58q66xvldtz10fon2d0xoetyiwo794865fwa25798kfxxu2jnd4puzdozcakugich1hycv1m2qwhmdg4b5hdysbwcpbltbbk69d99677ape8c99edu8jq6vulzd1ony7z8w4dtej4lb8zm6rpnm0ux7hf2hjjrjksja35o623k0brldgpg2d4et3465lnzuvrtgwpfgipqnyczzrryg61znigqztlibnv89tttzu0dlni2en2ghxithojj802v2jmlq == \9\l\m\y\8\h\5\d\t\u\n\a\h\z\g\x\9\j\l\1\d\p\r\k\w\k\5\9\i\m\m\h\e\b\2\3\m\x\v\v\4\h\m\p\i\v\w\q\8\x\x\x\v\m\t\x\5\1\y\5\v\o\s\a\s\s\g\z\d\y\m\w\9\t\u\o\z\w\q\3\r\g\n\l\8\h\g\e\n\5\6\o\z\y\m\2\6\0\5\z\1\4\0\5\w\e\v\v\3\l\i\4\3\p\k\7\s\p\d\s\l\b\v\y\6\i\z\8\6\q\w\o\o\7\0\b\5\f\p\e\m\s\g\l\7\j\5\u\6\6\j\y\6\z\d\s\9\a\0\1\f\f\5\3\m\w\w\m\l\q\8\f\e\n\k\x\h\l\0\x\b\9\z\b\n\3\f\q\f\4\e\8\h\k\b\c\9\o\c\j\g\a\u\6\t\y\n\k\f\z\j\j\9\z\6\u\b\b\4\2\7\h\r\t\u\r\3\0\l\9\u\t\0\v\a\n\7\c\r\w\l\l\r\o\u\i\r\9\m\3\y\l\z\z\3\x\f\q\0\u\f\n\s\j\1\i\j\h\6\4\1\v\2\b\v\5\a\5\u\j\c\j\o\g\z\q\t\y\i\g\p\t\e\1\g\f\s\n\1\g\p\q\y\a\e\b\i\z\p\7\e\p\i\x\3\i\a\2\8\d\d\p\b\6\c\9\b\3\j\a\h\p\j\s\3\l\4\6\0\r\6\1\6\s\1\i\o\0\5\x\l\f\q\k\u\7\8\0\0\8\u\s\0\h\a\h\d\t\c\i\w\l\p\t\s\3\6\o\v\z\8\9\n\r\b\u\p\m\2\a\x\y\v\u\b\s\p\f\i\5\f\y\b\c\x\9\i\5\a\q\j\9\7\j\c\n\6\1\5\o\w\s\r\t\1\r\4\o\l\4\5\y\c\h\4\g\g\e\b\x\o\k\b\g\c\n\x\3\b\b\5\2\a\q\9\8\j\p\n\r\p\e\x\r\i\9\v\r\8\y\y\z\9\c\a\v\m\0\o\8\h\p\5\r\c\i\b\t\0\q\0\r\5\s\x\u\t\0\3\m\s\z\5\1\j\n\7\b\d\0\t\n\l\a\l\7\p\t\a\d\f\3\z\p\r\w\b\o\d\d\7\2\z\9\4\g\4\5\v\b\3\e\c\a\s\2\3\0\f\k\8\y\k\e\5\0\r\j\6\1\x\1\3\a\m\t\r\1\0\e\d\d\6\i\a\v\p\2\y\k\a\t\u\h\m\y\9\9\f\4\l\8\q\b\y\s\a\g\r\g\h\l\q\f\i\v\z\1\m\0\h\q\j\o\n\s\j\r\v\j\8\m\s\9\9\1\g\x\p\w\j\s\c\g\w\4\j\b\4\t\2\q\h\b\f\n\i\x\d\3\j\u\n\f\t\8\g\j\y\i\q\o\s\p\b\t\9\o\u\b\g\1\k\6\w\5\w\j\g\m\6\i\8\e\v\b\z\y\4\2\x\a\r\c\f\4\4\w\u\a\b\0\t\w\1\6\9\f\4\9\p\y\o\3\6\q\2\w\4\9\u\9\8\4\d\z\q\z\v\t\0\d\e\7\g\s\j\z\x\n\z\5\e\c\g\c\h\n\v\u\v\9\z\1\6\f\j\c\r\o\v\u\3\m\a\5\8\q\6\6\x\v\l\d\t\z\1\0\f\o\n\2\d\0\x\o\e\t\y\i\w\o\7\9\4\8\6\5\f\w\a\2\5\7\9\8\k\f\x\x\u\2\j\n\d\4\p\u\z\d\o\z\c\a\k\u\g\i\c\h\1\h\y\c\v\1\m\2\q\w\h\m\d\g\4\b\5\h\d\y\s\b\w\c\p\b\l\t\b\b\k\6\9\d\9\9\6\7\7\a\p\e\8\c\9\9\e\d\u\8\j\q\6\v\u\l\z\d\1\o\n\y\7\z\8\w\4\d\t\e\j\4\l\b\8\z\m\6\r\p\n\m\0\u\x\7\h\f\2\h\j\j\r\j\k\s\j\a\3\5\o\6\2\3\k\0\b\r\l\d\g\p\g\2\d\4\e\t\3\4\6\5\l\n\z\u\v\r\t\g\w\p\f\g\i\p\q\n\y\c\z\z\r\r\y\g\6\1\z\n\i\g\q\z\t\l\i\b\n\v\8\9\t\t\t\z\u\0\d\l\n\i\2\e\n\2\g\h\x\i\t\h\o\j\j\8\0\2\v\2\j\m\l\q ]] 00:09:41.709 15:13:50 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:41.710 15:13:50 -- dd/uring.sh@69 -- # [[ 
9lmy8h5dtunahzgx9jl1dprkwk59immheb23mxvv4hmpivwq8xxxvmtx51y5vosassgzdymw9tuozwq3rgnl8hgen56ozym2605z1405wevv3li43pk7spdslbvy6iz86qwoo70b5fpemsgl7j5u66jy6zds9a01ff53mwwmlq8fenkxhl0xb9zbn3fqf4e8hkbc9ocjgau6tynkfzjj9z6ubb427hrtur30l9ut0van7crwllrouir9m3ylzz3xfq0ufnsj1ijh641v2bv5a5ujcjogzqtyigpte1gfsn1gpqyaebizp7epix3ia28ddpb6c9b3jahpjs3l460r616s1io05xlfqku78008us0hahdtciwlpts36ovz89nrbupm2axyvubspfi5fybcx9i5aqj97jcn615owsrt1r4ol45ych4ggebxokbgcnx3bb52aq98jpnrpexri9vr8yyz9cavm0o8hp5rcibt0q0r5sxut03msz51jn7bd0tnlal7ptadf3zprwbodd72z94g45vb3ecas230fk8yke50rj61x13amtr10edd6iavp2ykatuhmy99f4l8qbysagrghlqfivz1m0hqjonsjrvj8ms991gxpwjscgw4jb4t2qhbfnixd3junft8gjyiqospbt9oubg1k6w5wjgm6i8evbzy42xarcf44wuab0tw169f49pyo36q2w49u984dzqzvt0de7gsjzxnz5ecgchnvuv9z16fjcrovu3ma58q66xvldtz10fon2d0xoetyiwo794865fwa25798kfxxu2jnd4puzdozcakugich1hycv1m2qwhmdg4b5hdysbwcpbltbbk69d99677ape8c99edu8jq6vulzd1ony7z8w4dtej4lb8zm6rpnm0ux7hf2hjjrjksja35o623k0brldgpg2d4et3465lnzuvrtgwpfgipqnyczzrryg61znigqztlibnv89tttzu0dlni2en2ghxithojj802v2jmlq == \9\l\m\y\8\h\5\d\t\u\n\a\h\z\g\x\9\j\l\1\d\p\r\k\w\k\5\9\i\m\m\h\e\b\2\3\m\x\v\v\4\h\m\p\i\v\w\q\8\x\x\x\v\m\t\x\5\1\y\5\v\o\s\a\s\s\g\z\d\y\m\w\9\t\u\o\z\w\q\3\r\g\n\l\8\h\g\e\n\5\6\o\z\y\m\2\6\0\5\z\1\4\0\5\w\e\v\v\3\l\i\4\3\p\k\7\s\p\d\s\l\b\v\y\6\i\z\8\6\q\w\o\o\7\0\b\5\f\p\e\m\s\g\l\7\j\5\u\6\6\j\y\6\z\d\s\9\a\0\1\f\f\5\3\m\w\w\m\l\q\8\f\e\n\k\x\h\l\0\x\b\9\z\b\n\3\f\q\f\4\e\8\h\k\b\c\9\o\c\j\g\a\u\6\t\y\n\k\f\z\j\j\9\z\6\u\b\b\4\2\7\h\r\t\u\r\3\0\l\9\u\t\0\v\a\n\7\c\r\w\l\l\r\o\u\i\r\9\m\3\y\l\z\z\3\x\f\q\0\u\f\n\s\j\1\i\j\h\6\4\1\v\2\b\v\5\a\5\u\j\c\j\o\g\z\q\t\y\i\g\p\t\e\1\g\f\s\n\1\g\p\q\y\a\e\b\i\z\p\7\e\p\i\x\3\i\a\2\8\d\d\p\b\6\c\9\b\3\j\a\h\p\j\s\3\l\4\6\0\r\6\1\6\s\1\i\o\0\5\x\l\f\q\k\u\7\8\0\0\8\u\s\0\h\a\h\d\t\c\i\w\l\p\t\s\3\6\o\v\z\8\9\n\r\b\u\p\m\2\a\x\y\v\u\b\s\p\f\i\5\f\y\b\c\x\9\i\5\a\q\j\9\7\j\c\n\6\1\5\o\w\s\r\t\1\r\4\o\l\4\5\y\c\h\4\g\g\e\b\x\o\k\b\g\c\n\x\3\b\b\5\2\a\q\9\8\j\p\n\r\p\e\x\r\i\9\v\r\8\y\y\z\9\c\a\v\m\0\o\8\h\p\5\r\c\i\b\t\0\q\0\r\5\s\x\u\t\0\3\m\s\z\5\1\j\n\7\b\d\0\t\n\l\a\l\7\p\t\a\d\f\3\z\p\r\w\b\o\d\d\7\2\z\9\4\g\4\5\v\b\3\e\c\a\s\2\3\0\f\k\8\y\k\e\5\0\r\j\6\1\x\1\3\a\m\t\r\1\0\e\d\d\6\i\a\v\p\2\y\k\a\t\u\h\m\y\9\9\f\4\l\8\q\b\y\s\a\g\r\g\h\l\q\f\i\v\z\1\m\0\h\q\j\o\n\s\j\r\v\j\8\m\s\9\9\1\g\x\p\w\j\s\c\g\w\4\j\b\4\t\2\q\h\b\f\n\i\x\d\3\j\u\n\f\t\8\g\j\y\i\q\o\s\p\b\t\9\o\u\b\g\1\k\6\w\5\w\j\g\m\6\i\8\e\v\b\z\y\4\2\x\a\r\c\f\4\4\w\u\a\b\0\t\w\1\6\9\f\4\9\p\y\o\3\6\q\2\w\4\9\u\9\8\4\d\z\q\z\v\t\0\d\e\7\g\s\j\z\x\n\z\5\e\c\g\c\h\n\v\u\v\9\z\1\6\f\j\c\r\o\v\u\3\m\a\5\8\q\6\6\x\v\l\d\t\z\1\0\f\o\n\2\d\0\x\o\e\t\y\i\w\o\7\9\4\8\6\5\f\w\a\2\5\7\9\8\k\f\x\x\u\2\j\n\d\4\p\u\z\d\o\z\c\a\k\u\g\i\c\h\1\h\y\c\v\1\m\2\q\w\h\m\d\g\4\b\5\h\d\y\s\b\w\c\p\b\l\t\b\b\k\6\9\d\9\9\6\7\7\a\p\e\8\c\9\9\e\d\u\8\j\q\6\v\u\l\z\d\1\o\n\y\7\z\8\w\4\d\t\e\j\4\l\b\8\z\m\6\r\p\n\m\0\u\x\7\h\f\2\h\j\j\r\j\k\s\j\a\3\5\o\6\2\3\k\0\b\r\l\d\g\p\g\2\d\4\e\t\3\4\6\5\l\n\z\u\v\r\t\g\w\p\f\g\i\p\q\n\y\c\z\z\r\r\y\g\6\1\z\n\i\g\q\z\t\l\i\b\n\v\8\9\t\t\t\z\u\0\d\l\n\i\2\e\n\2\g\h\x\i\t\h\o\j\j\8\0\2\v\2\j\m\l\q ]] 00:09:41.710 15:13:50 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:42.278 15:13:51 -- dd/uring.sh@75 -- # gen_conf 00:09:42.278 15:13:51 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:42.278 15:13:51 -- dd/common.sh@31 -- # xtrace_disable 00:09:42.278 15:13:51 -- common/autotest_common.sh@10 -- # set +x 
00:09:42.278 [2024-04-24 15:13:51.393833] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:42.278 [2024-04-24 15:13:51.393935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64327 ] 00:09:42.278 { 00:09:42.278 "subsystems": [ 00:09:42.278 { 00:09:42.278 "subsystem": "bdev", 00:09:42.278 "config": [ 00:09:42.278 { 00:09:42.278 "params": { 00:09:42.278 "block_size": 512, 00:09:42.278 "num_blocks": 1048576, 00:09:42.278 "name": "malloc0" 00:09:42.278 }, 00:09:42.278 "method": "bdev_malloc_create" 00:09:42.278 }, 00:09:42.278 { 00:09:42.278 "params": { 00:09:42.278 "filename": "/dev/zram1", 00:09:42.278 "name": "uring0" 00:09:42.278 }, 00:09:42.278 "method": "bdev_uring_create" 00:09:42.278 }, 00:09:42.278 { 00:09:42.278 "method": "bdev_wait_for_examine" 00:09:42.278 } 00:09:42.278 ] 00:09:42.278 } 00:09:42.278 ] 00:09:42.278 } 00:09:42.537 [2024-04-24 15:13:51.532009] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.537 [2024-04-24 15:13:51.664393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.957  Copying: 144/512 [MB] (144 MBps) Copying: 290/512 [MB] (145 MBps) Copying: 434/512 [MB] (144 MBps) Copying: 512/512 [MB] (average 145 MBps) 00:09:46.957 00:09:46.957 15:13:55 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:46.957 15:13:55 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:46.957 15:13:55 -- dd/uring.sh@87 -- # : 00:09:46.957 15:13:55 -- dd/uring.sh@87 -- # : 00:09:46.957 15:13:55 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:46.957 15:13:55 -- dd/uring.sh@87 -- # gen_conf 00:09:46.957 15:13:55 -- dd/common.sh@31 -- # xtrace_disable 00:09:46.957 15:13:55 -- common/autotest_common.sh@10 -- # set +x 00:09:46.957 [2024-04-24 15:13:55.968596] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:46.957 [2024-04-24 15:13:55.968702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64388 ] 00:09:46.957 { 00:09:46.957 "subsystems": [ 00:09:46.957 { 00:09:46.957 "subsystem": "bdev", 00:09:46.957 "config": [ 00:09:46.957 { 00:09:46.957 "params": { 00:09:46.957 "block_size": 512, 00:09:46.957 "num_blocks": 1048576, 00:09:46.957 "name": "malloc0" 00:09:46.957 }, 00:09:46.957 "method": "bdev_malloc_create" 00:09:46.957 }, 00:09:46.957 { 00:09:46.957 "params": { 00:09:46.957 "filename": "/dev/zram1", 00:09:46.957 "name": "uring0" 00:09:46.957 }, 00:09:46.957 "method": "bdev_uring_create" 00:09:46.957 }, 00:09:46.957 { 00:09:46.957 "params": { 00:09:46.957 "name": "uring0" 00:09:46.957 }, 00:09:46.957 "method": "bdev_uring_delete" 00:09:46.957 }, 00:09:46.957 { 00:09:46.957 "method": "bdev_wait_for_examine" 00:09:46.957 } 00:09:46.957 ] 00:09:46.957 } 00:09:46.957 ] 00:09:46.957 } 00:09:46.957 [2024-04-24 15:13:56.101618] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.216 [2024-04-24 15:13:56.229798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.735  Copying: 0/0 [B] (average 0 Bps) 00:09:47.735 00:09:47.735 15:13:56 -- dd/uring.sh@94 -- # : 00:09:47.735 15:13:56 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:47.735 15:13:56 -- dd/uring.sh@94 -- # gen_conf 00:09:47.735 15:13:56 -- common/autotest_common.sh@638 -- # local es=0 00:09:47.735 15:13:56 -- dd/common.sh@31 -- # xtrace_disable 00:09:47.735 15:13:56 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:47.735 15:13:56 -- common/autotest_common.sh@10 -- # set +x 00:09:47.735 15:13:56 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.735 15:13:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:47.735 15:13:56 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.735 15:13:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:47.735 15:13:56 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.735 15:13:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:47.735 15:13:56 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.735 15:13:56 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:47.735 15:13:56 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:47.994 [2024-04-24 15:13:57.018062] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:47.994 [2024-04-24 15:13:57.018166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64417 ] 00:09:47.994 { 00:09:47.994 "subsystems": [ 00:09:47.994 { 00:09:47.994 "subsystem": "bdev", 00:09:47.994 "config": [ 00:09:47.994 { 00:09:47.994 "params": { 00:09:47.994 "block_size": 512, 00:09:47.994 "num_blocks": 1048576, 00:09:47.994 "name": "malloc0" 00:09:47.994 }, 00:09:47.994 "method": "bdev_malloc_create" 00:09:47.994 }, 00:09:47.994 { 00:09:47.994 "params": { 00:09:47.994 "filename": "/dev/zram1", 00:09:47.994 "name": "uring0" 00:09:47.994 }, 00:09:47.994 "method": "bdev_uring_create" 00:09:47.994 }, 00:09:47.994 { 00:09:47.994 "params": { 00:09:47.994 "name": "uring0" 00:09:47.994 }, 00:09:47.994 "method": "bdev_uring_delete" 00:09:47.994 }, 00:09:47.994 { 00:09:47.994 "method": "bdev_wait_for_examine" 00:09:47.994 } 00:09:47.995 ] 00:09:47.995 } 00:09:47.995 ] 00:09:47.995 } 00:09:47.995 [2024-04-24 15:13:57.160501] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.254 [2024-04-24 15:13:57.282475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.512 [2024-04-24 15:13:57.558339] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:48.512 [2024-04-24 15:13:57.558425] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:48.512 [2024-04-24 15:13:57.558452] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:09:48.512 [2024-04-24 15:13:57.558463] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:48.771 [2024-04-24 15:13:57.894143] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:49.030 15:13:58 -- common/autotest_common.sh@641 -- # es=237 00:09:49.030 15:13:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:49.030 15:13:58 -- common/autotest_common.sh@650 -- # es=109 00:09:49.030 15:13:58 -- common/autotest_common.sh@651 -- # case "$es" in 00:09:49.030 15:13:58 -- common/autotest_common.sh@658 -- # es=1 00:09:49.030 15:13:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:49.030 15:13:58 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:49.030 15:13:58 -- dd/common.sh@172 -- # local id=1 00:09:49.030 15:13:58 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:09:49.030 15:13:58 -- dd/common.sh@176 -- # echo 1 00:09:49.030 15:13:58 -- dd/common.sh@177 -- # echo 1 00:09:49.030 15:13:58 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:49.289 00:09:49.289 real 0m16.348s 00:09:49.289 user 0m11.124s 00:09:49.289 sys 0m12.949s 00:09:49.289 15:13:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:49.289 15:13:58 -- common/autotest_common.sh@10 -- # set +x 00:09:49.289 ************************************ 00:09:49.289 END TEST dd_uring_copy 00:09:49.289 ************************************ 00:09:49.289 00:09:49.289 real 0m16.562s 00:09:49.289 user 0m11.206s 00:09:49.289 sys 0m13.068s 00:09:49.289 15:13:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:49.289 15:13:58 -- common/autotest_common.sh@10 -- # set +x 00:09:49.289 ************************************ 00:09:49.289 END TEST spdk_dd_uring 00:09:49.289 ************************************ 00:09:49.289 15:13:58 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:49.289 15:13:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:49.289 15:13:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:49.289 15:13:58 -- common/autotest_common.sh@10 -- # set +x 00:09:49.289 ************************************ 00:09:49.289 START TEST spdk_dd_sparse 00:09:49.289 ************************************ 00:09:49.289 15:13:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:49.289 * Looking for test storage... 00:09:49.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:49.289 15:13:58 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:49.289 15:13:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.289 15:13:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.289 15:13:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.289 15:13:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.289 15:13:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.289 15:13:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.289 15:13:58 -- paths/export.sh@5 -- # export PATH 00:09:49.289 15:13:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.549 15:13:58 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:49.549 15:13:58 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:49.549 15:13:58 -- dd/sparse.sh@110 -- # 
file1=file_zero1 00:09:49.549 15:13:58 -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:49.549 15:13:58 -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:49.549 15:13:58 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:49.549 15:13:58 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:49.549 15:13:58 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:49.549 15:13:58 -- dd/sparse.sh@118 -- # prepare 00:09:49.549 15:13:58 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:49.549 15:13:58 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:49.549 1+0 records in 00:09:49.549 1+0 records out 00:09:49.549 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00605396 s, 693 MB/s 00:09:49.549 15:13:58 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:49.549 1+0 records in 00:09:49.549 1+0 records out 00:09:49.549 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00665171 s, 631 MB/s 00:09:49.549 15:13:58 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:49.549 1+0 records in 00:09:49.549 1+0 records out 00:09:49.549 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00505559 s, 830 MB/s 00:09:49.549 15:13:58 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:49.549 15:13:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:49.549 15:13:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:49.549 15:13:58 -- common/autotest_common.sh@10 -- # set +x 00:09:49.549 ************************************ 00:09:49.549 START TEST dd_sparse_file_to_file 00:09:49.549 ************************************ 00:09:49.549 15:13:58 -- common/autotest_common.sh@1111 -- # file_to_file 00:09:49.549 15:13:58 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:49.549 15:13:58 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:49.549 15:13:58 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:49.549 15:13:58 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:49.549 15:13:58 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:49.549 15:13:58 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:49.549 15:13:58 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:49.549 15:13:58 -- dd/sparse.sh@41 -- # gen_conf 00:09:49.549 15:13:58 -- dd/common.sh@31 -- # xtrace_disable 00:09:49.549 15:13:58 -- common/autotest_common.sh@10 -- # set +x 00:09:49.549 [2024-04-24 15:13:58.700674] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:49.549 [2024-04-24 15:13:58.700777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64520 ] 00:09:49.549 { 00:09:49.549 "subsystems": [ 00:09:49.549 { 00:09:49.549 "subsystem": "bdev", 00:09:49.549 "config": [ 00:09:49.549 { 00:09:49.549 "params": { 00:09:49.549 "block_size": 4096, 00:09:49.549 "filename": "dd_sparse_aio_disk", 00:09:49.549 "name": "dd_aio" 00:09:49.549 }, 00:09:49.549 "method": "bdev_aio_create" 00:09:49.549 }, 00:09:49.549 { 00:09:49.549 "params": { 00:09:49.549 "lvs_name": "dd_lvstore", 00:09:49.549 "bdev_name": "dd_aio" 00:09:49.549 }, 00:09:49.550 "method": "bdev_lvol_create_lvstore" 00:09:49.550 }, 00:09:49.550 { 00:09:49.550 "method": "bdev_wait_for_examine" 00:09:49.550 } 00:09:49.550 ] 00:09:49.550 } 00:09:49.550 ] 00:09:49.550 } 00:09:49.809 [2024-04-24 15:13:58.842604] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.809 [2024-04-24 15:13:58.962452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.326  Copying: 12/36 [MB] (average 1000 MBps) 00:09:50.326 00:09:50.326 15:13:59 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:50.326 15:13:59 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:50.326 15:13:59 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:50.326 15:13:59 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:50.326 15:13:59 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:50.326 15:13:59 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:50.326 15:13:59 -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:50.326 15:13:59 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:50.326 15:13:59 -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:50.326 15:13:59 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:50.326 00:09:50.326 real 0m0.775s 00:09:50.326 user 0m0.511s 00:09:50.326 sys 0m0.362s 00:09:50.326 15:13:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:50.326 15:13:59 -- common/autotest_common.sh@10 -- # set +x 00:09:50.326 ************************************ 00:09:50.326 END TEST dd_sparse_file_to_file 00:09:50.326 ************************************ 00:09:50.326 15:13:59 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:09:50.326 15:13:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:50.326 15:13:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:50.326 15:13:59 -- common/autotest_common.sh@10 -- # set +x 00:09:50.326 ************************************ 00:09:50.326 START TEST dd_sparse_file_to_bdev 00:09:50.326 ************************************ 00:09:50.326 15:13:59 -- common/autotest_common.sh@1111 -- # file_to_bdev 00:09:50.326 15:13:59 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:50.326 15:13:59 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:50.326 15:13:59 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:09:50.326 15:13:59 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:50.326 15:13:59 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:50.326 15:13:59 -- dd/sparse.sh@73 -- # gen_conf 
00:09:50.326 15:13:59 -- dd/common.sh@31 -- # xtrace_disable 00:09:50.326 15:13:59 -- common/autotest_common.sh@10 -- # set +x 00:09:50.585 [2024-04-24 15:13:59.596614] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:50.585 [2024-04-24 15:13:59.596711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64572 ] 00:09:50.585 { 00:09:50.585 "subsystems": [ 00:09:50.585 { 00:09:50.585 "subsystem": "bdev", 00:09:50.585 "config": [ 00:09:50.585 { 00:09:50.585 "params": { 00:09:50.585 "block_size": 4096, 00:09:50.585 "filename": "dd_sparse_aio_disk", 00:09:50.585 "name": "dd_aio" 00:09:50.585 }, 00:09:50.585 "method": "bdev_aio_create" 00:09:50.585 }, 00:09:50.585 { 00:09:50.585 "params": { 00:09:50.585 "lvs_name": "dd_lvstore", 00:09:50.585 "lvol_name": "dd_lvol", 00:09:50.585 "size": 37748736, 00:09:50.585 "thin_provision": true 00:09:50.585 }, 00:09:50.585 "method": "bdev_lvol_create" 00:09:50.585 }, 00:09:50.585 { 00:09:50.585 "method": "bdev_wait_for_examine" 00:09:50.585 } 00:09:50.585 ] 00:09:50.585 } 00:09:50.585 ] 00:09:50.585 } 00:09:50.585 [2024-04-24 15:13:59.731370] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.845 [2024-04-24 15:13:59.853668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.845 [2024-04-24 15:13:59.964824] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:09:50.845  Copying: 12/36 [MB] (average 480 MBps)[2024-04-24 15:14:00.009314] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:09:51.103 00:09:51.103 00:09:51.103 00:09:51.103 real 0m0.751s 00:09:51.103 user 0m0.503s 00:09:51.103 sys 0m0.360s 00:09:51.103 15:14:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:51.103 15:14:00 -- common/autotest_common.sh@10 -- # set +x 00:09:51.103 ************************************ 00:09:51.103 END TEST dd_sparse_file_to_bdev 00:09:51.103 ************************************ 00:09:51.103 15:14:00 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:09:51.103 15:14:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:51.104 15:14:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.104 15:14:00 -- common/autotest_common.sh@10 -- # set +x 00:09:51.362 ************************************ 00:09:51.362 START TEST dd_sparse_bdev_to_file 00:09:51.362 ************************************ 00:09:51.362 15:14:00 -- common/autotest_common.sh@1111 -- # bdev_to_file 00:09:51.362 15:14:00 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:09:51.362 15:14:00 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:51.362 15:14:00 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:51.362 15:14:00 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:51.362 15:14:00 -- dd/sparse.sh@91 -- # gen_conf 00:09:51.362 15:14:00 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:51.362 15:14:00 -- dd/common.sh@31 -- # xtrace_disable 00:09:51.362 15:14:00 -- 
common/autotest_common.sh@10 -- # set +x 00:09:51.362 [2024-04-24 15:14:00.459668] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:51.362 [2024-04-24 15:14:00.459773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64614 ] 00:09:51.362 { 00:09:51.362 "subsystems": [ 00:09:51.362 { 00:09:51.362 "subsystem": "bdev", 00:09:51.362 "config": [ 00:09:51.362 { 00:09:51.362 "params": { 00:09:51.362 "block_size": 4096, 00:09:51.362 "filename": "dd_sparse_aio_disk", 00:09:51.362 "name": "dd_aio" 00:09:51.362 }, 00:09:51.362 "method": "bdev_aio_create" 00:09:51.362 }, 00:09:51.362 { 00:09:51.362 "method": "bdev_wait_for_examine" 00:09:51.362 } 00:09:51.362 ] 00:09:51.362 } 00:09:51.362 ] 00:09:51.362 } 00:09:51.362 [2024-04-24 15:14:00.598776] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.620 [2024-04-24 15:14:00.727679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.152  Copying: 12/36 [MB] (average 1090 MBps) 00:09:52.152 00:09:52.152 15:14:01 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:52.152 15:14:01 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:52.152 15:14:01 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:52.152 15:14:01 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:52.152 15:14:01 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:52.152 15:14:01 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:52.152 15:14:01 -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:52.152 15:14:01 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:52.152 15:14:01 -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:52.152 15:14:01 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:52.152 00:09:52.152 real 0m0.782s 00:09:52.152 user 0m0.514s 00:09:52.152 sys 0m0.374s 00:09:52.152 15:14:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:52.152 15:14:01 -- common/autotest_common.sh@10 -- # set +x 00:09:52.152 ************************************ 00:09:52.152 END TEST dd_sparse_bdev_to_file 00:09:52.152 ************************************ 00:09:52.152 15:14:01 -- dd/sparse.sh@1 -- # cleanup 00:09:52.152 15:14:01 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:52.152 15:14:01 -- dd/sparse.sh@12 -- # rm file_zero1 00:09:52.152 15:14:01 -- dd/sparse.sh@13 -- # rm file_zero2 00:09:52.152 15:14:01 -- dd/sparse.sh@14 -- # rm file_zero3 00:09:52.152 00:09:52.152 real 0m2.812s 00:09:52.152 user 0m1.704s 00:09:52.152 sys 0m1.376s 00:09:52.152 15:14:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:52.152 ************************************ 00:09:52.152 END TEST spdk_dd_sparse 00:09:52.152 15:14:01 -- common/autotest_common.sh@10 -- # set +x 00:09:52.152 ************************************ 00:09:52.152 15:14:01 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:52.152 15:14:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:52.152 15:14:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.152 15:14:01 -- common/autotest_common.sh@10 -- # set +x 00:09:52.152 ************************************ 00:09:52.152 START TEST spdk_dd_negative 00:09:52.152 ************************************ 00:09:52.152 15:14:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 
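All three sparse tests hand spdk_dd its bdev configuration as --json /dev/fd/62: gen_conf serializes the method_bdev_* arrays into the JSON shown above and the harness exposes it on a spare descriptor, so no config file ever touches disk. Note that the bdev_to_file config only creates dd_aio and waits for examine; the dd_lvstore/dd_lvol written in the earlier steps is rediscovered from the AIO disk when it is examined. A sketch of replaying that last step by hand, assuming bash, a locally built spdk_dd run from the repository root, and an environment where the earlier steps already created dd_sparse_aio_disk; process substitution stands in for the /dev/fd/62 redirection the harness sets up:

    # same bdev config as printed in the log, kept in a shell variable instead of fd 62
    cfg='{ "subsystems": [ { "subsystem": "bdev", "config": [
            { "method": "bdev_aio_create",
              "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 } },
            { "method": "bdev_wait_for_examine" } ] } ] }'
    ./build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse \
        --json <(printf '%s' "$cfg")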
00:09:52.409 * Looking for test storage... 00:09:52.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:52.409 15:14:01 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:52.409 15:14:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.409 15:14:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.409 15:14:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.410 15:14:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.410 15:14:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.410 15:14:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.410 15:14:01 -- paths/export.sh@5 -- # export PATH 00:09:52.410 15:14:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.410 15:14:01 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:52.410 15:14:01 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:52.410 15:14:01 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:52.410 15:14:01 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:52.410 15:14:01 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:09:52.410 15:14:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:52.410 15:14:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.410 15:14:01 -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.410 ************************************ 00:09:52.410 START TEST dd_invalid_arguments 00:09:52.410 ************************************ 00:09:52.410 15:14:01 -- common/autotest_common.sh@1111 -- # invalid_arguments 00:09:52.410 15:14:01 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:52.410 15:14:01 -- common/autotest_common.sh@638 -- # local es=0 00:09:52.410 15:14:01 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:52.410 15:14:01 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.410 15:14:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:52.410 15:14:01 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.410 15:14:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:52.410 15:14:01 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.410 15:14:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:52.410 15:14:01 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.410 15:14:01 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:52.410 15:14:01 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:52.410 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:52.410 00:09:52.410 CPU options: 00:09:52.410 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:52.410 (like [0,1,10]) 00:09:52.410 --lcores lcore to CPU mapping list. The list is in the format: 00:09:52.410 [<,lcores[@CPUs]>...] 00:09:52.410 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:52.410 Within the group, '-' is used for range separator, 00:09:52.410 ',' is used for single number separator. 00:09:52.410 '( )' can be omitted for single element group, 00:09:52.410 '@' can be omitted if cpus and lcores have the same value 00:09:52.410 --disable-cpumask-locks Disable CPU core lock files. 00:09:52.410 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:52.410 pollers in the app support interrupt mode) 00:09:52.410 -p, --main-core main (primary) core for DPDK 00:09:52.410 00:09:52.410 Configuration options: 00:09:52.410 -c, --config, --json JSON config file 00:09:52.410 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:52.410 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:09:52.410 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:52.410 --rpcs-allowed comma-separated list of permitted RPCS 00:09:52.410 --json-ignore-init-errors don't exit on invalid config entry 00:09:52.410 00:09:52.410 Memory options: 00:09:52.410 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:52.410 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:52.410 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:52.410 -R, --huge-unlink unlink huge files after initialization 00:09:52.410 -n, --mem-channels number of memory channels used for DPDK 00:09:52.410 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:52.410 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:52.410 --no-huge run without using hugepages 00:09:52.410 -i, --shm-id shared memory ID (optional) 00:09:52.410 -g, --single-file-segments force creating just one hugetlbfs file 00:09:52.410 00:09:52.410 PCI options: 00:09:52.410 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:52.410 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:52.410 -u, --no-pci disable PCI access 00:09:52.410 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:52.410 00:09:52.410 Log options: 00:09:52.410 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:09:52.410 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:09:52.410 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:09:52.410 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:09:52.410 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:09:52.410 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:09:52.410 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:09:52.410 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:09:52.410 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:09:52.410 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:09:52.410 virtio_vfio_user, vmd) 00:09:52.410 --silence-noticelog disable notice level logging to stderr 00:09:52.410 00:09:52.410 Trace options: 00:09:52.410 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:52.410 setting 0 to disable trace (default 32768) 00:09:52.410 Tracepoints vary in size and can use more than one trace entry. 00:09:52.410 -e, --tpoint-group [:] 00:09:52.410 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:09:52.410 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:52.410 [2024-04-24 15:14:01.592299] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:09:52.410 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:09:52.410 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:52.410 a tracepoint group. First tpoint inside a group can be enabled by 00:09:52.410 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:52.410 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:09:52.410 in /include/spdk_internal/trace_defs.h 00:09:52.410 00:09:52.410 Other options: 00:09:52.410 -h, --help show this usage 00:09:52.410 -v, --version print SPDK version 00:09:52.410 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:52.410 --env-context Opaque context for use of the env implementation 00:09:52.410 00:09:52.410 Application specific: 00:09:52.410 [--------- DD Options ---------] 00:09:52.410 --if Input file. Must specify either --if or --ib. 00:09:52.410 --ib Input bdev. Must specifier either --if or --ib 00:09:52.410 --of Output file. Must specify either --of or --ob. 00:09:52.410 --ob Output bdev. Must specify either --of or --ob. 00:09:52.410 --iflag Input file flags. 00:09:52.410 --oflag Output file flags. 00:09:52.410 --bs I/O unit size (default: 4096) 00:09:52.410 --qd Queue depth (default: 2) 00:09:52.410 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:52.410 --skip Skip this many I/O units at start of input. (default: 0) 00:09:52.410 --seek Skip this many I/O units at start of output. (default: 0) 00:09:52.410 --aio Force usage of AIO. (by default io_uring is used if available) 00:09:52.410 --sparse Enable hole skipping in input target 00:09:52.410 Available iflag and oflag values: 00:09:52.410 append - append mode 00:09:52.410 direct - use direct I/O for data 00:09:52.410 directory - fail unless a directory 00:09:52.410 dsync - use synchronized I/O for data 00:09:52.410 noatime - do not update access time 00:09:52.410 noctty - do not assign controlling terminal from file 00:09:52.410 nofollow - do not follow symlinks 00:09:52.410 nonblock - use non-blocking I/O 00:09:52.410 sync - use synchronized I/O for data and metadata 00:09:52.410 15:14:01 -- common/autotest_common.sh@641 -- # es=2 00:09:52.410 15:14:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:52.410 15:14:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:52.410 15:14:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:52.411 00:09:52.411 real 0m0.063s 00:09:52.411 user 0m0.035s 00:09:52.411 sys 0m0.027s 00:09:52.411 15:14:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:52.411 15:14:01 -- common/autotest_common.sh@10 -- # set +x 00:09:52.411 ************************************ 00:09:52.411 END TEST dd_invalid_arguments 00:09:52.411 ************************************ 00:09:52.411 15:14:01 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:09:52.411 15:14:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:52.411 15:14:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.411 15:14:01 -- common/autotest_common.sh@10 -- # set +x 00:09:52.669 ************************************ 00:09:52.669 START TEST dd_double_input 00:09:52.669 ************************************ 00:09:52.669 15:14:01 -- common/autotest_common.sh@1111 -- # double_input 00:09:52.669 15:14:01 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:52.669 15:14:01 -- common/autotest_common.sh@638 -- # local es=0 00:09:52.669 15:14:01 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:52.669 15:14:01 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.669 15:14:01 -- common/autotest_common.sh@630 
-- # case "$(type -t "$arg")" in 00:09:52.669 15:14:01 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.669 15:14:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:52.669 15:14:01 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.669 15:14:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:52.669 15:14:01 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.669 15:14:01 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:52.669 15:14:01 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:52.669 [2024-04-24 15:14:01.775291] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:09:52.669 15:14:01 -- common/autotest_common.sh@641 -- # es=22 00:09:52.669 15:14:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:52.669 15:14:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:52.669 15:14:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:52.669 00:09:52.669 real 0m0.073s 00:09:52.669 user 0m0.054s 00:09:52.669 sys 0m0.018s 00:09:52.669 15:14:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:52.669 ************************************ 00:09:52.669 END TEST dd_double_input 00:09:52.669 15:14:01 -- common/autotest_common.sh@10 -- # set +x 00:09:52.669 ************************************ 00:09:52.669 15:14:01 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:09:52.669 15:14:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:52.669 15:14:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.669 15:14:01 -- common/autotest_common.sh@10 -- # set +x 00:09:52.927 ************************************ 00:09:52.927 START TEST dd_double_output 00:09:52.927 ************************************ 00:09:52.927 15:14:01 -- common/autotest_common.sh@1111 -- # double_output 00:09:52.927 15:14:01 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:52.927 15:14:01 -- common/autotest_common.sh@638 -- # local es=0 00:09:52.927 15:14:01 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:52.927 15:14:01 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.927 15:14:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:52.927 15:14:01 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.927 15:14:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:52.927 15:14:01 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.927 15:14:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:52.927 15:14:01 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.927 15:14:01 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:52.927 15:14:01 -- common/autotest_common.sh@641 -- 
# /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:52.927 [2024-04-24 15:14:01.975691] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:09:52.927 15:14:01 -- common/autotest_common.sh@641 -- # es=22 00:09:52.927 15:14:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:52.927 15:14:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:52.927 15:14:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:52.927 00:09:52.927 real 0m0.077s 00:09:52.927 user 0m0.039s 00:09:52.927 sys 0m0.036s 00:09:52.927 15:14:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:52.927 15:14:01 -- common/autotest_common.sh@10 -- # set +x 00:09:52.927 ************************************ 00:09:52.927 END TEST dd_double_output 00:09:52.927 ************************************ 00:09:52.927 15:14:02 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:09:52.927 15:14:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:52.927 15:14:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.927 15:14:02 -- common/autotest_common.sh@10 -- # set +x 00:09:52.927 ************************************ 00:09:52.927 START TEST dd_no_input 00:09:52.927 ************************************ 00:09:52.927 15:14:02 -- common/autotest_common.sh@1111 -- # no_input 00:09:52.927 15:14:02 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:52.927 15:14:02 -- common/autotest_common.sh@638 -- # local es=0 00:09:52.927 15:14:02 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:52.927 15:14:02 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.927 15:14:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:52.927 15:14:02 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.927 15:14:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:52.927 15:14:02 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.927 15:14:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:52.927 15:14:02 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.927 15:14:02 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:52.927 15:14:02 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:52.927 [2024-04-24 15:14:02.166564] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:09:53.323 15:14:02 -- common/autotest_common.sh@641 -- # es=22 00:09:53.323 15:14:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:53.323 15:14:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:53.323 15:14:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:53.323 00:09:53.323 real 0m0.074s 00:09:53.324 user 0m0.046s 00:09:53.324 sys 0m0.028s 00:09:53.324 15:14:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:53.324 15:14:02 -- common/autotest_common.sh@10 -- # set +x 00:09:53.324 ************************************ 00:09:53.324 END TEST dd_no_input 00:09:53.324 ************************************ 00:09:53.324 15:14:02 -- dd/negative_dd.sh@111 -- # run_test dd_no_output 
no_output 00:09:53.324 15:14:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:53.324 15:14:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:53.324 15:14:02 -- common/autotest_common.sh@10 -- # set +x 00:09:53.324 ************************************ 00:09:53.324 START TEST dd_no_output 00:09:53.324 ************************************ 00:09:53.324 15:14:02 -- common/autotest_common.sh@1111 -- # no_output 00:09:53.324 15:14:02 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:53.324 15:14:02 -- common/autotest_common.sh@638 -- # local es=0 00:09:53.324 15:14:02 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:53.324 15:14:02 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.324 15:14:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.324 15:14:02 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.324 15:14:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.324 15:14:02 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.324 15:14:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.324 15:14:02 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.324 15:14:02 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:53.324 15:14:02 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:53.324 [2024-04-24 15:14:02.358289] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:09:53.324 15:14:02 -- common/autotest_common.sh@641 -- # es=22 00:09:53.324 15:14:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:53.324 15:14:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:53.324 15:14:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:53.324 00:09:53.324 real 0m0.073s 00:09:53.324 user 0m0.044s 00:09:53.324 sys 0m0.028s 00:09:53.324 15:14:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:53.324 15:14:02 -- common/autotest_common.sh@10 -- # set +x 00:09:53.324 ************************************ 00:09:53.324 END TEST dd_no_output 00:09:53.324 ************************************ 00:09:53.324 15:14:02 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:53.324 15:14:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:53.324 15:14:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:53.324 15:14:02 -- common/autotest_common.sh@10 -- # set +x 00:09:53.324 ************************************ 00:09:53.324 START TEST dd_wrong_blocksize 00:09:53.324 ************************************ 00:09:53.324 15:14:02 -- common/autotest_common.sh@1111 -- # wrong_blocksize 00:09:53.324 15:14:02 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:53.324 15:14:02 -- common/autotest_common.sh@638 -- # local es=0 00:09:53.324 15:14:02 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:53.324 15:14:02 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.324 15:14:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.324 15:14:02 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.324 15:14:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.324 15:14:02 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.324 15:14:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.324 15:14:02 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.324 15:14:02 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:53.324 15:14:02 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:53.324 [2024-04-24 15:14:02.553295] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:09:53.582 15:14:02 -- common/autotest_common.sh@641 -- # es=22 00:09:53.582 15:14:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:53.582 15:14:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:53.582 15:14:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:53.582 00:09:53.582 real 0m0.073s 00:09:53.582 user 0m0.045s 00:09:53.582 sys 0m0.027s 00:09:53.582 15:14:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:53.582 15:14:02 -- common/autotest_common.sh@10 -- # set +x 00:09:53.582 ************************************ 00:09:53.582 END TEST dd_wrong_blocksize 00:09:53.582 ************************************ 00:09:53.582 15:14:02 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:53.582 15:14:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:53.582 15:14:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:53.582 15:14:02 -- common/autotest_common.sh@10 -- # set +x 00:09:53.582 ************************************ 00:09:53.582 START TEST dd_smaller_blocksize 00:09:53.582 ************************************ 00:09:53.582 15:14:02 -- common/autotest_common.sh@1111 -- # smaller_blocksize 00:09:53.582 15:14:02 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:53.582 15:14:02 -- common/autotest_common.sh@638 -- # local es=0 00:09:53.582 15:14:02 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:53.582 15:14:02 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.582 15:14:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.582 15:14:02 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.582 15:14:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.582 15:14:02 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.582 15:14:02 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.582 15:14:02 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:53.582 15:14:02 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:53.582 15:14:02 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:53.582 [2024-04-24 15:14:02.741098] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:53.582 [2024-04-24 15:14:02.741202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64874 ] 00:09:53.839 [2024-04-24 15:14:02.884272] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.839 [2024-04-24 15:14:03.022288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.411 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:54.411 [2024-04-24 15:14:03.425732] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:54.411 [2024-04-24 15:14:03.425825] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:54.411 [2024-04-24 15:14:03.552946] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:54.669 15:14:03 -- common/autotest_common.sh@641 -- # es=244 00:09:54.669 15:14:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:54.669 15:14:03 -- common/autotest_common.sh@650 -- # es=116 00:09:54.669 15:14:03 -- common/autotest_common.sh@651 -- # case "$es" in 00:09:54.669 15:14:03 -- common/autotest_common.sh@658 -- # es=1 00:09:54.669 15:14:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:54.669 00:09:54.669 real 0m1.006s 00:09:54.669 user 0m0.482s 00:09:54.669 sys 0m0.415s 00:09:54.669 15:14:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:54.669 15:14:03 -- common/autotest_common.sh@10 -- # set +x 00:09:54.669 ************************************ 00:09:54.669 END TEST dd_smaller_blocksize 00:09:54.669 ************************************ 00:09:54.669 15:14:03 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:09:54.669 15:14:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:54.669 15:14:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.669 15:14:03 -- common/autotest_common.sh@10 -- # set +x 00:09:54.669 ************************************ 00:09:54.669 START TEST dd_invalid_count 00:09:54.669 ************************************ 00:09:54.669 15:14:03 -- common/autotest_common.sh@1111 -- # invalid_count 00:09:54.669 15:14:03 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:54.669 15:14:03 -- common/autotest_common.sh@638 -- # local es=0 00:09:54.669 15:14:03 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:54.669 15:14:03 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.669 15:14:03 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:54.669 15:14:03 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.669 15:14:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:54.669 15:14:03 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.669 15:14:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:54.669 15:14:03 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.669 15:14:03 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:54.669 15:14:03 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:54.669 [2024-04-24 15:14:03.858261] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:09:54.669 15:14:03 -- common/autotest_common.sh@641 -- # es=22 00:09:54.669 15:14:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:54.669 15:14:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:54.669 15:14:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:54.669 00:09:54.669 real 0m0.075s 00:09:54.669 user 0m0.046s 00:09:54.669 sys 0m0.027s 00:09:54.669 15:14:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:54.669 15:14:03 -- common/autotest_common.sh@10 -- # set +x 00:09:54.669 ************************************ 00:09:54.670 END TEST dd_invalid_count 00:09:54.670 ************************************ 00:09:54.929 15:14:03 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:09:54.929 15:14:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:54.929 15:14:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.929 15:14:03 -- common/autotest_common.sh@10 -- # set +x 00:09:54.929 ************************************ 00:09:54.929 START TEST dd_invalid_oflag 00:09:54.929 ************************************ 00:09:54.929 15:14:03 -- common/autotest_common.sh@1111 -- # invalid_oflag 00:09:54.929 15:14:03 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:54.929 15:14:03 -- common/autotest_common.sh@638 -- # local es=0 00:09:54.929 15:14:03 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:54.929 15:14:03 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.929 15:14:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:54.929 15:14:03 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.929 15:14:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:54.929 15:14:03 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.929 15:14:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:54.929 15:14:03 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.929 15:14:03 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:54.929 15:14:03 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:54.929 [2024-04-24 15:14:04.037526] 
spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:09:54.929 15:14:04 -- common/autotest_common.sh@641 -- # es=22 00:09:54.929 15:14:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:54.929 15:14:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:54.929 15:14:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:54.929 00:09:54.929 real 0m0.071s 00:09:54.929 user 0m0.045s 00:09:54.929 sys 0m0.025s 00:09:54.929 15:14:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:54.929 15:14:04 -- common/autotest_common.sh@10 -- # set +x 00:09:54.929 ************************************ 00:09:54.929 END TEST dd_invalid_oflag 00:09:54.929 ************************************ 00:09:54.929 15:14:04 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:09:54.929 15:14:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:54.929 15:14:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.929 15:14:04 -- common/autotest_common.sh@10 -- # set +x 00:09:54.929 ************************************ 00:09:54.929 START TEST dd_invalid_iflag 00:09:54.929 ************************************ 00:09:54.929 15:14:04 -- common/autotest_common.sh@1111 -- # invalid_iflag 00:09:54.929 15:14:04 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:54.929 15:14:04 -- common/autotest_common.sh@638 -- # local es=0 00:09:54.929 15:14:04 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:54.929 15:14:04 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.929 15:14:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:54.929 15:14:04 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.929 15:14:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:54.929 15:14:04 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.187 15:14:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:55.187 15:14:04 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.187 15:14:04 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:55.187 15:14:04 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:55.187 [2024-04-24 15:14:04.219859] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:09:55.187 15:14:04 -- common/autotest_common.sh@641 -- # es=22 00:09:55.187 15:14:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:55.187 15:14:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:55.187 15:14:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:55.187 00:09:55.187 real 0m0.071s 00:09:55.187 user 0m0.043s 00:09:55.187 sys 0m0.027s 00:09:55.187 15:14:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:55.187 ************************************ 00:09:55.187 END TEST dd_invalid_iflag 00:09:55.187 ************************************ 00:09:55.187 15:14:04 -- common/autotest_common.sh@10 -- # set +x 00:09:55.187 15:14:04 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:09:55.187 15:14:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:55.187 15:14:04 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:09:55.187 15:14:04 -- common/autotest_common.sh@10 -- # set +x 00:09:55.187 ************************************ 00:09:55.187 START TEST dd_unknown_flag 00:09:55.187 ************************************ 00:09:55.187 15:14:04 -- common/autotest_common.sh@1111 -- # unknown_flag 00:09:55.187 15:14:04 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:55.187 15:14:04 -- common/autotest_common.sh@638 -- # local es=0 00:09:55.187 15:14:04 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:55.187 15:14:04 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.187 15:14:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:55.187 15:14:04 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.187 15:14:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:55.187 15:14:04 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.187 15:14:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:55.187 15:14:04 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.187 15:14:04 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:55.187 15:14:04 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:55.187 [2024-04-24 15:14:04.410952] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:09:55.187 [2024-04-24 15:14:04.411630] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64987 ] 00:09:55.445 [2024-04-24 15:14:04.553494] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.704 [2024-04-24 15:14:04.689945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.704 [2024-04-24 15:14:04.788807] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:55.704 [2024-04-24 15:14:04.788886] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:55.705 [2024-04-24 15:14:04.788954] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:55.705 [2024-04-24 15:14:04.788971] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:55.705 [2024-04-24 15:14:04.789254] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:09:55.705 [2024-04-24 15:14:04.789275] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:55.705 [2024-04-24 15:14:04.789331] app.c: 953:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:55.705 [2024-04-24 15:14:04.789344] app.c: 953:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:55.705 [2024-04-24 15:14:04.918224] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:55.964 15:14:05 -- common/autotest_common.sh@641 -- # es=234 00:09:55.964 15:14:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:55.964 15:14:05 -- common/autotest_common.sh@650 -- # es=106 00:09:55.964 15:14:05 -- common/autotest_common.sh@651 -- # case "$es" in 00:09:55.964 15:14:05 -- common/autotest_common.sh@658 -- # es=1 00:09:55.964 15:14:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:55.964 00:09:55.964 real 0m0.695s 00:09:55.964 user 0m0.428s 00:09:55.964 sys 0m0.167s 00:09:55.964 ************************************ 00:09:55.964 END TEST dd_unknown_flag 00:09:55.964 ************************************ 00:09:55.964 15:14:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:55.964 15:14:05 -- common/autotest_common.sh@10 -- # set +x 00:09:55.964 15:14:05 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:09:55.964 15:14:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:55.964 15:14:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:55.964 15:14:05 -- common/autotest_common.sh@10 -- # set +x 00:09:55.964 ************************************ 00:09:55.964 START TEST dd_invalid_json 00:09:55.964 ************************************ 00:09:55.964 15:14:05 -- common/autotest_common.sh@1111 -- # invalid_json 00:09:55.964 15:14:05 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:55.964 15:14:05 -- dd/negative_dd.sh@95 -- # : 00:09:55.964 15:14:05 -- common/autotest_common.sh@638 -- # local es=0 00:09:55.964 15:14:05 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:55.964 15:14:05 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.964 15:14:05 -- common/autotest_common.sh@630 -- # 
case "$(type -t "$arg")" in 00:09:55.964 15:14:05 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.964 15:14:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:55.964 15:14:05 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.964 15:14:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:55.964 15:14:05 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:55.964 15:14:05 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:55.964 15:14:05 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:56.222 [2024-04-24 15:14:05.228228] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:56.222 [2024-04-24 15:14:05.228326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65024 ] 00:09:56.223 [2024-04-24 15:14:05.369595] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.481 [2024-04-24 15:14:05.492910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.481 [2024-04-24 15:14:05.493042] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:09:56.481 [2024-04-24 15:14:05.493060] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:56.481 [2024-04-24 15:14:05.493070] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:56.481 [2024-04-24 15:14:05.493108] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:56.481 15:14:05 -- common/autotest_common.sh@641 -- # es=234 00:09:56.481 15:14:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:56.481 15:14:05 -- common/autotest_common.sh@650 -- # es=106 00:09:56.481 15:14:05 -- common/autotest_common.sh@651 -- # case "$es" in 00:09:56.481 15:14:05 -- common/autotest_common.sh@658 -- # es=1 00:09:56.481 15:14:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:56.481 00:09:56.481 real 0m0.459s 00:09:56.481 user 0m0.288s 00:09:56.481 sys 0m0.069s 00:09:56.481 ************************************ 00:09:56.481 END TEST dd_invalid_json 00:09:56.481 ************************************ 00:09:56.481 15:14:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:56.481 15:14:05 -- common/autotest_common.sh@10 -- # set +x 00:09:56.481 ************************************ 00:09:56.481 END TEST spdk_dd_negative 00:09:56.481 ************************************ 00:09:56.481 00:09:56.481 real 0m4.306s 00:09:56.481 user 0m2.123s 00:09:56.481 sys 0m1.672s 00:09:56.481 15:14:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:56.481 15:14:05 -- common/autotest_common.sh@10 -- # set +x 00:09:56.481 ************************************ 00:09:56.481 END TEST spdk_dd 00:09:56.481 ************************************ 00:09:56.481 00:09:56.481 real 1m26.240s 00:09:56.481 user 0m56.923s 00:09:56.481 sys 0m35.445s 00:09:56.481 15:14:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:56.481 15:14:05 -- common/autotest_common.sh@10 -- # set +x 00:09:56.791 15:14:05 -- spdk/autotest.sh@207 -- # '[' 0 -eq 
1 ']' 00:09:56.791 15:14:05 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:09:56.791 15:14:05 -- spdk/autotest.sh@258 -- # timing_exit lib 00:09:56.791 15:14:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:56.791 15:14:05 -- common/autotest_common.sh@10 -- # set +x 00:09:56.791 15:14:05 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:09:56.791 15:14:05 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:09:56.791 15:14:05 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:09:56.791 15:14:05 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:09:56.791 15:14:05 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:09:56.791 15:14:05 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:09:56.791 15:14:05 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:56.791 15:14:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:56.791 15:14:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:56.791 15:14:05 -- common/autotest_common.sh@10 -- # set +x 00:09:56.791 ************************************ 00:09:56.791 START TEST nvmf_tcp 00:09:56.791 ************************************ 00:09:56.791 15:14:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:56.791 * Looking for test storage... 00:09:56.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:56.791 15:14:05 -- nvmf/nvmf.sh@10 -- # uname -s 00:09:56.791 15:14:05 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:56.791 15:14:05 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:56.791 15:14:05 -- nvmf/common.sh@7 -- # uname -s 00:09:56.792 15:14:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.792 15:14:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.792 15:14:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.792 15:14:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.792 15:14:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.792 15:14:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.792 15:14:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.792 15:14:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.792 15:14:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.792 15:14:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.792 15:14:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:09:56.792 15:14:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:09:56.792 15:14:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.792 15:14:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.792 15:14:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:56.792 15:14:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.792 15:14:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.792 15:14:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.792 15:14:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.792 15:14:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.792 15:14:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.792 15:14:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.792 15:14:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.792 15:14:05 -- paths/export.sh@5 -- # export PATH 00:09:56.792 15:14:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.792 15:14:05 -- nvmf/common.sh@47 -- # : 0 00:09:56.792 15:14:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:56.792 15:14:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:56.792 15:14:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.792 15:14:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.792 15:14:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.792 15:14:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:56.792 15:14:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:56.792 15:14:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:56.792 15:14:05 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:56.792 15:14:05 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:09:56.792 15:14:05 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:09:56.792 15:14:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:56.792 15:14:05 -- common/autotest_common.sh@10 -- # set +x 00:09:56.792 15:14:05 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:09:56.792 15:14:05 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:56.792 15:14:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:56.792 15:14:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:56.792 15:14:05 -- common/autotest_common.sh@10 -- # set +x 00:09:57.051 ************************************ 00:09:57.051 START TEST nvmf_host_management 00:09:57.051 ************************************ 00:09:57.051 15:14:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:57.051 * Looking for test storage... 
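The nvmf/common.sh prologue above (repeated just below for host_management.sh) pins the initiator identity once per run: nvme gen-hostnqn emits an nqn.2014-08.org.nvmexpress:uuid:<uuid> string, NVME_HOSTID is just that trailing uuid, and both are packed into NVME_HOST for every later nvme connect. The exact shell expansion common.sh uses is not visible in this log; one sketch that yields the same values, with a connect line using the test's port 4420, first target address 10.0.0.2 and subsystem NQN nqn.2016-06.io.spdk:testnqn:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:bfc76e61-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # the bare uuid, matching the NVME_HOSTID value logged above
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # consumed later along the lines of:
    # nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn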
00:09:57.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:57.051 15:14:06 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:57.051 15:14:06 -- nvmf/common.sh@7 -- # uname -s 00:09:57.051 15:14:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.051 15:14:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.052 15:14:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.052 15:14:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.052 15:14:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.052 15:14:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.052 15:14:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.052 15:14:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.052 15:14:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.052 15:14:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.052 15:14:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:09:57.052 15:14:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:09:57.052 15:14:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.052 15:14:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.052 15:14:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:57.052 15:14:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.052 15:14:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.052 15:14:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.052 15:14:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.052 15:14:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.052 15:14:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.052 15:14:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.052 15:14:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.052 15:14:06 -- paths/export.sh@5 -- # export PATH 00:09:57.052 15:14:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.052 15:14:06 -- nvmf/common.sh@47 -- # : 0 00:09:57.052 15:14:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.052 15:14:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.052 15:14:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.052 15:14:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.052 15:14:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.052 15:14:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:57.052 15:14:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.052 15:14:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.052 15:14:06 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.052 15:14:06 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.052 15:14:06 -- target/host_management.sh@105 -- # nvmftestinit 00:09:57.052 15:14:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:57.052 15:14:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.052 15:14:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:57.052 15:14:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:57.052 15:14:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:57.052 15:14:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.052 15:14:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.052 15:14:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.052 15:14:06 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:09:57.052 15:14:06 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:09:57.052 15:14:06 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:09:57.052 15:14:06 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:09:57.052 15:14:06 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:09:57.052 15:14:06 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:09:57.052 15:14:06 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.052 15:14:06 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.052 15:14:06 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:57.052 15:14:06 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:57.052 15:14:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:57.052 15:14:06 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:57.052 15:14:06 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:57.052 15:14:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.052 15:14:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:57.052 15:14:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:57.052 15:14:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:57.052 15:14:06 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:57.052 15:14:06 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:57.052 Cannot find device "nvmf_init_br" 00:09:57.052 15:14:06 -- nvmf/common.sh@154 -- # true 00:09:57.052 15:14:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:57.052 Cannot find device "nvmf_tgt_br" 00:09:57.052 15:14:06 -- nvmf/common.sh@155 -- # true 00:09:57.052 15:14:06 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:57.052 Cannot find device "nvmf_tgt_br2" 00:09:57.052 15:14:06 -- nvmf/common.sh@156 -- # true 00:09:57.052 15:14:06 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:57.052 Cannot find device "nvmf_init_br" 00:09:57.052 15:14:06 -- nvmf/common.sh@157 -- # true 00:09:57.052 15:14:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:57.052 Cannot find device "nvmf_tgt_br" 00:09:57.052 15:14:06 -- nvmf/common.sh@158 -- # true 00:09:57.052 15:14:06 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:57.052 Cannot find device "nvmf_tgt_br2" 00:09:57.052 15:14:06 -- nvmf/common.sh@159 -- # true 00:09:57.052 15:14:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:57.052 Cannot find device "nvmf_br" 00:09:57.052 15:14:06 -- nvmf/common.sh@160 -- # true 00:09:57.052 15:14:06 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:57.052 Cannot find device "nvmf_init_if" 00:09:57.052 15:14:06 -- nvmf/common.sh@161 -- # true 00:09:57.052 15:14:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.052 15:14:06 -- nvmf/common.sh@162 -- # true 00:09:57.052 15:14:06 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.311 15:14:06 -- nvmf/common.sh@163 -- # true 00:09:57.311 15:14:06 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:57.311 15:14:06 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:57.311 15:14:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:57.311 15:14:06 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:57.311 15:14:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:57.311 15:14:06 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:57.311 15:14:06 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:57.311 15:14:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:57.311 15:14:06 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:57.311 15:14:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:57.311 15:14:06 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:57.311 15:14:06 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:57.311 15:14:06 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:57.311 15:14:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:57.311 15:14:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:57.311 15:14:06 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:57.311 15:14:06 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:57.311 15:14:06 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:57.311 15:14:06 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:57.311 15:14:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:57.311 15:14:06 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:57.311 15:14:06 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:57.311 15:14:06 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:57.311 15:14:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:57.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:09:57.311 00:09:57.311 --- 10.0.0.2 ping statistics --- 00:09:57.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.311 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:09:57.311 15:14:06 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:57.311 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:57.312 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:09:57.312 00:09:57.312 --- 10.0.0.3 ping statistics --- 00:09:57.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.312 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:57.570 15:14:06 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:57.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:09:57.570 00:09:57.570 --- 10.0.0.1 ping statistics --- 00:09:57.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.570 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:57.570 15:14:06 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.570 15:14:06 -- nvmf/common.sh@422 -- # return 0 00:09:57.570 15:14:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:57.570 15:14:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.570 15:14:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:57.570 15:14:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:57.570 15:14:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.570 15:14:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:57.570 15:14:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:57.570 15:14:06 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:09:57.570 15:14:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:57.570 15:14:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:57.570 15:14:06 -- common/autotest_common.sh@10 -- # set +x 00:09:57.570 ************************************ 00:09:57.570 START TEST nvmf_host_management 00:09:57.570 ************************************ 00:09:57.570 15:14:06 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:09:57.570 15:14:06 -- target/host_management.sh@69 -- # starttarget 00:09:57.570 15:14:06 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:57.570 15:14:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:57.570 15:14:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:57.570 15:14:06 -- common/autotest_common.sh@10 -- # set +x 00:09:57.570 15:14:06 -- nvmf/common.sh@470 -- # nvmfpid=65310 00:09:57.570 15:14:06 -- nvmf/common.sh@471 -- # waitforlisten 65310 00:09:57.570 15:14:06 -- common/autotest_common.sh@817 -- # '[' -z 65310 ']' 00:09:57.570 15:14:06 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:57.570 15:14:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.570 15:14:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:57.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.570 15:14:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.570 15:14:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:57.570 15:14:06 -- common/autotest_common.sh@10 -- # set +x 00:09:57.570 [2024-04-24 15:14:06.734538] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:57.570 [2024-04-24 15:14:06.734748] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.830 [2024-04-24 15:14:06.891141] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.830 [2024-04-24 15:14:07.026815] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.830 [2024-04-24 15:14:07.026899] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
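[annotation] The NET_TYPE=virt topology that nvmf_veth_init assembles in the trace above (before the target is launched with "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E") boils down to the following condensed sketch. It only reuses the interface, namespace and address names that appear in the log; it is a reference summary of the traced ip/iptables calls, not a replacement for common.sh.

# Initiator side stays in the default netns: nvmf_init_if, 10.0.0.1/24.
# Target side runs inside nvmf_tgt_ns_spdk: nvmf_tgt_if 10.0.0.2/24, nvmf_tgt_if2 10.0.0.3/24.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
# All three *_br veth peers hang off one Linux bridge, so 10.0.0.1/.2/.3 share a broadcast domain.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Open the NVMe/TCP port toward the initiator and let traffic forward across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Sanity pings, exactly as in the log: initiator -> both target addresses, target netns -> initiator.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1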
00:09:57.830 [2024-04-24 15:14:07.026925] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.830 [2024-04-24 15:14:07.026947] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.830 [2024-04-24 15:14:07.026956] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.830 [2024-04-24 15:14:07.027770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.830 [2024-04-24 15:14:07.027870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.830 [2024-04-24 15:14:07.028004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:57.830 [2024-04-24 15:14:07.028011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.765 15:14:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:58.765 15:14:07 -- common/autotest_common.sh@850 -- # return 0 00:09:58.765 15:14:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:58.765 15:14:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:58.765 15:14:07 -- common/autotest_common.sh@10 -- # set +x 00:09:58.765 15:14:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.765 15:14:07 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:58.765 15:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:58.765 15:14:07 -- common/autotest_common.sh@10 -- # set +x 00:09:58.765 [2024-04-24 15:14:07.827941] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.765 15:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:58.765 15:14:07 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:58.765 15:14:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:58.765 15:14:07 -- common/autotest_common.sh@10 -- # set +x 00:09:58.765 15:14:07 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:58.765 15:14:07 -- target/host_management.sh@23 -- # cat 00:09:58.765 15:14:07 -- target/host_management.sh@30 -- # rpc_cmd 00:09:58.765 15:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:58.765 15:14:07 -- common/autotest_common.sh@10 -- # set +x 00:09:58.765 Malloc0 00:09:58.765 [2024-04-24 15:14:07.910782] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:58.765 15:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:58.765 15:14:07 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:58.765 15:14:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:58.765 15:14:07 -- common/autotest_common.sh@10 -- # set +x 00:09:58.765 15:14:07 -- target/host_management.sh@73 -- # perfpid=65364 00:09:58.765 15:14:07 -- target/host_management.sh@74 -- # waitforlisten 65364 /var/tmp/bdevperf.sock 00:09:58.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:58.765 15:14:07 -- common/autotest_common.sh@817 -- # '[' -z 65364 ']' 00:09:58.765 15:14:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:58.765 15:14:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:58.765 15:14:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:09:58.765 15:14:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:58.765 15:14:07 -- common/autotest_common.sh@10 -- # set +x 00:09:58.765 15:14:07 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:58.765 15:14:07 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:58.765 15:14:07 -- nvmf/common.sh@521 -- # config=() 00:09:58.765 15:14:07 -- nvmf/common.sh@521 -- # local subsystem config 00:09:58.766 15:14:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:09:58.766 15:14:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:09:58.766 { 00:09:58.766 "params": { 00:09:58.766 "name": "Nvme$subsystem", 00:09:58.766 "trtype": "$TEST_TRANSPORT", 00:09:58.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:58.766 "adrfam": "ipv4", 00:09:58.766 "trsvcid": "$NVMF_PORT", 00:09:58.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:58.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:58.766 "hdgst": ${hdgst:-false}, 00:09:58.766 "ddgst": ${ddgst:-false} 00:09:58.766 }, 00:09:58.766 "method": "bdev_nvme_attach_controller" 00:09:58.766 } 00:09:58.766 EOF 00:09:58.766 )") 00:09:58.766 15:14:07 -- nvmf/common.sh@543 -- # cat 00:09:58.766 15:14:07 -- nvmf/common.sh@545 -- # jq . 00:09:58.766 15:14:07 -- nvmf/common.sh@546 -- # IFS=, 00:09:58.766 15:14:07 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:09:58.766 "params": { 00:09:58.766 "name": "Nvme0", 00:09:58.766 "trtype": "tcp", 00:09:58.766 "traddr": "10.0.0.2", 00:09:58.766 "adrfam": "ipv4", 00:09:58.766 "trsvcid": "4420", 00:09:58.766 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:58.766 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:58.766 "hdgst": false, 00:09:58.766 "ddgst": false 00:09:58.766 }, 00:09:58.766 "method": "bdev_nvme_attach_controller" 00:09:58.766 }' 00:09:59.024 [2024-04-24 15:14:08.016035] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:09:59.024 [2024-04-24 15:14:08.016155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65364 ] 00:09:59.024 [2024-04-24 15:14:08.157466] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.283 [2024-04-24 15:14:08.289574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.283 Running I/O for 10 seconds... 
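[annotation] The bdevperf initiator started above gets its NVMe-oF connection from a config generated on the fly by gen_nvmf_target_json and piped in over /dev/fd/63. The heart of that config is the single bdev_nvme_attach_controller entry printed in the trace; the sketch below copies the traced command line and that entry's values verbatim. Note the wrapper that gen_nvmf_target_json places around the entry is not shown in the log, so the JSON here is reproduced as a comment rather than as a standalone config file.

# Queue depth 64, 64 KiB I/O, verify workload, 10 s run, as on the traced command line:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
# ...where the generated config attaches NVMe/TCP controller "Nvme0" to the target
# listening at 10.0.0.2:4420 (values copied from the printf in the trace):
# {
#   "params": {
#     "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
#     "adrfam": "ipv4", "trsvcid": "4420",
#     "subnqn": "nqn.2016-06.io.spdk:cnode0",
#     "hostnqn": "nqn.2016-06.io.spdk:host0",
#     "hdgst": false, "ddgst": false
#   },
#   "method": "bdev_nvme_attach_controller"
# }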
00:09:59.878 15:14:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:59.878 15:14:09 -- common/autotest_common.sh@850 -- # return 0 00:09:59.878 15:14:09 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:59.878 15:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:59.878 15:14:09 -- common/autotest_common.sh@10 -- # set +x 00:09:59.878 15:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:59.878 15:14:09 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:59.878 15:14:09 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:59.878 15:14:09 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:59.878 15:14:09 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:59.878 15:14:09 -- target/host_management.sh@52 -- # local ret=1 00:09:59.879 15:14:09 -- target/host_management.sh@53 -- # local i 00:09:59.879 15:14:09 -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:59.879 15:14:09 -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:59.879 15:14:09 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:59.879 15:14:09 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:59.879 15:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:59.879 15:14:09 -- common/autotest_common.sh@10 -- # set +x 00:09:59.879 15:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:59.879 15:14:09 -- target/host_management.sh@55 -- # read_io_count=771 00:09:59.879 15:14:09 -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:09:59.879 15:14:09 -- target/host_management.sh@59 -- # ret=0 00:09:59.879 15:14:09 -- target/host_management.sh@60 -- # break 00:09:59.879 15:14:09 -- target/host_management.sh@64 -- # return 0 00:09:59.879 15:14:09 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:59.879 15:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:59.879 15:14:09 -- common/autotest_common.sh@10 -- # set +x 00:09:59.879 [2024-04-24 15:14:09.092793] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.092844] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.092856] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.092866] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.092874] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.092883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.092892] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.092901] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the 
state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.092910] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.092919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.092927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.092936] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.092944] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.092953] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.092961] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.092979] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.092987] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.092996] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093005] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093013] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093022] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093030] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093039] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093054] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093063] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093071] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093079] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093088] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093101] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093110] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093119] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093137] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093146] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093155] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093163] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093172] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093180] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093188] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093197] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093205] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093213] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093221] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093229] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093238] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093254] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093262] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093272] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093281] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093289] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 
15:14:09.093305] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093322] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093330] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093353] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093362] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093371] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093380] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093389] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093410] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093419] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5640 is same with the state(5) to be set 00:09:59.879 [2024-04-24 15:14:09.093541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.093595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.093618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.093639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.093660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:09:59.879 [2024-04-24 15:14:09.093680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.093701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.093728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.093749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.093769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.093789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.093809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.093829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.093849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.093869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 
[2024-04-24 15:14:09.093894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.093924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.093954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.093986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.093995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.094006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.094015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.094026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.094035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.094046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.094055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.094066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.094075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.094091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.094100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.094112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.094121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 
15:14:09.094132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.094141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.094152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.094161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.094173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.094182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.094193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.094202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.094213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.094222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.094233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.094242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.879 [2024-04-24 15:14:09.094253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.879 [2024-04-24 15:14:09.094262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 
15:14:09.094352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 
15:14:09.094581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 
15:14:09.094799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:59.880 [2024-04-24 15:14:09.094968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.880 [2024-04-24 15:14:09.094978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222cae0 is same with the state(5) to be set 00:09:59.880 [2024-04-24 15:14:09.095056] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x222cae0 was disconnected and freed. reset controller. 
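[annotation] The wall of "ABORTED - SQ DELETION" completions above is the expected behavior under test: host_management.sh removes the active host NQN from the subsystem while bdevperf still has I/O in flight, so the target drops the qpair and aborts the outstanding READs, failing the job; re-adding the host then lets bdevperf's automatic controller reset reconnect. Reduced to the two RPCs visible in the trace (the test issues them through its rpc_cmd wrapper; invoking them via scripts/rpc.py is the usual standalone equivalent and is shown here as an assumption):

# Revoke the host's access while it is doing I/O -> qpair disconnected and freed,
# in-flight commands completed as ABORTED - SQ DELETION, job Nvme0n1 fails (summary below).
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-admit the host -> the reset path reconnects ("Resetting controller successful").
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0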
00:09:59.880 task offset: 106496 on job bdev=Nvme0n1 fails 00:09:59.880 00:09:59.880 Latency(us) 00:09:59.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.880 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:59.880 Job: Nvme0n1 ended in about 0.61 seconds with error 00:09:59.880 Verification LBA range: start 0x0 length 0x400 00:09:59.880 Nvme0n1 : 0.61 1364.91 85.31 104.99 0.00 42136.10 2874.65 47185.92 00:09:59.880 =================================================================================================================== 00:09:59.880 Total : 1364.91 85.31 104.99 0.00 42136.10 2874.65 47185.92 00:09:59.880 [2024-04-24 15:14:09.096257] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:59.880 15:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:59.880 15:14:09 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:59.880 15:14:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:59.880 15:14:09 -- common/autotest_common.sh@10 -- # set +x 00:09:59.880 [2024-04-24 15:14:09.098752] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:59.880 [2024-04-24 15:14:09.098777] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22071b0 (9): Bad file descriptor 00:09:59.880 [2024-04-24 15:14:09.103051] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:59.880 15:14:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:59.880 15:14:09 -- target/host_management.sh@87 -- # sleep 1 00:10:01.251 15:14:10 -- target/host_management.sh@91 -- # kill -9 65364 00:10:01.251 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65364) - No such process 00:10:01.251 15:14:10 -- target/host_management.sh@91 -- # true 00:10:01.251 15:14:10 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:01.251 15:14:10 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:01.251 15:14:10 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:01.251 15:14:10 -- nvmf/common.sh@521 -- # config=() 00:10:01.251 15:14:10 -- nvmf/common.sh@521 -- # local subsystem config 00:10:01.251 15:14:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:10:01.251 15:14:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:10:01.251 { 00:10:01.251 "params": { 00:10:01.251 "name": "Nvme$subsystem", 00:10:01.251 "trtype": "$TEST_TRANSPORT", 00:10:01.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.251 "adrfam": "ipv4", 00:10:01.251 "trsvcid": "$NVMF_PORT", 00:10:01.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.251 "hdgst": ${hdgst:-false}, 00:10:01.251 "ddgst": ${ddgst:-false} 00:10:01.251 }, 00:10:01.251 "method": "bdev_nvme_attach_controller" 00:10:01.251 } 00:10:01.251 EOF 00:10:01.251 )") 00:10:01.251 15:14:10 -- nvmf/common.sh@543 -- # cat 00:10:01.251 15:14:10 -- nvmf/common.sh@545 -- # jq . 
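The heredoc above is the per-controller template that gen_nvmf_target_json fills in; the rendered parameters are printed just below and reach bdevperf through --json /dev/fd/62. As a rough sketch (assuming the usual SPDK "subsystems"/"config" wrapper, which this excerpt does not show, and an illustrative file name), the same attach could be driven from a plain config file:

cat > /tmp/nvme0.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
            "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1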
00:10:01.251 15:14:10 -- nvmf/common.sh@546 -- # IFS=, 00:10:01.251 15:14:10 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:01.251 "params": { 00:10:01.251 "name": "Nvme0", 00:10:01.251 "trtype": "tcp", 00:10:01.251 "traddr": "10.0.0.2", 00:10:01.251 "adrfam": "ipv4", 00:10:01.251 "trsvcid": "4420", 00:10:01.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:01.251 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:01.251 "hdgst": false, 00:10:01.251 "ddgst": false 00:10:01.251 }, 00:10:01.251 "method": "bdev_nvme_attach_controller" 00:10:01.251 }' 00:10:01.251 [2024-04-24 15:14:10.176952] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:10:01.251 [2024-04-24 15:14:10.177080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65402 ] 00:10:01.251 [2024-04-24 15:14:10.317992] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.251 [2024-04-24 15:14:10.457894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.508 Running I/O for 1 seconds... 00:10:02.443 00:10:02.443 Latency(us) 00:10:02.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.443 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:02.443 Verification LBA range: start 0x0 length 0x400 00:10:02.443 Nvme0n1 : 1.02 1443.08 90.19 0.00 0.00 43468.57 4170.47 42896.29 00:10:02.443 =================================================================================================================== 00:10:02.443 Total : 1443.08 90.19 0.00 0.00 43468.57 4170.47 42896.29 00:10:02.701 15:14:11 -- target/host_management.sh@102 -- # stoptarget 00:10:02.701 15:14:11 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:02.701 15:14:11 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:10:02.701 15:14:11 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:02.701 15:14:11 -- target/host_management.sh@40 -- # nvmftestfini 00:10:02.701 15:14:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:02.701 15:14:11 -- nvmf/common.sh@117 -- # sync 00:10:02.959 15:14:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:02.959 15:14:12 -- nvmf/common.sh@120 -- # set +e 00:10:02.959 15:14:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:02.959 15:14:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:02.959 rmmod nvme_tcp 00:10:02.959 rmmod nvme_fabrics 00:10:02.959 rmmod nvme_keyring 00:10:02.959 15:14:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:02.959 15:14:12 -- nvmf/common.sh@124 -- # set -e 00:10:02.959 15:14:12 -- nvmf/common.sh@125 -- # return 0 00:10:02.959 15:14:12 -- nvmf/common.sh@478 -- # '[' -n 65310 ']' 00:10:02.959 15:14:12 -- nvmf/common.sh@479 -- # killprocess 65310 00:10:02.959 15:14:12 -- common/autotest_common.sh@936 -- # '[' -z 65310 ']' 00:10:02.959 15:14:12 -- common/autotest_common.sh@940 -- # kill -0 65310 00:10:02.959 15:14:12 -- common/autotest_common.sh@941 -- # uname 00:10:02.959 15:14:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:02.959 15:14:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65310 00:10:02.959 15:14:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:02.959 15:14:12 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:02.959 killing process with pid 65310 00:10:02.959 15:14:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65310' 00:10:02.959 15:14:12 -- common/autotest_common.sh@955 -- # kill 65310 00:10:02.959 15:14:12 -- common/autotest_common.sh@960 -- # wait 65310 00:10:03.218 [2024-04-24 15:14:12.346718] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:03.218 15:14:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:03.218 15:14:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:03.218 15:14:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:03.218 15:14:12 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:03.218 15:14:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:03.218 15:14:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.218 15:14:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:03.218 15:14:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.218 15:14:12 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:03.218 00:10:03.218 real 0m5.755s 00:10:03.218 user 0m24.030s 00:10:03.218 sys 0m1.389s 00:10:03.218 15:14:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:03.218 15:14:12 -- common/autotest_common.sh@10 -- # set +x 00:10:03.218 ************************************ 00:10:03.218 END TEST nvmf_host_management 00:10:03.218 ************************************ 00:10:03.218 15:14:12 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:03.218 00:10:03.218 real 0m6.384s 00:10:03.218 user 0m24.185s 00:10:03.218 sys 0m1.656s 00:10:03.218 15:14:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:03.218 15:14:12 -- common/autotest_common.sh@10 -- # set +x 00:10:03.218 ************************************ 00:10:03.218 END TEST nvmf_host_management 00:10:03.218 ************************************ 00:10:03.478 15:14:12 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:03.478 15:14:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:03.478 15:14:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:03.478 15:14:12 -- common/autotest_common.sh@10 -- # set +x 00:10:03.478 ************************************ 00:10:03.478 START TEST nvmf_lvol 00:10:03.478 ************************************ 00:10:03.478 15:14:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:03.478 * Looking for test storage... 
00:10:03.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:03.478 15:14:12 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:03.478 15:14:12 -- nvmf/common.sh@7 -- # uname -s 00:10:03.478 15:14:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.478 15:14:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.478 15:14:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.478 15:14:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.478 15:14:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.478 15:14:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.478 15:14:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.478 15:14:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.478 15:14:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.478 15:14:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.478 15:14:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:10:03.478 15:14:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:10:03.478 15:14:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.478 15:14:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.478 15:14:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:03.478 15:14:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.478 15:14:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:03.478 15:14:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.478 15:14:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.478 15:14:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.478 15:14:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.478 15:14:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.478 15:14:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.478 15:14:12 -- paths/export.sh@5 -- # export PATH 00:10:03.478 15:14:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.478 15:14:12 -- nvmf/common.sh@47 -- # : 0 00:10:03.478 15:14:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:03.478 15:14:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:03.478 15:14:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.478 15:14:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.478 15:14:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.478 15:14:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:03.478 15:14:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:03.478 15:14:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:03.478 15:14:12 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:03.478 15:14:12 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:03.478 15:14:12 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:03.478 15:14:12 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:03.478 15:14:12 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:03.478 15:14:12 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:03.478 15:14:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:03.478 15:14:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.478 15:14:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:03.478 15:14:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:03.478 15:14:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:03.478 15:14:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.478 15:14:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:03.478 15:14:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.478 15:14:12 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:10:03.478 15:14:12 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:10:03.478 15:14:12 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:10:03.478 15:14:12 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:10:03.478 15:14:12 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:10:03.478 15:14:12 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:10:03.478 15:14:12 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.478 15:14:12 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.478 15:14:12 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:03.478 15:14:12 -- 
nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:03.478 15:14:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:03.478 15:14:12 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:03.478 15:14:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:03.478 15:14:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.478 15:14:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:03.478 15:14:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:03.478 15:14:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:03.478 15:14:12 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:03.478 15:14:12 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:03.478 15:14:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:03.478 Cannot find device "nvmf_tgt_br" 00:10:03.478 15:14:12 -- nvmf/common.sh@155 -- # true 00:10:03.478 15:14:12 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:03.478 Cannot find device "nvmf_tgt_br2" 00:10:03.478 15:14:12 -- nvmf/common.sh@156 -- # true 00:10:03.478 15:14:12 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:03.737 15:14:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:03.737 Cannot find device "nvmf_tgt_br" 00:10:03.737 15:14:12 -- nvmf/common.sh@158 -- # true 00:10:03.737 15:14:12 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:03.737 Cannot find device "nvmf_tgt_br2" 00:10:03.737 15:14:12 -- nvmf/common.sh@159 -- # true 00:10:03.737 15:14:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:03.737 15:14:12 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:03.737 15:14:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:03.737 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:03.737 15:14:12 -- nvmf/common.sh@162 -- # true 00:10:03.737 15:14:12 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:03.737 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:03.737 15:14:12 -- nvmf/common.sh@163 -- # true 00:10:03.737 15:14:12 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:03.737 15:14:12 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:03.737 15:14:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:03.737 15:14:12 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:03.737 15:14:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:03.737 15:14:12 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:03.737 15:14:12 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:03.737 15:14:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:03.737 15:14:12 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:03.737 15:14:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:03.737 15:14:12 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:03.737 15:14:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:03.737 15:14:12 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:03.737 15:14:12 -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:03.737 15:14:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:03.737 15:14:12 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:03.737 15:14:12 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:03.737 15:14:12 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:03.737 15:14:12 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:03.737 15:14:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:03.737 15:14:12 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:03.737 15:14:12 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:03.737 15:14:12 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:03.737 15:14:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:03.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:10:03.737 00:10:03.737 --- 10.0.0.2 ping statistics --- 00:10:03.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.737 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:10:03.737 15:14:12 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:03.737 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:03.737 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:10:03.737 00:10:03.737 --- 10.0.0.3 ping statistics --- 00:10:03.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.737 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:03.737 15:14:12 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:03.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:03.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:03.995 00:10:03.995 --- 10.0.0.1 ping statistics --- 00:10:03.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.995 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:03.995 15:14:12 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.995 15:14:12 -- nvmf/common.sh@422 -- # return 0 00:10:03.995 15:14:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:03.995 15:14:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.995 15:14:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:03.995 15:14:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:03.995 15:14:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.995 15:14:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:03.995 15:14:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:03.995 15:14:13 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:03.995 15:14:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:03.995 15:14:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:03.995 15:14:13 -- common/autotest_common.sh@10 -- # set +x 00:10:03.995 15:14:13 -- nvmf/common.sh@470 -- # nvmfpid=65641 00:10:03.995 15:14:13 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:03.995 15:14:13 -- nvmf/common.sh@471 -- # waitforlisten 65641 00:10:03.995 15:14:13 -- common/autotest_common.sh@817 -- # '[' -z 65641 ']' 00:10:03.995 15:14:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.995 15:14:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:03.995 15:14:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.995 15:14:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:03.995 15:14:13 -- common/autotest_common.sh@10 -- # set +x 00:10:03.995 [2024-04-24 15:14:13.053700] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:10:03.995 [2024-04-24 15:14:13.053807] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.995 [2024-04-24 15:14:13.188637] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:04.253 [2024-04-24 15:14:13.315970] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.253 [2024-04-24 15:14:13.316046] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.253 [2024-04-24 15:14:13.316073] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.253 [2024-04-24 15:14:13.316101] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.253 [2024-04-24 15:14:13.316109] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
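The pings above confirm the data path: 10.0.0.1 sits on the initiator side in the root namespace, 10.0.0.2 and 10.0.0.3 on the target side inside the nvmf_tgt_ns_spdk namespace, with all veth peers bridged through nvmf_br. Condensed to its essentials, the nvmf_veth_init and nvmfappstart steps traced above amount to the following (a sketch only, without the cleanup and retry handling the helpers perform):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# the target itself then runs inside the namespace (here with core mask 0x7)
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &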
00:10:04.253 [2024-04-24 15:14:13.316472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.253 [2024-04-24 15:14:13.316590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.253 [2024-04-24 15:14:13.316596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.187 15:14:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:05.187 15:14:14 -- common/autotest_common.sh@850 -- # return 0 00:10:05.187 15:14:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:05.187 15:14:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:05.187 15:14:14 -- common/autotest_common.sh@10 -- # set +x 00:10:05.187 15:14:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.187 15:14:14 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:05.187 [2024-04-24 15:14:14.405945] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.444 15:14:14 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.703 15:14:14 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:05.703 15:14:14 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.961 15:14:15 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:05.961 15:14:15 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:06.218 15:14:15 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:06.475 15:14:15 -- target/nvmf_lvol.sh@29 -- # lvs=0ad5ccd7-12a6-4764-85ce-50e76b774715 00:10:06.475 15:14:15 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0ad5ccd7-12a6-4764-85ce-50e76b774715 lvol 20 00:10:06.733 15:14:15 -- target/nvmf_lvol.sh@32 -- # lvol=2914085b-3857-452d-9e8c-5e865675f1e7 00:10:06.733 15:14:15 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:06.993 15:14:16 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2914085b-3857-452d-9e8c-5e865675f1e7 00:10:07.251 15:14:16 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:07.508 [2024-04-24 15:14:16.592762] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.508 15:14:16 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:07.766 15:14:16 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:07.766 15:14:16 -- target/nvmf_lvol.sh@42 -- # perf_pid=65717 00:10:07.766 15:14:16 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:08.706 15:14:17 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 2914085b-3857-452d-9e8c-5e865675f1e7 MY_SNAPSHOT 00:10:08.964 15:14:18 -- target/nvmf_lvol.sh@47 -- # snapshot=f80ab77d-5542-4258-940d-ef1732181306 00:10:08.964 15:14:18 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 2914085b-3857-452d-9e8c-5e865675f1e7 30 00:10:09.531 15:14:18 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone f80ab77d-5542-4258-940d-ef1732181306 MY_CLONE 00:10:09.789 15:14:18 -- target/nvmf_lvol.sh@49 -- # clone=a43dd13c-9533-4a2b-a159-9f0ca3afdea9 00:10:09.789 15:14:18 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate a43dd13c-9533-4a2b-a159-9f0ca3afdea9 00:10:10.049 15:14:19 -- target/nvmf_lvol.sh@53 -- # wait 65717 00:10:18.161 Initializing NVMe Controllers 00:10:18.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:18.161 Controller IO queue size 128, less than required. 00:10:18.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:18.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:18.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:18.161 Initialization complete. Launching workers. 00:10:18.161 ======================================================== 00:10:18.161 Latency(us) 00:10:18.161 Device Information : IOPS MiB/s Average min max 00:10:18.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9307.60 36.36 13763.21 2509.22 79850.60 00:10:18.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9483.90 37.05 13507.28 3119.96 71278.31 00:10:18.161 ======================================================== 00:10:18.161 Total : 18791.49 73.40 13634.05 2509.22 79850.60 00:10:18.161 00:10:18.161 15:14:27 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:18.419 15:14:27 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2914085b-3857-452d-9e8c-5e865675f1e7 00:10:18.678 15:14:27 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0ad5ccd7-12a6-4764-85ce-50e76b774715 00:10:18.936 15:14:28 -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:18.936 15:14:28 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:18.936 15:14:28 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:18.936 15:14:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:18.936 15:14:28 -- nvmf/common.sh@117 -- # sync 00:10:18.936 15:14:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:18.936 15:14:28 -- nvmf/common.sh@120 -- # set +e 00:10:18.936 15:14:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:18.936 15:14:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:18.936 rmmod nvme_tcp 00:10:18.936 rmmod nvme_fabrics 00:10:18.936 rmmod nvme_keyring 00:10:18.936 15:14:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:18.936 15:14:28 -- nvmf/common.sh@124 -- # set -e 00:10:18.936 15:14:28 -- nvmf/common.sh@125 -- # return 0 00:10:18.936 15:14:28 -- nvmf/common.sh@478 -- # '[' -n 65641 ']' 00:10:18.936 15:14:28 -- nvmf/common.sh@479 -- # killprocess 65641 00:10:18.936 15:14:28 -- common/autotest_common.sh@936 -- # '[' -z 65641 ']' 00:10:18.936 15:14:28 -- common/autotest_common.sh@940 -- # kill -0 65641 00:10:18.936 15:14:28 -- common/autotest_common.sh@941 -- # uname 00:10:18.936 15:14:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:18.936 15:14:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
65641 00:10:18.936 killing process with pid 65641 00:10:18.936 15:14:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:18.936 15:14:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:18.936 15:14:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65641' 00:10:18.936 15:14:28 -- common/autotest_common.sh@955 -- # kill 65641 00:10:18.936 15:14:28 -- common/autotest_common.sh@960 -- # wait 65641 00:10:19.503 15:14:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:19.503 15:14:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:19.503 15:14:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:19.503 15:14:28 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:19.503 15:14:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:19.503 15:14:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.503 15:14:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:19.503 15:14:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.503 15:14:28 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:19.503 ************************************ 00:10:19.503 END TEST nvmf_lvol 00:10:19.503 ************************************ 00:10:19.503 00:10:19.503 real 0m15.933s 00:10:19.503 user 1m5.480s 00:10:19.503 sys 0m4.947s 00:10:19.503 15:14:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:19.503 15:14:28 -- common/autotest_common.sh@10 -- # set +x 00:10:19.503 15:14:28 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:19.503 15:14:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:19.503 15:14:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:19.503 15:14:28 -- common/autotest_common.sh@10 -- # set +x 00:10:19.503 ************************************ 00:10:19.503 START TEST nvmf_lvs_grow 00:10:19.503 ************************************ 00:10:19.503 15:14:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:19.503 * Looking for test storage... 
00:10:19.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:19.503 15:14:28 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:19.503 15:14:28 -- nvmf/common.sh@7 -- # uname -s 00:10:19.503 15:14:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.503 15:14:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.503 15:14:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.503 15:14:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.503 15:14:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.503 15:14:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.503 15:14:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.503 15:14:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.503 15:14:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.503 15:14:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.503 15:14:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:10:19.503 15:14:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:10:19.503 15:14:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.503 15:14:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.503 15:14:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:19.503 15:14:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.503 15:14:28 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:19.503 15:14:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.503 15:14:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.503 15:14:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.504 15:14:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.504 15:14:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.504 15:14:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.504 15:14:28 -- paths/export.sh@5 -- # export PATH 00:10:19.504 15:14:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.504 15:14:28 -- nvmf/common.sh@47 -- # : 0 00:10:19.504 15:14:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:19.504 15:14:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:19.504 15:14:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.504 15:14:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.504 15:14:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.504 15:14:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:19.504 15:14:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:19.504 15:14:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:19.504 15:14:28 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.504 15:14:28 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:19.504 15:14:28 -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:19.504 15:14:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:19.504 15:14:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.504 15:14:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:19.504 15:14:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:19.504 15:14:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:19.504 15:14:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.504 15:14:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:19.504 15:14:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.795 15:14:28 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:10:19.795 15:14:28 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:10:19.795 15:14:28 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:10:19.795 15:14:28 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:10:19.795 15:14:28 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:10:19.795 15:14:28 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:10:19.795 15:14:28 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.795 15:14:28 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:19.795 15:14:28 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:19.795 15:14:28 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:19.795 15:14:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:19.795 15:14:28 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:19.795 15:14:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:19.795 15:14:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.795 15:14:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:19.795 15:14:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:19.795 15:14:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:19.795 15:14:28 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:19.795 15:14:28 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:19.795 15:14:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:19.795 Cannot find device "nvmf_tgt_br" 00:10:19.795 15:14:28 -- nvmf/common.sh@155 -- # true 00:10:19.795 15:14:28 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:19.795 Cannot find device "nvmf_tgt_br2" 00:10:19.795 15:14:28 -- nvmf/common.sh@156 -- # true 00:10:19.795 15:14:28 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:19.795 15:14:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:19.795 Cannot find device "nvmf_tgt_br" 00:10:19.795 15:14:28 -- nvmf/common.sh@158 -- # true 00:10:19.795 15:14:28 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:19.795 Cannot find device "nvmf_tgt_br2" 00:10:19.795 15:14:28 -- nvmf/common.sh@159 -- # true 00:10:19.795 15:14:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:19.795 15:14:28 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:19.795 15:14:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:19.795 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:19.795 15:14:28 -- nvmf/common.sh@162 -- # true 00:10:19.795 15:14:28 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:19.795 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:19.795 15:14:28 -- nvmf/common.sh@163 -- # true 00:10:19.795 15:14:28 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:19.795 15:14:28 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:19.795 15:14:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:19.795 15:14:28 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:19.795 15:14:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:19.795 15:14:28 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:19.795 15:14:28 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:19.795 15:14:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:19.795 15:14:28 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:19.795 15:14:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:19.795 15:14:29 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:19.795 15:14:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:19.795 15:14:29 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:19.795 15:14:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:20.063 15:14:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:10:20.063 15:14:29 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:20.063 15:14:29 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:20.063 15:14:29 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:20.063 15:14:29 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:20.063 15:14:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:20.063 15:14:29 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:20.063 15:14:29 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:20.063 15:14:29 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:20.063 15:14:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:20.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:10:20.063 00:10:20.063 --- 10.0.0.2 ping statistics --- 00:10:20.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.063 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:10:20.063 15:14:29 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:20.063 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:20.063 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:10:20.063 00:10:20.063 --- 10.0.0.3 ping statistics --- 00:10:20.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.063 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:20.063 15:14:29 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:20.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:20.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:20.063 00:10:20.063 --- 10.0.0.1 ping statistics --- 00:10:20.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.063 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:20.063 15:14:29 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.063 15:14:29 -- nvmf/common.sh@422 -- # return 0 00:10:20.063 15:14:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:20.063 15:14:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.063 15:14:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:20.063 15:14:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:20.063 15:14:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.063 15:14:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:20.063 15:14:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:20.063 15:14:29 -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:20.063 15:14:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:20.063 15:14:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:20.063 15:14:29 -- common/autotest_common.sh@10 -- # set +x 00:10:20.063 15:14:29 -- nvmf/common.sh@470 -- # nvmfpid=66050 00:10:20.063 15:14:29 -- nvmf/common.sh@471 -- # waitforlisten 66050 00:10:20.063 15:14:29 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:20.063 15:14:29 -- common/autotest_common.sh@817 -- # '[' -z 66050 ']' 00:10:20.063 15:14:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.063 15:14:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:20.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
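waitforlisten blocks until the freshly started target (pid 66050 here) answers on /var/tmp/spdk.sock before any further RPCs are issued. One way to approximate that wait by hand (a sketch, not the helper's actual implementation; rpc_get_methods is a standard SPDK RPC):

until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done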
00:10:20.063 15:14:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.063 15:14:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:20.063 15:14:29 -- common/autotest_common.sh@10 -- # set +x 00:10:20.063 [2024-04-24 15:14:29.218157] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:10:20.063 [2024-04-24 15:14:29.218277] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.322 [2024-04-24 15:14:29.357902] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.322 [2024-04-24 15:14:29.484136] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.322 [2024-04-24 15:14:29.484206] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.322 [2024-04-24 15:14:29.484222] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.322 [2024-04-24 15:14:29.484233] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.322 [2024-04-24 15:14:29.484242] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.322 [2024-04-24 15:14:29.484284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.260 15:14:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:21.260 15:14:30 -- common/autotest_common.sh@850 -- # return 0 00:10:21.260 15:14:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:21.260 15:14:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:21.260 15:14:30 -- common/autotest_common.sh@10 -- # set +x 00:10:21.260 15:14:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.260 15:14:30 -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:21.518 [2024-04-24 15:14:30.515420] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.518 15:14:30 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:21.518 15:14:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:21.518 15:14:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:21.518 15:14:30 -- common/autotest_common.sh@10 -- # set +x 00:10:21.518 ************************************ 00:10:21.518 START TEST lvs_grow_clean 00:10:21.518 ************************************ 00:10:21.518 15:14:30 -- common/autotest_common.sh@1111 -- # lvs_grow 00:10:21.518 15:14:30 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:21.518 15:14:30 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:21.518 15:14:30 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:21.518 15:14:30 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:21.518 15:14:30 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:21.518 15:14:30 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:21.518 15:14:30 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:21.518 15:14:30 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:21.518 15:14:30 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:21.777 15:14:30 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:21.777 15:14:30 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:22.035 15:14:31 -- target/nvmf_lvs_grow.sh@28 -- # lvs=0dbca96a-9880-48e8-a95f-883999cb876a 00:10:22.035 15:14:31 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dbca96a-9880-48e8-a95f-883999cb876a 00:10:22.035 15:14:31 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:22.294 15:14:31 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:22.294 15:14:31 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:22.294 15:14:31 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0dbca96a-9880-48e8-a95f-883999cb876a lvol 150 00:10:22.553 15:14:31 -- target/nvmf_lvs_grow.sh@33 -- # lvol=3e1a85bb-6db9-4668-a60b-1a20770a75e0 00:10:22.553 15:14:31 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:22.553 15:14:31 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:23.120 [2024-04-24 15:14:32.078395] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:23.120 [2024-04-24 15:14:32.078502] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:23.120 true 00:10:23.120 15:14:32 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dbca96a-9880-48e8-a95f-883999cb876a 00:10:23.120 15:14:32 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:23.378 15:14:32 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:23.378 15:14:32 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:23.656 15:14:32 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3e1a85bb-6db9-4668-a60b-1a20770a75e0 00:10:23.918 15:14:32 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:24.177 [2024-04-24 15:14:33.199126] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.177 15:14:33 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:24.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
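At this point the fixture is complete: a file-backed aio_bdev (created at 200M, already truncated to 400M and rescanned, so the lvstore still reports 49 data clusters), an lvstore "lvs" with 4 MiB clusters, a 150M lvol exported through nqn.2016-06.io.spdk:cnode0, and bdevperf about to attach over TCP. The grow sequence this test is built around, with the remaining RPCs appearing later in the trace, boils down to the following sketch (rpc.py standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev   # enlarge the backing file
rpc.py bdev_aio_rescan aio_bdev                                           # 51200 -> 102400 blocks, as logged above
rpc.py bdev_lvol_grow_lvstore -u 0dbca96a-9880-48e8-a95f-883999cb876a     # grow the lvstore into the new space
rpc.py bdev_lvol_get_lvstores -u 0dbca96a-9880-48e8-a95f-883999cb876a \
    | jq -r '.[0].total_data_clusters'                                    # expected to go from 49 to 99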
00:10:24.435 15:14:33 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66143 00:10:24.435 15:14:33 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:24.435 15:14:33 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:24.435 15:14:33 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66143 /var/tmp/bdevperf.sock 00:10:24.435 15:14:33 -- common/autotest_common.sh@817 -- # '[' -z 66143 ']' 00:10:24.435 15:14:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:24.435 15:14:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:24.435 15:14:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:24.435 15:14:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:24.435 15:14:33 -- common/autotest_common.sh@10 -- # set +x 00:10:24.435 [2024-04-24 15:14:33.571751] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:10:24.435 [2024-04-24 15:14:33.572500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66143 ] 00:10:24.696 [2024-04-24 15:14:33.716102] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.696 [2024-04-24 15:14:33.877550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.636 15:14:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:25.636 15:14:34 -- common/autotest_common.sh@850 -- # return 0 00:10:25.636 15:14:34 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:25.895 Nvme0n1 00:10:25.895 15:14:34 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:26.154 [ 00:10:26.154 { 00:10:26.154 "name": "Nvme0n1", 00:10:26.154 "aliases": [ 00:10:26.154 "3e1a85bb-6db9-4668-a60b-1a20770a75e0" 00:10:26.154 ], 00:10:26.154 "product_name": "NVMe disk", 00:10:26.154 "block_size": 4096, 00:10:26.154 "num_blocks": 38912, 00:10:26.155 "uuid": "3e1a85bb-6db9-4668-a60b-1a20770a75e0", 00:10:26.155 "assigned_rate_limits": { 00:10:26.155 "rw_ios_per_sec": 0, 00:10:26.155 "rw_mbytes_per_sec": 0, 00:10:26.155 "r_mbytes_per_sec": 0, 00:10:26.155 "w_mbytes_per_sec": 0 00:10:26.155 }, 00:10:26.155 "claimed": false, 00:10:26.155 "zoned": false, 00:10:26.155 "supported_io_types": { 00:10:26.155 "read": true, 00:10:26.155 "write": true, 00:10:26.155 "unmap": true, 00:10:26.155 "write_zeroes": true, 00:10:26.155 "flush": true, 00:10:26.155 "reset": true, 00:10:26.155 "compare": true, 00:10:26.155 "compare_and_write": true, 00:10:26.155 "abort": true, 00:10:26.155 "nvme_admin": true, 00:10:26.155 "nvme_io": true 00:10:26.155 }, 00:10:26.155 "memory_domains": [ 00:10:26.155 { 00:10:26.155 "dma_device_id": "system", 00:10:26.155 "dma_device_type": 1 00:10:26.155 } 00:10:26.155 ], 00:10:26.155 "driver_specific": { 00:10:26.155 "nvme": [ 00:10:26.155 { 00:10:26.155 "trid": { 00:10:26.155 "trtype": "TCP", 00:10:26.155 "adrfam": "IPv4", 00:10:26.155 "traddr": "10.0.0.2", 00:10:26.155 
"trsvcid": "4420", 00:10:26.155 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:26.155 }, 00:10:26.155 "ctrlr_data": { 00:10:26.155 "cntlid": 1, 00:10:26.155 "vendor_id": "0x8086", 00:10:26.155 "model_number": "SPDK bdev Controller", 00:10:26.155 "serial_number": "SPDK0", 00:10:26.155 "firmware_revision": "24.05", 00:10:26.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:26.155 "oacs": { 00:10:26.155 "security": 0, 00:10:26.155 "format": 0, 00:10:26.155 "firmware": 0, 00:10:26.155 "ns_manage": 0 00:10:26.155 }, 00:10:26.155 "multi_ctrlr": true, 00:10:26.155 "ana_reporting": false 00:10:26.155 }, 00:10:26.155 "vs": { 00:10:26.155 "nvme_version": "1.3" 00:10:26.155 }, 00:10:26.155 "ns_data": { 00:10:26.155 "id": 1, 00:10:26.155 "can_share": true 00:10:26.155 } 00:10:26.155 } 00:10:26.155 ], 00:10:26.155 "mp_policy": "active_passive" 00:10:26.155 } 00:10:26.155 } 00:10:26.155 ] 00:10:26.155 15:14:35 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:26.155 15:14:35 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66171 00:10:26.155 15:14:35 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:26.155 Running I/O for 10 seconds... 00:10:27.532 Latency(us) 00:10:27.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:27.532 Nvme0n1 : 1.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:10:27.532 =================================================================================================================== 00:10:27.532 Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:10:27.532 00:10:28.107 15:14:37 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0dbca96a-9880-48e8-a95f-883999cb876a 00:10:28.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:28.381 Nvme0n1 : 2.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:10:28.381 =================================================================================================================== 00:10:28.381 Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:10:28.381 00:10:28.381 true 00:10:28.382 15:14:37 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dbca96a-9880-48e8-a95f-883999cb876a 00:10:28.382 15:14:37 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:28.640 15:14:37 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:28.640 15:14:37 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:28.640 15:14:37 -- target/nvmf_lvs_grow.sh@65 -- # wait 66171 00:10:29.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:29.208 Nvme0n1 : 3.00 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:10:29.208 =================================================================================================================== 00:10:29.208 Total : 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:10:29.208 00:10:30.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.143 Nvme0n1 : 4.00 6635.75 25.92 0.00 0.00 0.00 0.00 0.00 00:10:30.143 =================================================================================================================== 00:10:30.143 Total : 6635.75 25.92 0.00 0.00 0.00 0.00 0.00 00:10:30.143 00:10:31.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.520 Nvme0n1 : 5.00 
6680.20 26.09 0.00 0.00 0.00 0.00 0.00 00:10:31.520 =================================================================================================================== 00:10:31.520 Total : 6680.20 26.09 0.00 0.00 0.00 0.00 0.00 00:10:31.520 00:10:32.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.454 Nvme0n1 : 6.00 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:10:32.454 =================================================================================================================== 00:10:32.454 Total : 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:10:32.454 00:10:33.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.388 Nvme0n1 : 7.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:10:33.388 =================================================================================================================== 00:10:33.388 Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:10:33.388 00:10:34.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.324 Nvme0n1 : 8.00 6746.88 26.35 0.00 0.00 0.00 0.00 0.00 00:10:34.324 =================================================================================================================== 00:10:34.324 Total : 6746.88 26.35 0.00 0.00 0.00 0.00 0.00 00:10:34.324 00:10:35.259 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.259 Nvme0n1 : 9.00 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:10:35.259 =================================================================================================================== 00:10:35.259 Total : 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:10:35.259 00:10:36.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.195 Nvme0n1 : 10.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:10:36.195 =================================================================================================================== 00:10:36.195 Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:10:36.195 00:10:36.195 00:10:36.195 Latency(us) 00:10:36.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.195 Nvme0n1 : 10.02 6794.35 26.54 0.00 0.00 18832.82 15252.01 50760.61 00:10:36.195 =================================================================================================================== 00:10:36.195 Total : 6794.35 26.54 0.00 0.00 18832.82 15252.01 50760.61 00:10:36.195 0 00:10:36.195 15:14:45 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66143 00:10:36.195 15:14:45 -- common/autotest_common.sh@936 -- # '[' -z 66143 ']' 00:10:36.195 15:14:45 -- common/autotest_common.sh@940 -- # kill -0 66143 00:10:36.195 15:14:45 -- common/autotest_common.sh@941 -- # uname 00:10:36.195 15:14:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:36.195 15:14:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66143 00:10:36.455 killing process with pid 66143 00:10:36.455 Received shutdown signal, test time was about 10.000000 seconds 00:10:36.455 00:10:36.455 Latency(us) 00:10:36.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.455 =================================================================================================================== 00:10:36.455 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:36.455 15:14:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:36.455 15:14:45 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:36.455 15:14:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66143' 00:10:36.455 15:14:45 -- common/autotest_common.sh@955 -- # kill 66143 00:10:36.455 15:14:45 -- common/autotest_common.sh@960 -- # wait 66143 00:10:36.713 15:14:45 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:36.972 15:14:45 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:37.230 15:14:46 -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:37.230 15:14:46 -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dbca96a-9880-48e8-a95f-883999cb876a 00:10:37.230 15:14:46 -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:37.230 15:14:46 -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:37.230 15:14:46 -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:37.489 [2024-04-24 15:14:46.718062] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:37.747 15:14:46 -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dbca96a-9880-48e8-a95f-883999cb876a 00:10:37.747 15:14:46 -- common/autotest_common.sh@638 -- # local es=0 00:10:37.747 15:14:46 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dbca96a-9880-48e8-a95f-883999cb876a 00:10:37.748 15:14:46 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:37.748 15:14:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:37.748 15:14:46 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:37.748 15:14:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:37.748 15:14:46 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:37.748 15:14:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:37.748 15:14:46 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:37.748 15:14:46 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:37.748 15:14:46 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dbca96a-9880-48e8-a95f-883999cb876a 00:10:38.006 request: 00:10:38.006 { 00:10:38.006 "uuid": "0dbca96a-9880-48e8-a95f-883999cb876a", 00:10:38.006 "method": "bdev_lvol_get_lvstores", 00:10:38.006 "req_id": 1 00:10:38.006 } 00:10:38.006 Got JSON-RPC error response 00:10:38.006 response: 00:10:38.006 { 00:10:38.006 "code": -19, 00:10:38.006 "message": "No such device" 00:10:38.006 } 00:10:38.006 15:14:47 -- common/autotest_common.sh@641 -- # es=1 00:10:38.006 15:14:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:38.006 15:14:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:38.006 15:14:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:38.006 15:14:47 -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:38.264 aio_bdev 
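Re-creating the AIO bdev above is enough to bring the logical volume back: the lvstore metadata lives on the backing file, so examination of the new aio_bdev re-registers the lvol under its original UUID. The waitforbdev call that follows simply flushes outstanding examine callbacks and then asks for the bdev with a 2000 ms timeout. A rough standalone equivalent (a sketch, not the autotest_common.sh implementation):

    wait_for_bdev() {
        local name=$1 timeout_ms=${2:-2000}
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        "$rpc" bdev_wait_for_examine                          # let bdev examination settle first
        if "$rpc" bdev_get_bdevs -b "$name" -t "$timeout_ms" >/dev/null; then
            echo "bdev $name is available"
        else
            echo "bdev $name did not appear within ${timeout_ms} ms" >&2
            return 1
        fi
    }

    wait_for_bdev 3e1a85bb-6db9-4668-a60b-1a20770a75e0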
00:10:38.264 15:14:47 -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3e1a85bb-6db9-4668-a60b-1a20770a75e0 00:10:38.264 15:14:47 -- common/autotest_common.sh@885 -- # local bdev_name=3e1a85bb-6db9-4668-a60b-1a20770a75e0 00:10:38.265 15:14:47 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:10:38.265 15:14:47 -- common/autotest_common.sh@887 -- # local i 00:10:38.265 15:14:47 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:10:38.265 15:14:47 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:10:38.265 15:14:47 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:38.534 15:14:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e1a85bb-6db9-4668-a60b-1a20770a75e0 -t 2000 00:10:38.793 [ 00:10:38.793 { 00:10:38.793 "name": "3e1a85bb-6db9-4668-a60b-1a20770a75e0", 00:10:38.793 "aliases": [ 00:10:38.793 "lvs/lvol" 00:10:38.793 ], 00:10:38.793 "product_name": "Logical Volume", 00:10:38.793 "block_size": 4096, 00:10:38.793 "num_blocks": 38912, 00:10:38.793 "uuid": "3e1a85bb-6db9-4668-a60b-1a20770a75e0", 00:10:38.793 "assigned_rate_limits": { 00:10:38.793 "rw_ios_per_sec": 0, 00:10:38.793 "rw_mbytes_per_sec": 0, 00:10:38.793 "r_mbytes_per_sec": 0, 00:10:38.793 "w_mbytes_per_sec": 0 00:10:38.793 }, 00:10:38.793 "claimed": false, 00:10:38.793 "zoned": false, 00:10:38.793 "supported_io_types": { 00:10:38.793 "read": true, 00:10:38.793 "write": true, 00:10:38.793 "unmap": true, 00:10:38.793 "write_zeroes": true, 00:10:38.793 "flush": false, 00:10:38.793 "reset": true, 00:10:38.793 "compare": false, 00:10:38.793 "compare_and_write": false, 00:10:38.793 "abort": false, 00:10:38.793 "nvme_admin": false, 00:10:38.793 "nvme_io": false 00:10:38.793 }, 00:10:38.793 "driver_specific": { 00:10:38.793 "lvol": { 00:10:38.793 "lvol_store_uuid": "0dbca96a-9880-48e8-a95f-883999cb876a", 00:10:38.793 "base_bdev": "aio_bdev", 00:10:38.793 "thin_provision": false, 00:10:38.793 "snapshot": false, 00:10:38.793 "clone": false, 00:10:38.793 "esnap_clone": false 00:10:38.793 } 00:10:38.793 } 00:10:38.793 } 00:10:38.793 ] 00:10:38.793 15:14:47 -- common/autotest_common.sh@893 -- # return 0 00:10:38.793 15:14:47 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dbca96a-9880-48e8-a95f-883999cb876a 00:10:38.793 15:14:47 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:39.050 15:14:48 -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:39.050 15:14:48 -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dbca96a-9880-48e8-a95f-883999cb876a 00:10:39.050 15:14:48 -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:39.307 15:14:48 -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:39.307 15:14:48 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3e1a85bb-6db9-4668-a60b-1a20770a75e0 00:10:39.565 15:14:48 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0dbca96a-9880-48e8-a95f-883999cb876a 00:10:39.822 15:14:48 -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:40.080 15:14:49 -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:40.338 ************************************ 00:10:40.338 END TEST lvs_grow_clean 00:10:40.338 
************************************ 00:10:40.338 00:10:40.338 real 0m18.915s 00:10:40.338 user 0m17.636s 00:10:40.338 sys 0m2.908s 00:10:40.338 15:14:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:40.338 15:14:49 -- common/autotest_common.sh@10 -- # set +x 00:10:40.734 15:14:49 -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:40.734 15:14:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:40.734 15:14:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:40.734 15:14:49 -- common/autotest_common.sh@10 -- # set +x 00:10:40.734 ************************************ 00:10:40.734 START TEST lvs_grow_dirty 00:10:40.734 ************************************ 00:10:40.734 15:14:49 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:10:40.734 15:14:49 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:40.734 15:14:49 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:40.734 15:14:49 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:40.734 15:14:49 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:40.734 15:14:49 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:40.734 15:14:49 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:40.734 15:14:49 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:40.734 15:14:49 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:40.734 15:14:49 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:40.734 15:14:49 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:40.734 15:14:49 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:41.300 15:14:50 -- target/nvmf_lvs_grow.sh@28 -- # lvs=03a654b4-030c-4cd0-b8ac-c4aef63d4bb1 00:10:41.300 15:14:50 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a654b4-030c-4cd0-b8ac-c4aef63d4bb1 00:10:41.300 15:14:50 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:41.300 15:14:50 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:41.300 15:14:50 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:41.300 15:14:50 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 03a654b4-030c-4cd0-b8ac-c4aef63d4bb1 lvol 150 00:10:41.866 15:14:50 -- target/nvmf_lvs_grow.sh@33 -- # lvol=dfed425f-3128-4c16-9033-36b77fe4e36d 00:10:41.866 15:14:50 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:41.867 15:14:50 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:41.867 [2024-04-24 15:14:51.088551] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:41.867 [2024-04-24 15:14:51.088703] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:41.867 true 00:10:42.150 15:14:51 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:42.150 15:14:51 -- 
target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a654b4-030c-4cd0-b8ac-c4aef63d4bb1 00:10:42.428 15:14:51 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:42.428 15:14:51 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:42.687 15:14:51 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dfed425f-3128-4c16-9033-36b77fe4e36d 00:10:42.687 15:14:51 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:42.945 [2024-04-24 15:14:52.125182] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.945 15:14:52 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:43.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:43.513 15:14:52 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66427 00:10:43.513 15:14:52 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:43.513 15:14:52 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:43.513 15:14:52 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66427 /var/tmp/bdevperf.sock 00:10:43.513 15:14:52 -- common/autotest_common.sh@817 -- # '[' -z 66427 ']' 00:10:43.513 15:14:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:43.513 15:14:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:43.513 15:14:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:43.513 15:14:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:43.513 15:14:52 -- common/autotest_common.sh@10 -- # set +x 00:10:43.513 [2024-04-24 15:14:52.524867] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
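bdevperf is started here the same way as in the clean variant: -z keeps it idle after boot so the harness can configure it over /var/tmp/bdevperf.sock first, -m 0x2 pins it to core 1 (hence the "Reactor started on core 1" notice), and -o 4096 -q 128 -w randwrite -t 10 describe the workload: 4 KiB random writes at queue depth 128 for 10 seconds, with -S 1 driving the per-second interim result lines. A hand-driven replay of the same sequence, using the paths seen throughout this log, would look roughly like:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bdevperf.sock

    # Start bdevperf idle (-z); it opens the RPC socket and waits to be configured.
    "$SPDK"/build/examples/bdevperf -r "$SOCK" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # Attach the namespace exported by the target as bdev Nvme0n1 (the test waits
    # for the socket to appear; a sleep stands in for that here).
    sleep 1
    "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # Kick off the configured workload and collect the summary table.
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests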
00:10:43.513 [2024-04-24 15:14:52.525206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66427 ] 00:10:43.513 [2024-04-24 15:14:52.659198] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.771 [2024-04-24 15:14:52.806793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.338 15:14:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:44.338 15:14:53 -- common/autotest_common.sh@850 -- # return 0 00:10:44.338 15:14:53 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:44.596 Nvme0n1 00:10:44.596 15:14:53 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:44.853 [ 00:10:44.853 { 00:10:44.853 "name": "Nvme0n1", 00:10:44.853 "aliases": [ 00:10:44.853 "dfed425f-3128-4c16-9033-36b77fe4e36d" 00:10:44.853 ], 00:10:44.853 "product_name": "NVMe disk", 00:10:44.853 "block_size": 4096, 00:10:44.853 "num_blocks": 38912, 00:10:44.853 "uuid": "dfed425f-3128-4c16-9033-36b77fe4e36d", 00:10:44.853 "assigned_rate_limits": { 00:10:44.853 "rw_ios_per_sec": 0, 00:10:44.853 "rw_mbytes_per_sec": 0, 00:10:44.853 "r_mbytes_per_sec": 0, 00:10:44.853 "w_mbytes_per_sec": 0 00:10:44.853 }, 00:10:44.853 "claimed": false, 00:10:44.853 "zoned": false, 00:10:44.853 "supported_io_types": { 00:10:44.853 "read": true, 00:10:44.853 "write": true, 00:10:44.853 "unmap": true, 00:10:44.853 "write_zeroes": true, 00:10:44.853 "flush": true, 00:10:44.853 "reset": true, 00:10:44.853 "compare": true, 00:10:44.853 "compare_and_write": true, 00:10:44.853 "abort": true, 00:10:44.853 "nvme_admin": true, 00:10:44.853 "nvme_io": true 00:10:44.853 }, 00:10:44.853 "memory_domains": [ 00:10:44.853 { 00:10:44.853 "dma_device_id": "system", 00:10:44.853 "dma_device_type": 1 00:10:44.853 } 00:10:44.853 ], 00:10:44.853 "driver_specific": { 00:10:44.853 "nvme": [ 00:10:44.853 { 00:10:44.853 "trid": { 00:10:44.853 "trtype": "TCP", 00:10:44.853 "adrfam": "IPv4", 00:10:44.853 "traddr": "10.0.0.2", 00:10:44.853 "trsvcid": "4420", 00:10:44.853 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:44.853 }, 00:10:44.853 "ctrlr_data": { 00:10:44.853 "cntlid": 1, 00:10:44.853 "vendor_id": "0x8086", 00:10:44.853 "model_number": "SPDK bdev Controller", 00:10:44.853 "serial_number": "SPDK0", 00:10:44.853 "firmware_revision": "24.05", 00:10:44.853 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:44.853 "oacs": { 00:10:44.853 "security": 0, 00:10:44.853 "format": 0, 00:10:44.853 "firmware": 0, 00:10:44.853 "ns_manage": 0 00:10:44.853 }, 00:10:44.853 "multi_ctrlr": true, 00:10:44.853 "ana_reporting": false 00:10:44.853 }, 00:10:44.853 "vs": { 00:10:44.853 "nvme_version": "1.3" 00:10:44.853 }, 00:10:44.853 "ns_data": { 00:10:44.853 "id": 1, 00:10:44.853 "can_share": true 00:10:44.853 } 00:10:44.853 } 00:10:44.853 ], 00:10:44.853 "mp_policy": "active_passive" 00:10:44.853 } 00:10:44.853 } 00:10:44.853 ] 00:10:44.853 15:14:54 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66450 00:10:44.853 15:14:54 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:44.854 15:14:54 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock 
perform_tests 00:10:45.112 Running I/O for 10 seconds... 00:10:46.044 Latency(us) 00:10:46.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:46.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.044 Nvme0n1 : 1.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:10:46.044 =================================================================================================================== 00:10:46.044 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:10:46.044 00:10:46.978 15:14:56 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 03a654b4-030c-4cd0-b8ac-c4aef63d4bb1 00:10:46.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.978 Nvme0n1 : 2.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:10:46.978 =================================================================================================================== 00:10:46.978 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:10:46.978 00:10:47.236 true 00:10:47.236 15:14:56 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a654b4-030c-4cd0-b8ac-c4aef63d4bb1 00:10:47.236 15:14:56 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:47.494 15:14:56 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:47.494 15:14:56 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:47.494 15:14:56 -- target/nvmf_lvs_grow.sh@65 -- # wait 66450 00:10:48.060 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:48.060 Nvme0n1 : 3.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:10:48.060 =================================================================================================================== 00:10:48.060 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:10:48.060 00:10:48.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:48.994 Nvme0n1 : 4.00 7143.75 27.91 0.00 0.00 0.00 0.00 0.00 00:10:48.994 =================================================================================================================== 00:10:48.994 Total : 7143.75 27.91 0.00 0.00 0.00 0.00 0.00 00:10:48.994 00:10:49.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:49.930 Nvme0n1 : 5.00 7137.40 27.88 0.00 0.00 0.00 0.00 0.00 00:10:49.930 =================================================================================================================== 00:10:49.930 Total : 7137.40 27.88 0.00 0.00 0.00 0.00 0.00 00:10:49.930 00:10:51.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:51.306 Nvme0n1 : 6.00 6969.67 27.23 0.00 0.00 0.00 0.00 0.00 00:10:51.306 =================================================================================================================== 00:10:51.306 Total : 6969.67 27.23 0.00 0.00 0.00 0.00 0.00 00:10:51.306 00:10:52.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.278 Nvme0n1 : 7.00 6971.86 27.23 0.00 0.00 0.00 0.00 0.00 00:10:52.278 =================================================================================================================== 00:10:52.278 Total : 6971.86 27.23 0.00 0.00 0.00 0.00 0.00 00:10:52.278 00:10:53.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:53.211 Nvme0n1 : 8.00 6941.75 27.12 0.00 0.00 0.00 0.00 0.00 00:10:53.211 
=================================================================================================================== 00:10:53.211 Total : 6941.75 27.12 0.00 0.00 0.00 0.00 0.00 00:10:53.211 00:10:54.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:54.146 Nvme0n1 : 9.00 6932.44 27.08 0.00 0.00 0.00 0.00 0.00 00:10:54.146 =================================================================================================================== 00:10:54.146 Total : 6932.44 27.08 0.00 0.00 0.00 0.00 0.00 00:10:54.146 00:10:55.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:55.117 Nvme0n1 : 10.00 6925.00 27.05 0.00 0.00 0.00 0.00 0.00 00:10:55.117 =================================================================================================================== 00:10:55.117 Total : 6925.00 27.05 0.00 0.00 0.00 0.00 0.00 00:10:55.117 00:10:55.117 00:10:55.117 Latency(us) 00:10:55.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:55.117 Nvme0n1 : 10.01 6930.58 27.07 0.00 0.00 18463.28 12332.68 153473.40 00:10:55.117 =================================================================================================================== 00:10:55.117 Total : 6930.58 27.07 0.00 0.00 18463.28 12332.68 153473.40 00:10:55.117 0 00:10:55.117 15:15:04 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66427 00:10:55.117 15:15:04 -- common/autotest_common.sh@936 -- # '[' -z 66427 ']' 00:10:55.117 15:15:04 -- common/autotest_common.sh@940 -- # kill -0 66427 00:10:55.117 15:15:04 -- common/autotest_common.sh@941 -- # uname 00:10:55.117 15:15:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:55.117 15:15:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66427 00:10:55.117 killing process with pid 66427 00:10:55.117 Received shutdown signal, test time was about 10.000000 seconds 00:10:55.117 00:10:55.117 Latency(us) 00:10:55.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.117 =================================================================================================================== 00:10:55.118 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:55.118 15:15:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:55.118 15:15:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:55.118 15:15:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66427' 00:10:55.118 15:15:04 -- common/autotest_common.sh@955 -- # kill 66427 00:10:55.118 15:15:04 -- common/autotest_common.sh@960 -- # wait 66427 00:10:55.375 15:15:04 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:55.634 15:15:04 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:55.892 15:15:05 -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a654b4-030c-4cd0-b8ac-c4aef63d4bb1 00:10:55.892 15:15:05 -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:56.151 15:15:05 -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:56.151 15:15:05 -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:56.151 15:15:05 -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66050 00:10:56.151 15:15:05 -- 
target/nvmf_lvs_grow.sh@75 -- # wait 66050 00:10:56.151 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66050 Killed "${NVMF_APP[@]}" "$@" 00:10:56.151 15:15:05 -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:56.151 15:15:05 -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:56.151 15:15:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:56.151 15:15:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:56.151 15:15:05 -- common/autotest_common.sh@10 -- # set +x 00:10:56.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.151 15:15:05 -- nvmf/common.sh@470 -- # nvmfpid=66584 00:10:56.151 15:15:05 -- nvmf/common.sh@471 -- # waitforlisten 66584 00:10:56.151 15:15:05 -- common/autotest_common.sh@817 -- # '[' -z 66584 ']' 00:10:56.151 15:15:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:56.151 15:15:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.151 15:15:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:56.151 15:15:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.151 15:15:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:56.151 15:15:05 -- common/autotest_common.sh@10 -- # set +x 00:10:56.410 [2024-04-24 15:15:05.423059] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:10:56.410 [2024-04-24 15:15:05.423175] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.410 [2024-04-24 15:15:05.561634] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.669 [2024-04-24 15:15:05.680868] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.669 [2024-04-24 15:15:05.680924] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.669 [2024-04-24 15:15:05.680936] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.669 [2024-04-24 15:15:05.680944] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.669 [2024-04-24 15:15:05.680952] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
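This is the step that gives lvs_grow_dirty its name: the original nvmf target (pid 66050) is killed with SIGKILL, so the lvstore is never cleanly unloaded, and a fresh target is started in its place. When the new process re-creates the AIO bdev below, the blobstore notices the unclean shutdown and replays its metadata (the "Performing recovery on blobstore" notices), after which the lvol reappears under its original UUID and the cluster counters must still match what was seen before the kill. A sketch of that post-recovery check, using the same names as the log and assuming the replacement target is already up:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvs_uuid=03a654b4-030c-4cd0-b8ac-c4aef63d4bb1

    # Re-attach the backing file; lvstore examination and recovery run as a side effect.
    "$rpc" bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096

    free=$("$rpc" bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
    total=$("$rpc" bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 )) || echo "unexpected cluster counts after recovery" >&2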
00:10:56.669 [2024-04-24 15:15:05.680989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.236 15:15:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:57.236 15:15:06 -- common/autotest_common.sh@850 -- # return 0 00:10:57.236 15:15:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:57.236 15:15:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:57.236 15:15:06 -- common/autotest_common.sh@10 -- # set +x 00:10:57.236 15:15:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.236 15:15:06 -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:57.498 [2024-04-24 15:15:06.734306] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:57.498 [2024-04-24 15:15:06.734772] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:57.498 [2024-04-24 15:15:06.735195] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:57.757 15:15:06 -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:57.757 15:15:06 -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev dfed425f-3128-4c16-9033-36b77fe4e36d 00:10:57.757 15:15:06 -- common/autotest_common.sh@885 -- # local bdev_name=dfed425f-3128-4c16-9033-36b77fe4e36d 00:10:57.757 15:15:06 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:10:57.757 15:15:06 -- common/autotest_common.sh@887 -- # local i 00:10:57.757 15:15:06 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:10:57.757 15:15:06 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:10:57.757 15:15:06 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:58.016 15:15:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dfed425f-3128-4c16-9033-36b77fe4e36d -t 2000 00:10:58.274 [ 00:10:58.274 { 00:10:58.274 "name": "dfed425f-3128-4c16-9033-36b77fe4e36d", 00:10:58.274 "aliases": [ 00:10:58.274 "lvs/lvol" 00:10:58.274 ], 00:10:58.274 "product_name": "Logical Volume", 00:10:58.274 "block_size": 4096, 00:10:58.274 "num_blocks": 38912, 00:10:58.274 "uuid": "dfed425f-3128-4c16-9033-36b77fe4e36d", 00:10:58.274 "assigned_rate_limits": { 00:10:58.274 "rw_ios_per_sec": 0, 00:10:58.274 "rw_mbytes_per_sec": 0, 00:10:58.274 "r_mbytes_per_sec": 0, 00:10:58.274 "w_mbytes_per_sec": 0 00:10:58.274 }, 00:10:58.274 "claimed": false, 00:10:58.274 "zoned": false, 00:10:58.274 "supported_io_types": { 00:10:58.274 "read": true, 00:10:58.274 "write": true, 00:10:58.274 "unmap": true, 00:10:58.274 "write_zeroes": true, 00:10:58.274 "flush": false, 00:10:58.274 "reset": true, 00:10:58.274 "compare": false, 00:10:58.274 "compare_and_write": false, 00:10:58.274 "abort": false, 00:10:58.274 "nvme_admin": false, 00:10:58.274 "nvme_io": false 00:10:58.274 }, 00:10:58.274 "driver_specific": { 00:10:58.274 "lvol": { 00:10:58.274 "lvol_store_uuid": "03a654b4-030c-4cd0-b8ac-c4aef63d4bb1", 00:10:58.274 "base_bdev": "aio_bdev", 00:10:58.274 "thin_provision": false, 00:10:58.274 "snapshot": false, 00:10:58.274 "clone": false, 00:10:58.274 "esnap_clone": false 00:10:58.274 } 00:10:58.274 } 00:10:58.274 } 00:10:58.274 ] 00:10:58.274 15:15:07 -- common/autotest_common.sh@893 -- # return 0 00:10:58.274 15:15:07 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
03a654b4-030c-4cd0-b8ac-c4aef63d4bb1 00:10:58.274 15:15:07 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:58.533 15:15:07 -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:58.533 15:15:07 -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a654b4-030c-4cd0-b8ac-c4aef63d4bb1 00:10:58.533 15:15:07 -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:58.792 15:15:07 -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:58.792 15:15:07 -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:59.050 [2024-04-24 15:15:08.123813] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:59.050 15:15:08 -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a654b4-030c-4cd0-b8ac-c4aef63d4bb1 00:10:59.050 15:15:08 -- common/autotest_common.sh@638 -- # local es=0 00:10:59.050 15:15:08 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a654b4-030c-4cd0-b8ac-c4aef63d4bb1 00:10:59.050 15:15:08 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:59.050 15:15:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:59.050 15:15:08 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:59.050 15:15:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:59.051 15:15:08 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:59.051 15:15:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:59.051 15:15:08 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:59.051 15:15:08 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:59.051 15:15:08 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a654b4-030c-4cd0-b8ac-c4aef63d4bb1 00:10:59.309 request: 00:10:59.309 { 00:10:59.309 "uuid": "03a654b4-030c-4cd0-b8ac-c4aef63d4bb1", 00:10:59.309 "method": "bdev_lvol_get_lvstores", 00:10:59.309 "req_id": 1 00:10:59.309 } 00:10:59.309 Got JSON-RPC error response 00:10:59.309 response: 00:10:59.309 { 00:10:59.309 "code": -19, 00:10:59.309 "message": "No such device" 00:10:59.309 } 00:10:59.309 15:15:08 -- common/autotest_common.sh@641 -- # es=1 00:10:59.309 15:15:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:59.309 15:15:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:59.309 15:15:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:59.309 15:15:08 -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:59.568 aio_bdev 00:10:59.568 15:15:08 -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dfed425f-3128-4c16-9033-36b77fe4e36d 00:10:59.568 15:15:08 -- common/autotest_common.sh@885 -- # local bdev_name=dfed425f-3128-4c16-9033-36b77fe4e36d 00:10:59.568 15:15:08 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:10:59.568 15:15:08 -- common/autotest_common.sh@887 -- # local i 00:10:59.568 15:15:08 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:10:59.568 15:15:08 -- 
common/autotest_common.sh@888 -- # bdev_timeout=2000 00:10:59.568 15:15:08 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:59.829 15:15:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dfed425f-3128-4c16-9033-36b77fe4e36d -t 2000 00:11:00.088 [ 00:11:00.088 { 00:11:00.088 "name": "dfed425f-3128-4c16-9033-36b77fe4e36d", 00:11:00.088 "aliases": [ 00:11:00.088 "lvs/lvol" 00:11:00.088 ], 00:11:00.088 "product_name": "Logical Volume", 00:11:00.088 "block_size": 4096, 00:11:00.088 "num_blocks": 38912, 00:11:00.088 "uuid": "dfed425f-3128-4c16-9033-36b77fe4e36d", 00:11:00.088 "assigned_rate_limits": { 00:11:00.088 "rw_ios_per_sec": 0, 00:11:00.088 "rw_mbytes_per_sec": 0, 00:11:00.088 "r_mbytes_per_sec": 0, 00:11:00.088 "w_mbytes_per_sec": 0 00:11:00.088 }, 00:11:00.088 "claimed": false, 00:11:00.088 "zoned": false, 00:11:00.088 "supported_io_types": { 00:11:00.088 "read": true, 00:11:00.088 "write": true, 00:11:00.088 "unmap": true, 00:11:00.088 "write_zeroes": true, 00:11:00.088 "flush": false, 00:11:00.088 "reset": true, 00:11:00.088 "compare": false, 00:11:00.088 "compare_and_write": false, 00:11:00.088 "abort": false, 00:11:00.088 "nvme_admin": false, 00:11:00.088 "nvme_io": false 00:11:00.088 }, 00:11:00.088 "driver_specific": { 00:11:00.088 "lvol": { 00:11:00.088 "lvol_store_uuid": "03a654b4-030c-4cd0-b8ac-c4aef63d4bb1", 00:11:00.088 "base_bdev": "aio_bdev", 00:11:00.088 "thin_provision": false, 00:11:00.088 "snapshot": false, 00:11:00.088 "clone": false, 00:11:00.088 "esnap_clone": false 00:11:00.088 } 00:11:00.088 } 00:11:00.088 } 00:11:00.088 ] 00:11:00.088 15:15:09 -- common/autotest_common.sh@893 -- # return 0 00:11:00.088 15:15:09 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a654b4-030c-4cd0-b8ac-c4aef63d4bb1 00:11:00.088 15:15:09 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:00.346 15:15:09 -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:00.346 15:15:09 -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:00.346 15:15:09 -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a654b4-030c-4cd0-b8ac-c4aef63d4bb1 00:11:00.912 15:15:09 -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:00.913 15:15:09 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete dfed425f-3128-4c16-9033-36b77fe4e36d 00:11:00.913 15:15:10 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 03a654b4-030c-4cd0-b8ac-c4aef63d4bb1 00:11:01.171 15:15:10 -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:01.429 15:15:10 -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:01.995 ************************************ 00:11:01.995 END TEST lvs_grow_dirty 00:11:01.995 ************************************ 00:11:01.995 00:11:01.995 real 0m21.306s 00:11:01.995 user 0m44.579s 00:11:01.995 sys 0m8.512s 00:11:01.995 15:15:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:01.995 15:15:10 -- common/autotest_common.sh@10 -- # set +x 00:11:01.995 15:15:11 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:01.995 15:15:11 -- common/autotest_common.sh@794 -- # type=--id 00:11:01.995 15:15:11 -- 
common/autotest_common.sh@795 -- # id=0 00:11:01.995 15:15:11 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:11:01.995 15:15:11 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:01.995 15:15:11 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:11:01.995 15:15:11 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:11:01.995 15:15:11 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:11:01.995 15:15:11 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:01.995 nvmf_trace.0 00:11:01.995 15:15:11 -- common/autotest_common.sh@809 -- # return 0 00:11:01.995 15:15:11 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:01.995 15:15:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:01.995 15:15:11 -- nvmf/common.sh@117 -- # sync 00:11:01.995 15:15:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:01.995 15:15:11 -- nvmf/common.sh@120 -- # set +e 00:11:01.995 15:15:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:01.995 15:15:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:01.995 rmmod nvme_tcp 00:11:01.995 rmmod nvme_fabrics 00:11:01.995 rmmod nvme_keyring 00:11:01.995 15:15:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:01.995 15:15:11 -- nvmf/common.sh@124 -- # set -e 00:11:01.995 15:15:11 -- nvmf/common.sh@125 -- # return 0 00:11:01.995 15:15:11 -- nvmf/common.sh@478 -- # '[' -n 66584 ']' 00:11:01.995 15:15:11 -- nvmf/common.sh@479 -- # killprocess 66584 00:11:01.995 15:15:11 -- common/autotest_common.sh@936 -- # '[' -z 66584 ']' 00:11:01.995 15:15:11 -- common/autotest_common.sh@940 -- # kill -0 66584 00:11:01.995 15:15:11 -- common/autotest_common.sh@941 -- # uname 00:11:01.995 15:15:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:01.995 15:15:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66584 00:11:02.255 killing process with pid 66584 00:11:02.255 15:15:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:02.255 15:15:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:02.255 15:15:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66584' 00:11:02.255 15:15:11 -- common/autotest_common.sh@955 -- # kill 66584 00:11:02.255 15:15:11 -- common/autotest_common.sh@960 -- # wait 66584 00:11:02.513 15:15:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:02.513 15:15:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:02.513 15:15:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:02.513 15:15:11 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:02.513 15:15:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:02.513 15:15:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.513 15:15:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.513 15:15:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.513 15:15:11 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:02.513 ************************************ 00:11:02.513 END TEST nvmf_lvs_grow 00:11:02.513 ************************************ 00:11:02.513 00:11:02.513 real 0m42.925s 00:11:02.513 user 1m8.976s 00:11:02.513 sys 0m12.200s 00:11:02.513 15:15:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:02.513 15:15:11 -- common/autotest_common.sh@10 -- # set +x 00:11:02.513 15:15:11 -- nvmf/nvmf.sh@50 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:02.513 15:15:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:02.513 15:15:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:02.513 15:15:11 -- common/autotest_common.sh@10 -- # set +x 00:11:02.513 ************************************ 00:11:02.513 START TEST nvmf_bdev_io_wait 00:11:02.513 ************************************ 00:11:02.513 15:15:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:02.513 * Looking for test storage... 00:11:02.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:02.513 15:15:11 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:02.513 15:15:11 -- nvmf/common.sh@7 -- # uname -s 00:11:02.513 15:15:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.513 15:15:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.513 15:15:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.513 15:15:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.513 15:15:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.513 15:15:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.513 15:15:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.513 15:15:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.513 15:15:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.513 15:15:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.513 15:15:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:11:02.513 15:15:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:11:02.513 15:15:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.513 15:15:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.513 15:15:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:02.513 15:15:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.513 15:15:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:02.513 15:15:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.513 15:15:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.513 15:15:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.513 15:15:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.513 15:15:11 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.513 15:15:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.513 15:15:11 -- paths/export.sh@5 -- # export PATH 00:11:02.513 15:15:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.513 15:15:11 -- nvmf/common.sh@47 -- # : 0 00:11:02.513 15:15:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.513 15:15:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.513 15:15:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.513 15:15:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.513 15:15:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.513 15:15:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.513 15:15:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.513 15:15:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.513 15:15:11 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:02.513 15:15:11 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:02.513 15:15:11 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:02.774 15:15:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:02.774 15:15:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.774 15:15:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:02.774 15:15:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:02.774 15:15:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:02.774 15:15:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.774 15:15:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.774 15:15:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.774 15:15:11 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:11:02.774 15:15:11 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:11:02.774 15:15:11 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:11:02.774 15:15:11 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:11:02.774 15:15:11 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 
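Because this run uses NET_TYPE=virt, nvmftestinit falls through to nvmf_veth_init: the target is placed in its own network namespace (nvmf_tgt_ns_spdk) and reached over veth pairs joined by a Linux bridge, with 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 on the target side, plus an iptables rule for TCP port 4420 and ping checks of all three addresses. A condensed stand-alone sketch of the same topology follows (second target interface left out for brevity; needs root and a scratch machine):

    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2        # initiator side -> target namespace over the bridge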
00:11:02.774 15:15:11 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:11:02.774 15:15:11 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.774 15:15:11 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.774 15:15:11 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:02.774 15:15:11 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:02.774 15:15:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:02.774 15:15:11 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:02.774 15:15:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:02.774 15:15:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.774 15:15:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:02.774 15:15:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:02.774 15:15:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:02.774 15:15:11 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:02.774 15:15:11 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:02.774 15:15:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:02.774 Cannot find device "nvmf_tgt_br" 00:11:02.774 15:15:11 -- nvmf/common.sh@155 -- # true 00:11:02.774 15:15:11 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:02.774 Cannot find device "nvmf_tgt_br2" 00:11:02.774 15:15:11 -- nvmf/common.sh@156 -- # true 00:11:02.774 15:15:11 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:02.774 15:15:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:02.774 Cannot find device "nvmf_tgt_br" 00:11:02.774 15:15:11 -- nvmf/common.sh@158 -- # true 00:11:02.774 15:15:11 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:02.774 Cannot find device "nvmf_tgt_br2" 00:11:02.774 15:15:11 -- nvmf/common.sh@159 -- # true 00:11:02.774 15:15:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:02.774 15:15:11 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:02.774 15:15:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:02.774 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:02.774 15:15:11 -- nvmf/common.sh@162 -- # true 00:11:02.774 15:15:11 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:02.774 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:02.774 15:15:11 -- nvmf/common.sh@163 -- # true 00:11:02.774 15:15:11 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:02.774 15:15:11 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:02.774 15:15:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:02.774 15:15:11 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:02.774 15:15:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:02.774 15:15:11 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:02.774 15:15:11 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:02.774 15:15:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:02.774 15:15:11 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:02.774 
15:15:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:02.774 15:15:11 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:02.774 15:15:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:02.774 15:15:11 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:02.775 15:15:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:02.775 15:15:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:02.775 15:15:12 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:03.032 15:15:12 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:03.032 15:15:12 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:03.032 15:15:12 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:03.032 15:15:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:03.032 15:15:12 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:03.032 15:15:12 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:03.032 15:15:12 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:03.032 15:15:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:03.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:11:03.032 00:11:03.032 --- 10.0.0.2 ping statistics --- 00:11:03.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.032 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:03.032 15:15:12 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:03.032 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:03.032 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:11:03.032 00:11:03.032 --- 10.0.0.3 ping statistics --- 00:11:03.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.032 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:03.032 15:15:12 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:03.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:03.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:11:03.032 00:11:03.032 --- 10.0.0.1 ping statistics --- 00:11:03.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.032 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:03.032 15:15:12 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.032 15:15:12 -- nvmf/common.sh@422 -- # return 0 00:11:03.032 15:15:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:03.032 15:15:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.032 15:15:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:03.032 15:15:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:03.032 15:15:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.032 15:15:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:03.032 15:15:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:03.032 15:15:12 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:03.032 15:15:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:03.032 15:15:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:03.032 15:15:12 -- common/autotest_common.sh@10 -- # set +x 00:11:03.032 15:15:12 -- nvmf/common.sh@470 -- # nvmfpid=66906 00:11:03.032 15:15:12 -- nvmf/common.sh@471 -- # waitforlisten 66906 00:11:03.032 15:15:12 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:03.032 15:15:12 -- common/autotest_common.sh@817 -- # '[' -z 66906 ']' 00:11:03.032 15:15:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.032 15:15:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:03.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.032 15:15:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.032 15:15:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:03.032 15:15:12 -- common/autotest_common.sh@10 -- # set +x 00:11:03.032 [2024-04-24 15:15:12.194670] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:11:03.032 [2024-04-24 15:15:12.195683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.289 [2024-04-24 15:15:12.339775] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.289 [2024-04-24 15:15:12.500996] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.289 [2024-04-24 15:15:12.501359] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.290 [2024-04-24 15:15:12.501670] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.290 [2024-04-24 15:15:12.501835] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.290 [2024-04-24 15:15:12.501954] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
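For reference, the nvmf_veth_init steps traced above boil down to the following topology, which every test in this run reuses. This is a condensed replay of the commands and addresses printed in the trace ('link set ... up' lines omitted for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side, stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target data port 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # target data port 2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # NVMF_SECOND_TARGET_IP
ip link add nvmf_br type bridge                                  # bridge ties the three root-side veth peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# The three pings above (10.0.0.2 and 10.0.0.3 from the root ns, 10.0.0.1 from inside the ns) verify this wiring.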
00:11:03.290 [2024-04-24 15:15:12.502111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.290 [2024-04-24 15:15:12.502233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.290 [2024-04-24 15:15:12.502796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.290 [2024-04-24 15:15:12.502841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.223 15:15:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:04.223 15:15:13 -- common/autotest_common.sh@850 -- # return 0 00:11:04.223 15:15:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:04.223 15:15:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:04.223 15:15:13 -- common/autotest_common.sh@10 -- # set +x 00:11:04.223 15:15:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.223 15:15:13 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:04.223 15:15:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.223 15:15:13 -- common/autotest_common.sh@10 -- # set +x 00:11:04.223 15:15:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.223 15:15:13 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:04.223 15:15:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.223 15:15:13 -- common/autotest_common.sh@10 -- # set +x 00:11:04.223 15:15:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.223 15:15:13 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:04.223 15:15:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.223 15:15:13 -- common/autotest_common.sh@10 -- # set +x 00:11:04.223 [2024-04-24 15:15:13.312598] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.223 15:15:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.223 15:15:13 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:04.223 15:15:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.223 15:15:13 -- common/autotest_common.sh@10 -- # set +x 00:11:04.223 Malloc0 00:11:04.223 15:15:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.223 15:15:13 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:04.223 15:15:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.223 15:15:13 -- common/autotest_common.sh@10 -- # set +x 00:11:04.223 15:15:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.223 15:15:13 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:04.223 15:15:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.223 15:15:13 -- common/autotest_common.sh@10 -- # set +x 00:11:04.223 15:15:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.223 15:15:13 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.223 15:15:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.223 15:15:13 -- common/autotest_common.sh@10 -- # set +x 00:11:04.223 [2024-04-24 15:15:13.386997] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.223 15:15:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.223 15:15:13 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66946 00:11:04.223 15:15:13 
-- target/bdev_io_wait.sh@30 -- # READ_PID=66948 00:11:04.223 15:15:13 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:04.223 15:15:13 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:04.223 15:15:13 -- nvmf/common.sh@521 -- # config=() 00:11:04.223 15:15:13 -- nvmf/common.sh@521 -- # local subsystem config 00:11:04.223 15:15:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:04.223 15:15:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:04.223 { 00:11:04.223 "params": { 00:11:04.224 "name": "Nvme$subsystem", 00:11:04.224 "trtype": "$TEST_TRANSPORT", 00:11:04.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:04.224 "adrfam": "ipv4", 00:11:04.224 "trsvcid": "$NVMF_PORT", 00:11:04.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:04.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:04.224 "hdgst": ${hdgst:-false}, 00:11:04.224 "ddgst": ${ddgst:-false} 00:11:04.224 }, 00:11:04.224 "method": "bdev_nvme_attach_controller" 00:11:04.224 } 00:11:04.224 EOF 00:11:04.224 )") 00:11:04.224 15:15:13 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:04.224 15:15:13 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:04.224 15:15:13 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66950 00:11:04.224 15:15:13 -- nvmf/common.sh@521 -- # config=() 00:11:04.224 15:15:13 -- nvmf/common.sh@521 -- # local subsystem config 00:11:04.224 15:15:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:04.224 15:15:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:04.224 { 00:11:04.224 "params": { 00:11:04.224 "name": "Nvme$subsystem", 00:11:04.224 "trtype": "$TEST_TRANSPORT", 00:11:04.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:04.224 "adrfam": "ipv4", 00:11:04.224 "trsvcid": "$NVMF_PORT", 00:11:04.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:04.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:04.224 "hdgst": ${hdgst:-false}, 00:11:04.224 "ddgst": ${ddgst:-false} 00:11:04.224 }, 00:11:04.224 "method": "bdev_nvme_attach_controller" 00:11:04.224 } 00:11:04.224 EOF 00:11:04.224 )") 00:11:04.224 15:15:13 -- nvmf/common.sh@543 -- # cat 00:11:04.224 15:15:13 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:04.224 15:15:13 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66953 00:11:04.224 15:15:13 -- target/bdev_io_wait.sh@35 -- # sync 00:11:04.224 15:15:13 -- nvmf/common.sh@543 -- # cat 00:11:04.224 15:15:13 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:04.224 15:15:13 -- nvmf/common.sh@521 -- # config=() 00:11:04.224 15:15:13 -- nvmf/common.sh@521 -- # local subsystem config 00:11:04.224 15:15:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:04.224 15:15:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:04.224 { 00:11:04.224 "params": { 00:11:04.224 "name": "Nvme$subsystem", 00:11:04.224 "trtype": "$TEST_TRANSPORT", 00:11:04.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:04.224 "adrfam": "ipv4", 00:11:04.224 "trsvcid": "$NVMF_PORT", 00:11:04.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:04.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:04.224 "hdgst": ${hdgst:-false}, 00:11:04.224 "ddgst": ${ddgst:-false} 00:11:04.224 }, 00:11:04.224 "method": 
"bdev_nvme_attach_controller" 00:11:04.224 } 00:11:04.224 EOF 00:11:04.224 )") 00:11:04.224 15:15:13 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:04.224 15:15:13 -- nvmf/common.sh@545 -- # jq . 00:11:04.224 15:15:13 -- nvmf/common.sh@543 -- # cat 00:11:04.224 15:15:13 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:04.224 15:15:13 -- nvmf/common.sh@521 -- # config=() 00:11:04.224 15:15:13 -- nvmf/common.sh@521 -- # local subsystem config 00:11:04.224 15:15:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:04.224 15:15:13 -- nvmf/common.sh@546 -- # IFS=, 00:11:04.224 15:15:13 -- nvmf/common.sh@545 -- # jq . 00:11:04.224 15:15:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:04.224 { 00:11:04.224 "params": { 00:11:04.224 "name": "Nvme$subsystem", 00:11:04.224 "trtype": "$TEST_TRANSPORT", 00:11:04.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:04.224 "adrfam": "ipv4", 00:11:04.224 "trsvcid": "$NVMF_PORT", 00:11:04.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:04.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:04.224 "hdgst": ${hdgst:-false}, 00:11:04.224 "ddgst": ${ddgst:-false} 00:11:04.224 }, 00:11:04.224 "method": "bdev_nvme_attach_controller" 00:11:04.224 } 00:11:04.224 EOF 00:11:04.224 )") 00:11:04.224 15:15:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:04.224 "params": { 00:11:04.224 "name": "Nvme1", 00:11:04.224 "trtype": "tcp", 00:11:04.224 "traddr": "10.0.0.2", 00:11:04.224 "adrfam": "ipv4", 00:11:04.224 "trsvcid": "4420", 00:11:04.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:04.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:04.224 "hdgst": false, 00:11:04.224 "ddgst": false 00:11:04.224 }, 00:11:04.224 "method": "bdev_nvme_attach_controller" 00:11:04.224 }' 00:11:04.224 15:15:13 -- nvmf/common.sh@543 -- # cat 00:11:04.224 15:15:13 -- nvmf/common.sh@546 -- # IFS=, 00:11:04.224 15:15:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:04.224 "params": { 00:11:04.224 "name": "Nvme1", 00:11:04.224 "trtype": "tcp", 00:11:04.224 "traddr": "10.0.0.2", 00:11:04.224 "adrfam": "ipv4", 00:11:04.224 "trsvcid": "4420", 00:11:04.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:04.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:04.224 "hdgst": false, 00:11:04.224 "ddgst": false 00:11:04.224 }, 00:11:04.224 "method": "bdev_nvme_attach_controller" 00:11:04.224 }' 00:11:04.224 15:15:13 -- nvmf/common.sh@545 -- # jq . 00:11:04.224 15:15:13 -- nvmf/common.sh@546 -- # IFS=, 00:11:04.224 15:15:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:04.224 "params": { 00:11:04.224 "name": "Nvme1", 00:11:04.224 "trtype": "tcp", 00:11:04.224 "traddr": "10.0.0.2", 00:11:04.224 "adrfam": "ipv4", 00:11:04.224 "trsvcid": "4420", 00:11:04.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:04.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:04.224 "hdgst": false, 00:11:04.224 "ddgst": false 00:11:04.224 }, 00:11:04.224 "method": "bdev_nvme_attach_controller" 00:11:04.224 }' 00:11:04.224 15:15:13 -- nvmf/common.sh@545 -- # jq . 
00:11:04.224 15:15:13 -- nvmf/common.sh@546 -- # IFS=, 00:11:04.224 15:15:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:04.224 "params": { 00:11:04.224 "name": "Nvme1", 00:11:04.224 "trtype": "tcp", 00:11:04.224 "traddr": "10.0.0.2", 00:11:04.224 "adrfam": "ipv4", 00:11:04.224 "trsvcid": "4420", 00:11:04.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:04.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:04.224 "hdgst": false, 00:11:04.224 "ddgst": false 00:11:04.224 }, 00:11:04.224 "method": "bdev_nvme_attach_controller" 00:11:04.224 }' 00:11:04.224 [2024-04-24 15:15:13.444727] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:11:04.224 [2024-04-24 15:15:13.445672] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:04.224 15:15:13 -- target/bdev_io_wait.sh@37 -- # wait 66946 00:11:04.483 [2024-04-24 15:15:13.478345] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:11:04.483 [2024-04-24 15:15:13.478548] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:04.483 [2024-04-24 15:15:13.482545] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:11:04.483 [2024-04-24 15:15:13.482652] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:04.483 [2024-04-24 15:15:13.489277] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:11:04.483 [2024-04-24 15:15:13.489415] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:04.483 [2024-04-24 15:15:13.681019] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.742 [2024-04-24 15:15:13.770454] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.742 [2024-04-24 15:15:13.804043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:04.742 [2024-04-24 15:15:13.879505] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.742 [2024-04-24 15:15:13.910385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:11:05.000 [2024-04-24 15:15:13.997237] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.000 [2024-04-24 15:15:14.022529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:05.000 Running I/O for 1 seconds... 00:11:05.000 Running I/O for 1 seconds... 00:11:05.000 [2024-04-24 15:15:14.118330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:05.000 Running I/O for 1 seconds... 00:11:05.259 Running I/O for 1 seconds... 
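Before the per-second results below, it is worth restating what was just launched: four bdevperf instances against the same subsystem, one per I/O type. The sketch below condenses the command lines and the gen_nvmf_target_json output from the trace above (bdevperf stands for /home/vagrant/spdk_repo/spdk/build/examples/bdevperf; the outer JSON wrapper produced by gen_nvmf_target_json is not reproduced in this excerpt):

bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256   # WRITE_PID=66946
bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256    # READ_PID=66948
bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256   # FLUSH_PID=66950
bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256   # UNMAP_PID=66953
# Each /dev/fd/63 carries the same attach entry printed by the printf calls above:
# { "method": "bdev_nvme_attach_controller",
#   "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
#               "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
#               "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } }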
00:11:06.193 00:11:06.193 Latency(us) 00:11:06.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:06.193 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:06.193 Nvme1n1 : 1.01 7139.93 27.89 0.00 0.00 17820.85 4379.00 21209.83 00:11:06.193 =================================================================================================================== 00:11:06.193 Total : 7139.93 27.89 0.00 0.00 17820.85 4379.00 21209.83 00:11:06.193 00:11:06.193 Latency(us) 00:11:06.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:06.194 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:06.194 Nvme1n1 : 1.01 5355.98 20.92 0.00 0.00 23718.39 11021.96 49569.05 00:11:06.194 =================================================================================================================== 00:11:06.194 Total : 5355.98 20.92 0.00 0.00 23718.39 11021.96 49569.05 00:11:06.194 00:11:06.194 Latency(us) 00:11:06.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:06.194 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:06.194 Nvme1n1 : 1.01 5669.20 22.15 0.00 0.00 22452.41 7030.23 28001.75 00:11:06.194 =================================================================================================================== 00:11:06.194 Total : 5669.20 22.15 0.00 0.00 22452.41 7030.23 28001.75 00:11:06.194 00:11:06.194 Latency(us) 00:11:06.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:06.194 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:06.194 Nvme1n1 : 1.00 173128.36 676.28 0.00 0.00 736.72 396.57 1385.19 00:11:06.194 =================================================================================================================== 00:11:06.194 Total : 173128.36 676.28 0.00 0.00 736.72 396.57 1385.19 00:11:06.194 15:15:15 -- target/bdev_io_wait.sh@38 -- # wait 66948 00:11:06.452 15:15:15 -- target/bdev_io_wait.sh@39 -- # wait 66950 00:11:06.452 15:15:15 -- target/bdev_io_wait.sh@40 -- # wait 66953 00:11:06.452 15:15:15 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.452 15:15:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.452 15:15:15 -- common/autotest_common.sh@10 -- # set +x 00:11:06.452 15:15:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.452 15:15:15 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:06.452 15:15:15 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:06.452 15:15:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:06.452 15:15:15 -- nvmf/common.sh@117 -- # sync 00:11:06.712 15:15:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:06.712 15:15:15 -- nvmf/common.sh@120 -- # set +e 00:11:06.712 15:15:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:06.712 15:15:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:06.712 rmmod nvme_tcp 00:11:06.712 rmmod nvme_fabrics 00:11:06.712 rmmod nvme_keyring 00:11:06.712 15:15:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:06.712 15:15:15 -- nvmf/common.sh@124 -- # set -e 00:11:06.712 15:15:15 -- nvmf/common.sh@125 -- # return 0 00:11:06.712 15:15:15 -- nvmf/common.sh@478 -- # '[' -n 66906 ']' 00:11:06.712 15:15:15 -- nvmf/common.sh@479 -- # killprocess 66906 00:11:06.712 15:15:15 -- common/autotest_common.sh@936 -- # '[' -z 66906 ']' 00:11:06.712 15:15:15 -- common/autotest_common.sh@940 
-- # kill -0 66906 00:11:06.712 15:15:15 -- common/autotest_common.sh@941 -- # uname 00:11:06.712 15:15:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:06.712 15:15:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66906 00:11:06.712 15:15:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:06.712 15:15:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:06.712 15:15:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66906' 00:11:06.712 killing process with pid 66906 00:11:06.712 15:15:15 -- common/autotest_common.sh@955 -- # kill 66906 00:11:06.712 15:15:15 -- common/autotest_common.sh@960 -- # wait 66906 00:11:06.972 15:15:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:06.972 15:15:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:06.972 15:15:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:06.972 15:15:16 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:06.972 15:15:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:06.972 15:15:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.972 15:15:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:06.972 15:15:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.972 15:15:16 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:06.972 00:11:06.972 real 0m4.432s 00:11:06.972 user 0m19.447s 00:11:06.972 sys 0m2.535s 00:11:06.972 15:15:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:06.972 15:15:16 -- common/autotest_common.sh@10 -- # set +x 00:11:06.972 ************************************ 00:11:06.972 END TEST nvmf_bdev_io_wait 00:11:06.972 ************************************ 00:11:06.972 15:15:16 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:06.972 15:15:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:06.972 15:15:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:06.972 15:15:16 -- common/autotest_common.sh@10 -- # set +x 00:11:07.231 ************************************ 00:11:07.231 START TEST nvmf_queue_depth 00:11:07.231 ************************************ 00:11:07.231 15:15:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:07.231 * Looking for test storage... 
00:11:07.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:07.231 15:15:16 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:07.231 15:15:16 -- nvmf/common.sh@7 -- # uname -s 00:11:07.231 15:15:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.231 15:15:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.231 15:15:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.231 15:15:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.231 15:15:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.231 15:15:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.231 15:15:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.231 15:15:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.231 15:15:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.231 15:15:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.231 15:15:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:11:07.231 15:15:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:11:07.231 15:15:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.231 15:15:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.231 15:15:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:07.231 15:15:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.231 15:15:16 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:07.231 15:15:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.231 15:15:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.231 15:15:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.231 15:15:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.231 15:15:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.231 15:15:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.231 15:15:16 -- paths/export.sh@5 -- # export PATH 00:11:07.231 15:15:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.231 15:15:16 -- nvmf/common.sh@47 -- # : 0 00:11:07.231 15:15:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:07.231 15:15:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:07.231 15:15:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.231 15:15:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.231 15:15:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.231 15:15:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:07.231 15:15:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:07.231 15:15:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:07.231 15:15:16 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:07.231 15:15:16 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:07.231 15:15:16 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:07.231 15:15:16 -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:07.231 15:15:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:07.231 15:15:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.231 15:15:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:07.231 15:15:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:07.231 15:15:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:07.231 15:15:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.231 15:15:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.231 15:15:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.231 15:15:16 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:11:07.231 15:15:16 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:11:07.231 15:15:16 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:11:07.231 15:15:16 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:11:07.231 15:15:16 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:11:07.231 15:15:16 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:11:07.231 15:15:16 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.231 15:15:16 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.231 15:15:16 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:07.231 15:15:16 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:07.231 15:15:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:07.231 15:15:16 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:07.231 15:15:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:07.231 15:15:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.231 15:15:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:07.231 15:15:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:07.231 15:15:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:07.231 15:15:16 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:07.231 15:15:16 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:07.231 15:15:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:07.231 Cannot find device "nvmf_tgt_br" 00:11:07.231 15:15:16 -- nvmf/common.sh@155 -- # true 00:11:07.231 15:15:16 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:07.231 Cannot find device "nvmf_tgt_br2" 00:11:07.231 15:15:16 -- nvmf/common.sh@156 -- # true 00:11:07.231 15:15:16 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:07.231 15:15:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:07.231 Cannot find device "nvmf_tgt_br" 00:11:07.231 15:15:16 -- nvmf/common.sh@158 -- # true 00:11:07.231 15:15:16 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:07.231 Cannot find device "nvmf_tgt_br2" 00:11:07.231 15:15:16 -- nvmf/common.sh@159 -- # true 00:11:07.231 15:15:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:07.231 15:15:16 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:07.231 15:15:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:07.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:07.231 15:15:16 -- nvmf/common.sh@162 -- # true 00:11:07.231 15:15:16 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:07.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:07.231 15:15:16 -- nvmf/common.sh@163 -- # true 00:11:07.231 15:15:16 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:07.490 15:15:16 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:07.490 15:15:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:07.490 15:15:16 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:07.490 15:15:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:07.490 15:15:16 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:07.490 15:15:16 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:07.490 15:15:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:07.490 15:15:16 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:07.490 15:15:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:07.490 15:15:16 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:07.490 15:15:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:07.490 15:15:16 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:07.490 15:15:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:07.490 15:15:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:11:07.490 15:15:16 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:07.490 15:15:16 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:07.490 15:15:16 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:07.490 15:15:16 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:07.490 15:15:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:07.490 15:15:16 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:07.490 15:15:16 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:07.490 15:15:16 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:07.490 15:15:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:07.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:07.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:11:07.490 00:11:07.490 --- 10.0.0.2 ping statistics --- 00:11:07.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.490 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:11:07.490 15:15:16 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:07.490 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:07.490 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:11:07.490 00:11:07.490 --- 10.0.0.3 ping statistics --- 00:11:07.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.490 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:11:07.490 15:15:16 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:07.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:07.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:11:07.490 00:11:07.490 --- 10.0.0.1 ping statistics --- 00:11:07.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.490 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:07.490 15:15:16 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.490 15:15:16 -- nvmf/common.sh@422 -- # return 0 00:11:07.490 15:15:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:07.490 15:15:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.490 15:15:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:07.490 15:15:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:07.490 15:15:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.490 15:15:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:07.490 15:15:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:07.490 15:15:16 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:07.490 15:15:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:07.490 15:15:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:07.490 15:15:16 -- common/autotest_common.sh@10 -- # set +x 00:11:07.490 15:15:16 -- nvmf/common.sh@470 -- # nvmfpid=67195 00:11:07.490 15:15:16 -- nvmf/common.sh@471 -- # waitforlisten 67195 00:11:07.490 15:15:16 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:07.490 15:15:16 -- common/autotest_common.sh@817 -- # '[' -z 67195 ']' 00:11:07.490 15:15:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.490 15:15:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:07.490 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:11:07.490 15:15:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.490 15:15:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:07.490 15:15:16 -- common/autotest_common.sh@10 -- # set +x 00:11:07.749 [2024-04-24 15:15:16.772986] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:11:07.749 [2024-04-24 15:15:16.773134] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.749 [2024-04-24 15:15:16.924027] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.007 [2024-04-24 15:15:17.077592] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.007 [2024-04-24 15:15:17.077659] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.007 [2024-04-24 15:15:17.077679] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.007 [2024-04-24 15:15:17.077690] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.007 [2024-04-24 15:15:17.077700] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.007 [2024-04-24 15:15:17.077741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.577 15:15:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:08.577 15:15:17 -- common/autotest_common.sh@850 -- # return 0 00:11:08.577 15:15:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:08.577 15:15:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:08.577 15:15:17 -- common/autotest_common.sh@10 -- # set +x 00:11:08.577 15:15:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.577 15:15:17 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:08.577 15:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:08.577 15:15:17 -- common/autotest_common.sh@10 -- # set +x 00:11:08.577 [2024-04-24 15:15:17.780841] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.577 15:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:08.577 15:15:17 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:08.577 15:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:08.577 15:15:17 -- common/autotest_common.sh@10 -- # set +x 00:11:08.577 Malloc0 00:11:08.577 15:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:08.577 15:15:17 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:08.577 15:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:08.577 15:15:17 -- common/autotest_common.sh@10 -- # set +x 00:11:08.837 15:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:08.837 15:15:17 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:08.837 15:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:08.837 15:15:17 -- common/autotest_common.sh@10 -- # set +x 00:11:08.837 15:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:08.837 15:15:17 -- 
target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.837 15:15:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:08.837 15:15:17 -- common/autotest_common.sh@10 -- # set +x 00:11:08.837 [2024-04-24 15:15:17.842657] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.837 15:15:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:08.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:08.837 15:15:17 -- target/queue_depth.sh@30 -- # bdevperf_pid=67231 00:11:08.837 15:15:17 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:08.837 15:15:17 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:08.837 15:15:17 -- target/queue_depth.sh@33 -- # waitforlisten 67231 /var/tmp/bdevperf.sock 00:11:08.837 15:15:17 -- common/autotest_common.sh@817 -- # '[' -z 67231 ']' 00:11:08.837 15:15:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:08.837 15:15:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:08.837 15:15:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:08.837 15:15:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:08.837 15:15:17 -- common/autotest_common.sh@10 -- # set +x 00:11:08.837 [2024-04-24 15:15:17.899349] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:11:08.837 [2024-04-24 15:15:17.899483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67231 ] 00:11:08.837 [2024-04-24 15:15:18.040528] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.096 [2024-04-24 15:15:18.171692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.032 15:15:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:10.032 15:15:18 -- common/autotest_common.sh@850 -- # return 0 00:11:10.032 15:15:18 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:10.032 15:15:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.032 15:15:18 -- common/autotest_common.sh@10 -- # set +x 00:11:10.032 NVMe0n1 00:11:10.032 15:15:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.032 15:15:18 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:10.032 Running I/O for 10 seconds... 
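The 10-second run reported below drives a verify workload at queue depth 1024 against a Malloc-backed namespace. The sequence the script just performed, condensed from the trace above (rpc.py stands in for the rpc_cmd wrapper used in the log; the target itself was started with -m 0x2 inside nvmf_tgt_ns_spdk):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &        # bdevperf_pid=67231
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
       -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                            # starts the 10-second run below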
00:11:20.034 00:11:20.034 Latency(us) 00:11:20.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:20.034 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:20.034 Verification LBA range: start 0x0 length 0x4000 00:11:20.034 NVMe0n1 : 10.08 7938.60 31.01 0.00 0.00 128386.37 27763.43 99614.72 00:11:20.034 =================================================================================================================== 00:11:20.034 Total : 7938.60 31.01 0.00 0.00 128386.37 27763.43 99614.72 00:11:20.034 0 00:11:20.034 15:15:29 -- target/queue_depth.sh@39 -- # killprocess 67231 00:11:20.034 15:15:29 -- common/autotest_common.sh@936 -- # '[' -z 67231 ']' 00:11:20.034 15:15:29 -- common/autotest_common.sh@940 -- # kill -0 67231 00:11:20.034 15:15:29 -- common/autotest_common.sh@941 -- # uname 00:11:20.034 15:15:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:20.034 15:15:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67231 00:11:20.034 15:15:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:20.034 15:15:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:20.034 killing process with pid 67231 00:11:20.034 15:15:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67231' 00:11:20.034 Received shutdown signal, test time was about 10.000000 seconds 00:11:20.035 00:11:20.035 Latency(us) 00:11:20.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:20.035 =================================================================================================================== 00:11:20.035 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:20.035 15:15:29 -- common/autotest_common.sh@955 -- # kill 67231 00:11:20.035 15:15:29 -- common/autotest_common.sh@960 -- # wait 67231 00:11:20.293 15:15:29 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:20.293 15:15:29 -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:20.293 15:15:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:20.293 15:15:29 -- nvmf/common.sh@117 -- # sync 00:11:20.551 15:15:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:20.551 15:15:29 -- nvmf/common.sh@120 -- # set +e 00:11:20.551 15:15:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:20.551 15:15:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:20.551 rmmod nvme_tcp 00:11:20.551 rmmod nvme_fabrics 00:11:20.551 rmmod nvme_keyring 00:11:20.551 15:15:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:20.551 15:15:29 -- nvmf/common.sh@124 -- # set -e 00:11:20.551 15:15:29 -- nvmf/common.sh@125 -- # return 0 00:11:20.551 15:15:29 -- nvmf/common.sh@478 -- # '[' -n 67195 ']' 00:11:20.551 15:15:29 -- nvmf/common.sh@479 -- # killprocess 67195 00:11:20.551 15:15:29 -- common/autotest_common.sh@936 -- # '[' -z 67195 ']' 00:11:20.551 15:15:29 -- common/autotest_common.sh@940 -- # kill -0 67195 00:11:20.551 15:15:29 -- common/autotest_common.sh@941 -- # uname 00:11:20.551 15:15:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:20.551 15:15:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67195 00:11:20.551 15:15:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:20.551 15:15:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:20.551 killing process with pid 67195 00:11:20.551 15:15:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67195' 00:11:20.551 15:15:29 -- 
common/autotest_common.sh@955 -- # kill 67195 00:11:20.551 15:15:29 -- common/autotest_common.sh@960 -- # wait 67195 00:11:20.809 15:15:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:20.809 15:15:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:20.809 15:15:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:20.809 15:15:29 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:20.809 15:15:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:20.809 15:15:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.809 15:15:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:20.809 15:15:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.809 15:15:29 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:20.809 00:11:20.809 real 0m13.749s 00:11:20.809 user 0m23.782s 00:11:20.809 sys 0m2.269s 00:11:20.809 15:15:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:20.809 ************************************ 00:11:20.809 END TEST nvmf_queue_depth 00:11:20.809 ************************************ 00:11:20.809 15:15:29 -- common/autotest_common.sh@10 -- # set +x 00:11:20.809 15:15:30 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:20.809 15:15:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:20.809 15:15:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:20.809 15:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:21.068 ************************************ 00:11:21.068 START TEST nvmf_multipath 00:11:21.068 ************************************ 00:11:21.068 15:15:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:21.068 * Looking for test storage... 
00:11:21.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:21.068 15:15:30 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:21.068 15:15:30 -- nvmf/common.sh@7 -- # uname -s 00:11:21.068 15:15:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.068 15:15:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.068 15:15:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.068 15:15:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.068 15:15:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.068 15:15:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.068 15:15:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.068 15:15:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.068 15:15:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.068 15:15:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.068 15:15:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:11:21.068 15:15:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:11:21.068 15:15:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.068 15:15:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.068 15:15:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:21.068 15:15:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.068 15:15:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:21.068 15:15:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.068 15:15:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.068 15:15:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.068 15:15:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.068 15:15:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.068 15:15:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.068 15:15:30 -- paths/export.sh@5 -- # export PATH 00:11:21.068 15:15:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.068 15:15:30 -- nvmf/common.sh@47 -- # : 0 00:11:21.068 15:15:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:21.068 15:15:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:21.069 15:15:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.069 15:15:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.069 15:15:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.069 15:15:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:21.069 15:15:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:21.069 15:15:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:21.069 15:15:30 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.069 15:15:30 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:21.069 15:15:30 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:21.069 15:15:30 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:21.069 15:15:30 -- target/multipath.sh@43 -- # nvmftestinit 00:11:21.069 15:15:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:21.069 15:15:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.069 15:15:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:21.069 15:15:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:21.069 15:15:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:21.069 15:15:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.069 15:15:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.069 15:15:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.069 15:15:30 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:11:21.069 15:15:30 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:11:21.069 15:15:30 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:11:21.069 15:15:30 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:11:21.069 15:15:30 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:11:21.069 15:15:30 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:11:21.069 15:15:30 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.069 15:15:30 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.069 15:15:30 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:21.069 15:15:30 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:21.069 15:15:30 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:21.069 15:15:30 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:21.069 15:15:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:21.069 15:15:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.069 15:15:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:21.069 15:15:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:21.069 15:15:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:21.069 15:15:30 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:21.069 15:15:30 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:21.069 15:15:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:21.069 Cannot find device "nvmf_tgt_br" 00:11:21.069 15:15:30 -- nvmf/common.sh@155 -- # true 00:11:21.069 15:15:30 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:21.069 Cannot find device "nvmf_tgt_br2" 00:11:21.069 15:15:30 -- nvmf/common.sh@156 -- # true 00:11:21.069 15:15:30 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:21.069 15:15:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:21.069 Cannot find device "nvmf_tgt_br" 00:11:21.069 15:15:30 -- nvmf/common.sh@158 -- # true 00:11:21.069 15:15:30 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:21.069 Cannot find device "nvmf_tgt_br2" 00:11:21.069 15:15:30 -- nvmf/common.sh@159 -- # true 00:11:21.069 15:15:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:21.327 15:15:30 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:21.327 15:15:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:21.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:21.327 15:15:30 -- nvmf/common.sh@162 -- # true 00:11:21.327 15:15:30 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:21.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:21.327 15:15:30 -- nvmf/common.sh@163 -- # true 00:11:21.327 15:15:30 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:21.327 15:15:30 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:21.327 15:15:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:21.327 15:15:30 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:21.327 15:15:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:21.327 15:15:30 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:21.327 15:15:30 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:21.327 15:15:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:21.327 15:15:30 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:21.327 15:15:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:21.327 15:15:30 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:21.327 15:15:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:21.327 15:15:30 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:21.327 15:15:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:11:21.327 15:15:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:21.327 15:15:30 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:21.327 15:15:30 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:21.327 15:15:30 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:21.327 15:15:30 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:21.327 15:15:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:21.327 15:15:30 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:21.327 15:15:30 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:21.327 15:15:30 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:21.327 15:15:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:21.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:11:21.327 00:11:21.327 --- 10.0.0.2 ping statistics --- 00:11:21.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.327 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:11:21.327 15:15:30 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:21.327 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:21.327 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:11:21.327 00:11:21.327 --- 10.0.0.3 ping statistics --- 00:11:21.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.327 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:21.327 15:15:30 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:21.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:21.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:21.327 00:11:21.327 --- 10.0.0.1 ping statistics --- 00:11:21.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.327 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:21.584 15:15:30 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.584 15:15:30 -- nvmf/common.sh@422 -- # return 0 00:11:21.584 15:15:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:21.584 15:15:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.584 15:15:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:21.584 15:15:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:21.584 15:15:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.585 15:15:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:21.585 15:15:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:21.585 15:15:30 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:11:21.585 15:15:30 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:21.585 15:15:30 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:21.585 15:15:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:21.585 15:15:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:21.585 15:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:21.585 15:15:30 -- nvmf/common.sh@470 -- # nvmfpid=67556 00:11:21.585 15:15:30 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:21.585 15:15:30 -- nvmf/common.sh@471 -- # waitforlisten 67556 00:11:21.585 15:15:30 -- common/autotest_common.sh@817 -- # '[' -z 67556 ']' 00:11:21.585 15:15:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.585 15:15:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:21.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.585 15:15:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.585 15:15:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:21.585 15:15:30 -- common/autotest_common.sh@10 -- # set +x 00:11:21.585 [2024-04-24 15:15:30.657564] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:11:21.585 [2024-04-24 15:15:30.657702] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.585 [2024-04-24 15:15:30.802463] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.842 [2024-04-24 15:15:30.948097] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.842 [2024-04-24 15:15:30.948154] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.842 [2024-04-24 15:15:30.948168] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.842 [2024-04-24 15:15:30.948179] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.842 [2024-04-24 15:15:30.948188] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
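For reference, the nvmf_veth_init sequence traced above builds a small two-path topology: one initiator veth (nvmf_init_if, 10.0.0.1) left on the host, and two target veths (nvmf_tgt_if at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.3) moved into the nvmf_tgt_ns_spdk namespace, with the host-side peers enslaved to the nvmf_br bridge. A condensed sketch of the same setup, using only the interface names, addresses, and commands shown in the trace (the loop and sh -c grouping are shorthand, not the script's literal form):

# build the namespace and the three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator path
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target path
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target path
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# address the endpoints
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side peers
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP (port 4420) in, permit bridge-local forwarding, then sanity-check both paths
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp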
00:11:21.842 [2024-04-24 15:15:30.948307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.842 [2024-04-24 15:15:30.948861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.842 [2024-04-24 15:15:30.948955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.842 [2024-04-24 15:15:30.948963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.408 15:15:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:22.408 15:15:31 -- common/autotest_common.sh@850 -- # return 0 00:11:22.408 15:15:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:22.408 15:15:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:22.408 15:15:31 -- common/autotest_common.sh@10 -- # set +x 00:11:22.408 15:15:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.408 15:15:31 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:22.666 [2024-04-24 15:15:31.851615] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.666 15:15:31 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:22.924 Malloc0 00:11:22.924 15:15:32 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:23.497 15:15:32 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:23.497 15:15:32 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:23.774 [2024-04-24 15:15:32.939248] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.774 15:15:32 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:24.031 [2024-04-24 15:15:33.231509] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:24.031 15:15:33 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab --hostid=bfc76e61-7421-4d42-8df7-48fb051b4cab -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:11:24.290 15:15:33 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab --hostid=bfc76e61-7421-4d42-8df7-48fb051b4cab -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:24.290 15:15:33 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:24.290 15:15:33 -- common/autotest_common.sh@1184 -- # local i=0 00:11:24.290 15:15:33 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.290 15:15:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:24.290 15:15:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:26.820 15:15:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:26.820 15:15:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:26.820 15:15:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:26.820 15:15:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:26.820 15:15:35 -- 
common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:26.820 15:15:35 -- common/autotest_common.sh@1194 -- # return 0 00:11:26.820 15:15:35 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:26.820 15:15:35 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:26.820 15:15:35 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:26.820 15:15:35 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:26.820 15:15:35 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:26.820 15:15:35 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:26.820 15:15:35 -- target/multipath.sh@38 -- # return 0 00:11:26.820 15:15:35 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:26.820 15:15:35 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:26.820 15:15:35 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:26.820 15:15:35 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:26.820 15:15:35 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:26.820 15:15:35 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:26.820 15:15:35 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:26.820 15:15:35 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:26.820 15:15:35 -- target/multipath.sh@22 -- # local timeout=20 00:11:26.820 15:15:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:26.820 15:15:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:26.820 15:15:35 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:26.820 15:15:35 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:26.820 15:15:35 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:26.820 15:15:35 -- target/multipath.sh@22 -- # local timeout=20 00:11:26.820 15:15:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:26.820 15:15:35 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:26.820 15:15:35 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:26.820 15:15:35 -- target/multipath.sh@85 -- # echo numa 00:11:26.820 15:15:35 -- target/multipath.sh@88 -- # fio_pid=67651 00:11:26.820 15:15:35 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:26.820 15:15:35 -- target/multipath.sh@90 -- # sleep 1 00:11:26.820 [global] 00:11:26.820 thread=1 00:11:26.820 invalidate=1 00:11:26.820 rw=randrw 00:11:26.820 time_based=1 00:11:26.820 runtime=6 00:11:26.820 ioengine=libaio 00:11:26.820 direct=1 00:11:26.820 bs=4096 00:11:26.820 iodepth=128 00:11:26.820 norandommap=0 00:11:26.820 numjobs=1 00:11:26.820 00:11:26.820 verify_dump=1 00:11:26.820 verify_backlog=512 00:11:26.820 verify_state_save=0 00:11:26.820 do_verify=1 00:11:26.820 verify=crc32c-intel 00:11:26.820 [job0] 00:11:26.820 filename=/dev/nvme0n1 00:11:26.820 Could not set queue depth (nvme0n1) 00:11:26.820 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:26.820 fio-3.35 00:11:26.820 Starting 1 thread 00:11:27.386 15:15:36 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:27.644 15:15:36 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:27.903 15:15:37 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:27.903 15:15:37 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:27.903 15:15:37 -- target/multipath.sh@22 -- # local timeout=20 00:11:27.903 15:15:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:27.903 15:15:37 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:27.903 15:15:37 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:27.903 15:15:37 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:27.903 15:15:37 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:27.903 15:15:37 -- target/multipath.sh@22 -- # local timeout=20 00:11:27.903 15:15:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:27.903 15:15:37 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:27.903 15:15:37 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:27.903 15:15:37 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:28.469 15:15:37 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:28.727 15:15:37 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:28.727 15:15:37 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:28.727 15:15:37 -- target/multipath.sh@22 -- # local timeout=20 00:11:28.727 15:15:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:28.727 15:15:37 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:28.727 15:15:37 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:28.727 15:15:37 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:28.727 15:15:37 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:28.727 15:15:37 -- target/multipath.sh@22 -- # local timeout=20 00:11:28.727 15:15:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:28.727 15:15:37 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:28.727 15:15:37 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:28.727 15:15:37 -- target/multipath.sh@104 -- # wait 67651 00:11:32.912 00:11:32.912 job0: (groupid=0, jobs=1): err= 0: pid=67672: Wed Apr 24 15:15:41 2024 00:11:32.912 read: IOPS=10.1k, BW=39.4MiB/s (41.3MB/s)(237MiB/6007msec) 00:11:32.912 slat (usec): min=6, max=6469, avg=57.91, stdev=227.98 00:11:32.912 clat (usec): min=1368, max=18058, avg=8655.80, stdev=1615.62 00:11:32.912 lat (usec): min=1390, max=18073, avg=8713.71, stdev=1620.92 00:11:32.912 clat percentiles (usec): 00:11:32.912 | 1.00th=[ 4621], 5.00th=[ 6587], 10.00th=[ 7373], 20.00th=[ 7832], 00:11:32.912 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8586], 00:11:32.912 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[10290], 95.00th=[12387], 00:11:32.912 | 99.00th=[13829], 99.50th=[14615], 99.90th=[17171], 99.95th=[17957], 00:11:32.912 | 99.99th=[17957] 00:11:32.912 bw ( KiB/s): min= 3520, max=27304, per=51.19%, avg=20647.91, stdev=6995.72, samples=11 00:11:32.912 iops : min= 880, max= 6826, avg=5161.91, stdev=1749.02, samples=11 00:11:32.912 write: IOPS=5957, BW=23.3MiB/s (24.4MB/s)(124MiB/5318msec); 0 zone resets 00:11:32.912 slat (usec): min=13, max=2146, avg=67.15, stdev=156.92 00:11:32.912 clat (usec): min=851, max=17790, avg=7506.00, stdev=1441.89 00:11:32.912 lat (usec): min=932, max=17832, avg=7573.15, stdev=1447.25 00:11:32.912 clat percentiles (usec): 00:11:32.912 | 1.00th=[ 3425], 5.00th=[ 4555], 10.00th=[ 5735], 20.00th=[ 6915], 00:11:32.912 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7832], 00:11:32.912 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8717], 95.00th=[ 9372], 00:11:32.912 | 99.00th=[11863], 99.50th=[12649], 99.90th=[14615], 99.95th=[15926], 00:11:32.912 | 99.99th=[16450] 00:11:32.912 bw ( KiB/s): min= 3712, max=27344, per=86.99%, avg=20728.64, stdev=6942.57, samples=11 00:11:32.912 iops : min= 928, max= 6836, avg=5182.09, stdev=1735.73, samples=11 00:11:32.912 lat (usec) : 1000=0.01% 00:11:32.912 lat (msec) : 2=0.06%, 4=1.10%, 10=90.32%, 20=8.52% 00:11:32.912 cpu : usr=5.83%, sys=23.26%, ctx=5553, majf=0, minf=121 00:11:32.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:32.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:32.912 issued rwts: total=60570,31681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:32.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:32.912 00:11:32.912 Run status group 0 (all jobs): 00:11:32.912 READ: bw=39.4MiB/s (41.3MB/s), 39.4MiB/s-39.4MiB/s (41.3MB/s-41.3MB/s), io=237MiB (248MB), run=6007-6007msec 00:11:32.912 WRITE: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=124MiB (130MB), run=5318-5318msec 00:11:32.912 00:11:32.912 Disk stats (read/write): 00:11:32.912 
nvme0n1: ios=59693/31059, merge=0/0, ticks=494325/217751, in_queue=712076, util=98.66% 00:11:32.912 15:15:41 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:11:32.912 15:15:42 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:33.171 15:15:42 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:33.171 15:15:42 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:33.171 15:15:42 -- target/multipath.sh@22 -- # local timeout=20 00:11:33.171 15:15:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:33.171 15:15:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:33.171 15:15:42 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:33.171 15:15:42 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:33.171 15:15:42 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:33.171 15:15:42 -- target/multipath.sh@22 -- # local timeout=20 00:11:33.171 15:15:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:33.171 15:15:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:33.171 15:15:42 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:33.171 15:15:42 -- target/multipath.sh@113 -- # echo round-robin 00:11:33.171 15:15:42 -- target/multipath.sh@116 -- # fio_pid=67753 00:11:33.171 15:15:42 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:33.171 15:15:42 -- target/multipath.sh@118 -- # sleep 1 00:11:33.171 [global] 00:11:33.171 thread=1 00:11:33.171 invalidate=1 00:11:33.171 rw=randrw 00:11:33.171 time_based=1 00:11:33.171 runtime=6 00:11:33.171 ioengine=libaio 00:11:33.171 direct=1 00:11:33.171 bs=4096 00:11:33.171 iodepth=128 00:11:33.171 norandommap=0 00:11:33.171 numjobs=1 00:11:33.171 00:11:33.171 verify_dump=1 00:11:33.171 verify_backlog=512 00:11:33.171 verify_state_save=0 00:11:33.171 do_verify=1 00:11:33.171 verify=crc32c-intel 00:11:33.171 [job0] 00:11:33.171 filename=/dev/nvme0n1 00:11:33.171 Could not set queue depth (nvme0n1) 00:11:33.429 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:33.429 fio-3.35 00:11:33.429 Starting 1 thread 00:11:34.399 15:15:43 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:34.657 15:15:43 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:34.914 15:15:43 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:34.914 15:15:43 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:34.914 15:15:43 -- target/multipath.sh@22 -- # local timeout=20 00:11:34.914 15:15:43 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:34.914 15:15:43 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:34.914 15:15:43 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:34.914 15:15:43 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:34.914 15:15:43 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:34.914 15:15:43 -- target/multipath.sh@22 -- # local timeout=20 00:11:34.915 15:15:43 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:34.915 15:15:43 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:34.915 15:15:43 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:34.915 15:15:43 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:35.172 15:15:44 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:35.430 15:15:44 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:35.430 15:15:44 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:35.430 15:15:44 -- target/multipath.sh@22 -- # local timeout=20 00:11:35.430 15:15:44 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:35.430 15:15:44 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:35.430 15:15:44 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:35.430 15:15:44 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:35.430 15:15:44 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:35.430 15:15:44 -- target/multipath.sh@22 -- # local timeout=20 00:11:35.430 15:15:44 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:35.430 15:15:44 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:35.430 15:15:44 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:35.430 15:15:44 -- target/multipath.sh@132 -- # wait 67753 00:11:39.618 00:11:39.618 job0: (groupid=0, jobs=1): err= 0: pid=67774: Wed Apr 24 15:15:48 2024 00:11:39.618 read: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(259MiB/6007msec) 00:11:39.618 slat (usec): min=3, max=7794, avg=43.22, stdev=192.59 00:11:39.618 clat (usec): min=434, max=18446, avg=7782.11, stdev=2031.18 00:11:39.618 lat (usec): min=451, max=18595, avg=7825.33, stdev=2046.18 00:11:39.618 clat percentiles (usec): 00:11:39.618 | 1.00th=[ 2769], 5.00th=[ 4146], 10.00th=[ 4883], 20.00th=[ 6063], 00:11:39.618 | 30.00th=[ 7242], 40.00th=[ 7832], 50.00th=[ 8160], 60.00th=[ 8356], 00:11:39.618 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[11600], 00:11:39.618 | 99.00th=[13173], 99.50th=[13435], 99.90th=[14353], 99.95th=[14484], 00:11:39.618 | 99.99th=[15008] 00:11:39.618 bw ( KiB/s): min=12176, max=35680, per=55.28%, avg=24447.27, stdev=6839.72, samples=11 00:11:39.618 iops : min= 3044, max= 8920, avg=6111.82, stdev=1709.93, samples=11 00:11:39.618 write: IOPS=6635, BW=25.9MiB/s (27.2MB/s)(144MiB/5555msec); 0 zone resets 00:11:39.618 slat (usec): min=4, max=2534, avg=57.09, stdev=140.36 00:11:39.618 clat (usec): min=940, max=14861, avg=6665.66, stdev=1799.81 00:11:39.618 lat (usec): min=986, max=14897, avg=6722.75, stdev=1814.70 00:11:39.618 clat percentiles (usec): 00:11:39.618 | 1.00th=[ 2638], 5.00th=[ 3458], 10.00th=[ 3949], 20.00th=[ 4686], 00:11:39.618 | 30.00th=[ 5669], 40.00th=[ 6915], 50.00th=[ 7308], 60.00th=[ 7570], 00:11:39.618 | 70.00th=[ 7767], 80.00th=[ 8029], 90.00th=[ 8455], 95.00th=[ 8717], 00:11:39.618 | 99.00th=[10945], 99.50th=[11731], 99.90th=[13042], 99.95th=[13304], 00:11:39.618 | 99.99th=[13829] 00:11:39.618 bw ( KiB/s): min=12288, max=35264, per=91.92%, avg=24398.55, stdev=6641.18, samples=11 00:11:39.618 iops : min= 3072, max= 8816, avg=6099.64, stdev=1660.30, samples=11 00:11:39.618 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.03% 00:11:39.618 lat (msec) : 2=0.23%, 4=6.23%, 10=88.27%, 20=5.23% 00:11:39.618 cpu : usr=6.14%, sys=24.18%, ctx=5987, majf=0, minf=145 00:11:39.618 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:39.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.618 issued rwts: total=66413,36862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.618 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.618 00:11:39.618 Run status group 0 (all jobs): 00:11:39.618 READ: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=259MiB (272MB), run=6007-6007msec 00:11:39.618 WRITE: bw=25.9MiB/s (27.2MB/s), 25.9MiB/s-25.9MiB/s (27.2MB/s-27.2MB/s), io=144MiB (151MB), run=5555-5555msec 00:11:39.618 00:11:39.618 Disk stats (read/write): 00:11:39.618 nvme0n1: ios=65551/36270, merge=0/0, ticks=485362/224033, in_queue=709395, util=98.62% 00:11:39.618 15:15:48 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:39.618 15:15:48 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:39.618 15:15:48 -- common/autotest_common.sh@1205 -- # local i=0 00:11:39.618 15:15:48 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:39.618 15:15:48 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.618 15:15:48 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:39.618 15:15:48 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.618 15:15:48 -- common/autotest_common.sh@1217 -- # return 0 00:11:39.618 15:15:48 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.929 15:15:48 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:39.929 15:15:48 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:39.929 15:15:48 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:39.929 15:15:48 -- target/multipath.sh@144 -- # nvmftestfini 00:11:39.929 15:15:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:39.929 15:15:48 -- nvmf/common.sh@117 -- # sync 00:11:39.929 15:15:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:39.929 15:15:49 -- nvmf/common.sh@120 -- # set +e 00:11:39.929 15:15:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:39.929 15:15:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:39.929 rmmod nvme_tcp 00:11:39.929 rmmod nvme_fabrics 00:11:39.929 rmmod nvme_keyring 00:11:39.929 15:15:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:39.929 15:15:49 -- nvmf/common.sh@124 -- # set -e 00:11:39.929 15:15:49 -- nvmf/common.sh@125 -- # return 0 00:11:39.929 15:15:49 -- nvmf/common.sh@478 -- # '[' -n 67556 ']' 00:11:39.929 15:15:49 -- nvmf/common.sh@479 -- # killprocess 67556 00:11:39.929 15:15:49 -- common/autotest_common.sh@936 -- # '[' -z 67556 ']' 00:11:39.929 15:15:49 -- common/autotest_common.sh@940 -- # kill -0 67556 00:11:39.929 15:15:49 -- common/autotest_common.sh@941 -- # uname 00:11:39.929 15:15:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:39.929 15:15:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67556 00:11:39.929 killing process with pid 67556 00:11:39.929 15:15:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:39.929 15:15:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:39.929 15:15:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67556' 00:11:39.929 15:15:49 -- common/autotest_common.sh@955 -- # kill 67556 00:11:39.929 15:15:49 -- common/autotest_common.sh@960 -- # wait 67556 00:11:40.188 15:15:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:40.188 15:15:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:40.188 15:15:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:40.188 15:15:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:40.188 15:15:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:40.188 15:15:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.188 15:15:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.188 15:15:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.188 15:15:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:40.188 ************************************ 00:11:40.188 END TEST nvmf_multipath 00:11:40.188 ************************************ 00:11:40.188 00:11:40.188 real 0m19.310s 00:11:40.188 user 1m12.556s 00:11:40.188 sys 0m9.637s 00:11:40.188 15:15:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:40.188 15:15:49 -- common/autotest_common.sh@10 -- # set +x 00:11:40.450 15:15:49 -- nvmf/nvmf.sh@53 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:40.450 15:15:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:40.450 15:15:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:40.450 15:15:49 -- common/autotest_common.sh@10 -- # set +x 00:11:40.450 ************************************ 00:11:40.450 START TEST nvmf_zcopy 00:11:40.450 ************************************ 00:11:40.450 15:15:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:40.450 * Looking for test storage... 00:11:40.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:40.450 15:15:49 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:40.450 15:15:49 -- nvmf/common.sh@7 -- # uname -s 00:11:40.450 15:15:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.450 15:15:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.450 15:15:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.450 15:15:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.450 15:15:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.450 15:15:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.450 15:15:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.450 15:15:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.450 15:15:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.450 15:15:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.450 15:15:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:11:40.450 15:15:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:11:40.450 15:15:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.450 15:15:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.450 15:15:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:40.450 15:15:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.450 15:15:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:40.450 15:15:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.450 15:15:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.450 15:15:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.450 15:15:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.450 15:15:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.450 15:15:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.450 15:15:49 -- paths/export.sh@5 -- # export PATH 00:11:40.450 15:15:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.450 15:15:49 -- nvmf/common.sh@47 -- # : 0 00:11:40.450 15:15:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:40.450 15:15:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:40.450 15:15:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.450 15:15:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.450 15:15:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.450 15:15:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:40.450 15:15:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:40.450 15:15:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:40.450 15:15:49 -- target/zcopy.sh@12 -- # nvmftestinit 00:11:40.450 15:15:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:40.450 15:15:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.450 15:15:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:40.450 15:15:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:40.450 15:15:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:40.450 15:15:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.450 15:15:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.450 15:15:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.450 15:15:49 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:11:40.450 15:15:49 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:11:40.450 15:15:49 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:11:40.450 15:15:49 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:11:40.450 15:15:49 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:11:40.450 15:15:49 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:11:40.450 15:15:49 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.450 15:15:49 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.450 15:15:49 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:40.450 15:15:49 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:40.450 15:15:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:40.450 15:15:49 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:40.450 15:15:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:40.450 15:15:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.450 15:15:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:40.450 15:15:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:40.450 15:15:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:40.450 15:15:49 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:40.451 15:15:49 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:40.451 15:15:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:40.451 Cannot find device "nvmf_tgt_br" 00:11:40.451 15:15:49 -- nvmf/common.sh@155 -- # true 00:11:40.451 15:15:49 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:40.451 Cannot find device "nvmf_tgt_br2" 00:11:40.451 15:15:49 -- nvmf/common.sh@156 -- # true 00:11:40.451 15:15:49 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:40.451 15:15:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:40.451 Cannot find device "nvmf_tgt_br" 00:11:40.451 15:15:49 -- nvmf/common.sh@158 -- # true 00:11:40.451 15:15:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:40.451 Cannot find device "nvmf_tgt_br2" 00:11:40.451 15:15:49 -- nvmf/common.sh@159 -- # true 00:11:40.451 15:15:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:40.708 15:15:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:40.708 15:15:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:40.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:40.708 15:15:49 -- nvmf/common.sh@162 -- # true 00:11:40.708 15:15:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:40.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:40.708 15:15:49 -- nvmf/common.sh@163 -- # true 00:11:40.708 15:15:49 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:40.708 15:15:49 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:40.708 15:15:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:40.708 15:15:49 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:40.708 15:15:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:40.708 15:15:49 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:40.708 15:15:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:40.708 15:15:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:40.708 15:15:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:40.708 15:15:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:40.708 15:15:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:40.708 15:15:49 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:40.708 15:15:49 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:40.708 15:15:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:40.708 15:15:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:40.708 15:15:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:40.708 15:15:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:40.708 15:15:49 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:40.708 15:15:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:40.708 15:15:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:40.708 15:15:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:40.708 15:15:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:40.708 15:15:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:40.708 15:15:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:40.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:11:40.708 00:11:40.708 --- 10.0.0.2 ping statistics --- 00:11:40.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.708 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:11:40.708 15:15:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:40.708 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:40.708 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:11:40.708 00:11:40.708 --- 10.0.0.3 ping statistics --- 00:11:40.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.708 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:40.708 15:15:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:40.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:40.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:40.708 00:11:40.708 --- 10.0.0.1 ping statistics --- 00:11:40.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.708 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:40.708 15:15:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.708 15:15:49 -- nvmf/common.sh@422 -- # return 0 00:11:40.708 15:15:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:40.708 15:15:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.708 15:15:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:40.708 15:15:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:40.708 15:15:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.708 15:15:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:40.708 15:15:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:40.966 15:15:49 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:40.966 15:15:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:40.966 15:15:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:40.966 15:15:49 -- common/autotest_common.sh@10 -- # set +x 00:11:40.966 15:15:49 -- nvmf/common.sh@470 -- # nvmfpid=68027 00:11:40.966 15:15:49 -- nvmf/common.sh@471 -- # waitforlisten 68027 00:11:40.966 15:15:49 -- common/autotest_common.sh@817 -- # '[' -z 68027 ']' 00:11:40.966 15:15:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:40.966 15:15:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.966 15:15:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:40.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.966 15:15:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.966 15:15:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:40.966 15:15:49 -- common/autotest_common.sh@10 -- # set +x 00:11:40.966 [2024-04-24 15:15:50.015001] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:11:40.966 [2024-04-24 15:15:50.015113] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.966 [2024-04-24 15:15:50.154657] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.224 [2024-04-24 15:15:50.269269] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.224 [2024-04-24 15:15:50.269334] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.225 [2024-04-24 15:15:50.269347] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.225 [2024-04-24 15:15:50.269356] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.225 [2024-04-24 15:15:50.269364] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
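As in the multipath run above, nvmfappstart launches the target inside the namespace and waitforlisten blocks until the RPC socket is up before any rpc_cmd calls are issued. A minimal sketch of that pattern, assuming the binary path, flags, and /var/tmp/spdk.sock socket shown in this trace (the real waitforlisten helper in autotest_common.sh is more involved; this only shows the general shape):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# wait for the target to create its RPC UNIX-domain socket before configuring it
until [ -S /var/tmp/spdk.sock ]; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done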
00:11:41.225 [2024-04-24 15:15:50.269400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.792 15:15:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:41.792 15:15:50 -- common/autotest_common.sh@850 -- # return 0 00:11:41.792 15:15:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:41.792 15:15:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:41.792 15:15:50 -- common/autotest_common.sh@10 -- # set +x 00:11:41.792 15:15:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.792 15:15:50 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:41.792 15:15:50 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:41.792 15:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:41.792 15:15:50 -- common/autotest_common.sh@10 -- # set +x 00:11:41.792 [2024-04-24 15:15:50.994378] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.792 15:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:41.792 15:15:50 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:41.792 15:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:41.792 15:15:50 -- common/autotest_common.sh@10 -- # set +x 00:11:41.792 15:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:41.792 15:15:51 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.792 15:15:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:41.792 15:15:51 -- common/autotest_common.sh@10 -- # set +x 00:11:41.792 [2024-04-24 15:15:51.014485] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.792 15:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:41.792 15:15:51 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:41.792 15:15:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:41.792 15:15:51 -- common/autotest_common.sh@10 -- # set +x 00:11:41.792 15:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:41.792 15:15:51 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:41.792 15:15:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:41.792 15:15:51 -- common/autotest_common.sh@10 -- # set +x 00:11:42.051 malloc0 00:11:42.051 15:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:42.051 15:15:51 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:42.051 15:15:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:42.051 15:15:51 -- common/autotest_common.sh@10 -- # set +x 00:11:42.051 15:15:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:42.051 15:15:51 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:42.051 15:15:51 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:42.051 15:15:51 -- nvmf/common.sh@521 -- # config=() 00:11:42.051 15:15:51 -- nvmf/common.sh@521 -- # local subsystem config 00:11:42.051 15:15:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:42.051 15:15:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:42.051 { 00:11:42.051 "params": { 00:11:42.051 "name": "Nvme$subsystem", 00:11:42.051 "trtype": "$TEST_TRANSPORT", 
00:11:42.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:42.051 "adrfam": "ipv4", 00:11:42.051 "trsvcid": "$NVMF_PORT", 00:11:42.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:42.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:42.051 "hdgst": ${hdgst:-false}, 00:11:42.051 "ddgst": ${ddgst:-false} 00:11:42.051 }, 00:11:42.051 "method": "bdev_nvme_attach_controller" 00:11:42.051 } 00:11:42.051 EOF 00:11:42.051 )") 00:11:42.051 15:15:51 -- nvmf/common.sh@543 -- # cat 00:11:42.051 15:15:51 -- nvmf/common.sh@545 -- # jq . 00:11:42.051 15:15:51 -- nvmf/common.sh@546 -- # IFS=, 00:11:42.051 15:15:51 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:42.051 "params": { 00:11:42.051 "name": "Nvme1", 00:11:42.051 "trtype": "tcp", 00:11:42.051 "traddr": "10.0.0.2", 00:11:42.051 "adrfam": "ipv4", 00:11:42.051 "trsvcid": "4420", 00:11:42.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.051 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:42.051 "hdgst": false, 00:11:42.051 "ddgst": false 00:11:42.051 }, 00:11:42.051 "method": "bdev_nvme_attach_controller" 00:11:42.051 }' 00:11:42.051 [2024-04-24 15:15:51.114981] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:11:42.051 [2024-04-24 15:15:51.115077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68060 ] 00:11:42.051 [2024-04-24 15:15:51.257644] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.310 [2024-04-24 15:15:51.386937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.568 Running I/O for 10 seconds... 00:11:52.588 00:11:52.588 Latency(us) 00:11:52.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:52.588 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:52.588 Verification LBA range: start 0x0 length 0x1000 00:11:52.588 Nvme1n1 : 10.01 5647.01 44.12 0.00 0.00 22597.37 3157.64 32648.84 00:11:52.588 =================================================================================================================== 00:11:52.588 Total : 5647.01 44.12 0.00 0.00 22597.37 3157.64 32648.84 00:11:52.846 15:16:01 -- target/zcopy.sh@39 -- # perfpid=68182 00:11:52.846 15:16:01 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:52.846 15:16:01 -- target/zcopy.sh@41 -- # xtrace_disable 00:11:52.846 15:16:01 -- common/autotest_common.sh@10 -- # set +x 00:11:52.846 15:16:01 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:52.846 15:16:01 -- nvmf/common.sh@521 -- # config=() 00:11:52.846 15:16:01 -- nvmf/common.sh@521 -- # local subsystem config 00:11:52.846 15:16:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:52.846 15:16:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:52.846 { 00:11:52.846 "params": { 00:11:52.846 "name": "Nvme$subsystem", 00:11:52.846 "trtype": "$TEST_TRANSPORT", 00:11:52.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:52.846 "adrfam": "ipv4", 00:11:52.846 "trsvcid": "$NVMF_PORT", 00:11:52.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.846 "hdgst": ${hdgst:-false}, 00:11:52.846 "ddgst": ${ddgst:-false} 00:11:52.846 }, 00:11:52.846 "method": "bdev_nvme_attach_controller" 00:11:52.846 } 00:11:52.846 EOF 00:11:52.846 
)") 00:11:52.846 15:16:01 -- nvmf/common.sh@543 -- # cat 00:11:52.846 15:16:01 -- nvmf/common.sh@545 -- # jq . 00:11:52.846 [2024-04-24 15:16:01.867498] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:01.867543] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 15:16:01 -- nvmf/common.sh@546 -- # IFS=, 00:11:52.846 15:16:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:52.846 "params": { 00:11:52.846 "name": "Nvme1", 00:11:52.846 "trtype": "tcp", 00:11:52.846 "traddr": "10.0.0.2", 00:11:52.846 "adrfam": "ipv4", 00:11:52.846 "trsvcid": "4420", 00:11:52.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.846 "hdgst": false, 00:11:52.846 "ddgst": false 00:11:52.846 }, 00:11:52.846 "method": "bdev_nvme_attach_controller" 00:11:52.846 }' 00:11:52.846 [2024-04-24 15:16:01.875461] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:01.875496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 [2024-04-24 15:16:01.887462] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:01.887499] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 [2024-04-24 15:16:01.899473] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:01.899520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 [2024-04-24 15:16:01.901567] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:11:52.846 [2024-04-24 15:16:01.901645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68182 ] 00:11:52.846 [2024-04-24 15:16:01.911478] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:01.911683] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 [2024-04-24 15:16:01.923484] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:01.923683] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 [2024-04-24 15:16:01.935505] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:01.935787] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 [2024-04-24 15:16:01.947485] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:01.947663] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 [2024-04-24 15:16:01.959480] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:01.959641] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 [2024-04-24 15:16:01.971495] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:01.971666] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 [2024-04-24 15:16:01.983503] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:01.983685] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 [2024-04-24 15:16:01.995496] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:01.995690] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 [2024-04-24 15:16:02.007513] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:02.007706] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 [2024-04-24 15:16:02.019525] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:02.019783] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 [2024-04-24 15:16:02.031515] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:02.031557] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 [2024-04-24 15:16:02.033731] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.846 [2024-04-24 15:16:02.043523] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:02.043569] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 [2024-04-24 15:16:02.055532] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:02.055579] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 [2024-04-24 15:16:02.067519] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:02.067559] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.846 [2024-04-24 15:16:02.079534] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.846 [2024-04-24 15:16:02.079580] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.091546] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.091597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.103558] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.103619] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.115549] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.115598] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.127536] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.127573] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.139536] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.139573] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.151556] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.151600] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.153244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.105 [2024-04-24 15:16:02.163543] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.163580] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.175559] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.175604] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.187560] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.187603] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.199564] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.199608] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.211577] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.211625] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.223571] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.223614] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.235579] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.235624] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.247568] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.247604] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.259584] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.259627] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.271585] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.271624] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.283594] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.283632] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.295606] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.295645] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.307617] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.307660] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 [2024-04-24 15:16:02.319623] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:53.105 [2024-04-24 15:16:02.319669] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.105 Running I/O for 5 seconds... 00:11:53.105 [2024-04-24 15:16:02.331649] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.105 [2024-04-24 15:16:02.331692] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.363 [2024-04-24 15:16:02.350785] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.363 [2024-04-24 15:16:02.350866] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.363 [2024-04-24 15:16:02.366450] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.363 [2024-04-24 15:16:02.366510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.363 [2024-04-24 15:16:02.381640] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.363 [2024-04-24 15:16:02.381697] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.363 [2024-04-24 15:16:02.396962] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.363 [2024-04-24 15:16:02.397018] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.363 [2024-04-24 15:16:02.406630] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.363 [2024-04-24 15:16:02.406685] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.363 [2024-04-24 15:16:02.421577] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.363 [2024-04-24 15:16:02.421636] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.363 [2024-04-24 15:16:02.437692] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.363 [2024-04-24 15:16:02.437755] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.363 [2024-04-24 15:16:02.454555] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.364 [2024-04-24 15:16:02.454608] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.364 [2024-04-24 15:16:02.472603] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.364 [2024-04-24 15:16:02.472660] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.364 [2024-04-24 15:16:02.487362] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.364 [2024-04-24 15:16:02.487417] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.364 [2024-04-24 15:16:02.503245] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.364 [2024-04-24 15:16:02.503298] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.364 [2024-04-24 15:16:02.521550] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.364 [2024-04-24 15:16:02.521606] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.364 [2024-04-24 15:16:02.537003] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.364 [2024-04-24 15:16:02.537055] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
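The error pair that now repeats for the rest of the 5-second run comes from the namespace-add path: spdk_nvmf_subsystem_add_ns_ext rejects the request because NSID 1 is already attached to the subsystem, and the nvmf_subsystem_add_ns RPC handler (nvmf_rpc_ns_paused, invoked once the subsystem has been paused) logs "Unable to add namespace" and resumes it. This is consistent with the zcopy test repeatedly re-issuing the namespace-add RPC to force subsystem pause/resume cycles while bdevperf keeps I/O in flight, rather than with a failure of the run itself. A minimal way to trigger the same pair by hand is sketched below, assuming a target configured as earlier in this job (some bdev already exported as NSID 1 of nqn.2016-06.io.spdk:cnode1) and the rpc.py nvmf_subsystem_add_ns syntax of this SPDK tree; the bdev name is a placeholder.

# With NSID 1 already attached (as it is at this point in the job), re-issuing the
# add is rejected after the subsystem has been paused, producing the same pair:
#   subsystem.c: spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
#   nvmf_rpc.c: nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
cd /home/vagrant/spdk_repo/spdk
./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0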
00:11:53.364 [2024-04-24 15:16:02.546878] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.364 [2024-04-24 15:16:02.546923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.364 [2024-04-24 15:16:02.562702] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.364 [2024-04-24 15:16:02.562766] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.364 [2024-04-24 15:16:02.572880] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.364 [2024-04-24 15:16:02.572929] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.364 [2024-04-24 15:16:02.589720] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.364 [2024-04-24 15:16:02.589782] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.364 [2024-04-24 15:16:02.605114] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.364 [2024-04-24 15:16:02.605166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.622 [2024-04-24 15:16:02.615495] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.622 [2024-04-24 15:16:02.615555] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.622 [2024-04-24 15:16:02.630893] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.622 [2024-04-24 15:16:02.630946] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.623 [2024-04-24 15:16:02.646296] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.623 [2024-04-24 15:16:02.646343] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.623 [2024-04-24 15:16:02.664456] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.623 [2024-04-24 15:16:02.664508] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.623 [2024-04-24 15:16:02.679352] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.623 [2024-04-24 15:16:02.679405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.623 [2024-04-24 15:16:02.695142] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.623 [2024-04-24 15:16:02.695191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.623 [2024-04-24 15:16:02.712094] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.623 [2024-04-24 15:16:02.712144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.623 [2024-04-24 15:16:02.729580] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.623 [2024-04-24 15:16:02.729642] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.623 [2024-04-24 15:16:02.744564] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.623 [2024-04-24 15:16:02.744614] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.623 [2024-04-24 15:16:02.761006] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.623 
[2024-04-24 15:16:02.761059] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.623 [2024-04-24 15:16:02.777980] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.623 [2024-04-24 15:16:02.778032] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.623 [2024-04-24 15:16:02.793458] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.623 [2024-04-24 15:16:02.793512] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.623 [2024-04-24 15:16:02.803594] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.623 [2024-04-24 15:16:02.803664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.623 [2024-04-24 15:16:02.819365] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.623 [2024-04-24 15:16:02.819424] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.623 [2024-04-24 15:16:02.837238] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.623 [2024-04-24 15:16:02.837341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.624 [2024-04-24 15:16:02.853840] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.624 [2024-04-24 15:16:02.853897] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:02.870222] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:02.870282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:02.887421] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:02.887487] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:02.902617] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:02.902689] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:02.912535] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:02.912582] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:02.928451] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:02.928498] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:02.944152] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:02.944200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:02.953574] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:02.953616] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:02.969834] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:02.969882] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:02.984823] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:02.984891] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:03.001003] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:03.001069] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:03.018231] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:03.018288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:03.034052] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:03.034106] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:03.044082] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:03.044132] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:03.056765] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:03.056839] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:03.069004] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:03.069074] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:03.084494] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:03.084559] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:03.095004] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:03.095063] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:03.110938] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:03.111008] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.883 [2024-04-24 15:16:03.126600] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.883 [2024-04-24 15:16:03.126658] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.145 [2024-04-24 15:16:03.139775] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.145 [2024-04-24 15:16:03.139853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.145 [2024-04-24 15:16:03.159578] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.145 [2024-04-24 15:16:03.159673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.145 [2024-04-24 15:16:03.174543] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.145 [2024-04-24 15:16:03.174622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.145 [2024-04-24 15:16:03.193267] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.145 [2024-04-24 15:16:03.193358] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.145 [2024-04-24 15:16:03.207998] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.145 [2024-04-24 15:16:03.208090] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.145 [2024-04-24 15:16:03.225853] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.145 [2024-04-24 15:16:03.225952] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.145 [2024-04-24 15:16:03.243761] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.145 [2024-04-24 15:16:03.243827] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.145 [2024-04-24 15:16:03.259342] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.145 [2024-04-24 15:16:03.259402] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.145 [2024-04-24 15:16:03.269195] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.145 [2024-04-24 15:16:03.269244] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.145 [2024-04-24 15:16:03.281829] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.145 [2024-04-24 15:16:03.281877] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.145 [2024-04-24 15:16:03.297444] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.145 [2024-04-24 15:16:03.297500] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.145 [2024-04-24 15:16:03.314412] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.145 [2024-04-24 15:16:03.314487] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.145 [2024-04-24 15:16:03.330857] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.145 [2024-04-24 15:16:03.330919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.145 [2024-04-24 15:16:03.347670] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.145 [2024-04-24 15:16:03.347731] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.145 [2024-04-24 15:16:03.363760] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.145 [2024-04-24 15:16:03.363818] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.145 [2024-04-24 15:16:03.382726] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.145 [2024-04-24 15:16:03.382781] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.403 [2024-04-24 15:16:03.398181] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.403 [2024-04-24 15:16:03.398236] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.403 [2024-04-24 15:16:03.414948] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.403 [2024-04-24 15:16:03.415017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.403 [2024-04-24 15:16:03.432392] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.403 [2024-04-24 15:16:03.432458] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.403 [2024-04-24 15:16:03.448783] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.403 [2024-04-24 15:16:03.448844] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.403 [2024-04-24 15:16:03.465994] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.403 [2024-04-24 15:16:03.466055] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.403 [2024-04-24 15:16:03.481008] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.404 [2024-04-24 15:16:03.481082] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.404 [2024-04-24 15:16:03.495753] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.404 [2024-04-24 15:16:03.495827] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.404 [2024-04-24 15:16:03.511374] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.404 [2024-04-24 15:16:03.511460] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.404 [2024-04-24 15:16:03.521835] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.404 [2024-04-24 15:16:03.521897] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.404 [2024-04-24 15:16:03.538559] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.404 [2024-04-24 15:16:03.538617] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.404 [2024-04-24 15:16:03.553644] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.404 [2024-04-24 15:16:03.553704] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.404 [2024-04-24 15:16:03.569290] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.404 [2024-04-24 15:16:03.569344] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.404 [2024-04-24 15:16:03.579142] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.404 [2024-04-24 15:16:03.579192] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.404 [2024-04-24 15:16:03.594345] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.404 [2024-04-24 15:16:03.594409] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.404 [2024-04-24 15:16:03.604979] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.404 [2024-04-24 15:16:03.605027] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.404 [2024-04-24 15:16:03.619716] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.404 [2024-04-24 15:16:03.619772] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.404 [2024-04-24 15:16:03.636276] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.404 [2024-04-24 15:16:03.636340] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.652723] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.652786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.669566] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.669632] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.686370] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.686439] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.703979] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.704032] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.718760] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.718821] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.735216] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.735280] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.750507] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.750589] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.760766] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.760822] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.777089] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.777148] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.792982] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.793025] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.802714] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.802757] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.818735] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.818788] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.835790] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.835836] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.852408] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.852474] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.868282] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.868332] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.878036] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.878085] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.890241] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.890292] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.663 [2024-04-24 15:16:03.901238] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.663 [2024-04-24 15:16:03.901286] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:03.916672] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:03.916731] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:03.933592] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:03.933646] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:03.943648] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:03.943697] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:03.958913] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:03.958968] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:03.969742] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:03.969788] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:03.984820] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:03.984873] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:04.000499] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:04.000554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:04.017462] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:04.017512] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:04.032932] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:04.032981] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:04.043087] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:04.043138] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:04.058121] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:04.058173] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:04.068478] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:04.068519] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:04.080722] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:04.080768] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:04.091979] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:04.092023] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:04.103866] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:04.103921] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:04.115298] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:04.115353] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:04.126923] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:04.126973] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:04.138112] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:04.138182] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:04.154247] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:04.154311] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.921 [2024-04-24 15:16:04.164797] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.921 [2024-04-24 15:16:04.164851] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.180 [2024-04-24 15:16:04.176923] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.180 [2024-04-24 15:16:04.176981] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.180 [2024-04-24 15:16:04.188795] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.180 [2024-04-24 15:16:04.188853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.180 [2024-04-24 15:16:04.200570] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.180 [2024-04-24 15:16:04.200621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.180 [2024-04-24 15:16:04.214018] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.180 [2024-04-24 15:16:04.214073] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.180 [2024-04-24 15:16:04.230202] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.180 [2024-04-24 15:16:04.230259] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.180 [2024-04-24 15:16:04.248941] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.180 [2024-04-24 15:16:04.249003] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.180 [2024-04-24 15:16:04.264370] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.180 [2024-04-24 15:16:04.264446] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.180 [2024-04-24 15:16:04.275031] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.180 [2024-04-24 15:16:04.275099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.180 [2024-04-24 15:16:04.293763] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.180 [2024-04-24 15:16:04.293854] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.180 [2024-04-24 15:16:04.309936] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.180 [2024-04-24 15:16:04.309996] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.180 [2024-04-24 15:16:04.325948] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.180 [2024-04-24 15:16:04.326008] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.180 [2024-04-24 15:16:04.344229] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.180 [2024-04-24 15:16:04.344292] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.180 [2024-04-24 15:16:04.359649] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.180 [2024-04-24 15:16:04.359707] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.180 [2024-04-24 15:16:04.369645] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.180 [2024-04-24 15:16:04.369696] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.180 [2024-04-24 15:16:04.385090] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.180 [2024-04-24 15:16:04.385154] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.180 [2024-04-24 15:16:04.402752] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.180 [2024-04-24 15:16:04.402805] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.180 [2024-04-24 15:16:04.419688] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.180 [2024-04-24 15:16:04.419740] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.439 [2024-04-24 15:16:04.435955] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.439 [2024-04-24 15:16:04.436010] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.439 [2024-04-24 15:16:04.453891] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.439 [2024-04-24 15:16:04.453948] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.439 [2024-04-24 15:16:04.467676] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.439 [2024-04-24 15:16:04.467736] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.439 [2024-04-24 15:16:04.483915] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.439 [2024-04-24 15:16:04.483967] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.439 [2024-04-24 15:16:04.502012] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.439 [2024-04-24 15:16:04.502074] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.439 [2024-04-24 15:16:04.515626] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.439 [2024-04-24 15:16:04.515680] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.439 [2024-04-24 15:16:04.525487] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.439 [2024-04-24 15:16:04.525534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.439 [2024-04-24 15:16:04.537602] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.439 [2024-04-24 15:16:04.537659] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.439 [2024-04-24 15:16:04.553215] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.439 [2024-04-24 15:16:04.553269] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.439 [2024-04-24 15:16:04.568662] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.439 [2024-04-24 15:16:04.568718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.439 [2024-04-24 15:16:04.584080] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.439 [2024-04-24 15:16:04.584133] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.439 [2024-04-24 15:16:04.593450] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.439 [2024-04-24 15:16:04.593494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.439 [2024-04-24 15:16:04.609702] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.439 [2024-04-24 15:16:04.609754] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.439 [2024-04-24 15:16:04.625874] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.439 [2024-04-24 15:16:04.625926] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.439 [2024-04-24 15:16:04.635737] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.439 [2024-04-24 15:16:04.635780] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.439 [2024-04-24 15:16:04.650554] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.440 [2024-04-24 15:16:04.650600] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.440 [2024-04-24 15:16:04.665096] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.440 [2024-04-24 15:16:04.665145] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.440 [2024-04-24 15:16:04.682718] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.440 [2024-04-24 15:16:04.682768] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.697786] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.697835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.707875] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.707927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.722733] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.722776] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.740310] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.740371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.755314] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.755361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.765211] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.765254] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.780303] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.780348] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.795177] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.795230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.805291] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.805341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.821621] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.821674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.836836] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.836893] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.846604] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.846646] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.862012] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.862066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.877592] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.877647] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.896005] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.896058] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.911163] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.911216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.921442] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.921486] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-04-24 15:16:04.937243] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-04-24 15:16:04.937297] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-04-24 15:16:04.954008] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-04-24 15:16:04.954058] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-04-24 15:16:04.970872] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-04-24 15:16:04.970920] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-04-24 15:16:04.987307] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-04-24 15:16:04.987357] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-04-24 15:16:05.004180] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-04-24 15:16:05.004226] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-04-24 15:16:05.020118] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-04-24 15:16:05.020167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-04-24 15:16:05.038311] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-04-24 15:16:05.038361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-04-24 15:16:05.053260] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-04-24 15:16:05.053307] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-04-24 15:16:05.064895] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-04-24 15:16:05.064940] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-04-24 15:16:05.080795] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-04-24 15:16:05.080845] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-04-24 15:16:05.097536] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-04-24 15:16:05.097584] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-04-24 15:16:05.114304] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-04-24 15:16:05.114350] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-04-24 15:16:05.131126] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-04-24 15:16:05.131182] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-04-24 15:16:05.146861] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-04-24 15:16:05.146917] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-04-24 15:16:05.163267] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-04-24 15:16:05.163320] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-04-24 15:16:05.180661] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-04-24 15:16:05.180711] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-04-24 15:16:05.197934] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-04-24 15:16:05.197987] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.215 [2024-04-24 15:16:05.212810] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.215 [2024-04-24 15:16:05.212860] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.215 [2024-04-24 15:16:05.228802] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.215 [2024-04-24 15:16:05.228853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.215 [2024-04-24 15:16:05.238885] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.215 [2024-04-24 15:16:05.238935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.215 [2024-04-24 15:16:05.254017] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.215 [2024-04-24 15:16:05.254065] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.215 [2024-04-24 15:16:05.269451] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.215 [2024-04-24 15:16:05.269496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.215 [2024-04-24 15:16:05.287657] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.215 [2024-04-24 15:16:05.287708] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.215 [2024-04-24 15:16:05.303404] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.215 [2024-04-24 15:16:05.303476] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.215 [2024-04-24 15:16:05.319608] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.215 [2024-04-24 15:16:05.319664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.215 [2024-04-24 15:16:05.337658] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.215 [2024-04-24 15:16:05.337714] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.215 [2024-04-24 15:16:05.351345] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.215 [2024-04-24 15:16:05.351397] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.215 [2024-04-24 15:16:05.367685] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.215 [2024-04-24 15:16:05.367753] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.215 [2024-04-24 15:16:05.384163] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.215 [2024-04-24 15:16:05.384215] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.215 [2024-04-24 15:16:05.400969] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.215 [2024-04-24 15:16:05.401023] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.215 [2024-04-24 15:16:05.417957] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.215 [2024-04-24 15:16:05.418017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.215 [2024-04-24 15:16:05.428272] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.215 [2024-04-24 15:16:05.428325] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.215 [2024-04-24 15:16:05.440388] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.215 [2024-04-24 15:16:05.440452] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.215 [2024-04-24 15:16:05.455567] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.215 [2024-04-24 15:16:05.455617] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.473 [2024-04-24 15:16:05.473833] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.473 [2024-04-24 15:16:05.473906] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.473 [2024-04-24 15:16:05.487972] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.473 [2024-04-24 15:16:05.488034] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.473 [2024-04-24 15:16:05.504564] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.473 [2024-04-24 15:16:05.504620] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.473 [2024-04-24 15:16:05.521647] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.473 [2024-04-24 15:16:05.521702] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.473 [2024-04-24 15:16:05.538111] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.473 [2024-04-24 15:16:05.538164] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.473 [2024-04-24 15:16:05.554781] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.473 [2024-04-24 15:16:05.554833] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.473 [2024-04-24 15:16:05.572000] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.473 [2024-04-24 15:16:05.572055] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.473 [2024-04-24 15:16:05.589538] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.473 [2024-04-24 15:16:05.589597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.473 [2024-04-24 15:16:05.605159] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.473 [2024-04-24 15:16:05.605217] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.473 [2024-04-24 15:16:05.621777] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.473 [2024-04-24 15:16:05.621834] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.473 [2024-04-24 15:16:05.638110] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.473 [2024-04-24 15:16:05.638168] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.473 [2024-04-24 15:16:05.656604] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.474 [2024-04-24 15:16:05.656663] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.474 [2024-04-24 15:16:05.672133] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.474 [2024-04-24 15:16:05.672186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.474 [2024-04-24 15:16:05.690403] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.474 [2024-04-24 15:16:05.690473] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.474 [2024-04-24 15:16:05.704875] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.474 [2024-04-24 15:16:05.704925] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.732 [2024-04-24 15:16:05.720105] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.732 [2024-04-24 15:16:05.720154] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.732 [2024-04-24 15:16:05.729698] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.732 [2024-04-24 15:16:05.729744] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.732 [2024-04-24 15:16:05.745285] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.732 [2024-04-24 15:16:05.745341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.732 [2024-04-24 15:16:05.763627] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.732 [2024-04-24 15:16:05.763691] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.732 [2024-04-24 15:16:05.778704] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.732 [2024-04-24 15:16:05.778759] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.732 [2024-04-24 15:16:05.794975] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.732 [2024-04-24 15:16:05.795028] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.732 [2024-04-24 15:16:05.811507] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.732 [2024-04-24 15:16:05.811553] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.732 [2024-04-24 15:16:05.828147] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.732 [2024-04-24 15:16:05.828196] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.732 [2024-04-24 15:16:05.844337] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.732 [2024-04-24 15:16:05.844384] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.732 [2024-04-24 15:16:05.861406] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.732 [2024-04-24 15:16:05.861467] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.732 [2024-04-24 15:16:05.878736] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.732 [2024-04-24 15:16:05.878785] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.732 [2024-04-24 15:16:05.894229] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.732 [2024-04-24 15:16:05.894280] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.732 [2024-04-24 15:16:05.903739] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.732 [2024-04-24 15:16:05.903784] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.732 [2024-04-24 15:16:05.919975] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.732 [2024-04-24 15:16:05.920025] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.732 [2024-04-24 15:16:05.935798] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.732 [2024-04-24 15:16:05.935845] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.732 [2024-04-24 15:16:05.945680] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.732 [2024-04-24 15:16:05.945726] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.732 [2024-04-24 15:16:05.961581] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.732 [2024-04-24 15:16:05.961628] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.990 [2024-04-24 15:16:05.978557] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.990 [2024-04-24 15:16:05.978609] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.990 [2024-04-24 15:16:05.995260] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.990 [2024-04-24 15:16:05.995323] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.990 [2024-04-24 15:16:06.011895] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.990 [2024-04-24 15:16:06.011954] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.990 [2024-04-24 15:16:06.029568] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.990 [2024-04-24 15:16:06.029634] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.990 [2024-04-24 15:16:06.044616] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.990 [2024-04-24 15:16:06.044668] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.990 [2024-04-24 15:16:06.060042] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.990 [2024-04-24 15:16:06.060094] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.990 [2024-04-24 15:16:06.077078] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.990 [2024-04-24 15:16:06.077135] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.990 [2024-04-24 15:16:06.095021] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.990 [2024-04-24 15:16:06.095074] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.990 [2024-04-24 15:16:06.109969] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.990 [2024-04-24 15:16:06.110021] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.990 [2024-04-24 15:16:06.120041] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.991 [2024-04-24 15:16:06.120090] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.991 [2024-04-24 15:16:06.135368] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.991 [2024-04-24 15:16:06.135425] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.991 [2024-04-24 15:16:06.151235] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.991 [2024-04-24 15:16:06.151292] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.991 [2024-04-24 15:16:06.161647] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.991 [2024-04-24 15:16:06.161718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.991 [2024-04-24 15:16:06.177128] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.991 [2024-04-24 15:16:06.177180] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.991 [2024-04-24 15:16:06.193699] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.991 [2024-04-24 15:16:06.193756] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.991 [2024-04-24 15:16:06.211918] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.991 [2024-04-24 15:16:06.211987] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.991 [2024-04-24 15:16:06.227354] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.991 [2024-04-24 15:16:06.227412] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.249 [2024-04-24 15:16:06.244046] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.249 [2024-04-24 15:16:06.244110] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.249 [2024-04-24 15:16:06.262796] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.249 [2024-04-24 15:16:06.262861] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.249 [2024-04-24 15:16:06.278326] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.249 [2024-04-24 15:16:06.278390] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.249 [2024-04-24 15:16:06.288902] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.249 [2024-04-24 15:16:06.288949] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.249 [2024-04-24 15:16:06.303804] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.249 [2024-04-24 15:16:06.303859] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.249 [2024-04-24 15:16:06.320454] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.249 [2024-04-24 15:16:06.320511] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.249 [2024-04-24 15:16:06.337241] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.249 [2024-04-24 15:16:06.337300] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.249 [2024-04-24 15:16:06.355735] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.249 [2024-04-24 15:16:06.355792] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.249 [2024-04-24 15:16:06.370666] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.249 [2024-04-24 15:16:06.370717] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.249 [2024-04-24 15:16:06.380412] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.249 [2024-04-24 15:16:06.380472] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.249 [2024-04-24 15:16:06.396764] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.249 [2024-04-24 15:16:06.396826] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.249 [2024-04-24 15:16:06.411719] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.249 [2024-04-24 15:16:06.411773] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.249 [2024-04-24 15:16:06.427759] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.249 [2024-04-24 15:16:06.427813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.249 [2024-04-24 15:16:06.444250] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.249 [2024-04-24 15:16:06.444303] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.249 [2024-04-24 15:16:06.462310] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.249 [2024-04-24 15:16:06.462368] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.249 [2024-04-24 15:16:06.477621] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.249 [2024-04-24 15:16:06.477673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.249 [2024-04-24 15:16:06.488101] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.249 [2024-04-24 15:16:06.488147] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.507 [2024-04-24 15:16:06.502656] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.507 [2024-04-24 15:16:06.502709] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.507 [2024-04-24 15:16:06.512659] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.507 [2024-04-24 15:16:06.512702] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.507 [2024-04-24 15:16:06.528671] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.507 [2024-04-24 15:16:06.528725] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.507 [2024-04-24 15:16:06.544193] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.507 [2024-04-24 15:16:06.544256] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.507 [2024-04-24 15:16:06.553684] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.507 [2024-04-24 15:16:06.553731] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.507 [2024-04-24 15:16:06.570195] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.507 [2024-04-24 15:16:06.570246] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.507 [2024-04-24 15:16:06.584912] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.507 [2024-04-24 15:16:06.584971] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.507 [2024-04-24 15:16:06.601022] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.507 [2024-04-24 15:16:06.601070] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.507 [2024-04-24 15:16:06.617692] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.507 [2024-04-24 15:16:06.617742] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.507 [2024-04-24 15:16:06.636237] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.507 [2024-04-24 15:16:06.636292] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.507 [2024-04-24 15:16:06.651699] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.507 [2024-04-24 15:16:06.651750] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.507 [2024-04-24 15:16:06.668874] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.507 [2024-04-24 15:16:06.668926] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.507 [2024-04-24 15:16:06.685802] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.507 [2024-04-24 15:16:06.685853] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.507 [2024-04-24 15:16:06.701522] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.507 [2024-04-24 15:16:06.701572] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.507 [2024-04-24 15:16:06.719270] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.507 [2024-04-24 15:16:06.719318] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.507 [2024-04-24 15:16:06.734645] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.507 [2024-04-24 15:16:06.734688] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.507 [2024-04-24 15:16:06.744860] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.507 [2024-04-24 15:16:06.744900] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.766 [2024-04-24 15:16:06.760467] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.766 [2024-04-24 15:16:06.760524] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.766 [2024-04-24 15:16:06.776484] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.766 [2024-04-24 15:16:06.776528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.766 [2024-04-24 15:16:06.794150] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.766 [2024-04-24 15:16:06.794198] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.766 [2024-04-24 15:16:06.809641] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.766 [2024-04-24 15:16:06.809684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.766 [2024-04-24 15:16:06.825103] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.766 [2024-04-24 15:16:06.825147] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.766 [2024-04-24 15:16:06.841835] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.766 [2024-04-24 15:16:06.841879] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.766 [2024-04-24 15:16:06.858327] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.766 [2024-04-24 15:16:06.858377] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.766 [2024-04-24 15:16:06.876174] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.766 [2024-04-24 15:16:06.876224] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.766 [2024-04-24 15:16:06.890826] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.766 [2024-04-24 15:16:06.890873] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.766 [2024-04-24 15:16:06.907245] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.766 [2024-04-24 15:16:06.907295] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.766 [2024-04-24 15:16:06.923719] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.766 [2024-04-24 15:16:06.923767] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.766 [2024-04-24 15:16:06.940083] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.766 [2024-04-24 15:16:06.940138] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.766 [2024-04-24 15:16:06.950054] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.766 [2024-04-24 15:16:06.950100] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.766 [2024-04-24 15:16:06.965897] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.766 [2024-04-24 15:16:06.965949] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.766 [2024-04-24 15:16:06.982747] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.766 [2024-04-24 15:16:06.982802] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.766 [2024-04-24 15:16:07.000480] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.766 [2024-04-24 15:16:07.000531] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.024 [2024-04-24 15:16:07.016502] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.024 [2024-04-24 15:16:07.016552] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.024 [2024-04-24 15:16:07.032963] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.024 [2024-04-24 15:16:07.033015] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.024 [2024-04-24 15:16:07.052145] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.024 [2024-04-24 15:16:07.052194] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.024 [2024-04-24 15:16:07.067307] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.024 [2024-04-24 15:16:07.067351] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.024 [2024-04-24 15:16:07.084211] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.024 [2024-04-24 15:16:07.084257] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.024 [2024-04-24 15:16:07.100776] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.024 [2024-04-24 15:16:07.100820] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.024 [2024-04-24 15:16:07.117753] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.024 [2024-04-24 15:16:07.117800] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.024 [2024-04-24 15:16:07.136059] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.024 [2024-04-24 15:16:07.136106] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.024 [2024-04-24 15:16:07.152173] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.024 [2024-04-24 15:16:07.152220] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.024 [2024-04-24 15:16:07.162248] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.024 [2024-04-24 15:16:07.162291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.024 [2024-04-24 15:16:07.174998] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.024 [2024-04-24 15:16:07.175080] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.024 [2024-04-24 15:16:07.190133] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.024 [2024-04-24 15:16:07.190203] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.024 [2024-04-24 15:16:07.200294] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.024 [2024-04-24 15:16:07.200343] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.024 [2024-04-24 15:16:07.216676] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.024 [2024-04-24 15:16:07.216732] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.024 [2024-04-24 15:16:07.231185] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.024 [2024-04-24 15:16:07.231231] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.024 [2024-04-24 15:16:07.246452] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.024 [2024-04-24 15:16:07.246494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.024 [2024-04-24 15:16:07.256406] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.024 [2024-04-24 15:16:07.256464] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.273367] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.273420] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.288652] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.288705] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.298620] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.298671] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.310891] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.310936] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.326361] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.326408] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.337511] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.337554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 00:11:58.283 Latency(us) 00:11:58.283 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max
00:11:58.283 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:58.283 Nvme1n1 : 5.01 11026.79 86.15 0.00 0.00 11595.02 4647.10 27048.49
00:11:58.283 ===================================================================================================================
00:11:58.283 Total : 11026.79 86.15 0.00 0.00 11595.02 4647.10 27048.49
00:11:58.283 [2024-04-24 15:16:07.349501] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.349543] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.361526] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.361584] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.373541] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.373589] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.385536] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.385584] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.397535] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.397587] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.409554] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.409615] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.421547] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.421597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.433544] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.433594] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.445540] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.445590] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.457541] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.457588] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.469534] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.469578] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.481543] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.481586] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.283 [2024-04-24 15:16:07.493536] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.283 [2024-04-24 15:16:07.493579] nvmf_rpc.c:1534:nvmf_rpc_ns_paused:
*ERROR*: Unable to add namespace 00:11:58.284 [2024-04-24 15:16:07.505544] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.284 [2024-04-24 15:16:07.505591] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.284 [2024-04-24 15:16:07.517580] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.284 [2024-04-24 15:16:07.517649] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.541 [2024-04-24 15:16:07.529567] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.541 [2024-04-24 15:16:07.529616] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.541 [2024-04-24 15:16:07.541554] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.541 [2024-04-24 15:16:07.541596] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.541 [2024-04-24 15:16:07.553554] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.541 [2024-04-24 15:16:07.553596] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.541 [2024-04-24 15:16:07.565581] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.541 [2024-04-24 15:16:07.565647] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.541 [2024-04-24 15:16:07.577573] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.541 [2024-04-24 15:16:07.577623] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.541 [2024-04-24 15:16:07.589571] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.541 [2024-04-24 15:16:07.589610] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.541 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (68182) - No such process 00:11:58.541 15:16:07 -- target/zcopy.sh@49 -- # wait 68182 00:11:58.541 15:16:07 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.541 15:16:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.541 15:16:07 -- common/autotest_common.sh@10 -- # set +x 00:11:58.542 15:16:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.542 15:16:07 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:58.542 15:16:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.542 15:16:07 -- common/autotest_common.sh@10 -- # set +x 00:11:58.542 delay0 00:11:58.542 15:16:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.542 15:16:07 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:58.542 15:16:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.542 15:16:07 -- common/autotest_common.sh@10 -- # set +x 00:11:58.542 15:16:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.542 15:16:07 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:58.542 [2024-04-24 15:16:07.775753] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service 
referral 00:12:05.136 Initializing NVMe Controllers 00:12:05.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:05.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:05.136 Initialization complete. Launching workers. 00:12:05.136 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 329 00:12:05.136 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 616, failed to submit 33 00:12:05.136 success 500, unsuccess 116, failed 0 00:12:05.136 15:16:13 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:05.136 15:16:13 -- target/zcopy.sh@60 -- # nvmftestfini 00:12:05.136 15:16:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:05.136 15:16:13 -- nvmf/common.sh@117 -- # sync 00:12:05.136 15:16:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:05.136 15:16:13 -- nvmf/common.sh@120 -- # set +e 00:12:05.136 15:16:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:05.136 15:16:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:05.136 rmmod nvme_tcp 00:12:05.136 rmmod nvme_fabrics 00:12:05.136 rmmod nvme_keyring 00:12:05.136 15:16:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:05.136 15:16:13 -- nvmf/common.sh@124 -- # set -e 00:12:05.136 15:16:13 -- nvmf/common.sh@125 -- # return 0 00:12:05.136 15:16:13 -- nvmf/common.sh@478 -- # '[' -n 68027 ']' 00:12:05.136 15:16:13 -- nvmf/common.sh@479 -- # killprocess 68027 00:12:05.136 15:16:13 -- common/autotest_common.sh@936 -- # '[' -z 68027 ']' 00:12:05.136 15:16:13 -- common/autotest_common.sh@940 -- # kill -0 68027 00:12:05.136 15:16:13 -- common/autotest_common.sh@941 -- # uname 00:12:05.136 15:16:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:05.136 15:16:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68027 00:12:05.136 killing process with pid 68027 00:12:05.136 15:16:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:05.136 15:16:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:05.136 15:16:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68027' 00:12:05.136 15:16:13 -- common/autotest_common.sh@955 -- # kill 68027 00:12:05.136 15:16:13 -- common/autotest_common.sh@960 -- # wait 68027 00:12:05.136 15:16:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:05.136 15:16:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:05.136 15:16:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:05.136 15:16:14 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:05.136 15:16:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:05.136 15:16:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.136 15:16:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:05.136 15:16:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.136 15:16:14 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:05.136 ************************************ 00:12:05.136 END TEST nvmf_zcopy 00:12:05.136 ************************************ 00:12:05.136 00:12:05.136 real 0m24.784s 00:12:05.136 user 0m40.471s 00:12:05.136 sys 0m7.013s 00:12:05.136 15:16:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:05.136 15:16:14 -- common/autotest_common.sh@10 -- # set +x 00:12:05.136 15:16:14 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh 
--transport=tcp 00:12:05.136 15:16:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:05.136 15:16:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:05.136 15:16:14 -- common/autotest_common.sh@10 -- # set +x 00:12:05.396 ************************************ 00:12:05.396 START TEST nvmf_nmic 00:12:05.396 ************************************ 00:12:05.396 15:16:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:05.396 * Looking for test storage... 00:12:05.396 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:05.396 15:16:14 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:05.396 15:16:14 -- nvmf/common.sh@7 -- # uname -s 00:12:05.396 15:16:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.396 15:16:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.396 15:16:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.396 15:16:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.396 15:16:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.396 15:16:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.396 15:16:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.396 15:16:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.396 15:16:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.396 15:16:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.396 15:16:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:12:05.396 15:16:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:12:05.396 15:16:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.396 15:16:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.396 15:16:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:05.396 15:16:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.396 15:16:14 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:05.396 15:16:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.396 15:16:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.396 15:16:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.396 15:16:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.397 15:16:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:05.397 15:16:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.397 15:16:14 -- paths/export.sh@5 -- # export PATH 00:12:05.397 15:16:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.397 15:16:14 -- nvmf/common.sh@47 -- # : 0 00:12:05.397 15:16:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:05.397 15:16:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:05.397 15:16:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.397 15:16:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.397 15:16:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.397 15:16:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:05.397 15:16:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:05.397 15:16:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:05.397 15:16:14 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:05.397 15:16:14 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:05.397 15:16:14 -- target/nmic.sh@14 -- # nvmftestinit 00:12:05.397 15:16:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:05.397 15:16:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.397 15:16:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:05.397 15:16:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:05.397 15:16:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:05.397 15:16:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.397 15:16:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:05.397 15:16:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.397 15:16:14 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:05.397 15:16:14 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:05.397 15:16:14 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:05.397 15:16:14 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:05.397 15:16:14 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:05.397 15:16:14 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:05.397 15:16:14 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.397 15:16:14 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.397 15:16:14 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:05.397 15:16:14 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:05.397 15:16:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:05.397 15:16:14 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:05.397 
15:16:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:05.397 15:16:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.397 15:16:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:05.397 15:16:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:05.397 15:16:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:05.397 15:16:14 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:05.397 15:16:14 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:05.397 15:16:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:05.397 Cannot find device "nvmf_tgt_br" 00:12:05.397 15:16:14 -- nvmf/common.sh@155 -- # true 00:12:05.397 15:16:14 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:05.397 Cannot find device "nvmf_tgt_br2" 00:12:05.397 15:16:14 -- nvmf/common.sh@156 -- # true 00:12:05.397 15:16:14 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:05.397 15:16:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:05.397 Cannot find device "nvmf_tgt_br" 00:12:05.397 15:16:14 -- nvmf/common.sh@158 -- # true 00:12:05.397 15:16:14 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:05.397 Cannot find device "nvmf_tgt_br2" 00:12:05.397 15:16:14 -- nvmf/common.sh@159 -- # true 00:12:05.397 15:16:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:05.657 15:16:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:05.657 15:16:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:05.657 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:05.657 15:16:14 -- nvmf/common.sh@162 -- # true 00:12:05.657 15:16:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:05.657 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:05.657 15:16:14 -- nvmf/common.sh@163 -- # true 00:12:05.657 15:16:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:05.657 15:16:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:05.657 15:16:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:05.657 15:16:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:05.657 15:16:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:05.657 15:16:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:05.657 15:16:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:05.657 15:16:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:05.657 15:16:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:05.657 15:16:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:05.657 15:16:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:05.657 15:16:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:05.657 15:16:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:05.657 15:16:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:05.657 15:16:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:05.657 15:16:14 -- nvmf/common.sh@189 -- # 
ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:05.657 15:16:14 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:05.657 15:16:14 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:05.657 15:16:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:05.657 15:16:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:05.657 15:16:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:05.657 15:16:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:05.657 15:16:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:05.657 15:16:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:05.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:12:05.657 00:12:05.657 --- 10.0.0.2 ping statistics --- 00:12:05.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.657 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:12:05.657 15:16:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:05.657 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:05.657 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:12:05.657 00:12:05.657 --- 10.0.0.3 ping statistics --- 00:12:05.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.657 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:05.657 15:16:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:05.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:05.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:12:05.657 00:12:05.657 --- 10.0.0.1 ping statistics --- 00:12:05.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.657 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:05.657 15:16:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.657 15:16:14 -- nvmf/common.sh@422 -- # return 0 00:12:05.657 15:16:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:05.657 15:16:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.657 15:16:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:05.657 15:16:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:05.657 15:16:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.657 15:16:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:05.657 15:16:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:05.657 15:16:14 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:05.657 15:16:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:05.657 15:16:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:05.657 15:16:14 -- common/autotest_common.sh@10 -- # set +x 00:12:05.657 15:16:14 -- nvmf/common.sh@470 -- # nvmfpid=68507 00:12:05.657 15:16:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
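The nvmf_veth_init trace above builds a small two-namespace topology before the target is launched. A condensed sketch of the same setup, with interface names and addresses taken from the trace (link bring-up and the second target interface are omitted here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target reachability, as checked in the trace

The nvmf_tgt process then runs inside nvmf_tgt_ns_spdk and listens on 10.0.0.2:4420, while the initiator side stays in the default namespace on 10.0.0.1.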
00:12:05.657 15:16:14 -- nvmf/common.sh@471 -- # waitforlisten 68507 00:12:05.657 15:16:14 -- common/autotest_common.sh@817 -- # '[' -z 68507 ']' 00:12:05.657 15:16:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.657 15:16:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:05.657 15:16:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.657 15:16:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:05.657 15:16:14 -- common/autotest_common.sh@10 -- # set +x 00:12:05.916 [2024-04-24 15:16:14.918172] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:12:05.916 [2024-04-24 15:16:14.918415] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.916 [2024-04-24 15:16:15.059674] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.174 [2024-04-24 15:16:15.183150] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.174 [2024-04-24 15:16:15.183399] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.174 [2024-04-24 15:16:15.183553] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.174 [2024-04-24 15:16:15.183607] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.174 [2024-04-24 15:16:15.183704] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
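Because the target was started with -e 0xFFFF, all tracepoint groups are enabled, and the startup notice above spells out how to inspect them. A minimal sketch, assuming the spdk_trace tool was built alongside the target in this repo's build/bin directory:

  # snapshot while nvmf_tgt (shm id 0) is still running, as the notice suggests
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file around for offline analysis after the run
  cp /dev/shm/nvmf_trace.0 /tmp/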
00:12:06.174 [2024-04-24 15:16:15.183852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.174 [2024-04-24 15:16:15.184024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.175 [2024-04-24 15:16:15.184630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.175 [2024-04-24 15:16:15.184648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.753 15:16:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:06.753 15:16:15 -- common/autotest_common.sh@850 -- # return 0 00:12:06.753 15:16:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:06.753 15:16:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:06.753 15:16:15 -- common/autotest_common.sh@10 -- # set +x 00:12:06.753 15:16:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.753 15:16:15 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:06.753 15:16:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.753 15:16:15 -- common/autotest_common.sh@10 -- # set +x 00:12:06.753 [2024-04-24 15:16:15.939008] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.753 15:16:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.753 15:16:15 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:06.753 15:16:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.753 15:16:15 -- common/autotest_common.sh@10 -- # set +x 00:12:06.753 Malloc0 00:12:06.753 15:16:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.753 15:16:15 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:06.753 15:16:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.753 15:16:15 -- common/autotest_common.sh@10 -- # set +x 00:12:07.012 15:16:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.012 15:16:16 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:07.012 15:16:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.012 15:16:16 -- common/autotest_common.sh@10 -- # set +x 00:12:07.012 15:16:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.012 15:16:16 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.012 15:16:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.012 15:16:16 -- common/autotest_common.sh@10 -- # set +x 00:12:07.012 [2024-04-24 15:16:16.019618] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.012 test case1: single bdev can't be used in multiple subsystems 00:12:07.012 15:16:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.012 15:16:16 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:07.012 15:16:16 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:07.012 15:16:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.012 15:16:16 -- common/autotest_common.sh@10 -- # set +x 00:12:07.012 15:16:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.012 15:16:16 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:07.012 15:16:16 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:12:07.012 15:16:16 -- common/autotest_common.sh@10 -- # set +x 00:12:07.012 15:16:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.012 15:16:16 -- target/nmic.sh@28 -- # nmic_status=0 00:12:07.012 15:16:16 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:07.012 15:16:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.012 15:16:16 -- common/autotest_common.sh@10 -- # set +x 00:12:07.012 [2024-04-24 15:16:16.047449] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:07.012 [2024-04-24 15:16:16.047490] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:07.012 [2024-04-24 15:16:16.047503] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.012 request: 00:12:07.012 { 00:12:07.012 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:07.012 "namespace": { 00:12:07.012 "bdev_name": "Malloc0", 00:12:07.012 "no_auto_visible": false 00:12:07.012 }, 00:12:07.012 "method": "nvmf_subsystem_add_ns", 00:12:07.012 "req_id": 1 00:12:07.012 } 00:12:07.012 Got JSON-RPC error response 00:12:07.012 response: 00:12:07.012 { 00:12:07.012 "code": -32602, 00:12:07.012 "message": "Invalid parameters" 00:12:07.012 } 00:12:07.012 Adding namespace failed - expected result. 00:12:07.012 test case2: host connect to nvmf target in multiple paths 00:12:07.012 15:16:16 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:12:07.012 15:16:16 -- target/nmic.sh@29 -- # nmic_status=1 00:12:07.012 15:16:16 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:07.012 15:16:16 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
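The error above is the expected outcome of test case 1: when Malloc0 was added as a namespace of cnode1 it was claimed exclusive_write by the NVMe-oF target module, so the second nvmf_subsystem_add_ns against cnode2 cannot open the bdev and the RPC fails with -32602 ("Invalid parameters"). The rpc_cmd calls traced above boil down to the following sequence, shown here with scripts/rpc.py directly (the behaviour is the point, not the exact test wrapper):

  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # cnode1 claims Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: bdev already claimed

The test records the non-zero status (nmic_status=1) and treats the failure as a pass.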
00:12:07.012 15:16:16 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:07.012 15:16:16 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:07.012 15:16:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:07.012 15:16:16 -- common/autotest_common.sh@10 -- # set +x 00:12:07.012 [2024-04-24 15:16:16.063609] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:07.012 15:16:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:07.012 15:16:16 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab --hostid=bfc76e61-7421-4d42-8df7-48fb051b4cab -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.012 15:16:16 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab --hostid=bfc76e61-7421-4d42-8df7-48fb051b4cab -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:07.272 15:16:16 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:07.272 15:16:16 -- common/autotest_common.sh@1184 -- # local i=0 00:12:07.272 15:16:16 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.272 15:16:16 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:07.272 15:16:16 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:09.174 15:16:18 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:09.174 15:16:18 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:09.174 15:16:18 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.175 15:16:18 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:09.175 15:16:18 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.175 15:16:18 -- common/autotest_common.sh@1194 -- # return 0 00:12:09.175 15:16:18 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:09.175 [global] 00:12:09.175 thread=1 00:12:09.175 invalidate=1 00:12:09.175 rw=write 00:12:09.175 time_based=1 00:12:09.175 runtime=1 00:12:09.175 ioengine=libaio 00:12:09.175 direct=1 00:12:09.175 bs=4096 00:12:09.175 iodepth=1 00:12:09.175 norandommap=0 00:12:09.175 numjobs=1 00:12:09.175 00:12:09.175 verify_dump=1 00:12:09.175 verify_backlog=512 00:12:09.175 verify_state_save=0 00:12:09.175 do_verify=1 00:12:09.175 verify=crc32c-intel 00:12:09.175 [job0] 00:12:09.175 filename=/dev/nvme0n1 00:12:09.175 Could not set queue depth (nvme0n1) 00:12:09.433 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.433 fio-3.35 00:12:09.433 Starting 1 thread 00:12:10.809 00:12:10.809 job0: (groupid=0, jobs=1): err= 0: pid=68599: Wed Apr 24 15:16:19 2024 00:12:10.809 read: IOPS=2701, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec) 00:12:10.809 slat (usec): min=12, max=109, avg=18.87, stdev= 6.70 00:12:10.809 clat (usec): min=142, max=1426, avg=184.46, stdev=39.86 00:12:10.809 lat (usec): min=156, max=1446, avg=203.34, stdev=41.23 00:12:10.809 clat percentiles (usec): 00:12:10.809 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:12:10.809 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:12:10.809 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 215], 00:12:10.809 | 99.00th=[ 245], 99.50th=[ 285], 99.90th=[ 603], 
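The job file printed above is what fio-wrapper generates for '-p nvmf -i 4096 -d 1 -t write -r 1 -v': a single libaio job with 4096-byte blocks, queue depth 1, a one-second time-based write pass and crc32c-intel verification, aimed at the /dev/nvme0n1 device that the two connect calls (ports 4420 and 4421, test case 2's two paths to cnode1) just surfaced. Roughly the same workload as a standalone fio invocation, sketched from the job-file keys above rather than taken from the wrapper itself:

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
      --time_based --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512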
99.95th=[ 1205], 00:12:10.809 | 99.99th=[ 1434] 00:12:10.809 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:10.809 slat (nsec): min=17829, max=81690, avg=24923.38, stdev=5900.05 00:12:10.809 clat (usec): min=84, max=7829, avg=117.38, stdev=211.08 00:12:10.809 lat (usec): min=105, max=7858, avg=142.31, stdev=211.48 00:12:10.809 clat percentiles (usec): 00:12:10.809 | 1.00th=[ 90], 5.00th=[ 94], 10.00th=[ 97], 20.00th=[ 100], 00:12:10.809 | 30.00th=[ 103], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 111], 00:12:10.809 | 70.00th=[ 114], 80.00th=[ 119], 90.00th=[ 127], 95.00th=[ 135], 00:12:10.809 | 99.00th=[ 151], 99.50th=[ 159], 99.90th=[ 3130], 99.95th=[ 7373], 00:12:10.809 | 99.99th=[ 7832] 00:12:10.809 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:12:10.809 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:10.809 lat (usec) : 100=11.22%, 250=88.23%, 500=0.35%, 750=0.07%, 1000=0.02% 00:12:10.809 lat (msec) : 2=0.05%, 4=0.03%, 10=0.03% 00:12:10.809 cpu : usr=2.40%, sys=10.30%, ctx=5776, majf=0, minf=2 00:12:10.809 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.809 issued rwts: total=2704,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.809 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.809 00:12:10.809 Run status group 0 (all jobs): 00:12:10.809 READ: bw=10.6MiB/s (11.1MB/s), 10.6MiB/s-10.6MiB/s (11.1MB/s-11.1MB/s), io=10.6MiB (11.1MB), run=1001-1001msec 00:12:10.809 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:12:10.809 00:12:10.809 Disk stats (read/write): 00:12:10.809 nvme0n1: ios=2586/2560, merge=0/0, ticks=486/314, in_queue=800, util=90.58% 00:12:10.809 15:16:19 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:10.809 15:16:19 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.809 15:16:19 -- common/autotest_common.sh@1205 -- # local i=0 00:12:10.809 15:16:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.809 15:16:19 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:10.809 15:16:19 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.809 15:16:19 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:10.809 15:16:19 -- common/autotest_common.sh@1217 -- # return 0 00:12:10.809 15:16:19 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:10.809 15:16:19 -- target/nmic.sh@53 -- # nvmftestfini 00:12:10.809 15:16:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:10.809 15:16:19 -- nvmf/common.sh@117 -- # sync 00:12:10.810 15:16:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:10.810 15:16:19 -- nvmf/common.sh@120 -- # set +e 00:12:10.810 15:16:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:10.810 15:16:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:10.810 rmmod nvme_tcp 00:12:10.810 rmmod nvme_fabrics 00:12:10.810 rmmod nvme_keyring 00:12:10.810 15:16:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:10.810 15:16:19 -- nvmf/common.sh@124 -- # set -e 00:12:10.810 15:16:19 -- nvmf/common.sh@125 -- # return 0 00:12:10.810 15:16:19 -- 
nvmf/common.sh@478 -- # '[' -n 68507 ']' 00:12:10.810 15:16:19 -- nvmf/common.sh@479 -- # killprocess 68507 00:12:10.810 15:16:19 -- common/autotest_common.sh@936 -- # '[' -z 68507 ']' 00:12:10.810 15:16:19 -- common/autotest_common.sh@940 -- # kill -0 68507 00:12:10.810 15:16:19 -- common/autotest_common.sh@941 -- # uname 00:12:10.810 15:16:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:10.810 15:16:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68507 00:12:10.810 killing process with pid 68507 00:12:10.810 15:16:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:10.810 15:16:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:10.810 15:16:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68507' 00:12:10.810 15:16:19 -- common/autotest_common.sh@955 -- # kill 68507 00:12:10.810 15:16:19 -- common/autotest_common.sh@960 -- # wait 68507 00:12:11.068 15:16:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:11.068 15:16:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:11.068 15:16:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:11.068 15:16:20 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.068 15:16:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:11.068 15:16:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.068 15:16:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.068 15:16:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.068 15:16:20 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:11.068 00:12:11.068 real 0m5.770s 00:12:11.068 user 0m18.404s 00:12:11.068 sys 0m2.151s 00:12:11.068 15:16:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:11.068 ************************************ 00:12:11.068 15:16:20 -- common/autotest_common.sh@10 -- # set +x 00:12:11.068 END TEST nvmf_nmic 00:12:11.068 ************************************ 00:12:11.068 15:16:20 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:11.068 15:16:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:11.068 15:16:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:11.068 15:16:20 -- common/autotest_common.sh@10 -- # set +x 00:12:11.068 ************************************ 00:12:11.068 START TEST nvmf_fio_target 00:12:11.068 ************************************ 00:12:11.068 15:16:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:11.327 * Looking for test storage... 
00:12:11.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:11.327 15:16:20 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:11.327 15:16:20 -- nvmf/common.sh@7 -- # uname -s 00:12:11.327 15:16:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.327 15:16:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.327 15:16:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.327 15:16:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.327 15:16:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.327 15:16:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.327 15:16:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.327 15:16:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.327 15:16:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.327 15:16:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.327 15:16:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:12:11.327 15:16:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:12:11.327 15:16:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.327 15:16:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.327 15:16:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:11.327 15:16:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.327 15:16:20 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:11.327 15:16:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.327 15:16:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.327 15:16:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.327 15:16:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.327 15:16:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.327 15:16:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.327 15:16:20 -- paths/export.sh@5 -- # export PATH 00:12:11.327 15:16:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.327 15:16:20 -- nvmf/common.sh@47 -- # : 0 00:12:11.327 15:16:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:11.327 15:16:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:11.327 15:16:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.327 15:16:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.327 15:16:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.327 15:16:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:11.327 15:16:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:11.327 15:16:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:11.327 15:16:20 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:11.327 15:16:20 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:11.327 15:16:20 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:11.327 15:16:20 -- target/fio.sh@16 -- # nvmftestinit 00:12:11.327 15:16:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:11.327 15:16:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.327 15:16:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:11.327 15:16:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:11.327 15:16:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:11.327 15:16:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.327 15:16:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.327 15:16:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.327 15:16:20 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:11.327 15:16:20 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:11.327 15:16:20 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:11.327 15:16:20 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:11.327 15:16:20 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:11.327 15:16:20 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:11.327 15:16:20 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.327 15:16:20 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.327 15:16:20 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:11.327 15:16:20 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:11.327 15:16:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:11.327 15:16:20 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:11.327 15:16:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:11.327 15:16:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.327 15:16:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:11.327 15:16:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:11.327 15:16:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:11.327 15:16:20 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:11.327 15:16:20 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:11.327 15:16:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:11.327 Cannot find device "nvmf_tgt_br" 00:12:11.327 15:16:20 -- nvmf/common.sh@155 -- # true 00:12:11.327 15:16:20 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:11.327 Cannot find device "nvmf_tgt_br2" 00:12:11.327 15:16:20 -- nvmf/common.sh@156 -- # true 00:12:11.327 15:16:20 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:11.327 15:16:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:11.327 Cannot find device "nvmf_tgt_br" 00:12:11.327 15:16:20 -- nvmf/common.sh@158 -- # true 00:12:11.327 15:16:20 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:11.327 Cannot find device "nvmf_tgt_br2" 00:12:11.327 15:16:20 -- nvmf/common.sh@159 -- # true 00:12:11.327 15:16:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:11.327 15:16:20 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:11.327 15:16:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:11.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:11.327 15:16:20 -- nvmf/common.sh@162 -- # true 00:12:11.327 15:16:20 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:11.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:11.600 15:16:20 -- nvmf/common.sh@163 -- # true 00:12:11.600 15:16:20 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:11.600 15:16:20 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:11.600 15:16:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:11.600 15:16:20 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:11.600 15:16:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:11.600 15:16:20 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:11.600 15:16:20 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:11.600 15:16:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:11.600 15:16:20 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:11.600 15:16:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:11.600 15:16:20 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:11.600 15:16:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:11.600 15:16:20 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:11.600 15:16:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:11.600 15:16:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:12:11.600 15:16:20 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:11.600 15:16:20 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:11.600 15:16:20 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:11.600 15:16:20 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:11.600 15:16:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:11.600 15:16:20 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:11.600 15:16:20 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:11.600 15:16:20 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:11.600 15:16:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:11.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:12:11.600 00:12:11.600 --- 10.0.0.2 ping statistics --- 00:12:11.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.600 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:12:11.600 15:16:20 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:11.600 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:11.600 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:12:11.600 00:12:11.600 --- 10.0.0.3 ping statistics --- 00:12:11.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.601 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:11.601 15:16:20 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:11.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:11.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:12:11.601 00:12:11.601 --- 10.0.0.1 ping statistics --- 00:12:11.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.601 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:12:11.601 15:16:20 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.601 15:16:20 -- nvmf/common.sh@422 -- # return 0 00:12:11.601 15:16:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:11.601 15:16:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.601 15:16:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:11.601 15:16:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:11.601 15:16:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.601 15:16:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:11.601 15:16:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:11.601 15:16:20 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:11.601 15:16:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:11.601 15:16:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:11.601 15:16:20 -- common/autotest_common.sh@10 -- # set +x 00:12:11.601 15:16:20 -- nvmf/common.sh@470 -- # nvmfpid=68783 00:12:11.601 15:16:20 -- nvmf/common.sh@471 -- # waitforlisten 68783 00:12:11.601 15:16:20 -- common/autotest_common.sh@817 -- # '[' -z 68783 ']' 00:12:11.601 15:16:20 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.601 15:16:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
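As with the nmic test above, nvmfappstart prefixes NVMF_APP with NVMF_TARGET_NS_CMD, so the whole nvmf_tgt process (reactor mask 0xF, all tracepoint groups enabled) runs inside nvmf_tgt_ns_spdk and its listeners bind 10.0.0.2/10.0.0.3 there, while the initiator-side nvme and fio commands stay in the root namespace on 10.0.0.1. waitforlisten then simply waits for the JSON-RPC socket before any rpc.py calls are issued. A stripped-down sketch of that start-up, with paths as in the trace (the polling loop is illustrative, not the helper's exact code):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

  # wait for the app's JSON-RPC UNIX socket before talking to it
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done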
00:12:11.601 15:16:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:11.601 15:16:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.601 15:16:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:11.601 15:16:20 -- common/autotest_common.sh@10 -- # set +x 00:12:11.860 [2024-04-24 15:16:20.860189] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:12:11.860 [2024-04-24 15:16:20.860318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.860 [2024-04-24 15:16:21.005803] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.130 [2024-04-24 15:16:21.143534] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.130 [2024-04-24 15:16:21.143887] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.130 [2024-04-24 15:16:21.144231] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.130 [2024-04-24 15:16:21.144470] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.130 [2024-04-24 15:16:21.144590] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.130 [2024-04-24 15:16:21.144897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.130 [2024-04-24 15:16:21.145026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.130 [2024-04-24 15:16:21.145114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.130 [2024-04-24 15:16:21.145113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.696 15:16:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:12.696 15:16:21 -- common/autotest_common.sh@850 -- # return 0 00:12:12.696 15:16:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:12.696 15:16:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:12.696 15:16:21 -- common/autotest_common.sh@10 -- # set +x 00:12:12.697 15:16:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.697 15:16:21 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:12.954 [2024-04-24 15:16:22.170947] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.213 15:16:22 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:13.472 15:16:22 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:13.472 15:16:22 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:13.731 15:16:22 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:13.731 15:16:22 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:13.989 15:16:23 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:13.989 15:16:23 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:14.247 15:16:23 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:14.247 15:16:23 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:14.506 15:16:23 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:14.765 15:16:23 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:14.765 15:16:23 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:15.023 15:16:24 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:15.023 15:16:24 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:15.281 15:16:24 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:15.281 15:16:24 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:15.539 15:16:24 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:15.797 15:16:24 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:15.797 15:16:24 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:16.055 15:16:25 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:16.055 15:16:25 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.313 15:16:25 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.572 [2024-04-24 15:16:25.611264] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.572 15:16:25 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:16.831 15:16:25 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:17.090 15:16:26 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab --hostid=bfc76e61-7421-4d42-8df7-48fb051b4cab -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.090 15:16:26 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:17.090 15:16:26 -- common/autotest_common.sh@1184 -- # local i=0 00:12:17.090 15:16:26 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.090 15:16:26 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:12:17.090 15:16:26 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:12:17.090 15:16:26 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:19.633 15:16:28 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:19.633 15:16:28 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:19.633 15:16:28 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.633 15:16:28 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:12:19.633 15:16:28 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.633 15:16:28 -- common/autotest_common.sh@1194 -- # return 0 00:12:19.633 15:16:28 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:19.633 [global] 00:12:19.633 thread=1 00:12:19.633 invalidate=1 00:12:19.633 rw=write 00:12:19.633 time_based=1 00:12:19.633 runtime=1 00:12:19.633 
ioengine=libaio 00:12:19.633 direct=1 00:12:19.633 bs=4096 00:12:19.633 iodepth=1 00:12:19.633 norandommap=0 00:12:19.633 numjobs=1 00:12:19.633 00:12:19.633 verify_dump=1 00:12:19.633 verify_backlog=512 00:12:19.633 verify_state_save=0 00:12:19.633 do_verify=1 00:12:19.633 verify=crc32c-intel 00:12:19.633 [job0] 00:12:19.633 filename=/dev/nvme0n1 00:12:19.633 [job1] 00:12:19.633 filename=/dev/nvme0n2 00:12:19.633 [job2] 00:12:19.633 filename=/dev/nvme0n3 00:12:19.633 [job3] 00:12:19.633 filename=/dev/nvme0n4 00:12:19.633 Could not set queue depth (nvme0n1) 00:12:19.633 Could not set queue depth (nvme0n2) 00:12:19.633 Could not set queue depth (nvme0n3) 00:12:19.633 Could not set queue depth (nvme0n4) 00:12:19.633 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:19.633 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:19.633 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:19.633 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:19.633 fio-3.35 00:12:19.633 Starting 4 threads 00:12:20.569 00:12:20.569 job0: (groupid=0, jobs=1): err= 0: pid=68973: Wed Apr 24 15:16:29 2024 00:12:20.569 read: IOPS=2786, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:12:20.569 slat (nsec): min=12896, max=64524, avg=17424.65, stdev=4516.84 00:12:20.569 clat (usec): min=139, max=2129, avg=169.40, stdev=39.02 00:12:20.569 lat (usec): min=154, max=2156, avg=186.83, stdev=39.63 00:12:20.569 clat percentiles (usec): 00:12:20.569 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:12:20.569 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:12:20.569 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 188], 00:12:20.569 | 99.00th=[ 196], 99.50th=[ 202], 99.90th=[ 273], 99.95th=[ 445], 00:12:20.569 | 99.99th=[ 2114] 00:12:20.569 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:20.569 slat (usec): min=15, max=112, avg=23.56, stdev= 5.57 00:12:20.570 clat (usec): min=95, max=2354, avg=128.14, stdev=44.01 00:12:20.570 lat (usec): min=116, max=2397, avg=151.70, stdev=44.69 00:12:20.570 clat percentiles (usec): 00:12:20.570 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 116], 20.00th=[ 119], 00:12:20.570 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 129], 00:12:20.570 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 145], 00:12:20.570 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 343], 99.95th=[ 799], 00:12:20.570 | 99.99th=[ 2343] 00:12:20.570 bw ( KiB/s): min=12288, max=12288, per=25.09%, avg=12288.00, stdev= 0.00, samples=1 00:12:20.570 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:20.570 lat (usec) : 100=0.10%, 250=99.76%, 500=0.07%, 750=0.02%, 1000=0.02% 00:12:20.570 lat (msec) : 4=0.03% 00:12:20.570 cpu : usr=2.70%, sys=9.50%, ctx=5861, majf=0, minf=8 00:12:20.570 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:20.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.570 issued rwts: total=2789,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:20.570 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:20.570 job1: (groupid=0, jobs=1): err= 0: pid=68974: Wed Apr 24 15:16:29 2024 00:12:20.570 read: IOPS=2831, 
BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:12:20.570 slat (nsec): min=13126, max=82819, avg=15984.29, stdev=3647.95 00:12:20.570 clat (usec): min=138, max=488, avg=170.39, stdev=12.83 00:12:20.570 lat (usec): min=153, max=503, avg=186.38, stdev=13.34 00:12:20.570 clat percentiles (usec): 00:12:20.570 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:12:20.570 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:12:20.570 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 190], 00:12:20.570 | 99.00th=[ 204], 99.50th=[ 208], 99.90th=[ 219], 99.95th=[ 229], 00:12:20.570 | 99.99th=[ 490] 00:12:20.570 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:20.570 slat (usec): min=15, max=122, avg=21.65, stdev= 3.80 00:12:20.570 clat (usec): min=95, max=734, avg=128.17, stdev=15.55 00:12:20.570 lat (usec): min=116, max=755, avg=149.82, stdev=16.15 00:12:20.570 clat percentiles (usec): 00:12:20.570 | 1.00th=[ 108], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 120], 00:12:20.570 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 130], 00:12:20.570 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 147], 00:12:20.570 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 200], 99.95th=[ 269], 00:12:20.570 | 99.99th=[ 734] 00:12:20.570 bw ( KiB/s): min=12288, max=12288, per=25.09%, avg=12288.00, stdev= 0.00, samples=1 00:12:20.570 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:20.570 lat (usec) : 100=0.03%, 250=99.90%, 500=0.05%, 750=0.02% 00:12:20.570 cpu : usr=2.30%, sys=8.90%, ctx=5906, majf=0, minf=7 00:12:20.570 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:20.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.570 issued rwts: total=2834,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:20.570 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:20.570 job2: (groupid=0, jobs=1): err= 0: pid=68975: Wed Apr 24 15:16:29 2024 00:12:20.570 read: IOPS=2594, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec) 00:12:20.570 slat (nsec): min=12527, max=38700, avg=15267.26, stdev=2671.52 00:12:20.570 clat (usec): min=146, max=1817, avg=177.70, stdev=36.11 00:12:20.570 lat (usec): min=159, max=1856, avg=192.97, stdev=36.78 00:12:20.570 clat percentiles (usec): 00:12:20.570 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:12:20.570 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:12:20.570 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:12:20.570 | 99.00th=[ 208], 99.50th=[ 217], 99.90th=[ 482], 99.95th=[ 553], 00:12:20.570 | 99.99th=[ 1811] 00:12:20.570 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:20.570 slat (usec): min=14, max=120, avg=22.44, stdev= 5.34 00:12:20.570 clat (usec): min=101, max=746, avg=136.62, stdev=17.41 00:12:20.570 lat (usec): min=121, max=766, avg=159.06, stdev=18.70 00:12:20.570 clat percentiles (usec): 00:12:20.570 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 123], 20.00th=[ 127], 00:12:20.570 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:12:20.570 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 151], 95.00th=[ 157], 00:12:20.570 | 99.00th=[ 172], 99.50th=[ 180], 99.90th=[ 273], 99.95th=[ 375], 00:12:20.570 | 99.99th=[ 750] 00:12:20.570 bw ( KiB/s): min=12288, max=12288, per=25.09%, avg=12288.00, stdev= 0.00, samples=1 00:12:20.570 
iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:20.570 lat (usec) : 250=99.79%, 500=0.16%, 750=0.04% 00:12:20.570 lat (msec) : 2=0.02% 00:12:20.570 cpu : usr=2.80%, sys=8.00%, ctx=5670, majf=0, minf=13 00:12:20.570 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:20.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.570 issued rwts: total=2597,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:20.570 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:20.570 job3: (groupid=0, jobs=1): err= 0: pid=68976: Wed Apr 24 15:16:29 2024 00:12:20.570 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:20.570 slat (nsec): min=12417, max=35506, avg=15800.44, stdev=2566.07 00:12:20.570 clat (usec): min=142, max=560, avg=178.63, stdev=20.05 00:12:20.570 lat (usec): min=155, max=587, avg=194.43, stdev=20.49 00:12:20.570 clat percentiles (usec): 00:12:20.570 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:12:20.570 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:12:20.570 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:12:20.570 | 99.00th=[ 223], 99.50th=[ 297], 99.90th=[ 412], 99.95th=[ 519], 00:12:20.570 | 99.99th=[ 562] 00:12:20.570 write: IOPS=3037, BW=11.9MiB/s (12.4MB/s)(11.9MiB/1001msec); 0 zone resets 00:12:20.570 slat (usec): min=15, max=122, avg=23.19, stdev= 5.52 00:12:20.570 clat (usec): min=104, max=1036, avg=138.31, stdev=21.07 00:12:20.570 lat (usec): min=123, max=1056, avg=161.50, stdev=22.07 00:12:20.570 clat percentiles (usec): 00:12:20.570 | 1.00th=[ 115], 5.00th=[ 121], 10.00th=[ 125], 20.00th=[ 129], 00:12:20.570 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:12:20.570 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 159], 00:12:20.570 | 99.00th=[ 172], 99.50th=[ 180], 99.90th=[ 273], 99.95th=[ 371], 00:12:20.570 | 99.99th=[ 1037] 00:12:20.570 bw ( KiB/s): min=12288, max=12288, per=25.09%, avg=12288.00, stdev= 0.00, samples=1 00:12:20.570 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:20.570 lat (usec) : 250=99.54%, 500=0.41%, 750=0.04% 00:12:20.570 lat (msec) : 2=0.02% 00:12:20.570 cpu : usr=2.60%, sys=8.40%, ctx=5605, majf=0, minf=7 00:12:20.570 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:20.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.570 issued rwts: total=2560,3041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:20.570 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:20.570 00:12:20.570 Run status group 0 (all jobs): 00:12:20.570 READ: bw=42.1MiB/s (44.1MB/s), 9.99MiB/s-11.1MiB/s (10.5MB/s-11.6MB/s), io=42.1MiB (44.2MB), run=1001-1001msec 00:12:20.570 WRITE: bw=47.8MiB/s (50.2MB/s), 11.9MiB/s-12.0MiB/s (12.4MB/s-12.6MB/s), io=47.9MiB (50.2MB), run=1001-1001msec 00:12:20.570 00:12:20.570 Disk stats (read/write): 00:12:20.570 nvme0n1: ios=2373/2560, merge=0/0, ticks=429/347, in_queue=776, util=86.04% 00:12:20.570 nvme0n2: ios=2396/2560, merge=0/0, ticks=432/352, in_queue=784, util=86.46% 00:12:20.570 nvme0n3: ios=2165/2560, merge=0/0, ticks=394/384, in_queue=778, util=88.77% 00:12:20.570 nvme0n4: ios=2126/2560, merge=0/0, ticks=393/369, in_queue=762, util=89.43% 00:12:20.570 15:16:29 -- target/fio.sh@51 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:20.570 [global] 00:12:20.570 thread=1 00:12:20.570 invalidate=1 00:12:20.570 rw=randwrite 00:12:20.570 time_based=1 00:12:20.570 runtime=1 00:12:20.570 ioengine=libaio 00:12:20.570 direct=1 00:12:20.570 bs=4096 00:12:20.570 iodepth=1 00:12:20.570 norandommap=0 00:12:20.570 numjobs=1 00:12:20.570 00:12:20.570 verify_dump=1 00:12:20.570 verify_backlog=512 00:12:20.570 verify_state_save=0 00:12:20.570 do_verify=1 00:12:20.570 verify=crc32c-intel 00:12:20.570 [job0] 00:12:20.570 filename=/dev/nvme0n1 00:12:20.570 [job1] 00:12:20.570 filename=/dev/nvme0n2 00:12:20.570 [job2] 00:12:20.570 filename=/dev/nvme0n3 00:12:20.570 [job3] 00:12:20.570 filename=/dev/nvme0n4 00:12:20.570 Could not set queue depth (nvme0n1) 00:12:20.570 Could not set queue depth (nvme0n2) 00:12:20.570 Could not set queue depth (nvme0n3) 00:12:20.570 Could not set queue depth (nvme0n4) 00:12:20.843 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:20.844 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:20.844 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:20.844 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:20.844 fio-3.35 00:12:20.844 Starting 4 threads 00:12:21.822 00:12:21.822 job0: (groupid=0, jobs=1): err= 0: pid=69029: Wed Apr 24 15:16:31 2024 00:12:21.822 read: IOPS=2053, BW=8216KiB/s (8413kB/s)(8224KiB/1001msec) 00:12:21.822 slat (nsec): min=9467, max=47714, avg=14269.23, stdev=2896.81 00:12:21.822 clat (usec): min=136, max=1040, avg=231.96, stdev=66.24 00:12:21.822 lat (usec): min=149, max=1051, avg=246.23, stdev=66.17 00:12:21.822 clat percentiles (usec): 00:12:21.822 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 165], 00:12:21.822 | 30.00th=[ 174], 40.00th=[ 215], 50.00th=[ 241], 60.00th=[ 249], 00:12:21.822 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 343], 95.00th=[ 355], 00:12:21.822 | 99.00th=[ 375], 99.50th=[ 379], 99.90th=[ 486], 99.95th=[ 553], 00:12:21.822 | 99.99th=[ 1037] 00:12:21.822 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:21.822 slat (usec): min=12, max=105, avg=21.28, stdev= 4.97 00:12:21.822 clat (usec): min=89, max=878, avg=168.21, stdev=47.79 00:12:21.822 lat (usec): min=114, max=917, avg=189.49, stdev=47.54 00:12:21.822 clat percentiles (usec): 00:12:21.822 | 1.00th=[ 103], 5.00th=[ 112], 10.00th=[ 118], 20.00th=[ 125], 00:12:21.822 | 30.00th=[ 133], 40.00th=[ 141], 50.00th=[ 161], 60.00th=[ 186], 00:12:21.822 | 70.00th=[ 196], 80.00th=[ 212], 90.00th=[ 231], 95.00th=[ 241], 00:12:21.822 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 537], 99.95th=[ 660], 00:12:21.822 | 99.99th=[ 881] 00:12:21.822 bw ( KiB/s): min=12288, max=12288, per=28.88%, avg=12288.00, stdev= 0.00, samples=1 00:12:21.822 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:21.822 lat (usec) : 100=0.28%, 250=80.96%, 500=18.65%, 750=0.06%, 1000=0.02% 00:12:21.822 lat (msec) : 2=0.02% 00:12:21.822 cpu : usr=1.50%, sys=7.40%, ctx=4616, majf=0, minf=17 00:12:21.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:21.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:12:21.822 issued rwts: total=2056,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:21.822 job1: (groupid=0, jobs=1): err= 0: pid=69030: Wed Apr 24 15:16:31 2024 00:12:21.822 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:21.822 slat (nsec): min=12843, max=57405, avg=16012.26, stdev=4089.39 00:12:21.822 clat (usec): min=136, max=1698, avg=181.95, stdev=50.94 00:12:21.822 lat (usec): min=150, max=1712, avg=197.96, stdev=51.38 00:12:21.822 clat percentiles (usec): 00:12:21.822 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:12:21.822 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:12:21.822 | 70.00th=[ 176], 80.00th=[ 186], 90.00th=[ 265], 95.00th=[ 273], 00:12:21.822 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 482], 99.95th=[ 742], 00:12:21.822 | 99.99th=[ 1696] 00:12:21.822 write: IOPS=2966, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec); 0 zone resets 00:12:21.822 slat (usec): min=14, max=107, avg=22.30, stdev= 5.66 00:12:21.822 clat (usec): min=91, max=445, avg=139.98, stdev=38.47 00:12:21.822 lat (usec): min=113, max=490, avg=162.28, stdev=40.54 00:12:21.822 clat percentiles (usec): 00:12:21.822 | 1.00th=[ 101], 5.00th=[ 106], 10.00th=[ 111], 20.00th=[ 116], 00:12:21.822 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 130], 00:12:21.822 | 70.00th=[ 135], 80.00th=[ 172], 90.00th=[ 206], 95.00th=[ 217], 00:12:21.822 | 99.00th=[ 243], 99.50th=[ 314], 99.90th=[ 379], 99.95th=[ 400], 00:12:21.822 | 99.99th=[ 445] 00:12:21.822 bw ( KiB/s): min=11824, max=11824, per=27.79%, avg=11824.00, stdev= 0.00, samples=1 00:12:21.822 iops : min= 2956, max= 2956, avg=2956.00, stdev= 0.00, samples=1 00:12:21.822 lat (usec) : 100=0.45%, 250=92.73%, 500=6.78%, 750=0.02% 00:12:21.822 lat (msec) : 2=0.02% 00:12:21.822 cpu : usr=2.70%, sys=8.00%, ctx=5529, majf=0, minf=13 00:12:21.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:21.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.822 issued rwts: total=2560,2969,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:21.822 job2: (groupid=0, jobs=1): err= 0: pid=69031: Wed Apr 24 15:16:31 2024 00:12:21.822 read: IOPS=1761, BW=7045KiB/s (7214kB/s)(7052KiB/1001msec) 00:12:21.822 slat (nsec): min=9507, max=62595, avg=16895.45, stdev=6363.52 00:12:21.823 clat (usec): min=164, max=7509, avg=291.50, stdev=244.62 00:12:21.823 lat (usec): min=177, max=7534, avg=308.39, stdev=245.60 00:12:21.823 clat percentiles (usec): 00:12:21.823 | 1.00th=[ 212], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 245], 00:12:21.823 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:12:21.823 | 70.00th=[ 273], 80.00th=[ 322], 90.00th=[ 347], 95.00th=[ 429], 00:12:21.823 | 99.00th=[ 515], 99.50th=[ 725], 99.90th=[ 4228], 99.95th=[ 7504], 00:12:21.823 | 99.99th=[ 7504] 00:12:21.823 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:21.823 slat (usec): min=11, max=100, avg=20.77, stdev= 7.08 00:12:21.823 clat (usec): min=110, max=374, avg=198.27, stdev=29.36 00:12:21.823 lat (usec): min=148, max=474, avg=219.04, stdev=29.78 00:12:21.823 clat percentiles (usec): 00:12:21.823 | 1.00th=[ 137], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 174], 00:12:21.823 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 
206], 00:12:21.823 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 245], 00:12:21.823 | 99.00th=[ 258], 99.50th=[ 262], 99.90th=[ 273], 99.95th=[ 302], 00:12:21.823 | 99.99th=[ 375] 00:12:21.823 bw ( KiB/s): min= 8192, max= 8192, per=19.25%, avg=8192.00, stdev= 0.00, samples=1 00:12:21.823 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:21.823 lat (usec) : 250=66.96%, 500=32.46%, 750=0.37%, 1000=0.05% 00:12:21.823 lat (msec) : 4=0.10%, 10=0.05% 00:12:21.823 cpu : usr=1.40%, sys=6.40%, ctx=3811, majf=0, minf=9 00:12:21.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:21.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.823 issued rwts: total=1763,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:21.823 job3: (groupid=0, jobs=1): err= 0: pid=69032: Wed Apr 24 15:16:31 2024 00:12:21.823 read: IOPS=2641, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:12:21.823 slat (nsec): min=12402, max=36126, avg=14597.61, stdev=1861.90 00:12:21.823 clat (usec): min=146, max=452, avg=175.19, stdev=16.83 00:12:21.823 lat (usec): min=159, max=471, avg=189.79, stdev=17.23 00:12:21.823 clat percentiles (usec): 00:12:21.823 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:12:21.823 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:12:21.823 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:12:21.823 | 99.00th=[ 210], 99.50th=[ 223], 99.90th=[ 416], 99.95th=[ 424], 00:12:21.823 | 99.99th=[ 453] 00:12:21.823 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:21.823 slat (usec): min=14, max=379, avg=21.75, stdev= 9.26 00:12:21.823 clat (usec): min=14, max=2502, avg=136.91, stdev=64.37 00:12:21.823 lat (usec): min=121, max=2533, avg=158.66, stdev=65.66 00:12:21.823 clat percentiles (usec): 00:12:21.823 | 1.00th=[ 109], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 123], 00:12:21.823 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:12:21.823 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 151], 95.00th=[ 155], 00:12:21.823 | 99.00th=[ 235], 99.50th=[ 322], 99.90th=[ 930], 99.95th=[ 1598], 00:12:21.823 | 99.99th=[ 2507] 00:12:21.823 bw ( KiB/s): min=12288, max=12288, per=28.88%, avg=12288.00, stdev= 0.00, samples=1 00:12:21.823 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:21.823 lat (usec) : 20=0.02%, 250=99.39%, 500=0.47%, 750=0.05%, 1000=0.02% 00:12:21.823 lat (msec) : 2=0.03%, 4=0.02% 00:12:21.823 cpu : usr=2.60%, sys=7.90%, ctx=5721, majf=0, minf=6 00:12:21.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:21.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.823 issued rwts: total=2644,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:21.823 00:12:21.823 Run status group 0 (all jobs): 00:12:21.823 READ: bw=35.2MiB/s (36.9MB/s), 7045KiB/s-10.3MiB/s (7214kB/s-10.8MB/s), io=35.2MiB (37.0MB), run=1001-1001msec 00:12:21.823 WRITE: bw=41.6MiB/s (43.6MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=41.6MiB (43.6MB), run=1001-1001msec 00:12:21.823 00:12:21.823 Disk stats (read/write): 00:12:21.823 nvme0n1: ios=1997/2048, merge=0/0, 
ticks=457/348, in_queue=805, util=88.16% 00:12:21.823 nvme0n2: ios=2136/2560, merge=0/0, ticks=418/376, in_queue=794, util=88.13% 00:12:21.823 nvme0n3: ios=1536/1703, merge=0/0, ticks=432/327, in_queue=759, util=88.05% 00:12:21.823 nvme0n4: ios=2302/2560, merge=0/0, ticks=414/363, in_queue=777, util=89.44% 00:12:21.823 15:16:31 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:22.082 [global] 00:12:22.082 thread=1 00:12:22.082 invalidate=1 00:12:22.082 rw=write 00:12:22.082 time_based=1 00:12:22.082 runtime=1 00:12:22.082 ioengine=libaio 00:12:22.082 direct=1 00:12:22.082 bs=4096 00:12:22.082 iodepth=128 00:12:22.082 norandommap=0 00:12:22.082 numjobs=1 00:12:22.082 00:12:22.082 verify_dump=1 00:12:22.082 verify_backlog=512 00:12:22.082 verify_state_save=0 00:12:22.082 do_verify=1 00:12:22.082 verify=crc32c-intel 00:12:22.082 [job0] 00:12:22.082 filename=/dev/nvme0n1 00:12:22.082 [job1] 00:12:22.082 filename=/dev/nvme0n2 00:12:22.082 [job2] 00:12:22.082 filename=/dev/nvme0n3 00:12:22.082 [job3] 00:12:22.082 filename=/dev/nvme0n4 00:12:22.082 Could not set queue depth (nvme0n1) 00:12:22.082 Could not set queue depth (nvme0n2) 00:12:22.082 Could not set queue depth (nvme0n3) 00:12:22.082 Could not set queue depth (nvme0n4) 00:12:22.082 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:22.082 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:22.082 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:22.082 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:22.082 fio-3.35 00:12:22.082 Starting 4 threads 00:12:23.456 00:12:23.456 job0: (groupid=0, jobs=1): err= 0: pid=69092: Wed Apr 24 15:16:32 2024 00:12:23.456 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:12:23.456 slat (usec): min=5, max=6983, avg=105.93, stdev=533.66 00:12:23.456 clat (usec): min=7972, max=21239, avg=13427.73, stdev=1646.32 00:12:23.456 lat (usec): min=8042, max=21258, avg=13533.66, stdev=1683.49 00:12:23.456 clat percentiles (usec): 00:12:23.456 | 1.00th=[ 9110], 5.00th=[10421], 10.00th=[11731], 20.00th=[12518], 00:12:23.456 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13435], 60.00th=[13566], 00:12:23.456 | 70.00th=[13698], 80.00th=[14222], 90.00th=[15008], 95.00th=[16188], 00:12:23.456 | 99.00th=[18482], 99.50th=[19530], 99.90th=[21103], 99.95th=[21365], 00:12:23.456 | 99.99th=[21365] 00:12:23.456 write: IOPS=4958, BW=19.4MiB/s (20.3MB/s)(19.5MiB/1005msec); 0 zone resets 00:12:23.456 slat (usec): min=14, max=5308, avg=94.90, stdev=424.44 00:12:23.456 clat (usec): min=3731, max=21891, avg=13076.71, stdev=1765.68 00:12:23.456 lat (usec): min=4417, max=22071, avg=13171.61, stdev=1806.26 00:12:23.456 clat percentiles (usec): 00:12:23.456 | 1.00th=[ 7111], 5.00th=[10290], 10.00th=[11731], 20.00th=[12387], 00:12:23.456 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12911], 60.00th=[13304], 00:12:23.456 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14746], 95.00th=[16057], 00:12:23.456 | 99.00th=[19006], 99.50th=[20317], 99.90th=[21890], 99.95th=[21890], 00:12:23.456 | 99.99th=[21890] 00:12:23.456 bw ( KiB/s): min=18368, max=20439, per=26.19%, avg=19403.50, stdev=1464.42, samples=2 00:12:23.456 iops : min= 4592, max= 5109, avg=4850.50, stdev=365.57, samples=2 00:12:23.456 lat (msec) : 4=0.01%, 10=3.76%, 
20=95.76%, 50=0.47% 00:12:23.456 cpu : usr=4.68%, sys=13.75%, ctx=514, majf=0, minf=2 00:12:23.456 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:23.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:23.456 issued rwts: total=4608,4983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.456 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:23.456 job1: (groupid=0, jobs=1): err= 0: pid=69093: Wed Apr 24 15:16:32 2024 00:12:23.456 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:12:23.456 slat (usec): min=5, max=5431, avg=103.29, stdev=498.32 00:12:23.456 clat (usec): min=6217, max=23301, avg=13724.23, stdev=1882.69 00:12:23.456 lat (usec): min=6228, max=23332, avg=13827.52, stdev=1831.61 00:12:23.456 clat percentiles (usec): 00:12:23.456 | 1.00th=[ 9765], 5.00th=[12649], 10.00th=[12911], 20.00th=[13173], 00:12:23.456 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13435], 60.00th=[13566], 00:12:23.456 | 70.00th=[13698], 80.00th=[13829], 90.00th=[14222], 95.00th=[16581], 00:12:23.456 | 99.00th=[22414], 99.50th=[22938], 99.90th=[23200], 99.95th=[23200], 00:12:23.456 | 99.99th=[23200] 00:12:23.456 write: IOPS=4631, BW=18.1MiB/s (19.0MB/s)(18.1MiB/1002msec); 0 zone resets 00:12:23.456 slat (usec): min=12, max=5323, avg=104.63, stdev=461.90 00:12:23.456 clat (usec): min=330, max=23259, avg=13626.66, stdev=2796.96 00:12:23.456 lat (usec): min=2730, max=23292, avg=13731.28, stdev=2775.58 00:12:23.456 clat percentiles (usec): 00:12:23.456 | 1.00th=[10159], 5.00th=[12256], 10.00th=[12518], 20.00th=[12649], 00:12:23.456 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:12:23.456 | 70.00th=[13173], 80.00th=[13173], 90.00th=[18220], 95.00th=[22414], 00:12:23.456 | 99.00th=[22938], 99.50th=[22938], 99.90th=[23200], 99.95th=[23200], 00:12:23.456 | 99.99th=[23200] 00:12:23.456 bw ( KiB/s): min=20439, max=20439, per=27.59%, avg=20439.00, stdev= 0.00, samples=1 00:12:23.456 iops : min= 5109, max= 5109, avg=5109.00, stdev= 0.00, samples=1 00:12:23.456 lat (usec) : 500=0.01% 00:12:23.456 lat (msec) : 4=0.35%, 10=0.68%, 20=93.93%, 50=5.03% 00:12:23.456 cpu : usr=3.30%, sys=14.69%, ctx=293, majf=0, minf=5 00:12:23.456 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:23.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:23.456 issued rwts: total=4608,4641,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.456 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:23.456 job2: (groupid=0, jobs=1): err= 0: pid=69094: Wed Apr 24 15:16:32 2024 00:12:23.456 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:12:23.456 slat (usec): min=6, max=4128, avg=112.41, stdev=449.84 00:12:23.456 clat (usec): min=11233, max=19365, avg=14898.78, stdev=1009.87 00:12:23.457 lat (usec): min=11258, max=19386, avg=15011.19, stdev=1072.32 00:12:23.457 clat percentiles (usec): 00:12:23.457 | 1.00th=[11994], 5.00th=[13173], 10.00th=[14091], 20.00th=[14484], 00:12:23.457 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14746], 60.00th=[14877], 00:12:23.457 | 70.00th=[15008], 80.00th=[15139], 90.00th=[16450], 95.00th=[16909], 00:12:23.457 | 99.00th=[17695], 99.50th=[18220], 99.90th=[18744], 99.95th=[19006], 00:12:23.457 | 99.99th=[19268] 00:12:23.457 write: IOPS=4555, BW=17.8MiB/s 
(18.7MB/s)(17.8MiB/1002msec); 0 zone resets 00:12:23.457 slat (usec): min=9, max=4719, avg=110.27, stdev=517.01 00:12:23.457 clat (usec): min=413, max=19406, avg=14336.17, stdev=1497.49 00:12:23.457 lat (usec): min=4066, max=19438, avg=14446.44, stdev=1566.29 00:12:23.457 clat percentiles (usec): 00:12:23.457 | 1.00th=[ 9110], 5.00th=[13042], 10.00th=[13435], 20.00th=[13829], 00:12:23.457 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14222], 60.00th=[14353], 00:12:23.457 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15270], 95.00th=[16909], 00:12:23.457 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268], 00:12:23.457 | 99.99th=[19530] 00:12:23.457 bw ( KiB/s): min=17296, max=17296, per=23.35%, avg=17296.00, stdev= 0.00, samples=1 00:12:23.457 iops : min= 4324, max= 4324, avg=4324.00, stdev= 0.00, samples=1 00:12:23.457 lat (usec) : 500=0.01% 00:12:23.457 lat (msec) : 10=0.87%, 20=99.12% 00:12:23.457 cpu : usr=3.70%, sys=13.39%, ctx=358, majf=0, minf=1 00:12:23.457 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:23.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:23.457 issued rwts: total=4096,4565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.457 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:23.457 job3: (groupid=0, jobs=1): err= 0: pid=69095: Wed Apr 24 15:16:32 2024 00:12:23.457 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:12:23.457 slat (usec): min=6, max=4628, avg=113.74, stdev=547.82 00:12:23.457 clat (usec): min=11226, max=18976, avg=15208.51, stdev=1026.36 00:12:23.457 lat (usec): min=14176, max=19007, avg=15322.25, stdev=872.14 00:12:23.457 clat percentiles (usec): 00:12:23.457 | 1.00th=[11863], 5.00th=[14484], 10.00th=[14484], 20.00th=[14746], 00:12:23.457 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270], 00:12:23.457 | 70.00th=[15270], 80.00th=[15401], 90.00th=[15664], 95.00th=[17957], 00:12:23.457 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19006], 99.95th=[19006], 00:12:23.457 | 99.99th=[19006] 00:12:23.457 write: IOPS=4420, BW=17.3MiB/s (18.1MB/s)(17.3MiB/1001msec); 0 zone resets 00:12:23.457 slat (usec): min=9, max=4291, avg=112.68, stdev=484.08 00:12:23.457 clat (usec): min=949, max=17421, avg=14482.78, stdev=1425.62 00:12:23.457 lat (usec): min=974, max=18217, avg=14595.46, stdev=1346.29 00:12:23.457 clat percentiles (usec): 00:12:23.457 | 1.00th=[ 7570], 5.00th=[12387], 10.00th=[14091], 20.00th=[14353], 00:12:23.457 | 30.00th=[14484], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:12:23.457 | 70.00th=[14877], 80.00th=[15008], 90.00th=[15139], 95.00th=[15401], 00:12:23.457 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17433], 99.95th=[17433], 00:12:23.457 | 99.99th=[17433] 00:12:23.457 bw ( KiB/s): min=17125, max=17125, per=23.12%, avg=17125.00, stdev= 0.00, samples=1 00:12:23.457 iops : min= 4281, max= 4281, avg=4281.00, stdev= 0.00, samples=1 00:12:23.457 lat (usec) : 1000=0.02% 00:12:23.457 lat (msec) : 2=0.08%, 4=0.12%, 10=0.63%, 20=99.14% 00:12:23.457 cpu : usr=3.80%, sys=13.19%, ctx=267, majf=0, minf=3 00:12:23.457 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:23.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:23.457 issued rwts: total=4096,4425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.457 
latency : target=0, window=0, percentile=100.00%, depth=128 00:12:23.457 00:12:23.457 Run status group 0 (all jobs): 00:12:23.457 READ: bw=67.7MiB/s (70.9MB/s), 16.0MiB/s-18.0MiB/s (16.7MB/s-18.8MB/s), io=68.0MiB (71.3MB), run=1001-1005msec 00:12:23.457 WRITE: bw=72.3MiB/s (75.9MB/s), 17.3MiB/s-19.4MiB/s (18.1MB/s-20.3MB/s), io=72.7MiB (76.2MB), run=1001-1005msec 00:12:23.457 00:12:23.457 Disk stats (read/write): 00:12:23.457 nvme0n1: ios=4146/4311, merge=0/0, ticks=26229/24260, in_queue=50489, util=89.18% 00:12:23.457 nvme0n2: ios=3910/4096, merge=0/0, ticks=12104/12127, in_queue=24231, util=89.38% 00:12:23.457 nvme0n3: ios=3590/3926, merge=0/0, ticks=16964/16019, in_queue=32983, util=89.44% 00:12:23.457 nvme0n4: ios=3584/3808, merge=0/0, ticks=12451/12183, in_queue=24634, util=89.79% 00:12:23.457 15:16:32 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:23.457 [global] 00:12:23.457 thread=1 00:12:23.457 invalidate=1 00:12:23.457 rw=randwrite 00:12:23.457 time_based=1 00:12:23.457 runtime=1 00:12:23.457 ioengine=libaio 00:12:23.457 direct=1 00:12:23.457 bs=4096 00:12:23.457 iodepth=128 00:12:23.457 norandommap=0 00:12:23.457 numjobs=1 00:12:23.457 00:12:23.457 verify_dump=1 00:12:23.457 verify_backlog=512 00:12:23.457 verify_state_save=0 00:12:23.457 do_verify=1 00:12:23.457 verify=crc32c-intel 00:12:23.457 [job0] 00:12:23.457 filename=/dev/nvme0n1 00:12:23.457 [job1] 00:12:23.457 filename=/dev/nvme0n2 00:12:23.457 [job2] 00:12:23.457 filename=/dev/nvme0n3 00:12:23.457 [job3] 00:12:23.457 filename=/dev/nvme0n4 00:12:23.457 Could not set queue depth (nvme0n1) 00:12:23.457 Could not set queue depth (nvme0n2) 00:12:23.457 Could not set queue depth (nvme0n3) 00:12:23.457 Could not set queue depth (nvme0n4) 00:12:23.457 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:23.457 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:23.457 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:23.457 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:23.457 fio-3.35 00:12:23.457 Starting 4 threads 00:12:24.852 00:12:24.852 job0: (groupid=0, jobs=1): err= 0: pid=69152: Wed Apr 24 15:16:33 2024 00:12:24.852 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:12:24.852 slat (usec): min=7, max=13279, avg=185.88, stdev=1248.60 00:12:24.852 clat (usec): min=14523, max=39778, avg=25392.66, stdev=2857.09 00:12:24.852 lat (usec): min=14544, max=48664, avg=25578.54, stdev=2923.51 00:12:24.852 clat percentiles (usec): 00:12:24.852 | 1.00th=[15270], 5.00th=[21890], 10.00th=[23462], 20.00th=[23987], 00:12:24.852 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:12:24.852 | 70.00th=[25822], 80.00th=[26346], 90.00th=[27395], 95.00th=[28967], 00:12:24.852 | 99.00th=[38536], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:12:24.852 | 99.99th=[39584] 00:12:24.852 write: IOPS=2745, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1002msec); 0 zone resets 00:12:24.852 slat (usec): min=6, max=20334, avg=182.84, stdev=1245.04 00:12:24.852 clat (usec): min=1348, max=34751, avg=22523.20, stdev=4175.67 00:12:24.852 lat (usec): min=6269, max=34786, avg=22706.04, stdev=4043.74 00:12:24.852 clat percentiles (usec): 00:12:24.852 | 1.00th=[ 6980], 5.00th=[13698], 10.00th=[19792], 
20.00th=[21890], 00:12:24.852 | 30.00th=[22152], 40.00th=[22938], 50.00th=[23462], 60.00th=[23725], 00:12:24.852 | 70.00th=[23987], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:12:24.852 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:12:24.853 | 99.99th=[34866] 00:12:24.853 bw ( KiB/s): min= 9224, max=11768, per=16.90%, avg=10496.00, stdev=1798.88, samples=2 00:12:24.853 iops : min= 2306, max= 2942, avg=2624.00, stdev=449.72, samples=2 00:12:24.853 lat (msec) : 2=0.02%, 10=1.56%, 20=6.33%, 50=92.09% 00:12:24.853 cpu : usr=2.00%, sys=7.99%, ctx=114, majf=0, minf=8 00:12:24.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:24.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:24.853 issued rwts: total=2560,2751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:24.853 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:24.853 job1: (groupid=0, jobs=1): err= 0: pid=69153: Wed Apr 24 15:16:33 2024 00:12:24.853 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:12:24.853 slat (usec): min=6, max=2981, avg=91.29, stdev=351.71 00:12:24.853 clat (usec): min=9549, max=17157, avg=12322.78, stdev=1178.52 00:12:24.853 lat (usec): min=9574, max=17696, avg=12414.07, stdev=1217.41 00:12:24.853 clat percentiles (usec): 00:12:24.853 | 1.00th=[10159], 5.00th=[10945], 10.00th=[11338], 20.00th=[11600], 00:12:24.853 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11863], 60.00th=[12125], 00:12:24.853 | 70.00th=[12387], 80.00th=[13304], 90.00th=[14091], 95.00th=[14877], 00:12:24.853 | 99.00th=[16057], 99.50th=[16319], 99.90th=[16909], 99.95th=[16909], 00:12:24.853 | 99.99th=[17171] 00:12:24.853 write: IOPS=5577, BW=21.8MiB/s (22.8MB/s)(21.8MiB/1002msec); 0 zone resets 00:12:24.853 slat (usec): min=9, max=3572, avg=87.34, stdev=387.58 00:12:24.853 clat (usec): min=339, max=17346, avg=11375.16, stdev=1287.79 00:12:24.853 lat (usec): min=3301, max=17390, avg=11462.50, stdev=1338.92 00:12:24.853 clat percentiles (usec): 00:12:24.853 | 1.00th=[ 6718], 5.00th=[10421], 10.00th=[10552], 20.00th=[10814], 00:12:24.853 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:12:24.853 | 70.00th=[11469], 80.00th=[11731], 90.00th=[13304], 95.00th=[13829], 00:12:24.853 | 99.00th=[14222], 99.50th=[15926], 99.90th=[16909], 99.95th=[17171], 00:12:24.853 | 99.99th=[17433] 00:12:24.853 bw ( KiB/s): min=21656, max=22040, per=35.18%, avg=21848.00, stdev=271.53, samples=2 00:12:24.853 iops : min= 5414, max= 5510, avg=5462.00, stdev=67.88, samples=2 00:12:24.853 lat (usec) : 500=0.01% 00:12:24.853 lat (msec) : 4=0.30%, 10=1.64%, 20=98.05% 00:12:24.853 cpu : usr=5.79%, sys=15.58%, ctx=384, majf=0, minf=1 00:12:24.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:24.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:24.853 issued rwts: total=5120,5589,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:24.853 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:24.853 job2: (groupid=0, jobs=1): err= 0: pid=69155: Wed Apr 24 15:16:33 2024 00:12:24.853 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:12:24.853 slat (usec): min=7, max=13257, avg=188.11, stdev=1273.33 00:12:24.853 clat (usec): min=14703, max=39742, avg=25381.35, stdev=2911.82 00:12:24.853 lat (usec): 
min=14718, max=48704, avg=25569.45, stdev=2964.41 00:12:24.853 clat percentiles (usec): 00:12:24.853 | 1.00th=[15270], 5.00th=[20841], 10.00th=[23987], 20.00th=[24249], 00:12:24.853 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:12:24.853 | 70.00th=[25822], 80.00th=[26084], 90.00th=[27132], 95.00th=[29230], 00:12:24.853 | 99.00th=[37487], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:12:24.853 | 99.99th=[39584] 00:12:24.853 write: IOPS=2680, BW=10.5MiB/s (11.0MB/s)(10.6MiB/1014msec); 0 zone resets 00:12:24.853 slat (usec): min=6, max=19836, avg=183.56, stdev=1249.57 00:12:24.853 clat (usec): min=9441, max=36141, avg=23366.50, stdev=2937.07 00:12:24.853 lat (usec): min=9460, max=36166, avg=23550.06, stdev=2729.25 00:12:24.853 clat percentiles (usec): 00:12:24.853 | 1.00th=[13173], 5.00th=[20055], 10.00th=[21627], 20.00th=[22152], 00:12:24.853 | 30.00th=[22414], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:12:24.853 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[28967], 00:12:24.853 | 99.00th=[33817], 99.50th=[33817], 99.90th=[33817], 99.95th=[35914], 00:12:24.853 | 99.99th=[35914] 00:12:24.853 bw ( KiB/s): min= 8960, max=11768, per=16.69%, avg=10364.00, stdev=1985.56, samples=2 00:12:24.853 iops : min= 2240, max= 2942, avg=2591.00, stdev=496.39, samples=2 00:12:24.853 lat (msec) : 10=0.28%, 20=4.51%, 50=95.21% 00:12:24.853 cpu : usr=2.96%, sys=6.52%, ctx=154, majf=0, minf=7 00:12:24.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:24.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:24.853 issued rwts: total=2560,2718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:24.853 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:24.853 job3: (groupid=0, jobs=1): err= 0: pid=69156: Wed Apr 24 15:16:33 2024 00:12:24.853 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:12:24.853 slat (usec): min=6, max=7469, avg=103.88, stdev=656.13 00:12:24.853 clat (usec): min=8134, max=26087, avg=14422.27, stdev=1955.16 00:12:24.853 lat (usec): min=8145, max=31361, avg=14526.15, stdev=1983.55 00:12:24.853 clat percentiles (usec): 00:12:24.853 | 1.00th=[ 8848], 5.00th=[12518], 10.00th=[13173], 20.00th=[13304], 00:12:24.853 | 30.00th=[13566], 40.00th=[13829], 50.00th=[13960], 60.00th=[14353], 00:12:24.853 | 70.00th=[14484], 80.00th=[15926], 90.00th=[16909], 95.00th=[17433], 00:12:24.853 | 99.00th=[20841], 99.50th=[22152], 99.90th=[26084], 99.95th=[26084], 00:12:24.853 | 99.99th=[26084] 00:12:24.853 write: IOPS=4672, BW=18.3MiB/s (19.1MB/s)(18.3MiB/1003msec); 0 zone resets 00:12:24.853 slat (usec): min=14, max=9387, avg=103.71, stdev=603.53 00:12:24.853 clat (usec): min=772, max=20583, avg=12938.93, stdev=1732.60 00:12:24.853 lat (usec): min=5421, max=20839, avg=13042.64, stdev=1652.04 00:12:24.853 clat percentiles (usec): 00:12:24.853 | 1.00th=[ 6915], 5.00th=[11207], 10.00th=[11600], 20.00th=[12125], 00:12:24.853 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:12:24.853 | 70.00th=[13173], 80.00th=[13566], 90.00th=[15664], 95.00th=[16057], 00:12:24.853 | 99.00th=[17433], 99.50th=[17433], 99.90th=[20579], 99.95th=[20579], 00:12:24.853 | 99.99th=[20579] 00:12:24.853 bw ( KiB/s): min=16384, max=20480, per=29.68%, avg=18432.00, stdev=2896.31, samples=2 00:12:24.853 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:12:24.853 lat (usec) : 1000=0.01% 
00:12:24.853 lat (msec) : 10=3.31%, 20=95.83%, 50=0.85% 00:12:24.853 cpu : usr=4.69%, sys=13.07%, ctx=201, majf=0, minf=7 00:12:24.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:24.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:24.853 issued rwts: total=4608,4687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:24.853 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:24.853 00:12:24.853 Run status group 0 (all jobs): 00:12:24.853 READ: bw=57.2MiB/s (60.0MB/s), 9.86MiB/s-20.0MiB/s (10.3MB/s-20.9MB/s), io=58.0MiB (60.8MB), run=1002-1014msec 00:12:24.853 WRITE: bw=60.7MiB/s (63.6MB/s), 10.5MiB/s-21.8MiB/s (11.0MB/s-22.8MB/s), io=61.5MiB (64.5MB), run=1002-1014msec 00:12:24.853 00:12:24.853 Disk stats (read/write): 00:12:24.853 nvme0n1: ios=2098/2368, merge=0/0, ticks=50169/52538, in_queue=102707, util=87.66% 00:12:24.853 nvme0n2: ios=4457/4608, merge=0/0, ticks=17386/14344, in_queue=31730, util=87.21% 00:12:24.853 nvme0n3: ios=2048/2368, merge=0/0, ticks=50203/52531, in_queue=102734, util=88.87% 00:12:24.853 nvme0n4: ios=3652/4096, merge=0/0, ticks=50742/49536, in_queue=100278, util=89.61% 00:12:24.853 15:16:33 -- target/fio.sh@55 -- # sync 00:12:24.853 15:16:33 -- target/fio.sh@59 -- # fio_pid=69169 00:12:24.853 15:16:33 -- target/fio.sh@61 -- # sleep 3 00:12:24.853 15:16:33 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:24.853 [global] 00:12:24.853 thread=1 00:12:24.853 invalidate=1 00:12:24.853 rw=read 00:12:24.853 time_based=1 00:12:24.853 runtime=10 00:12:24.853 ioengine=libaio 00:12:24.853 direct=1 00:12:24.853 bs=4096 00:12:24.853 iodepth=1 00:12:24.853 norandommap=1 00:12:24.853 numjobs=1 00:12:24.853 00:12:24.853 [job0] 00:12:24.853 filename=/dev/nvme0n1 00:12:24.853 [job1] 00:12:24.853 filename=/dev/nvme0n2 00:12:24.853 [job2] 00:12:24.853 filename=/dev/nvme0n3 00:12:24.853 [job3] 00:12:24.853 filename=/dev/nvme0n4 00:12:24.853 Could not set queue depth (nvme0n1) 00:12:24.853 Could not set queue depth (nvme0n2) 00:12:24.853 Could not set queue depth (nvme0n3) 00:12:24.853 Could not set queue depth (nvme0n4) 00:12:24.853 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:24.853 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:24.853 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:24.853 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:24.853 fio-3.35 00:12:24.853 Starting 4 threads 00:12:28.143 15:16:36 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:28.143 fio: pid=69212, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:28.143 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=57339904, buflen=4096 00:12:28.143 15:16:37 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:28.143 fio: pid=69211, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:28.143 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=48893952, buflen=4096 00:12:28.143 15:16:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:12:28.143 15:16:37 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:28.400 fio: pid=69209, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:28.400 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=54104064, buflen=4096 00:12:28.400 15:16:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:28.400 15:16:37 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:28.659 fio: pid=69210, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:28.659 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=9285632, buflen=4096 00:12:28.659 00:12:28.659 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69209: Wed Apr 24 15:16:37 2024 00:12:28.659 read: IOPS=3834, BW=15.0MiB/s (15.7MB/s)(51.6MiB/3445msec) 00:12:28.659 slat (usec): min=8, max=15799, avg=19.57, stdev=223.84 00:12:28.659 clat (usec): min=129, max=2774, avg=239.42, stdev=71.61 00:12:28.659 lat (usec): min=145, max=16032, avg=258.99, stdev=235.57 00:12:28.659 clat percentiles (usec): 00:12:28.659 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 169], 00:12:28.659 | 30.00th=[ 184], 40.00th=[ 239], 50.00th=[ 253], 60.00th=[ 265], 00:12:28.659 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 322], 00:12:28.659 | 99.00th=[ 367], 99.50th=[ 404], 99.90th=[ 619], 99.95th=[ 963], 00:12:28.659 | 99.99th=[ 2474] 00:12:28.659 bw ( KiB/s): min=12512, max=20768, per=24.45%, avg=15192.67, stdev=3219.07, samples=6 00:12:28.659 iops : min= 3128, max= 5192, avg=3798.17, stdev=804.77, samples=6 00:12:28.659 lat (usec) : 250=47.88%, 500=51.92%, 750=0.11%, 1000=0.05% 00:12:28.659 lat (msec) : 2=0.02%, 4=0.02% 00:12:28.659 cpu : usr=1.51%, sys=5.72%, ctx=13222, majf=0, minf=1 00:12:28.659 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.659 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.659 issued rwts: total=13210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.659 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:28.659 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69210: Wed Apr 24 15:16:37 2024 00:12:28.659 read: IOPS=5013, BW=19.6MiB/s (20.5MB/s)(72.9MiB/3720msec) 00:12:28.659 slat (usec): min=11, max=16136, avg=18.43, stdev=162.31 00:12:28.659 clat (usec): min=26, max=27021, avg=179.16, stdev=202.76 00:12:28.659 lat (usec): min=132, max=27043, avg=197.59, stdev=260.55 00:12:28.659 clat percentiles (usec): 00:12:28.659 | 1.00th=[ 137], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 159], 00:12:28.659 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 178], 00:12:28.659 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 208], 95.00th=[ 221], 00:12:28.659 | 99.00th=[ 251], 99.50th=[ 273], 99.90th=[ 644], 99.95th=[ 1434], 00:12:28.659 | 99.99th=[ 2671] 00:12:28.659 bw ( KiB/s): min=18744, max=21216, per=32.18%, avg=19998.43, stdev=1106.70, samples=7 00:12:28.659 iops : min= 4686, max= 5304, avg=4999.57, stdev=276.70, samples=7 00:12:28.659 lat (usec) : 50=0.01%, 250=98.91%, 500=0.94%, 750=0.05%, 1000=0.01% 00:12:28.659 lat (msec) : 2=0.05%, 4=0.02%, 50=0.01% 00:12:28.659 cpu : usr=1.86%, sys=7.10%, ctx=18663, majf=0, minf=1 00:12:28.659 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.659 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.659 issued rwts: total=18652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.659 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:28.659 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69211: Wed Apr 24 15:16:37 2024 00:12:28.659 read: IOPS=3713, BW=14.5MiB/s (15.2MB/s)(46.6MiB/3215msec) 00:12:28.659 slat (usec): min=8, max=10898, avg=15.03, stdev=139.97 00:12:28.659 clat (usec): min=143, max=7842, avg=252.73, stdev=100.23 00:12:28.659 lat (usec): min=157, max=11156, avg=267.75, stdev=171.64 00:12:28.659 clat percentiles (usec): 00:12:28.659 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 184], 00:12:28.659 | 30.00th=[ 225], 40.00th=[ 249], 50.00th=[ 262], 60.00th=[ 273], 00:12:28.659 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 330], 00:12:28.659 | 99.00th=[ 363], 99.50th=[ 396], 99.90th=[ 791], 99.95th=[ 1532], 00:12:28.659 | 99.99th=[ 3032] 00:12:28.659 bw ( KiB/s): min=12504, max=19944, per=23.92%, avg=14864.67, stdev=2836.60, samples=6 00:12:28.659 iops : min= 3126, max= 4986, avg=3716.17, stdev=709.15, samples=6 00:12:28.659 lat (usec) : 250=40.29%, 500=59.47%, 750=0.13%, 1000=0.03% 00:12:28.659 lat (msec) : 2=0.03%, 4=0.03%, 10=0.01% 00:12:28.659 cpu : usr=1.43%, sys=4.26%, ctx=11951, majf=0, minf=1 00:12:28.659 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.659 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.659 issued rwts: total=11938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.659 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:28.659 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69212: Wed Apr 24 15:16:37 2024 00:12:28.659 read: IOPS=4789, BW=18.7MiB/s (19.6MB/s)(54.7MiB/2923msec) 00:12:28.659 slat (usec): min=11, max=103, avg=16.10, stdev= 4.93 00:12:28.659 clat (usec): min=139, max=670, avg=190.99, stdev=28.37 00:12:28.659 lat (usec): min=153, max=683, avg=207.09, stdev=28.69 00:12:28.659 clat percentiles (usec): 00:12:28.659 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 169], 00:12:28.659 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 194], 00:12:28.659 | 70.00th=[ 200], 80.00th=[ 210], 90.00th=[ 225], 95.00th=[ 237], 00:12:28.659 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 465], 99.95th=[ 553], 00:12:28.659 | 99.99th=[ 644] 00:12:28.659 bw ( KiB/s): min=17600, max=20912, per=30.77%, avg=19121.60, stdev=1479.46, samples=5 00:12:28.659 iops : min= 4400, max= 5228, avg=4780.40, stdev=369.87, samples=5 00:12:28.659 lat (usec) : 250=97.81%, 500=2.11%, 750=0.07% 00:12:28.659 cpu : usr=1.71%, sys=6.88%, ctx=14005, majf=0, minf=1 00:12:28.659 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.659 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.659 issued rwts: total=14000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.659 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:28.659 00:12:28.659 Run status group 0 (all jobs): 00:12:28.659 READ: 
bw=60.7MiB/s (63.6MB/s), 14.5MiB/s-19.6MiB/s (15.2MB/s-20.5MB/s), io=226MiB (237MB), run=2923-3720msec 00:12:28.659 00:12:28.659 Disk stats (read/write): 00:12:28.659 nvme0n1: ios=12946/0, merge=0/0, ticks=3099/0, in_queue=3099, util=95.11% 00:12:28.659 nvme0n2: ios=18115/0, merge=0/0, ticks=3331/0, in_queue=3331, util=95.75% 00:12:28.659 nvme0n3: ios=11623/0, merge=0/0, ticks=2773/0, in_queue=2773, util=96.06% 00:12:28.659 nvme0n4: ios=13780/0, merge=0/0, ticks=2706/0, in_queue=2706, util=96.81% 00:12:28.659 15:16:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:28.659 15:16:37 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:28.917 15:16:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:28.917 15:16:38 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:29.176 15:16:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:29.176 15:16:38 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:29.435 15:16:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:29.435 15:16:38 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:30.009 15:16:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:30.009 15:16:38 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:30.268 15:16:39 -- target/fio.sh@69 -- # fio_status=0 00:12:30.268 15:16:39 -- target/fio.sh@70 -- # wait 69169 00:12:30.268 15:16:39 -- target/fio.sh@70 -- # fio_status=4 00:12:30.268 15:16:39 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.268 15:16:39 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.268 15:16:39 -- common/autotest_common.sh@1205 -- # local i=0 00:12:30.268 15:16:39 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:30.268 15:16:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.268 15:16:39 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:30.269 15:16:39 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.269 nvmf hotplug test: fio failed as expected 00:12:30.269 15:16:39 -- common/autotest_common.sh@1217 -- # return 0 00:12:30.269 15:16:39 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:30.269 15:16:39 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:30.269 15:16:39 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.527 15:16:39 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:30.527 15:16:39 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:30.527 15:16:39 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:30.527 15:16:39 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:30.527 15:16:39 -- target/fio.sh@91 -- # nvmftestfini 00:12:30.527 15:16:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:30.527 15:16:39 -- nvmf/common.sh@117 -- # sync 00:12:30.527 15:16:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:30.527 
15:16:39 -- nvmf/common.sh@120 -- # set +e 00:12:30.527 15:16:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:30.527 15:16:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:30.527 rmmod nvme_tcp 00:12:30.527 rmmod nvme_fabrics 00:12:30.527 rmmod nvme_keyring 00:12:30.527 15:16:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:30.527 15:16:39 -- nvmf/common.sh@124 -- # set -e 00:12:30.527 15:16:39 -- nvmf/common.sh@125 -- # return 0 00:12:30.527 15:16:39 -- nvmf/common.sh@478 -- # '[' -n 68783 ']' 00:12:30.527 15:16:39 -- nvmf/common.sh@479 -- # killprocess 68783 00:12:30.527 15:16:39 -- common/autotest_common.sh@936 -- # '[' -z 68783 ']' 00:12:30.527 15:16:39 -- common/autotest_common.sh@940 -- # kill -0 68783 00:12:30.527 15:16:39 -- common/autotest_common.sh@941 -- # uname 00:12:30.527 15:16:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.527 15:16:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68783 00:12:30.527 killing process with pid 68783 00:12:30.527 15:16:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:30.527 15:16:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:30.527 15:16:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68783' 00:12:30.527 15:16:39 -- common/autotest_common.sh@955 -- # kill 68783 00:12:30.527 15:16:39 -- common/autotest_common.sh@960 -- # wait 68783 00:12:30.786 15:16:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:30.786 15:16:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:30.786 15:16:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:30.786 15:16:39 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.786 15:16:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:30.786 15:16:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.786 15:16:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.786 15:16:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.786 15:16:39 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:30.786 00:12:30.786 real 0m19.623s 00:12:30.786 user 1m14.379s 00:12:30.786 sys 0m10.077s 00:12:30.786 ************************************ 00:12:30.786 END TEST nvmf_fio_target 00:12:30.786 ************************************ 00:12:30.786 15:16:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:30.786 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:12:30.786 15:16:39 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:30.786 15:16:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:30.786 15:16:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:30.786 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:12:31.044 ************************************ 00:12:31.044 START TEST nvmf_bdevio 00:12:31.044 ************************************ 00:12:31.044 15:16:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:31.044 * Looking for test storage... 
00:12:31.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:31.044 15:16:40 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:31.044 15:16:40 -- nvmf/common.sh@7 -- # uname -s 00:12:31.044 15:16:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.044 15:16:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.044 15:16:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.044 15:16:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.044 15:16:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.044 15:16:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.044 15:16:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.044 15:16:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.044 15:16:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.044 15:16:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.044 15:16:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:12:31.044 15:16:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:12:31.044 15:16:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.044 15:16:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.044 15:16:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:31.044 15:16:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.045 15:16:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.045 15:16:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.045 15:16:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.045 15:16:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.045 15:16:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.045 15:16:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.045 15:16:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.045 15:16:40 -- paths/export.sh@5 -- # export PATH 00:12:31.045 15:16:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.045 15:16:40 -- nvmf/common.sh@47 -- # : 0 00:12:31.045 15:16:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.045 15:16:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.045 15:16:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.045 15:16:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.045 15:16:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.045 15:16:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.045 15:16:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.045 15:16:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.045 15:16:40 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:31.045 15:16:40 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:31.045 15:16:40 -- target/bdevio.sh@14 -- # nvmftestinit 00:12:31.045 15:16:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:31.045 15:16:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.045 15:16:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:31.045 15:16:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:31.045 15:16:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:31.045 15:16:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.045 15:16:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.045 15:16:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.045 15:16:40 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:31.045 15:16:40 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:31.045 15:16:40 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:31.045 15:16:40 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:31.045 15:16:40 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:31.045 15:16:40 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:31.045 15:16:40 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.045 15:16:40 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.045 15:16:40 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:31.045 15:16:40 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:31.045 15:16:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:31.045 15:16:40 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:31.045 15:16:40 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:31.045 15:16:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.045 15:16:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:31.045 15:16:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:31.045 15:16:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:31.045 15:16:40 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:31.045 15:16:40 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:31.045 15:16:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:31.045 Cannot find device "nvmf_tgt_br" 00:12:31.045 15:16:40 -- nvmf/common.sh@155 -- # true 00:12:31.045 15:16:40 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.045 Cannot find device "nvmf_tgt_br2" 00:12:31.045 15:16:40 -- nvmf/common.sh@156 -- # true 00:12:31.045 15:16:40 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:31.045 15:16:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:31.045 Cannot find device "nvmf_tgt_br" 00:12:31.045 15:16:40 -- nvmf/common.sh@158 -- # true 00:12:31.045 15:16:40 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:31.045 Cannot find device "nvmf_tgt_br2" 00:12:31.045 15:16:40 -- nvmf/common.sh@159 -- # true 00:12:31.045 15:16:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:31.045 15:16:40 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:31.045 15:16:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:31.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.045 15:16:40 -- nvmf/common.sh@162 -- # true 00:12:31.045 15:16:40 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:31.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.045 15:16:40 -- nvmf/common.sh@163 -- # true 00:12:31.045 15:16:40 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:31.045 15:16:40 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:31.304 15:16:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:31.304 15:16:40 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:31.304 15:16:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:31.304 15:16:40 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:31.304 15:16:40 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:31.304 15:16:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:31.304 15:16:40 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:31.304 15:16:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:31.304 15:16:40 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:31.304 15:16:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:31.304 15:16:40 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:31.304 15:16:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:31.304 15:16:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:31.304 15:16:40 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:31.304 15:16:40 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:31.304 15:16:40 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:31.304 15:16:40 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:31.304 15:16:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:31.304 15:16:40 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:31.304 15:16:40 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:31.304 15:16:40 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:31.304 15:16:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:31.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:12:31.304 00:12:31.304 --- 10.0.0.2 ping statistics --- 00:12:31.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.304 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:31.304 15:16:40 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:31.304 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:31.304 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:12:31.304 00:12:31.304 --- 10.0.0.3 ping statistics --- 00:12:31.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.304 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:31.304 15:16:40 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:31.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:31.304 00:12:31.304 --- 10.0.0.1 ping statistics --- 00:12:31.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.304 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:31.304 15:16:40 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.304 15:16:40 -- nvmf/common.sh@422 -- # return 0 00:12:31.304 15:16:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:31.304 15:16:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.304 15:16:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:31.304 15:16:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:31.304 15:16:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.304 15:16:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:31.304 15:16:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:31.304 15:16:40 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:31.304 15:16:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:31.304 15:16:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:31.304 15:16:40 -- common/autotest_common.sh@10 -- # set +x 00:12:31.304 15:16:40 -- nvmf/common.sh@470 -- # nvmfpid=69486 00:12:31.304 15:16:40 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:31.304 15:16:40 -- nvmf/common.sh@471 -- # waitforlisten 69486 00:12:31.304 15:16:40 -- common/autotest_common.sh@817 -- # '[' -z 69486 ']' 00:12:31.304 15:16:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:31.304 15:16:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:31.304 15:16:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.304 15:16:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:31.304 15:16:40 -- common/autotest_common.sh@10 -- # set +x 00:12:31.304 [2024-04-24 15:16:40.518840] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:12:31.304 [2024-04-24 15:16:40.518944] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.563 [2024-04-24 15:16:40.658005] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.563 [2024-04-24 15:16:40.789312] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.563 [2024-04-24 15:16:40.789393] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.563 [2024-04-24 15:16:40.789410] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.563 [2024-04-24 15:16:40.789420] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.563 [2024-04-24 15:16:40.789449] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.563 [2024-04-24 15:16:40.789567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:31.563 [2024-04-24 15:16:40.789695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:31.563 [2024-04-24 15:16:40.789807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:31.563 [2024-04-24 15:16:40.790286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.496 15:16:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:32.496 15:16:41 -- common/autotest_common.sh@850 -- # return 0 00:12:32.496 15:16:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:32.496 15:16:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:32.497 15:16:41 -- common/autotest_common.sh@10 -- # set +x 00:12:32.497 15:16:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.497 15:16:41 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.497 15:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:32.497 15:16:41 -- common/autotest_common.sh@10 -- # set +x 00:12:32.497 [2024-04-24 15:16:41.606546] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.497 15:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:32.497 15:16:41 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:32.497 15:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:32.497 15:16:41 -- common/autotest_common.sh@10 -- # set +x 00:12:32.497 Malloc0 00:12:32.497 15:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:32.497 15:16:41 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:32.497 15:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:32.497 15:16:41 -- common/autotest_common.sh@10 -- # set +x 00:12:32.497 15:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:12:32.497 15:16:41 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:32.497 15:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:32.497 15:16:41 -- common/autotest_common.sh@10 -- # set +x 00:12:32.497 15:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:32.497 15:16:41 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.497 15:16:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:32.497 15:16:41 -- common/autotest_common.sh@10 -- # set +x 00:12:32.497 [2024-04-24 15:16:41.669345] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.497 15:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:32.497 15:16:41 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:32.497 15:16:41 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:32.497 15:16:41 -- nvmf/common.sh@521 -- # config=() 00:12:32.497 15:16:41 -- nvmf/common.sh@521 -- # local subsystem config 00:12:32.497 15:16:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:32.497 15:16:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:32.497 { 00:12:32.497 "params": { 00:12:32.497 "name": "Nvme$subsystem", 00:12:32.497 "trtype": "$TEST_TRANSPORT", 00:12:32.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:32.497 "adrfam": "ipv4", 00:12:32.497 "trsvcid": "$NVMF_PORT", 00:12:32.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:32.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:32.497 "hdgst": ${hdgst:-false}, 00:12:32.497 "ddgst": ${ddgst:-false} 00:12:32.497 }, 00:12:32.497 "method": "bdev_nvme_attach_controller" 00:12:32.497 } 00:12:32.497 EOF 00:12:32.497 )") 00:12:32.497 15:16:41 -- nvmf/common.sh@543 -- # cat 00:12:32.497 15:16:41 -- nvmf/common.sh@545 -- # jq . 00:12:32.497 15:16:41 -- nvmf/common.sh@546 -- # IFS=, 00:12:32.497 15:16:41 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:32.497 "params": { 00:12:32.497 "name": "Nvme1", 00:12:32.497 "trtype": "tcp", 00:12:32.497 "traddr": "10.0.0.2", 00:12:32.497 "adrfam": "ipv4", 00:12:32.497 "trsvcid": "4420", 00:12:32.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:32.497 "hdgst": false, 00:12:32.497 "ddgst": false 00:12:32.497 }, 00:12:32.497 "method": "bdev_nvme_attach_controller" 00:12:32.497 }' 00:12:32.497 [2024-04-24 15:16:41.726282] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:12:32.497 [2024-04-24 15:16:41.726376] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69522 ] 00:12:32.755 [2024-04-24 15:16:41.871731] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:33.014 [2024-04-24 15:16:42.007317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.014 [2024-04-24 15:16:42.007382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.014 [2024-04-24 15:16:42.007389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.014 I/O targets: 00:12:33.014 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:33.014 00:12:33.014 00:12:33.014 CUnit - A unit testing framework for C - Version 2.1-3 00:12:33.014 http://cunit.sourceforge.net/ 00:12:33.014 00:12:33.014 00:12:33.014 Suite: bdevio tests on: Nvme1n1 00:12:33.014 Test: blockdev write read block ...passed 00:12:33.014 Test: blockdev write zeroes read block ...passed 00:12:33.014 Test: blockdev write zeroes read no split ...passed 00:12:33.014 Test: blockdev write zeroes read split ...passed 00:12:33.014 Test: blockdev write zeroes read split partial ...passed 00:12:33.014 Test: blockdev reset ...[2024-04-24 15:16:42.237996] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:33.014 [2024-04-24 15:16:42.238300] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefc660 (9): Bad file descriptor 00:12:33.014 [2024-04-24 15:16:42.256090] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:33.014 passed 00:12:33.014 Test: blockdev write read 8 blocks ...passed 00:12:33.014 Test: blockdev write read size > 128k ...passed 00:12:33.014 Test: blockdev write read invalid size ...passed 00:12:33.014 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.014 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.014 Test: blockdev write read max offset ...passed 00:12:33.274 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.274 Test: blockdev writev readv 8 blocks ...passed 00:12:33.274 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.274 Test: blockdev writev readv block ...passed 00:12:33.274 Test: blockdev writev readv size > 128k ...passed 00:12:33.274 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.274 Test: blockdev comparev and writev ...[2024-04-24 15:16:42.263889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.274 [2024-04-24 15:16:42.263938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:33.274 [2024-04-24 15:16:42.263960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.274 [2024-04-24 15:16:42.263972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:33.274 [2024-04-24 15:16:42.264300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.274 [2024-04-24 15:16:42.264330] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:33.274 [2024-04-24 15:16:42.264348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.274 [2024-04-24 15:16:42.264358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:33.274 [2024-04-24 15:16:42.264719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.274 [2024-04-24 15:16:42.264758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:33.274 [2024-04-24 15:16:42.264776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.274 [2024-04-24 15:16:42.264786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:33.274 [2024-04-24 15:16:42.265135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.274 [2024-04-24 15:16:42.265164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:33.274 [2024-04-24 15:16:42.265182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.274 [2024-04-24 15:16:42.265192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:33.274 passed 00:12:33.274 Test: blockdev nvme passthru rw ...passed 00:12:33.274 Test: blockdev nvme passthru vendor specific ...[2024-04-24 15:16:42.266078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:33.274 [2024-04-24 15:16:42.266103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:33.274 [2024-04-24 15:16:42.266212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:33.274 [2024-04-24 15:16:42.266233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:33.274 [2024-04-24 15:16:42.266338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:33.274 [2024-04-24 15:16:42.266359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:33.274 [2024-04-24 15:16:42.266490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:33.274 [2024-04-24 15:16:42.266517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:33.274 passed 00:12:33.274 Test: blockdev nvme admin passthru ...passed 00:12:33.274 Test: blockdev copy ...passed 00:12:33.274 00:12:33.274 Run Summary: Type Total Ran Passed Failed Inactive 00:12:33.274 suites 1 1 n/a 0 0 00:12:33.274 tests 23 23 23 0 0 00:12:33.274 asserts 
152 152 152 0 n/a 00:12:33.274 00:12:33.274 Elapsed time = 0.146 seconds 00:12:33.274 15:16:42 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.274 15:16:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:33.274 15:16:42 -- common/autotest_common.sh@10 -- # set +x 00:12:33.533 15:16:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:33.533 15:16:42 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:33.533 15:16:42 -- target/bdevio.sh@30 -- # nvmftestfini 00:12:33.533 15:16:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:33.533 15:16:42 -- nvmf/common.sh@117 -- # sync 00:12:33.533 15:16:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:33.533 15:16:42 -- nvmf/common.sh@120 -- # set +e 00:12:33.533 15:16:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:33.533 15:16:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:33.533 rmmod nvme_tcp 00:12:33.533 rmmod nvme_fabrics 00:12:33.533 rmmod nvme_keyring 00:12:33.533 15:16:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:33.533 15:16:42 -- nvmf/common.sh@124 -- # set -e 00:12:33.533 15:16:42 -- nvmf/common.sh@125 -- # return 0 00:12:33.533 15:16:42 -- nvmf/common.sh@478 -- # '[' -n 69486 ']' 00:12:33.533 15:16:42 -- nvmf/common.sh@479 -- # killprocess 69486 00:12:33.533 15:16:42 -- common/autotest_common.sh@936 -- # '[' -z 69486 ']' 00:12:33.533 15:16:42 -- common/autotest_common.sh@940 -- # kill -0 69486 00:12:33.533 15:16:42 -- common/autotest_common.sh@941 -- # uname 00:12:33.533 15:16:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:33.533 15:16:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69486 00:12:33.533 killing process with pid 69486 00:12:33.533 15:16:42 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:12:33.533 15:16:42 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:12:33.533 15:16:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69486' 00:12:33.533 15:16:42 -- common/autotest_common.sh@955 -- # kill 69486 00:12:33.533 15:16:42 -- common/autotest_common.sh@960 -- # wait 69486 00:12:33.792 15:16:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:33.792 15:16:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:33.792 15:16:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:33.792 15:16:42 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:33.792 15:16:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:33.792 15:16:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.792 15:16:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.792 15:16:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.792 15:16:42 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:33.792 ************************************ 00:12:33.792 END TEST nvmf_bdevio 00:12:33.792 ************************************ 00:12:33.792 00:12:33.792 real 0m2.926s 00:12:33.792 user 0m9.898s 00:12:33.792 sys 0m0.743s 00:12:33.792 15:16:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:33.792 15:16:42 -- common/autotest_common.sh@10 -- # set +x 00:12:33.792 15:16:43 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:12:33.792 15:16:43 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:33.792 15:16:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:12:33.792 
15:16:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:33.792 15:16:43 -- common/autotest_common.sh@10 -- # set +x 00:12:34.052 ************************************ 00:12:34.052 START TEST nvmf_bdevio_no_huge 00:12:34.052 ************************************ 00:12:34.052 15:16:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:34.052 * Looking for test storage... 00:12:34.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:34.052 15:16:43 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:34.052 15:16:43 -- nvmf/common.sh@7 -- # uname -s 00:12:34.052 15:16:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.052 15:16:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.052 15:16:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.052 15:16:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.052 15:16:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.052 15:16:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.052 15:16:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.052 15:16:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.052 15:16:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.052 15:16:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.052 15:16:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:12:34.052 15:16:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:12:34.052 15:16:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.052 15:16:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.052 15:16:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:34.052 15:16:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.052 15:16:43 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:34.052 15:16:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.052 15:16:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.052 15:16:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.052 15:16:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.052 15:16:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.052 15:16:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.052 15:16:43 -- paths/export.sh@5 -- # export PATH 00:12:34.052 15:16:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.052 15:16:43 -- nvmf/common.sh@47 -- # : 0 00:12:34.052 15:16:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:34.052 15:16:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:34.052 15:16:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.052 15:16:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.052 15:16:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.052 15:16:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:34.052 15:16:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:34.052 15:16:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:34.052 15:16:43 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:34.052 15:16:43 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:34.052 15:16:43 -- target/bdevio.sh@14 -- # nvmftestinit 00:12:34.052 15:16:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:34.052 15:16:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.052 15:16:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:34.052 15:16:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:34.052 15:16:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:34.053 15:16:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.053 15:16:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.053 15:16:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.053 15:16:43 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:34.053 15:16:43 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:34.053 15:16:43 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:34.053 15:16:43 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:34.053 15:16:43 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:34.053 15:16:43 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:34.053 15:16:43 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.053 15:16:43 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.053 15:16:43 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:34.053 15:16:43 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:34.053 15:16:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:34.053 15:16:43 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:34.053 15:16:43 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:34.053 15:16:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.053 15:16:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:34.053 15:16:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:34.053 15:16:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:34.053 15:16:43 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:34.053 15:16:43 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:34.053 15:16:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:34.053 Cannot find device "nvmf_tgt_br" 00:12:34.053 15:16:43 -- nvmf/common.sh@155 -- # true 00:12:34.053 15:16:43 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:34.053 Cannot find device "nvmf_tgt_br2" 00:12:34.053 15:16:43 -- nvmf/common.sh@156 -- # true 00:12:34.053 15:16:43 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:34.053 15:16:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:34.053 Cannot find device "nvmf_tgt_br" 00:12:34.053 15:16:43 -- nvmf/common.sh@158 -- # true 00:12:34.053 15:16:43 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:34.053 Cannot find device "nvmf_tgt_br2" 00:12:34.053 15:16:43 -- nvmf/common.sh@159 -- # true 00:12:34.053 15:16:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:34.313 15:16:43 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:34.313 15:16:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:34.313 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:34.313 15:16:43 -- nvmf/common.sh@162 -- # true 00:12:34.313 15:16:43 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:34.313 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:34.313 15:16:43 -- nvmf/common.sh@163 -- # true 00:12:34.313 15:16:43 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:34.313 15:16:43 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:34.313 15:16:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:34.313 15:16:43 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:34.313 15:16:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:34.313 15:16:43 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:34.313 15:16:43 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:34.313 15:16:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:34.313 15:16:43 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:34.313 15:16:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:34.313 15:16:43 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:34.313 15:16:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:34.313 15:16:43 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:34.314 15:16:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:34.314 15:16:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:34.314 15:16:43 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:34.314 15:16:43 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:34.314 15:16:43 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:34.314 15:16:43 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:34.314 15:16:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:34.314 15:16:43 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:34.314 15:16:43 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:34.314 15:16:43 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:34.314 15:16:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:34.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:12:34.314 00:12:34.314 --- 10.0.0.2 ping statistics --- 00:12:34.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.314 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:12:34.314 15:16:43 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:34.314 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:34.314 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:12:34.314 00:12:34.314 --- 10.0.0.3 ping statistics --- 00:12:34.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.314 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:34.314 15:16:43 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:34.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:34.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:12:34.314 00:12:34.314 --- 10.0.0.1 ping statistics --- 00:12:34.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.314 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:12:34.314 15:16:43 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.314 15:16:43 -- nvmf/common.sh@422 -- # return 0 00:12:34.314 15:16:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:34.314 15:16:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.314 15:16:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:34.314 15:16:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:34.314 15:16:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.314 15:16:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:34.314 15:16:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:34.314 15:16:43 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:34.314 15:16:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:34.314 15:16:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:34.314 15:16:43 -- common/autotest_common.sh@10 -- # set +x 00:12:34.314 15:16:43 -- nvmf/common.sh@470 -- # nvmfpid=69702 00:12:34.314 15:16:43 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:34.314 15:16:43 -- nvmf/common.sh@471 -- # waitforlisten 69702 00:12:34.314 15:16:43 -- common/autotest_common.sh@817 -- # '[' -z 69702 ']' 00:12:34.314 15:16:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.314 15:16:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:34.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:34.314 15:16:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.314 15:16:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:34.314 15:16:43 -- common/autotest_common.sh@10 -- # set +x 00:12:34.573 [2024-04-24 15:16:43.618762] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:12:34.573 [2024-04-24 15:16:43.618888] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:34.573 [2024-04-24 15:16:43.771819] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.832 [2024-04-24 15:16:43.922935] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.832 [2024-04-24 15:16:43.923479] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.832 [2024-04-24 15:16:43.924080] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.832 [2024-04-24 15:16:43.924624] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.832 [2024-04-24 15:16:43.925054] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.832 [2024-04-24 15:16:43.925490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:34.832 [2024-04-24 15:16:43.925603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:34.832 [2024-04-24 15:16:43.925713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:34.832 [2024-04-24 15:16:43.925715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.439 15:16:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:35.439 15:16:44 -- common/autotest_common.sh@850 -- # return 0 00:12:35.439 15:16:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:35.439 15:16:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:35.439 15:16:44 -- common/autotest_common.sh@10 -- # set +x 00:12:35.439 15:16:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.439 15:16:44 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:35.439 15:16:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:35.439 15:16:44 -- common/autotest_common.sh@10 -- # set +x 00:12:35.439 [2024-04-24 15:16:44.549357] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.439 15:16:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:35.439 15:16:44 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:35.439 15:16:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:35.439 15:16:44 -- common/autotest_common.sh@10 -- # set +x 00:12:35.439 Malloc0 00:12:35.439 15:16:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:35.439 15:16:44 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:35.439 15:16:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:35.439 15:16:44 -- common/autotest_common.sh@10 -- # set +x 00:12:35.439 15:16:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:35.439 15:16:44 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:35.439 15:16:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:35.439 15:16:44 -- common/autotest_common.sh@10 -- # set +x 00:12:35.439 15:16:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:35.439 15:16:44 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.439 15:16:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:35.439 15:16:44 -- common/autotest_common.sh@10 -- # set +x 00:12:35.439 [2024-04-24 15:16:44.589589] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.439 15:16:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:35.439 15:16:44 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:35.439 15:16:44 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:35.439 15:16:44 -- nvmf/common.sh@521 -- # config=() 00:12:35.439 15:16:44 -- nvmf/common.sh@521 -- # local subsystem config 00:12:35.439 15:16:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:35.439 15:16:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:35.439 { 00:12:35.439 "params": { 00:12:35.439 "name": "Nvme$subsystem", 00:12:35.439 "trtype": "$TEST_TRANSPORT", 00:12:35.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:35.439 "adrfam": "ipv4", 00:12:35.439 "trsvcid": "$NVMF_PORT", 00:12:35.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:35.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:35.439 "hdgst": ${hdgst:-false}, 00:12:35.439 "ddgst": ${ddgst:-false} 00:12:35.439 }, 00:12:35.439 "method": "bdev_nvme_attach_controller" 00:12:35.439 } 00:12:35.439 EOF 00:12:35.439 )") 00:12:35.439 15:16:44 -- nvmf/common.sh@543 -- # cat 00:12:35.439 15:16:44 -- nvmf/common.sh@545 -- # jq . 00:12:35.439 15:16:44 -- nvmf/common.sh@546 -- # IFS=, 00:12:35.439 15:16:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:35.439 "params": { 00:12:35.439 "name": "Nvme1", 00:12:35.439 "trtype": "tcp", 00:12:35.439 "traddr": "10.0.0.2", 00:12:35.439 "adrfam": "ipv4", 00:12:35.439 "trsvcid": "4420", 00:12:35.439 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:35.439 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:35.439 "hdgst": false, 00:12:35.439 "ddgst": false 00:12:35.439 }, 00:12:35.439 "method": "bdev_nvme_attach_controller" 00:12:35.439 }' 00:12:35.439 [2024-04-24 15:16:44.639025] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:12:35.439 [2024-04-24 15:16:44.639107] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid69738 ] 00:12:35.697 [2024-04-24 15:16:44.780078] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:35.955 [2024-04-24 15:16:44.950026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.955 [2024-04-24 15:16:44.950134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.955 [2024-04-24 15:16:44.950140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.955 I/O targets: 00:12:35.955 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:35.955 00:12:35.955 00:12:35.955 CUnit - A unit testing framework for C - Version 2.1-3 00:12:35.955 http://cunit.sourceforge.net/ 00:12:35.955 00:12:35.955 00:12:35.955 Suite: bdevio tests on: Nvme1n1 00:12:35.955 Test: blockdev write read block ...passed 00:12:35.955 Test: blockdev write zeroes read block ...passed 00:12:35.955 Test: blockdev write zeroes read no split ...passed 00:12:35.955 Test: blockdev write zeroes read split ...passed 00:12:35.955 Test: blockdev write zeroes read split partial ...passed 00:12:35.955 Test: blockdev reset ...[2024-04-24 15:16:45.175410] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:35.955 [2024-04-24 15:16:45.175865] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30450 (9): Bad file descriptor 00:12:35.955 [2024-04-24 15:16:45.191920] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:35.955 passed 00:12:35.955 Test: blockdev write read 8 blocks ...passed 00:12:35.955 Test: blockdev write read size > 128k ...passed 00:12:35.956 Test: blockdev write read invalid size ...passed 00:12:35.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:35.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:35.956 Test: blockdev write read max offset ...passed 00:12:35.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:35.956 Test: blockdev writev readv 8 blocks ...passed 00:12:35.956 Test: blockdev writev readv 30 x 1block ...passed 00:12:35.956 Test: blockdev writev readv block ...passed 00:12:35.956 Test: blockdev writev readv size > 128k ...passed 00:12:36.214 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.214 Test: blockdev comparev and writev ...[2024-04-24 15:16:45.201426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.214 [2024-04-24 15:16:45.201616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:36.214 [2024-04-24 15:16:45.201646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.214 [2024-04-24 15:16:45.201658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:36.214 [2024-04-24 15:16:45.201966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.214 [2024-04-24 15:16:45.201985] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:36.214 [2024-04-24 15:16:45.202004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.214 [2024-04-24 15:16:45.202022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:36.214 [2024-04-24 15:16:45.202309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.214 [2024-04-24 15:16:45.202327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:36.214 [2024-04-24 15:16:45.202344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.214 [2024-04-24 15:16:45.202354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:36.214 [2024-04-24 15:16:45.202825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.214 [2024-04-24 15:16:45.203023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:36.214 [2024-04-24 15:16:45.203241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:36.214 [2024-04-24 15:16:45.203377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:36.214 passed 00:12:36.214 Test: blockdev nvme passthru rw ...passed 00:12:36.214 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.214 Test: blockdev nvme admin passthru ...[2024-04-24 15:16:45.204532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:36.214 [2024-04-24 15:16:45.204570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:36.214 [2024-04-24 15:16:45.204709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:36.215 [2024-04-24 15:16:45.204728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:36.215 [2024-04-24 15:16:45.204834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:36.215 [2024-04-24 15:16:45.204850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:36.215 [2024-04-24 15:16:45.204957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:36.215 [2024-04-24 15:16:45.204978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:36.215 passed 00:12:36.215 Test: blockdev copy ...passed 00:12:36.215 00:12:36.215 Run Summary: Type Total Ran Passed Failed Inactive 00:12:36.215 suites 1 1 n/a 0 0 00:12:36.215 tests 23 23 23 0 0 00:12:36.215 asserts 152 152 152 0 
n/a 00:12:36.215 00:12:36.215 Elapsed time = 0.168 seconds 00:12:36.474 15:16:45 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.474 15:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:36.474 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:12:36.474 15:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:36.474 15:16:45 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:36.474 15:16:45 -- target/bdevio.sh@30 -- # nvmftestfini 00:12:36.474 15:16:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:36.474 15:16:45 -- nvmf/common.sh@117 -- # sync 00:12:36.474 15:16:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:36.474 15:16:45 -- nvmf/common.sh@120 -- # set +e 00:12:36.474 15:16:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:36.474 15:16:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:36.474 rmmod nvme_tcp 00:12:36.474 rmmod nvme_fabrics 00:12:36.474 rmmod nvme_keyring 00:12:36.474 15:16:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:36.474 15:16:45 -- nvmf/common.sh@124 -- # set -e 00:12:36.474 15:16:45 -- nvmf/common.sh@125 -- # return 0 00:12:36.474 15:16:45 -- nvmf/common.sh@478 -- # '[' -n 69702 ']' 00:12:36.474 15:16:45 -- nvmf/common.sh@479 -- # killprocess 69702 00:12:36.474 15:16:45 -- common/autotest_common.sh@936 -- # '[' -z 69702 ']' 00:12:36.474 15:16:45 -- common/autotest_common.sh@940 -- # kill -0 69702 00:12:36.474 15:16:45 -- common/autotest_common.sh@941 -- # uname 00:12:36.474 15:16:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:36.474 15:16:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69702 00:12:36.731 killing process with pid 69702 00:12:36.731 15:16:45 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:12:36.731 15:16:45 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:12:36.731 15:16:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69702' 00:12:36.731 15:16:45 -- common/autotest_common.sh@955 -- # kill 69702 00:12:36.731 15:16:45 -- common/autotest_common.sh@960 -- # wait 69702 00:12:36.989 15:16:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:36.989 15:16:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:36.989 15:16:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:36.989 15:16:46 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:36.989 15:16:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:36.989 15:16:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.989 15:16:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.989 15:16:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.989 15:16:46 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:36.989 ************************************ 00:12:36.989 END TEST nvmf_bdevio_no_huge 00:12:36.989 ************************************ 00:12:36.989 00:12:36.989 real 0m3.143s 00:12:36.989 user 0m10.078s 00:12:36.989 sys 0m1.268s 00:12:36.989 15:16:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:36.989 15:16:46 -- common/autotest_common.sh@10 -- # set +x 00:12:37.247 15:16:46 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:37.247 15:16:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:37.247 15:16:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:37.247 15:16:46 -- 
common/autotest_common.sh@10 -- # set +x 00:12:37.247 ************************************ 00:12:37.247 START TEST nvmf_tls 00:12:37.247 ************************************ 00:12:37.248 15:16:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:37.248 * Looking for test storage... 00:12:37.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:37.248 15:16:46 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:37.248 15:16:46 -- nvmf/common.sh@7 -- # uname -s 00:12:37.248 15:16:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.248 15:16:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.248 15:16:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.248 15:16:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.248 15:16:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.248 15:16:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.248 15:16:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.248 15:16:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.248 15:16:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.248 15:16:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.248 15:16:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:12:37.248 15:16:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:12:37.248 15:16:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.248 15:16:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.248 15:16:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:37.248 15:16:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.248 15:16:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:37.248 15:16:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.248 15:16:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.248 15:16:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.248 15:16:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.248 15:16:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.248 15:16:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.248 15:16:46 -- paths/export.sh@5 -- # export PATH 00:12:37.248 15:16:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.248 15:16:46 -- nvmf/common.sh@47 -- # : 0 00:12:37.248 15:16:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:37.248 15:16:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:37.248 15:16:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.248 15:16:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.248 15:16:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.248 15:16:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:37.248 15:16:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:37.248 15:16:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:37.248 15:16:46 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:37.248 15:16:46 -- target/tls.sh@62 -- # nvmftestinit 00:12:37.248 15:16:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:37.248 15:16:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.248 15:16:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:37.248 15:16:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:37.248 15:16:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:37.248 15:16:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.248 15:16:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.248 15:16:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.248 15:16:46 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:37.248 15:16:46 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:37.248 15:16:46 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:37.248 15:16:46 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:37.248 15:16:46 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:37.248 15:16:46 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:37.248 15:16:46 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.248 15:16:46 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.248 15:16:46 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:37.248 15:16:46 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:37.248 15:16:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:37.248 15:16:46 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:37.248 15:16:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:37.248 
15:16:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.248 15:16:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:37.248 15:16:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:37.248 15:16:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:37.248 15:16:46 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:37.248 15:16:46 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:37.248 15:16:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:37.248 Cannot find device "nvmf_tgt_br" 00:12:37.248 15:16:46 -- nvmf/common.sh@155 -- # true 00:12:37.248 15:16:46 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:37.248 Cannot find device "nvmf_tgt_br2" 00:12:37.248 15:16:46 -- nvmf/common.sh@156 -- # true 00:12:37.248 15:16:46 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:37.248 15:16:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:37.248 Cannot find device "nvmf_tgt_br" 00:12:37.248 15:16:46 -- nvmf/common.sh@158 -- # true 00:12:37.248 15:16:46 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:37.248 Cannot find device "nvmf_tgt_br2" 00:12:37.248 15:16:46 -- nvmf/common.sh@159 -- # true 00:12:37.248 15:16:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:37.506 15:16:46 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:37.506 15:16:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:37.506 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:37.506 15:16:46 -- nvmf/common.sh@162 -- # true 00:12:37.506 15:16:46 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:37.506 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:37.506 15:16:46 -- nvmf/common.sh@163 -- # true 00:12:37.506 15:16:46 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:37.506 15:16:46 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:37.507 15:16:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:37.507 15:16:46 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:37.507 15:16:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:37.507 15:16:46 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:37.507 15:16:46 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:37.507 15:16:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:37.507 15:16:46 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:37.507 15:16:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:37.507 15:16:46 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:37.507 15:16:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:37.507 15:16:46 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:37.507 15:16:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:37.507 15:16:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:37.507 15:16:46 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:37.507 15:16:46 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:37.507 15:16:46 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:37.507 15:16:46 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:37.507 15:16:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:37.507 15:16:46 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:37.507 15:16:46 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:37.507 15:16:46 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:37.507 15:16:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:37.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:12:37.507 00:12:37.507 --- 10.0.0.2 ping statistics --- 00:12:37.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.507 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:12:37.507 15:16:46 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:37.507 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:37.507 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:12:37.507 00:12:37.507 --- 10.0.0.3 ping statistics --- 00:12:37.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.507 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:37.507 15:16:46 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:37.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:37.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:37.507 00:12:37.507 --- 10.0.0.1 ping statistics --- 00:12:37.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.507 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:37.507 15:16:46 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.507 15:16:46 -- nvmf/common.sh@422 -- # return 0 00:12:37.507 15:16:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:37.507 15:16:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.507 15:16:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:37.507 15:16:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:37.507 15:16:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.507 15:16:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:37.507 15:16:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:37.507 15:16:46 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:37.507 15:16:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:37.507 15:16:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:37.507 15:16:46 -- common/autotest_common.sh@10 -- # set +x 00:12:37.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:37.765 15:16:46 -- nvmf/common.sh@470 -- # nvmfpid=69924 00:12:37.765 15:16:46 -- nvmf/common.sh@471 -- # waitforlisten 69924 00:12:37.765 15:16:46 -- common/autotest_common.sh@817 -- # '[' -z 69924 ']' 00:12:37.765 15:16:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.765 15:16:46 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:37.765 15:16:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:37.765 15:16:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.765 15:16:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:37.765 15:16:46 -- common/autotest_common.sh@10 -- # set +x 00:12:37.765 [2024-04-24 15:16:46.798576] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:12:37.765 [2024-04-24 15:16:46.798672] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.765 [2024-04-24 15:16:46.935228] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.023 [2024-04-24 15:16:47.055383] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.023 [2024-04-24 15:16:47.055453] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.023 [2024-04-24 15:16:47.055466] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.023 [2024-04-24 15:16:47.055475] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.023 [2024-04-24 15:16:47.055483] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:38.023 [2024-04-24 15:16:47.055519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.588 15:16:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:38.588 15:16:47 -- common/autotest_common.sh@850 -- # return 0 00:12:38.588 15:16:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:38.588 15:16:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:38.588 15:16:47 -- common/autotest_common.sh@10 -- # set +x 00:12:38.588 15:16:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.588 15:16:47 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:12:38.588 15:16:47 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:38.846 true 00:12:38.846 15:16:48 -- target/tls.sh@73 -- # jq -r .tls_version 00:12:38.846 15:16:48 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:39.411 15:16:48 -- target/tls.sh@73 -- # version=0 00:12:39.411 15:16:48 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:12:39.411 15:16:48 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:39.669 15:16:48 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:39.669 15:16:48 -- target/tls.sh@81 -- # jq -r .tls_version 00:12:39.669 15:16:48 -- target/tls.sh@81 -- # version=13 00:12:39.669 15:16:48 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:12:39.669 15:16:48 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:39.927 15:16:49 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:39.927 15:16:49 -- target/tls.sh@89 -- # jq -r .tls_version 00:12:40.184 15:16:49 -- target/tls.sh@89 -- # version=7 00:12:40.184 15:16:49 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:12:40.184 15:16:49 -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:40.184 15:16:49 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:12:40.750 15:16:49 -- target/tls.sh@96 -- # ktls=false 00:12:40.750 15:16:49 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:12:40.750 15:16:49 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:41.007 15:16:49 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:12:41.007 15:16:49 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:41.007 15:16:50 -- target/tls.sh@104 -- # ktls=true 00:12:41.007 15:16:50 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:12:41.007 15:16:50 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:41.265 15:16:50 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:41.265 15:16:50 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:12:41.523 15:16:50 -- target/tls.sh@112 -- # ktls=false 00:12:41.523 15:16:50 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:12:41.523 15:16:50 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:41.523 15:16:50 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:41.523 15:16:50 -- nvmf/common.sh@691 -- # local prefix key digest 00:12:41.523 15:16:50 -- nvmf/common.sh@693 -- # 
prefix=NVMeTLSkey-1 00:12:41.523 15:16:50 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:12:41.523 15:16:50 -- nvmf/common.sh@693 -- # digest=1 00:12:41.523 15:16:50 -- nvmf/common.sh@694 -- # python - 00:12:41.781 15:16:50 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:41.781 15:16:50 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:41.781 15:16:50 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:41.781 15:16:50 -- nvmf/common.sh@691 -- # local prefix key digest 00:12:41.781 15:16:50 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:12:41.781 15:16:50 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:12:41.781 15:16:50 -- nvmf/common.sh@693 -- # digest=1 00:12:41.781 15:16:50 -- nvmf/common.sh@694 -- # python - 00:12:41.781 15:16:50 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:41.781 15:16:50 -- target/tls.sh@121 -- # mktemp 00:12:41.781 15:16:50 -- target/tls.sh@121 -- # key_path=/tmp/tmp.ZPnMUHTLuz 00:12:41.781 15:16:50 -- target/tls.sh@122 -- # mktemp 00:12:41.781 15:16:50 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.C9pO5u7Q9t 00:12:41.781 15:16:50 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:41.781 15:16:50 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:41.781 15:16:50 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.ZPnMUHTLuz 00:12:41.781 15:16:50 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.C9pO5u7Q9t 00:12:41.781 15:16:50 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:42.039 15:16:51 -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:42.297 15:16:51 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.ZPnMUHTLuz 00:12:42.297 15:16:51 -- target/tls.sh@49 -- # local key=/tmp/tmp.ZPnMUHTLuz 00:12:42.297 15:16:51 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:42.554 [2024-04-24 15:16:51.595504] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.554 15:16:51 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:42.812 15:16:51 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:42.812 [2024-04-24 15:16:52.031570] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:42.812 [2024-04-24 15:16:52.031818] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.812 15:16:52 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:43.070 malloc0 00:12:43.070 15:16:52 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:43.343 15:16:52 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZPnMUHTLuz 00:12:43.629 [2024-04-24 15:16:52.762902] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature 
PSK path to be removed in v24.09 00:12:43.630 15:16:52 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ZPnMUHTLuz 00:12:55.828 Initializing NVMe Controllers 00:12:55.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:55.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:55.828 Initialization complete. Launching workers. 00:12:55.828 ======================================================== 00:12:55.828 Latency(us) 00:12:55.828 Device Information : IOPS MiB/s Average min max 00:12:55.828 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9083.00 35.48 7047.85 1133.63 14380.46 00:12:55.828 ======================================================== 00:12:55.829 Total : 9083.00 35.48 7047.85 1133.63 14380.46 00:12:55.829 00:12:55.829 15:17:02 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZPnMUHTLuz 00:12:55.829 15:17:02 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:55.829 15:17:02 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:55.829 15:17:02 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:55.829 15:17:02 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZPnMUHTLuz' 00:12:55.829 15:17:02 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:55.829 15:17:02 -- target/tls.sh@28 -- # bdevperf_pid=70161 00:12:55.829 15:17:02 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:55.829 15:17:02 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:55.829 15:17:02 -- target/tls.sh@31 -- # waitforlisten 70161 /var/tmp/bdevperf.sock 00:12:55.829 15:17:02 -- common/autotest_common.sh@817 -- # '[' -z 70161 ']' 00:12:55.829 15:17:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:55.829 15:17:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:55.829 15:17:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:55.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:55.829 15:17:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:55.829 15:17:02 -- common/autotest_common.sh@10 -- # set +x 00:12:55.829 [2024-04-24 15:17:03.027916] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
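The format_interchange_psk/format_key calls traced a little earlier (nvmf/common.sh@691-694) are what produced the NVMeTLSkey-1:01:...: string written to /tmp/tmp.ZPnMUHTLuz and handed to nvmf_subsystem_add_host. Below is a hedged reconstruction of what that helper appears to compute, inferred from the trace and the resulting key rather than copied from the script: the configured key string plus its CRC32 (little endian) is base64-encoded and wrapped as <prefix>:<two-digit hash id>:<base64>:.

# Reconstruction of the interchange-format helper; simplified, not verbatim.
format_key() {
    prefix=$1 key=$2 digest=$3 python3 <<'PY'
import base64, os, zlib
key = os.environ["key"].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte little-endian CRC32 of the key
print("{}:{:02}:{}:".format(os.environ["prefix"],
                            int(os.environ["digest"]),
                            base64.b64encode(key + crc).decode()))
PY
}

format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
# -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: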
00:12:55.829 [2024-04-24 15:17:03.028327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70161 ] 00:12:55.829 [2024-04-24 15:17:03.168991] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.829 [2024-04-24 15:17:03.297320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.829 15:17:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:55.829 15:17:04 -- common/autotest_common.sh@850 -- # return 0 00:12:55.829 15:17:04 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZPnMUHTLuz 00:12:55.829 [2024-04-24 15:17:04.344806] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:55.829 [2024-04-24 15:17:04.344961] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:55.829 TLSTESTn1 00:12:55.829 15:17:04 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:55.829 Running I/O for 10 seconds... 00:13:05.804 00:13:05.804 Latency(us) 00:13:05.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.804 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:05.804 Verification LBA range: start 0x0 length 0x2000 00:13:05.804 TLSTESTn1 : 10.03 3843.07 15.01 0.00 0.00 33240.77 10962.39 30980.65 00:13:05.804 =================================================================================================================== 00:13:05.804 Total : 3843.07 15.01 0.00 0.00 33240.77 10962.39 30980.65 00:13:05.804 0 00:13:05.804 15:17:14 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:05.804 15:17:14 -- target/tls.sh@45 -- # killprocess 70161 00:13:05.804 15:17:14 -- common/autotest_common.sh@936 -- # '[' -z 70161 ']' 00:13:05.804 15:17:14 -- common/autotest_common.sh@940 -- # kill -0 70161 00:13:05.804 15:17:14 -- common/autotest_common.sh@941 -- # uname 00:13:05.804 15:17:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:05.804 15:17:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70161 00:13:05.804 killing process with pid 70161 00:13:05.804 Received shutdown signal, test time was about 10.000000 seconds 00:13:05.804 00:13:05.804 Latency(us) 00:13:05.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.804 =================================================================================================================== 00:13:05.804 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:05.804 15:17:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:05.804 15:17:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:05.804 15:17:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70161' 00:13:05.804 15:17:14 -- common/autotest_common.sh@955 -- # kill 70161 00:13:05.804 [2024-04-24 15:17:14.618870] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:05.804 
15:17:14 -- common/autotest_common.sh@960 -- # wait 70161 00:13:05.804 15:17:14 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C9pO5u7Q9t 00:13:05.804 15:17:14 -- common/autotest_common.sh@638 -- # local es=0 00:13:05.804 15:17:14 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C9pO5u7Q9t 00:13:05.804 15:17:14 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:13:05.804 15:17:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:05.804 15:17:14 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:13:05.804 15:17:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:05.804 15:17:14 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C9pO5u7Q9t 00:13:05.804 15:17:14 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:05.804 15:17:14 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:05.804 15:17:14 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:05.804 15:17:14 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.C9pO5u7Q9t' 00:13:05.804 15:17:14 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:05.804 15:17:14 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:05.804 15:17:14 -- target/tls.sh@28 -- # bdevperf_pid=70294 00:13:05.804 15:17:14 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:05.804 15:17:14 -- target/tls.sh@31 -- # waitforlisten 70294 /var/tmp/bdevperf.sock 00:13:05.804 15:17:14 -- common/autotest_common.sh@817 -- # '[' -z 70294 ']' 00:13:05.804 15:17:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:05.804 15:17:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:05.804 15:17:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:05.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:05.804 15:17:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:05.804 15:17:14 -- common/autotest_common.sh@10 -- # set +x 00:13:05.804 [2024-04-24 15:17:14.923028] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
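target/tls.sh@146 above wraps run_bdevperf in the harness's NOT helper: the attach is made with /tmp/tmp.C9pO5u7Q9t, a key the target was never told about, so the command is expected to fail and NOT inverts its exit status. A minimal stand-in for that pattern, assuming nothing about the real helper beyond what the trace shows (the valid_exec_arg and es bookkeeping visible above is elided).

# Simplified NOT wrapper: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # unexpectedly succeeded -> the negative test fails
    fi
    return 0        # failed as required -> the negative test passes
}

# usage mirroring tls.sh@146: this PSK was never registered with
# nvmf_subsystem_add_host, so bdev_nvme_attach_controller must be rejected
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C9pO5u7Q9t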
00:13:05.804 [2024-04-24 15:17:14.923332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70294 ] 00:13:06.081 [2024-04-24 15:17:15.056211] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.081 [2024-04-24 15:17:15.171044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.692 15:17:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:06.692 15:17:15 -- common/autotest_common.sh@850 -- # return 0 00:13:06.692 15:17:15 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.C9pO5u7Q9t 00:13:06.951 [2024-04-24 15:17:16.167820] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:06.951 [2024-04-24 15:17:16.168197] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:06.951 [2024-04-24 15:17:16.173659] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:06.951 [2024-04-24 15:17:16.174205] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496a80 (107): Transport endpoint is not connected 00:13:06.951 [2024-04-24 15:17:16.175195] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496a80 (9): Bad file descriptor 00:13:06.951 [2024-04-24 15:17:16.176192] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:06.951 [2024-04-24 15:17:16.176376] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:06.951 [2024-04-24 15:17:16.176548] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:06.951 request: 00:13:06.951 { 00:13:06.951 "name": "TLSTEST", 00:13:06.951 "trtype": "tcp", 00:13:06.951 "traddr": "10.0.0.2", 00:13:06.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:06.951 "adrfam": "ipv4", 00:13:06.951 "trsvcid": "4420", 00:13:06.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:06.951 "psk": "/tmp/tmp.C9pO5u7Q9t", 00:13:06.951 "method": "bdev_nvme_attach_controller", 00:13:06.951 "req_id": 1 00:13:06.951 } 00:13:06.951 Got JSON-RPC error response 00:13:06.951 response: 00:13:06.951 { 00:13:06.951 "code": -32602, 00:13:06.951 "message": "Invalid parameters" 00:13:06.951 } 00:13:07.211 15:17:16 -- target/tls.sh@36 -- # killprocess 70294 00:13:07.211 15:17:16 -- common/autotest_common.sh@936 -- # '[' -z 70294 ']' 00:13:07.211 15:17:16 -- common/autotest_common.sh@940 -- # kill -0 70294 00:13:07.211 15:17:16 -- common/autotest_common.sh@941 -- # uname 00:13:07.211 15:17:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:07.211 15:17:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70294 00:13:07.211 killing process with pid 70294 00:13:07.211 Received shutdown signal, test time was about 10.000000 seconds 00:13:07.211 00:13:07.211 Latency(us) 00:13:07.211 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.211 =================================================================================================================== 00:13:07.211 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:07.211 15:17:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:07.211 15:17:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:07.211 15:17:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70294' 00:13:07.211 15:17:16 -- common/autotest_common.sh@955 -- # kill 70294 00:13:07.211 [2024-04-24 15:17:16.227054] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:07.211 15:17:16 -- common/autotest_common.sh@960 -- # wait 70294 00:13:07.470 15:17:16 -- target/tls.sh@37 -- # return 1 00:13:07.470 15:17:16 -- common/autotest_common.sh@641 -- # es=1 00:13:07.470 15:17:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:07.470 15:17:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:07.470 15:17:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:07.470 15:17:16 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZPnMUHTLuz 00:13:07.470 15:17:16 -- common/autotest_common.sh@638 -- # local es=0 00:13:07.470 15:17:16 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZPnMUHTLuz 00:13:07.470 15:17:16 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:13:07.470 15:17:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:07.470 15:17:16 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:13:07.470 15:17:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:07.470 15:17:16 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZPnMUHTLuz 00:13:07.470 15:17:16 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:07.470 15:17:16 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:07.470 15:17:16 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:07.470 
15:17:16 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZPnMUHTLuz' 00:13:07.470 15:17:16 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:07.470 15:17:16 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:07.470 15:17:16 -- target/tls.sh@28 -- # bdevperf_pid=70324 00:13:07.470 15:17:16 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:07.470 15:17:16 -- target/tls.sh@31 -- # waitforlisten 70324 /var/tmp/bdevperf.sock 00:13:07.470 15:17:16 -- common/autotest_common.sh@817 -- # '[' -z 70324 ']' 00:13:07.470 15:17:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:07.470 15:17:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:07.470 15:17:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:07.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:07.470 15:17:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:07.470 15:17:16 -- common/autotest_common.sh@10 -- # set +x 00:13:07.470 [2024-04-24 15:17:16.529804] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:13:07.470 [2024-04-24 15:17:16.530200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70324 ] 00:13:07.470 [2024-04-24 15:17:16.664153] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.728 [2024-04-24 15:17:16.782732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.298 15:17:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:08.298 15:17:17 -- common/autotest_common.sh@850 -- # return 0 00:13:08.298 15:17:17 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.ZPnMUHTLuz 00:13:08.557 [2024-04-24 15:17:17.652405] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:08.557 [2024-04-24 15:17:17.652887] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:08.557 [2024-04-24 15:17:17.662599] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:08.557 [2024-04-24 15:17:17.662887] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:08.557 [2024-04-24 15:17:17.663153] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:08.557 [2024-04-24 15:17:17.663609] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c11a80 (107): Transport endpoint is not connected 00:13:08.557 [2024-04-24 15:17:17.664601] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c11a80 (9): Bad file descriptor 00:13:08.557 [2024-04-24 
15:17:17.665595] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:08.557 [2024-04-24 15:17:17.665621] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:08.557 [2024-04-24 15:17:17.665637] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:13:08.557 request: 00:13:08.557 { 00:13:08.557 "name": "TLSTEST", 00:13:08.557 "trtype": "tcp", 00:13:08.557 "traddr": "10.0.0.2", 00:13:08.557 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:08.557 "adrfam": "ipv4", 00:13:08.557 "trsvcid": "4420", 00:13:08.557 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.557 "psk": "/tmp/tmp.ZPnMUHTLuz", 00:13:08.557 "method": "bdev_nvme_attach_controller", 00:13:08.557 "req_id": 1 00:13:08.557 } 00:13:08.557 Got JSON-RPC error response 00:13:08.557 response: 00:13:08.557 { 00:13:08.557 "code": -32602, 00:13:08.557 "message": "Invalid parameters" 00:13:08.557 } 00:13:08.557 15:17:17 -- target/tls.sh@36 -- # killprocess 70324 00:13:08.557 15:17:17 -- common/autotest_common.sh@936 -- # '[' -z 70324 ']' 00:13:08.557 15:17:17 -- common/autotest_common.sh@940 -- # kill -0 70324 00:13:08.557 15:17:17 -- common/autotest_common.sh@941 -- # uname 00:13:08.557 15:17:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:08.557 15:17:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70324 00:13:08.558 killing process with pid 70324 00:13:08.558 Received shutdown signal, test time was about 10.000000 seconds 00:13:08.558 00:13:08.558 Latency(us) 00:13:08.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.558 =================================================================================================================== 00:13:08.558 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:08.558 15:17:17 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:08.558 15:17:17 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:08.558 15:17:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70324' 00:13:08.558 15:17:17 -- common/autotest_common.sh@955 -- # kill 70324 00:13:08.558 [2024-04-24 15:17:17.723484] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:08.558 15:17:17 -- common/autotest_common.sh@960 -- # wait 70324 00:13:08.816 15:17:17 -- target/tls.sh@37 -- # return 1 00:13:08.816 15:17:17 -- common/autotest_common.sh@641 -- # es=1 00:13:08.816 15:17:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:08.816 15:17:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:08.816 15:17:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:08.816 15:17:17 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZPnMUHTLuz 00:13:08.816 15:17:17 -- common/autotest_common.sh@638 -- # local es=0 00:13:08.817 15:17:17 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZPnMUHTLuz 00:13:08.817 15:17:17 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:13:08.817 15:17:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:08.817 15:17:17 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:13:08.817 15:17:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:08.817 
15:17:17 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZPnMUHTLuz 00:13:08.817 15:17:17 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:08.817 15:17:17 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:08.817 15:17:17 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:08.817 15:17:17 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZPnMUHTLuz' 00:13:08.817 15:17:17 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:08.817 15:17:17 -- target/tls.sh@28 -- # bdevperf_pid=70346 00:13:08.817 15:17:17 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:08.817 15:17:17 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:08.817 15:17:17 -- target/tls.sh@31 -- # waitforlisten 70346 /var/tmp/bdevperf.sock 00:13:08.817 15:17:17 -- common/autotest_common.sh@817 -- # '[' -z 70346 ']' 00:13:08.817 15:17:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:08.817 15:17:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:08.817 15:17:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:08.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:08.817 15:17:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:08.817 15:17:17 -- common/autotest_common.sh@10 -- # set +x 00:13:08.817 [2024-04-24 15:17:18.040199] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:13:08.817 [2024-04-24 15:17:18.040299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70346 ] 00:13:09.075 [2024-04-24 15:17:18.185964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.075 [2024-04-24 15:17:18.302630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.010 15:17:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:10.010 15:17:19 -- common/autotest_common.sh@850 -- # return 0 00:13:10.010 15:17:19 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZPnMUHTLuz 00:13:10.010 [2024-04-24 15:17:19.228963] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:10.010 [2024-04-24 15:17:19.229101] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:10.010 [2024-04-24 15:17:19.233959] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:10.010 [2024-04-24 15:17:19.234013] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:10.010 [2024-04-24 15:17:19.234066] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:13:10.010 [2024-04-24 15:17:19.234690] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd36a80 (107): Transport endpoint is not connected 00:13:10.010 [2024-04-24 15:17:19.235677] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd36a80 (9): Bad file descriptor 00:13:10.010 [2024-04-24 15:17:19.236673] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:10.010 [2024-04-24 15:17:19.236694] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:10.010 [2024-04-24 15:17:19.236707] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:13:10.010 request: 00:13:10.010 { 00:13:10.010 "name": "TLSTEST", 00:13:10.010 "trtype": "tcp", 00:13:10.010 "traddr": "10.0.0.2", 00:13:10.010 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:10.010 "adrfam": "ipv4", 00:13:10.010 "trsvcid": "4420", 00:13:10.010 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:10.010 "psk": "/tmp/tmp.ZPnMUHTLuz", 00:13:10.010 "method": "bdev_nvme_attach_controller", 00:13:10.010 "req_id": 1 00:13:10.010 } 00:13:10.010 Got JSON-RPC error response 00:13:10.010 response: 00:13:10.010 { 00:13:10.010 "code": -32602, 00:13:10.010 "message": "Invalid parameters" 00:13:10.010 } 00:13:10.268 15:17:19 -- target/tls.sh@36 -- # killprocess 70346 00:13:10.268 15:17:19 -- common/autotest_common.sh@936 -- # '[' -z 70346 ']' 00:13:10.268 15:17:19 -- common/autotest_common.sh@940 -- # kill -0 70346 00:13:10.268 15:17:19 -- common/autotest_common.sh@941 -- # uname 00:13:10.268 15:17:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:10.268 15:17:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70346 00:13:10.268 killing process with pid 70346 00:13:10.268 Received shutdown signal, test time was about 10.000000 seconds 00:13:10.268 00:13:10.268 Latency(us) 00:13:10.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.268 =================================================================================================================== 00:13:10.268 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:10.268 15:17:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:10.268 15:17:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:10.268 15:17:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70346' 00:13:10.268 15:17:19 -- common/autotest_common.sh@955 -- # kill 70346 00:13:10.268 [2024-04-24 15:17:19.283958] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:10.268 15:17:19 -- common/autotest_common.sh@960 -- # wait 70346 00:13:10.527 15:17:19 -- target/tls.sh@37 -- # return 1 00:13:10.527 15:17:19 -- common/autotest_common.sh@641 -- # es=1 00:13:10.527 15:17:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:10.527 15:17:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:10.527 15:17:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:10.527 15:17:19 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:10.527 15:17:19 -- common/autotest_common.sh@638 -- # local es=0 00:13:10.527 15:17:19 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:10.527 15:17:19 
-- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:13:10.527 15:17:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:10.527 15:17:19 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:13:10.527 15:17:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:10.527 15:17:19 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:10.527 15:17:19 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:10.527 15:17:19 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:10.527 15:17:19 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:10.527 15:17:19 -- target/tls.sh@23 -- # psk= 00:13:10.527 15:17:19 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:10.527 15:17:19 -- target/tls.sh@28 -- # bdevperf_pid=70379 00:13:10.527 15:17:19 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:10.527 15:17:19 -- target/tls.sh@31 -- # waitforlisten 70379 /var/tmp/bdevperf.sock 00:13:10.527 15:17:19 -- common/autotest_common.sh@817 -- # '[' -z 70379 ']' 00:13:10.527 15:17:19 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:10.527 15:17:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:10.527 15:17:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:10.527 15:17:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:10.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:10.527 15:17:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:10.527 15:17:19 -- common/autotest_common.sh@10 -- # set +x 00:13:10.527 [2024-04-24 15:17:19.599176] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
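The two rejected attaches above both fail at the same point: during the TLS handshake the target looks the PSK up by an identity string built from the host and subsystem NQNs, and only nqn.2016-06.io.spdk:host1 on cnode1 was registered, so swapping either NQN leaves nothing to find even though the key material itself is valid. The identity, exactly as it appears in the 'Could not find PSK for identity' errors, can be written as:

# Identity string as reported by the target's PSK lookup (taken from the errors above).
psk_identity() {
    echo "NVMe0R01 $1 $2"    # $1 = hostnqn, $2 = subnqn
}

psk_identity nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1  # registered -> found
psk_identity nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1  # host2 never added -> lookup fails
psk_identity nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2  # cnode2 was never set up -> lookup fails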
00:13:10.527 [2024-04-24 15:17:19.599596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70379 ] 00:13:10.527 [2024-04-24 15:17:19.738297] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.800 [2024-04-24 15:17:19.858166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.368 15:17:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:11.368 15:17:20 -- common/autotest_common.sh@850 -- # return 0 00:13:11.368 15:17:20 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:11.626 [2024-04-24 15:17:20.823107] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:11.626 [2024-04-24 15:17:20.824747] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab5dc0 (9): Bad file descriptor 00:13:11.626 [2024-04-24 15:17:20.825743] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:11.626 [2024-04-24 15:17:20.825765] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:11.626 [2024-04-24 15:17:20.825778] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:13:11.626 request: 00:13:11.626 { 00:13:11.626 "name": "TLSTEST", 00:13:11.626 "trtype": "tcp", 00:13:11.626 "traddr": "10.0.0.2", 00:13:11.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:11.626 "adrfam": "ipv4", 00:13:11.626 "trsvcid": "4420", 00:13:11.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:11.626 "method": "bdev_nvme_attach_controller", 00:13:11.626 "req_id": 1 00:13:11.626 } 00:13:11.626 Got JSON-RPC error response 00:13:11.626 response: 00:13:11.626 { 00:13:11.626 "code": -32602, 00:13:11.627 "message": "Invalid parameters" 00:13:11.627 } 00:13:11.627 15:17:20 -- target/tls.sh@36 -- # killprocess 70379 00:13:11.627 15:17:20 -- common/autotest_common.sh@936 -- # '[' -z 70379 ']' 00:13:11.627 15:17:20 -- common/autotest_common.sh@940 -- # kill -0 70379 00:13:11.627 15:17:20 -- common/autotest_common.sh@941 -- # uname 00:13:11.627 15:17:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:11.627 15:17:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70379 00:13:11.885 killing process with pid 70379 00:13:11.885 Received shutdown signal, test time was about 10.000000 seconds 00:13:11.885 00:13:11.885 Latency(us) 00:13:11.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.885 =================================================================================================================== 00:13:11.885 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:11.885 15:17:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:11.885 15:17:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:11.885 15:17:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70379' 00:13:11.885 15:17:20 -- common/autotest_common.sh@955 -- # kill 70379 00:13:11.885 15:17:20 -- common/autotest_common.sh@960 -- # wait 70379 00:13:12.144 
15:17:21 -- target/tls.sh@37 -- # return 1 00:13:12.144 15:17:21 -- common/autotest_common.sh@641 -- # es=1 00:13:12.144 15:17:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:12.144 15:17:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:12.144 15:17:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:12.144 15:17:21 -- target/tls.sh@158 -- # killprocess 69924 00:13:12.144 15:17:21 -- common/autotest_common.sh@936 -- # '[' -z 69924 ']' 00:13:12.144 15:17:21 -- common/autotest_common.sh@940 -- # kill -0 69924 00:13:12.144 15:17:21 -- common/autotest_common.sh@941 -- # uname 00:13:12.144 15:17:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:12.144 15:17:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69924 00:13:12.144 killing process with pid 69924 00:13:12.144 15:17:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:12.144 15:17:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:12.144 15:17:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69924' 00:13:12.144 15:17:21 -- common/autotest_common.sh@955 -- # kill 69924 00:13:12.144 [2024-04-24 15:17:21.156353] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:12.144 15:17:21 -- common/autotest_common.sh@960 -- # wait 69924 00:13:12.402 15:17:21 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:12.402 15:17:21 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:12.402 15:17:21 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:12.402 15:17:21 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:13:12.402 15:17:21 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:12.402 15:17:21 -- nvmf/common.sh@693 -- # digest=2 00:13:12.402 15:17:21 -- nvmf/common.sh@694 -- # python - 00:13:12.402 15:17:21 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:12.402 15:17:21 -- target/tls.sh@160 -- # mktemp 00:13:12.402 15:17:21 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Bs9Dou4SoA 00:13:12.402 15:17:21 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:12.402 15:17:21 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Bs9Dou4SoA 00:13:12.402 15:17:21 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:12.402 15:17:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:12.402 15:17:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:12.402 15:17:21 -- common/autotest_common.sh@10 -- # set +x 00:13:12.402 15:17:21 -- nvmf/common.sh@470 -- # nvmfpid=70421 00:13:12.402 15:17:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:12.402 15:17:21 -- nvmf/common.sh@471 -- # waitforlisten 70421 00:13:12.402 15:17:21 -- common/autotest_common.sh@817 -- # '[' -z 70421 ']' 00:13:12.402 15:17:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.402 15:17:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:12.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:12.402 15:17:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.402 15:17:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:12.402 15:17:21 -- common/autotest_common.sh@10 -- # set +x 00:13:12.402 [2024-04-24 15:17:21.557916] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:13:12.402 [2024-04-24 15:17:21.558012] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.665 [2024-04-24 15:17:21.695005] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.665 [2024-04-24 15:17:21.814761] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.665 [2024-04-24 15:17:21.814827] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.665 [2024-04-24 15:17:21.814840] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.665 [2024-04-24 15:17:21.814849] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.665 [2024-04-24 15:17:21.814858] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:12.665 [2024-04-24 15:17:21.814890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.608 15:17:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:13.608 15:17:22 -- common/autotest_common.sh@850 -- # return 0 00:13:13.608 15:17:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:13.608 15:17:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:13.608 15:17:22 -- common/autotest_common.sh@10 -- # set +x 00:13:13.608 15:17:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.608 15:17:22 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Bs9Dou4SoA 00:13:13.608 15:17:22 -- target/tls.sh@49 -- # local key=/tmp/tmp.Bs9Dou4SoA 00:13:13.608 15:17:22 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:13.608 [2024-04-24 15:17:22.823500] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.608 15:17:22 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:13.867 15:17:23 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:14.126 [2024-04-24 15:17:23.271672] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:14.126 [2024-04-24 15:17:23.271946] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.126 15:17:23 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:14.384 malloc0 00:13:14.384 15:17:23 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:14.644 15:17:23 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bs9Dou4SoA 00:13:14.903 
[2024-04-24 15:17:23.960157] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:14.903 15:17:23 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Bs9Dou4SoA 00:13:14.903 15:17:23 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:14.903 15:17:23 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:14.903 15:17:23 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:14.903 15:17:23 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Bs9Dou4SoA' 00:13:14.903 15:17:23 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:14.903 15:17:23 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:14.903 15:17:23 -- target/tls.sh@28 -- # bdevperf_pid=70471 00:13:14.903 15:17:23 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:14.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:14.903 15:17:23 -- target/tls.sh@31 -- # waitforlisten 70471 /var/tmp/bdevperf.sock 00:13:14.903 15:17:23 -- common/autotest_common.sh@817 -- # '[' -z 70471 ']' 00:13:14.903 15:17:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:14.903 15:17:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:14.903 15:17:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:14.903 15:17:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:14.903 15:17:23 -- common/autotest_common.sh@10 -- # set +x 00:13:14.903 [2024-04-24 15:17:24.019126] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:13:14.903 [2024-04-24 15:17:24.019230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70471 ] 00:13:15.162 [2024-04-24 15:17:24.156406] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.162 [2024-04-24 15:17:24.293218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.100 15:17:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:16.100 15:17:24 -- common/autotest_common.sh@850 -- # return 0 00:13:16.100 15:17:24 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bs9Dou4SoA 00:13:16.100 [2024-04-24 15:17:25.197741] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:16.100 [2024-04-24 15:17:25.199072] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:16.100 TLSTESTn1 00:13:16.100 15:17:25 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:16.362 Running I/O for 10 seconds... 
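The run now in flight (tls.sh@165-167) is the recipe this suite keeps exercising: configure the target with a TLS-enabled listener and a per-host PSK, then attach a bdevperf initiator with the same key and drive I/O through it. The same commands collected in one place, using the paths and sockets shown in the log; the waitforlisten steps and error handling are omitted.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.Bs9Dou4SoA   # 0600 file holding the NVMeTLSkey-1:02:...: string

# target side (nvmf_tgt already running in nvmf_tgt_ns_spdk, RPC on /var/tmp/spdk.sock)
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

# initiator side: bdevperf exposes its own RPC socket, the controller is attached
# over TLS with the same key, then the workload is kicked off
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests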
00:13:26.339 00:13:26.339 Latency(us) 00:13:26.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.339 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:26.339 Verification LBA range: start 0x0 length 0x2000 00:13:26.339 TLSTESTn1 : 10.03 3865.32 15.10 0.00 0.00 33047.50 7685.59 31218.97 00:13:26.339 =================================================================================================================== 00:13:26.339 Total : 3865.32 15.10 0.00 0.00 33047.50 7685.59 31218.97 00:13:26.339 0 00:13:26.339 15:17:35 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:26.339 15:17:35 -- target/tls.sh@45 -- # killprocess 70471 00:13:26.339 15:17:35 -- common/autotest_common.sh@936 -- # '[' -z 70471 ']' 00:13:26.339 15:17:35 -- common/autotest_common.sh@940 -- # kill -0 70471 00:13:26.339 15:17:35 -- common/autotest_common.sh@941 -- # uname 00:13:26.339 15:17:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:26.339 15:17:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70471 00:13:26.339 15:17:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:26.339 killing process with pid 70471 00:13:26.339 Received shutdown signal, test time was about 10.000000 seconds 00:13:26.339 00:13:26.339 Latency(us) 00:13:26.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.339 =================================================================================================================== 00:13:26.339 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:26.339 15:17:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:26.339 15:17:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70471' 00:13:26.339 15:17:35 -- common/autotest_common.sh@955 -- # kill 70471 00:13:26.339 [2024-04-24 15:17:35.503496] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:26.339 15:17:35 -- common/autotest_common.sh@960 -- # wait 70471 00:13:26.597 15:17:35 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Bs9Dou4SoA 00:13:26.597 15:17:35 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Bs9Dou4SoA 00:13:26.597 15:17:35 -- common/autotest_common.sh@638 -- # local es=0 00:13:26.597 15:17:35 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Bs9Dou4SoA 00:13:26.597 15:17:35 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:13:26.597 15:17:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:26.597 15:17:35 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:13:26.597 15:17:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:26.597 15:17:35 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Bs9Dou4SoA 00:13:26.597 15:17:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:26.597 15:17:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:26.597 15:17:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:26.597 15:17:35 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Bs9Dou4SoA' 00:13:26.597 15:17:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:26.598 15:17:35 -- target/tls.sh@28 -- # bdevperf_pid=70606 00:13:26.598 
15:17:35 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:26.598 15:17:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:26.598 15:17:35 -- target/tls.sh@31 -- # waitforlisten 70606 /var/tmp/bdevperf.sock 00:13:26.598 15:17:35 -- common/autotest_common.sh@817 -- # '[' -z 70606 ']' 00:13:26.598 15:17:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:26.598 15:17:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:26.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:26.598 15:17:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:26.598 15:17:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:26.598 15:17:35 -- common/autotest_common.sh@10 -- # set +x 00:13:26.598 [2024-04-24 15:17:35.829288] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:13:26.598 [2024-04-24 15:17:35.829394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70606 ] 00:13:26.856 [2024-04-24 15:17:35.972124] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.856 [2024-04-24 15:17:36.099203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.788 15:17:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:27.788 15:17:36 -- common/autotest_common.sh@850 -- # return 0 00:13:27.788 15:17:36 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bs9Dou4SoA 00:13:28.046 [2024-04-24 15:17:37.061045] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:28.046 [2024-04-24 15:17:37.061966] bdev_nvme.c:6054:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:28.046 [2024-04-24 15:17:37.062275] bdev_nvme.c:6163:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Bs9Dou4SoA 00:13:28.046 request: 00:13:28.046 { 00:13:28.046 "name": "TLSTEST", 00:13:28.046 "trtype": "tcp", 00:13:28.046 "traddr": "10.0.0.2", 00:13:28.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:28.046 "adrfam": "ipv4", 00:13:28.046 "trsvcid": "4420", 00:13:28.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:28.046 "psk": "/tmp/tmp.Bs9Dou4SoA", 00:13:28.046 "method": "bdev_nvme_attach_controller", 00:13:28.046 "req_id": 1 00:13:28.046 } 00:13:28.046 Got JSON-RPC error response 00:13:28.046 response: 00:13:28.046 { 00:13:28.046 "code": -1, 00:13:28.046 "message": "Operation not permitted" 00:13:28.046 } 00:13:28.046 15:17:37 -- target/tls.sh@36 -- # killprocess 70606 00:13:28.046 15:17:37 -- common/autotest_common.sh@936 -- # '[' -z 70606 ']' 00:13:28.046 15:17:37 -- common/autotest_common.sh@940 -- # kill -0 70606 00:13:28.046 15:17:37 -- common/autotest_common.sh@941 -- # uname 00:13:28.046 15:17:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:28.046 15:17:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70606 00:13:28.046 killing process with pid 70606 00:13:28.046 
Received shutdown signal, test time was about 10.000000 seconds 00:13:28.046 00:13:28.046 Latency(us) 00:13:28.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.046 =================================================================================================================== 00:13:28.046 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:28.046 15:17:37 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:28.046 15:17:37 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:28.046 15:17:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70606' 00:13:28.046 15:17:37 -- common/autotest_common.sh@955 -- # kill 70606 00:13:28.046 15:17:37 -- common/autotest_common.sh@960 -- # wait 70606 00:13:28.303 15:17:37 -- target/tls.sh@37 -- # return 1 00:13:28.303 15:17:37 -- common/autotest_common.sh@641 -- # es=1 00:13:28.303 15:17:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:28.303 15:17:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:28.303 15:17:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:28.303 15:17:37 -- target/tls.sh@174 -- # killprocess 70421 00:13:28.303 15:17:37 -- common/autotest_common.sh@936 -- # '[' -z 70421 ']' 00:13:28.303 15:17:37 -- common/autotest_common.sh@940 -- # kill -0 70421 00:13:28.303 15:17:37 -- common/autotest_common.sh@941 -- # uname 00:13:28.303 15:17:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:28.304 15:17:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70421 00:13:28.304 killing process with pid 70421 00:13:28.304 15:17:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:28.304 15:17:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:28.304 15:17:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70421' 00:13:28.304 15:17:37 -- common/autotest_common.sh@955 -- # kill 70421 00:13:28.304 [2024-04-24 15:17:37.406213] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:28.304 15:17:37 -- common/autotest_common.sh@960 -- # wait 70421 00:13:28.561 15:17:37 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:13:28.561 15:17:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:28.561 15:17:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:28.561 15:17:37 -- common/autotest_common.sh@10 -- # set +x 00:13:28.561 15:17:37 -- nvmf/common.sh@470 -- # nvmfpid=70644 00:13:28.561 15:17:37 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:28.561 15:17:37 -- nvmf/common.sh@471 -- # waitforlisten 70644 00:13:28.561 15:17:37 -- common/autotest_common.sh@817 -- # '[' -z 70644 ']' 00:13:28.561 15:17:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.561 15:17:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:28.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.561 15:17:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:28.561 15:17:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:28.561 15:17:37 -- common/autotest_common.sh@10 -- # set +x 00:13:28.561 [2024-04-24 15:17:37.734518] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:13:28.561 [2024-04-24 15:17:37.734613] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.819 [2024-04-24 15:17:37.870159] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.819 [2024-04-24 15:17:37.979619] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.819 [2024-04-24 15:17:37.979677] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.819 [2024-04-24 15:17:37.979688] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.819 [2024-04-24 15:17:37.979696] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.819 [2024-04-24 15:17:37.979703] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.819 [2024-04-24 15:17:37.979745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.385 15:17:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:29.385 15:17:38 -- common/autotest_common.sh@850 -- # return 0 00:13:29.385 15:17:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:29.385 15:17:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:29.385 15:17:38 -- common/autotest_common.sh@10 -- # set +x 00:13:29.644 15:17:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.644 15:17:38 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Bs9Dou4SoA 00:13:29.644 15:17:38 -- common/autotest_common.sh@638 -- # local es=0 00:13:29.644 15:17:38 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Bs9Dou4SoA 00:13:29.644 15:17:38 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:13:29.644 15:17:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:29.644 15:17:38 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:13:29.644 15:17:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:29.644 15:17:38 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.Bs9Dou4SoA 00:13:29.644 15:17:38 -- target/tls.sh@49 -- # local key=/tmp/tmp.Bs9Dou4SoA 00:13:29.644 15:17:38 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:29.644 [2024-04-24 15:17:38.868277] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.644 15:17:38 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:29.902 15:17:39 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:30.160 [2024-04-24 15:17:39.344351] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:30.160 [2024-04-24 15:17:39.344606] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.160 15:17:39 -- target/tls.sh@55 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:30.418 malloc0 00:13:30.418 15:17:39 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:30.676 15:17:39 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bs9Dou4SoA 00:13:30.934 [2024-04-24 15:17:40.059960] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:30.934 [2024-04-24 15:17:40.060003] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:13:30.934 [2024-04-24 15:17:40.060043] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:13:30.934 request: 00:13:30.934 { 00:13:30.934 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.934 "host": "nqn.2016-06.io.spdk:host1", 00:13:30.934 "psk": "/tmp/tmp.Bs9Dou4SoA", 00:13:30.934 "method": "nvmf_subsystem_add_host", 00:13:30.934 "req_id": 1 00:13:30.934 } 00:13:30.934 Got JSON-RPC error response 00:13:30.934 response: 00:13:30.934 { 00:13:30.934 "code": -32603, 00:13:30.934 "message": "Internal error" 00:13:30.934 } 00:13:30.934 15:17:40 -- common/autotest_common.sh@641 -- # es=1 00:13:30.934 15:17:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:30.934 15:17:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:30.934 15:17:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:30.934 15:17:40 -- target/tls.sh@180 -- # killprocess 70644 00:13:30.934 15:17:40 -- common/autotest_common.sh@936 -- # '[' -z 70644 ']' 00:13:30.934 15:17:40 -- common/autotest_common.sh@940 -- # kill -0 70644 00:13:30.934 15:17:40 -- common/autotest_common.sh@941 -- # uname 00:13:30.934 15:17:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:30.934 15:17:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70644 00:13:30.934 killing process with pid 70644 00:13:30.935 15:17:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:30.935 15:17:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:30.935 15:17:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70644' 00:13:30.935 15:17:40 -- common/autotest_common.sh@955 -- # kill 70644 00:13:30.935 15:17:40 -- common/autotest_common.sh@960 -- # wait 70644 00:13:31.193 15:17:40 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Bs9Dou4SoA 00:13:31.193 15:17:40 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:13:31.193 15:17:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:31.193 15:17:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:31.193 15:17:40 -- common/autotest_common.sh@10 -- # set +x 00:13:31.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
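Both failures above — the initiator-side "Incorrect permissions for PSK file" / "Operation not permitted" error and the target-side "Internal error" returned by nvmf_subsystem_add_host — are provoked deliberately by the chmod 0666 applied to the key at the start of this stage, and the mode is restored before the positive-path stages that follow. A minimal sketch of that permission handling, using the same key path as this run:

    chmod 0666 /tmp/tmp.Bs9Dou4SoA   # world-accessible key: PSK load is refused on both initiator and target
    chmod 0600 /tmp/tmp.Bs9Dou4SoA   # owner-only key: accepted by the later, positive-path stages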
00:13:31.193 15:17:40 -- nvmf/common.sh@470 -- # nvmfpid=70701 00:13:31.193 15:17:40 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:31.193 15:17:40 -- nvmf/common.sh@471 -- # waitforlisten 70701 00:13:31.193 15:17:40 -- common/autotest_common.sh@817 -- # '[' -z 70701 ']' 00:13:31.193 15:17:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.193 15:17:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:31.193 15:17:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.193 15:17:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:31.193 15:17:40 -- common/autotest_common.sh@10 -- # set +x 00:13:31.451 [2024-04-24 15:17:40.438395] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:13:31.452 [2024-04-24 15:17:40.438700] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.452 [2024-04-24 15:17:40.576577] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.452 [2024-04-24 15:17:40.686953] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.452 [2024-04-24 15:17:40.687215] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.452 [2024-04-24 15:17:40.687360] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.452 [2024-04-24 15:17:40.687509] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.452 [2024-04-24 15:17:40.687624] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:31.452 [2024-04-24 15:17:40.687693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.386 15:17:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:32.386 15:17:41 -- common/autotest_common.sh@850 -- # return 0 00:13:32.386 15:17:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:32.386 15:17:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:32.386 15:17:41 -- common/autotest_common.sh@10 -- # set +x 00:13:32.387 15:17:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.387 15:17:41 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Bs9Dou4SoA 00:13:32.387 15:17:41 -- target/tls.sh@49 -- # local key=/tmp/tmp.Bs9Dou4SoA 00:13:32.387 15:17:41 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:32.644 [2024-04-24 15:17:41.668130] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.644 15:17:41 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:32.902 15:17:41 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:33.159 [2024-04-24 15:17:42.216262] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:33.159 [2024-04-24 15:17:42.216554] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.159 15:17:42 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:33.416 malloc0 00:13:33.416 15:17:42 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:33.673 15:17:42 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bs9Dou4SoA 00:13:33.930 [2024-04-24 15:17:42.985153] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:33.930 15:17:43 -- target/tls.sh@188 -- # bdevperf_pid=70761 00:13:33.930 15:17:43 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:33.930 15:17:43 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:33.930 15:17:43 -- target/tls.sh@191 -- # waitforlisten 70761 /var/tmp/bdevperf.sock 00:13:33.930 15:17:43 -- common/autotest_common.sh@817 -- # '[' -z 70761 ']' 00:13:33.930 15:17:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:33.930 15:17:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:33.930 15:17:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:33.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:33.930 15:17:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:33.930 15:17:43 -- common/autotest_common.sh@10 -- # set +x 00:13:33.930 [2024-04-24 15:17:43.048965] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
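The target-side TLS setup traced above condenses to the following rpc.py sequence; this is a sketch of the same calls shown in the xtrace, with paths shortened to their repo-relative form. /tmp/tmp.Bs9Dou4SoA is the temporary PSK file created earlier in this run and 10.0.0.2:4420 is the CI target address.

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k requests a TLS (secure-channel) listener; the target logs TLS support as experimental
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # authorize the host NQN with the (now 0600) key; this path-based form is the deprecated "PSK path" feature
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bs9Dou4SoA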
00:13:33.930 [2024-04-24 15:17:43.049275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70761 ] 00:13:34.187 [2024-04-24 15:17:43.186004] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.187 [2024-04-24 15:17:43.307229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.754 15:17:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:34.754 15:17:43 -- common/autotest_common.sh@850 -- # return 0 00:13:34.754 15:17:43 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bs9Dou4SoA 00:13:35.013 [2024-04-24 15:17:44.203225] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:35.013 [2024-04-24 15:17:44.203349] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:35.272 TLSTESTn1 00:13:35.272 15:17:44 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:35.532 15:17:44 -- target/tls.sh@196 -- # tgtconf='{ 00:13:35.532 "subsystems": [ 00:13:35.532 { 00:13:35.532 "subsystem": "keyring", 00:13:35.532 "config": [] 00:13:35.532 }, 00:13:35.532 { 00:13:35.532 "subsystem": "iobuf", 00:13:35.532 "config": [ 00:13:35.532 { 00:13:35.532 "method": "iobuf_set_options", 00:13:35.532 "params": { 00:13:35.532 "small_pool_count": 8192, 00:13:35.532 "large_pool_count": 1024, 00:13:35.532 "small_bufsize": 8192, 00:13:35.532 "large_bufsize": 135168 00:13:35.532 } 00:13:35.532 } 00:13:35.532 ] 00:13:35.532 }, 00:13:35.532 { 00:13:35.532 "subsystem": "sock", 00:13:35.532 "config": [ 00:13:35.532 { 00:13:35.532 "method": "sock_impl_set_options", 00:13:35.532 "params": { 00:13:35.532 "impl_name": "uring", 00:13:35.532 "recv_buf_size": 2097152, 00:13:35.532 "send_buf_size": 2097152, 00:13:35.532 "enable_recv_pipe": true, 00:13:35.532 "enable_quickack": false, 00:13:35.532 "enable_placement_id": 0, 00:13:35.532 "enable_zerocopy_send_server": false, 00:13:35.532 "enable_zerocopy_send_client": false, 00:13:35.532 "zerocopy_threshold": 0, 00:13:35.532 "tls_version": 0, 00:13:35.532 "enable_ktls": false 00:13:35.532 } 00:13:35.532 }, 00:13:35.532 { 00:13:35.532 "method": "sock_impl_set_options", 00:13:35.532 "params": { 00:13:35.532 "impl_name": "posix", 00:13:35.532 "recv_buf_size": 2097152, 00:13:35.532 "send_buf_size": 2097152, 00:13:35.532 "enable_recv_pipe": true, 00:13:35.532 "enable_quickack": false, 00:13:35.532 "enable_placement_id": 0, 00:13:35.532 "enable_zerocopy_send_server": true, 00:13:35.532 "enable_zerocopy_send_client": false, 00:13:35.532 "zerocopy_threshold": 0, 00:13:35.532 "tls_version": 0, 00:13:35.532 "enable_ktls": false 00:13:35.532 } 00:13:35.532 }, 00:13:35.532 { 00:13:35.532 "method": "sock_impl_set_options", 00:13:35.532 "params": { 00:13:35.532 "impl_name": "ssl", 00:13:35.532 "recv_buf_size": 4096, 00:13:35.532 "send_buf_size": 4096, 00:13:35.532 "enable_recv_pipe": true, 00:13:35.532 "enable_quickack": false, 00:13:35.532 "enable_placement_id": 0, 00:13:35.532 "enable_zerocopy_send_server": true, 00:13:35.532 "enable_zerocopy_send_client": false, 00:13:35.532 
"zerocopy_threshold": 0, 00:13:35.532 "tls_version": 0, 00:13:35.532 "enable_ktls": false 00:13:35.532 } 00:13:35.532 } 00:13:35.532 ] 00:13:35.532 }, 00:13:35.532 { 00:13:35.532 "subsystem": "vmd", 00:13:35.532 "config": [] 00:13:35.532 }, 00:13:35.532 { 00:13:35.532 "subsystem": "accel", 00:13:35.532 "config": [ 00:13:35.532 { 00:13:35.532 "method": "accel_set_options", 00:13:35.532 "params": { 00:13:35.532 "small_cache_size": 128, 00:13:35.532 "large_cache_size": 16, 00:13:35.532 "task_count": 2048, 00:13:35.532 "sequence_count": 2048, 00:13:35.532 "buf_count": 2048 00:13:35.532 } 00:13:35.532 } 00:13:35.532 ] 00:13:35.532 }, 00:13:35.532 { 00:13:35.532 "subsystem": "bdev", 00:13:35.532 "config": [ 00:13:35.532 { 00:13:35.532 "method": "bdev_set_options", 00:13:35.532 "params": { 00:13:35.532 "bdev_io_pool_size": 65535, 00:13:35.532 "bdev_io_cache_size": 256, 00:13:35.532 "bdev_auto_examine": true, 00:13:35.532 "iobuf_small_cache_size": 128, 00:13:35.532 "iobuf_large_cache_size": 16 00:13:35.532 } 00:13:35.532 }, 00:13:35.532 { 00:13:35.532 "method": "bdev_raid_set_options", 00:13:35.532 "params": { 00:13:35.532 "process_window_size_kb": 1024 00:13:35.532 } 00:13:35.532 }, 00:13:35.532 { 00:13:35.532 "method": "bdev_iscsi_set_options", 00:13:35.532 "params": { 00:13:35.532 "timeout_sec": 30 00:13:35.532 } 00:13:35.532 }, 00:13:35.532 { 00:13:35.532 "method": "bdev_nvme_set_options", 00:13:35.532 "params": { 00:13:35.532 "action_on_timeout": "none", 00:13:35.532 "timeout_us": 0, 00:13:35.532 "timeout_admin_us": 0, 00:13:35.532 "keep_alive_timeout_ms": 10000, 00:13:35.532 "arbitration_burst": 0, 00:13:35.532 "low_priority_weight": 0, 00:13:35.532 "medium_priority_weight": 0, 00:13:35.532 "high_priority_weight": 0, 00:13:35.532 "nvme_adminq_poll_period_us": 10000, 00:13:35.532 "nvme_ioq_poll_period_us": 0, 00:13:35.532 "io_queue_requests": 0, 00:13:35.532 "delay_cmd_submit": true, 00:13:35.532 "transport_retry_count": 4, 00:13:35.532 "bdev_retry_count": 3, 00:13:35.532 "transport_ack_timeout": 0, 00:13:35.532 "ctrlr_loss_timeout_sec": 0, 00:13:35.532 "reconnect_delay_sec": 0, 00:13:35.532 "fast_io_fail_timeout_sec": 0, 00:13:35.532 "disable_auto_failback": false, 00:13:35.532 "generate_uuids": false, 00:13:35.532 "transport_tos": 0, 00:13:35.532 "nvme_error_stat": false, 00:13:35.532 "rdma_srq_size": 0, 00:13:35.532 "io_path_stat": false, 00:13:35.532 "allow_accel_sequence": false, 00:13:35.532 "rdma_max_cq_size": 0, 00:13:35.532 "rdma_cm_event_timeout_ms": 0, 00:13:35.532 "dhchap_digests": [ 00:13:35.532 "sha256", 00:13:35.532 "sha384", 00:13:35.532 "sha512" 00:13:35.532 ], 00:13:35.532 "dhchap_dhgroups": [ 00:13:35.532 "null", 00:13:35.532 "ffdhe2048", 00:13:35.532 "ffdhe3072", 00:13:35.532 "ffdhe4096", 00:13:35.532 "ffdhe6144", 00:13:35.532 "ffdhe8192" 00:13:35.532 ] 00:13:35.532 } 00:13:35.532 }, 00:13:35.532 { 00:13:35.532 "method": "bdev_nvme_set_hotplug", 00:13:35.532 "params": { 00:13:35.532 "period_us": 100000, 00:13:35.532 "enable": false 00:13:35.532 } 00:13:35.532 }, 00:13:35.532 { 00:13:35.532 "method": "bdev_malloc_create", 00:13:35.532 "params": { 00:13:35.532 "name": "malloc0", 00:13:35.532 "num_blocks": 8192, 00:13:35.533 "block_size": 4096, 00:13:35.533 "physical_block_size": 4096, 00:13:35.533 "uuid": "8ced3558-52a3-4c8e-b869-6cfbc1d6e587", 00:13:35.533 "optimal_io_boundary": 0 00:13:35.533 } 00:13:35.533 }, 00:13:35.533 { 00:13:35.533 "method": "bdev_wait_for_examine" 00:13:35.533 } 00:13:35.533 ] 00:13:35.533 }, 00:13:35.533 { 00:13:35.533 "subsystem": "nbd", 
00:13:35.533 "config": [] 00:13:35.533 }, 00:13:35.533 { 00:13:35.533 "subsystem": "scheduler", 00:13:35.533 "config": [ 00:13:35.533 { 00:13:35.533 "method": "framework_set_scheduler", 00:13:35.533 "params": { 00:13:35.533 "name": "static" 00:13:35.533 } 00:13:35.533 } 00:13:35.533 ] 00:13:35.533 }, 00:13:35.533 { 00:13:35.533 "subsystem": "nvmf", 00:13:35.533 "config": [ 00:13:35.533 { 00:13:35.533 "method": "nvmf_set_config", 00:13:35.533 "params": { 00:13:35.533 "discovery_filter": "match_any", 00:13:35.533 "admin_cmd_passthru": { 00:13:35.533 "identify_ctrlr": false 00:13:35.533 } 00:13:35.533 } 00:13:35.533 }, 00:13:35.533 { 00:13:35.533 "method": "nvmf_set_max_subsystems", 00:13:35.533 "params": { 00:13:35.533 "max_subsystems": 1024 00:13:35.533 } 00:13:35.533 }, 00:13:35.533 { 00:13:35.533 "method": "nvmf_set_crdt", 00:13:35.533 "params": { 00:13:35.533 "crdt1": 0, 00:13:35.533 "crdt2": 0, 00:13:35.533 "crdt3": 0 00:13:35.533 } 00:13:35.533 }, 00:13:35.533 { 00:13:35.533 "method": "nvmf_create_transport", 00:13:35.533 "params": { 00:13:35.533 "trtype": "TCP", 00:13:35.533 "max_queue_depth": 128, 00:13:35.533 "max_io_qpairs_per_ctrlr": 127, 00:13:35.533 "in_capsule_data_size": 4096, 00:13:35.533 "max_io_size": 131072, 00:13:35.533 "io_unit_size": 131072, 00:13:35.533 "max_aq_depth": 128, 00:13:35.533 "num_shared_buffers": 511, 00:13:35.533 "buf_cache_size": 4294967295, 00:13:35.533 "dif_insert_or_strip": false, 00:13:35.533 "zcopy": false, 00:13:35.533 "c2h_success": false, 00:13:35.533 "sock_priority": 0, 00:13:35.533 "abort_timeout_sec": 1, 00:13:35.533 "ack_timeout": 0 00:13:35.533 } 00:13:35.533 }, 00:13:35.533 { 00:13:35.533 "method": "nvmf_create_subsystem", 00:13:35.533 "params": { 00:13:35.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:35.533 "allow_any_host": false, 00:13:35.533 "serial_number": "SPDK00000000000001", 00:13:35.533 "model_number": "SPDK bdev Controller", 00:13:35.533 "max_namespaces": 10, 00:13:35.533 "min_cntlid": 1, 00:13:35.533 "max_cntlid": 65519, 00:13:35.533 "ana_reporting": false 00:13:35.533 } 00:13:35.533 }, 00:13:35.533 { 00:13:35.533 "method": "nvmf_subsystem_add_host", 00:13:35.533 "params": { 00:13:35.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:35.533 "host": "nqn.2016-06.io.spdk:host1", 00:13:35.533 "psk": "/tmp/tmp.Bs9Dou4SoA" 00:13:35.533 } 00:13:35.533 }, 00:13:35.533 { 00:13:35.533 "method": "nvmf_subsystem_add_ns", 00:13:35.533 "params": { 00:13:35.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:35.533 "namespace": { 00:13:35.533 "nsid": 1, 00:13:35.533 "bdev_name": "malloc0", 00:13:35.533 "nguid": "8CED355852A34C8EB8696CFBC1D6E587", 00:13:35.533 "uuid": "8ced3558-52a3-4c8e-b869-6cfbc1d6e587", 00:13:35.533 "no_auto_visible": false 00:13:35.533 } 00:13:35.533 } 00:13:35.533 }, 00:13:35.533 { 00:13:35.533 "method": "nvmf_subsystem_add_listener", 00:13:35.533 "params": { 00:13:35.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:35.533 "listen_address": { 00:13:35.533 "trtype": "TCP", 00:13:35.533 "adrfam": "IPv4", 00:13:35.533 "traddr": "10.0.0.2", 00:13:35.533 "trsvcid": "4420" 00:13:35.533 }, 00:13:35.533 "secure_channel": true 00:13:35.533 } 00:13:35.533 } 00:13:35.533 ] 00:13:35.533 } 00:13:35.533 ] 00:13:35.533 }' 00:13:35.533 15:17:44 -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:35.793 15:17:44 -- target/tls.sh@197 -- # bdevperfconf='{ 00:13:35.793 "subsystems": [ 00:13:35.793 { 00:13:35.793 "subsystem": "keyring", 00:13:35.793 "config": [] 00:13:35.793 }, 00:13:35.793 
{ 00:13:35.793 "subsystem": "iobuf", 00:13:35.793 "config": [ 00:13:35.793 { 00:13:35.793 "method": "iobuf_set_options", 00:13:35.793 "params": { 00:13:35.793 "small_pool_count": 8192, 00:13:35.793 "large_pool_count": 1024, 00:13:35.793 "small_bufsize": 8192, 00:13:35.793 "large_bufsize": 135168 00:13:35.793 } 00:13:35.793 } 00:13:35.793 ] 00:13:35.793 }, 00:13:35.793 { 00:13:35.793 "subsystem": "sock", 00:13:35.793 "config": [ 00:13:35.793 { 00:13:35.793 "method": "sock_impl_set_options", 00:13:35.793 "params": { 00:13:35.793 "impl_name": "uring", 00:13:35.793 "recv_buf_size": 2097152, 00:13:35.793 "send_buf_size": 2097152, 00:13:35.793 "enable_recv_pipe": true, 00:13:35.793 "enable_quickack": false, 00:13:35.793 "enable_placement_id": 0, 00:13:35.793 "enable_zerocopy_send_server": false, 00:13:35.793 "enable_zerocopy_send_client": false, 00:13:35.793 "zerocopy_threshold": 0, 00:13:35.793 "tls_version": 0, 00:13:35.793 "enable_ktls": false 00:13:35.793 } 00:13:35.793 }, 00:13:35.793 { 00:13:35.793 "method": "sock_impl_set_options", 00:13:35.793 "params": { 00:13:35.793 "impl_name": "posix", 00:13:35.793 "recv_buf_size": 2097152, 00:13:35.793 "send_buf_size": 2097152, 00:13:35.793 "enable_recv_pipe": true, 00:13:35.793 "enable_quickack": false, 00:13:35.793 "enable_placement_id": 0, 00:13:35.793 "enable_zerocopy_send_server": true, 00:13:35.793 "enable_zerocopy_send_client": false, 00:13:35.793 "zerocopy_threshold": 0, 00:13:35.793 "tls_version": 0, 00:13:35.793 "enable_ktls": false 00:13:35.793 } 00:13:35.793 }, 00:13:35.793 { 00:13:35.793 "method": "sock_impl_set_options", 00:13:35.793 "params": { 00:13:35.793 "impl_name": "ssl", 00:13:35.793 "recv_buf_size": 4096, 00:13:35.793 "send_buf_size": 4096, 00:13:35.793 "enable_recv_pipe": true, 00:13:35.793 "enable_quickack": false, 00:13:35.793 "enable_placement_id": 0, 00:13:35.793 "enable_zerocopy_send_server": true, 00:13:35.793 "enable_zerocopy_send_client": false, 00:13:35.793 "zerocopy_threshold": 0, 00:13:35.793 "tls_version": 0, 00:13:35.793 "enable_ktls": false 00:13:35.793 } 00:13:35.793 } 00:13:35.793 ] 00:13:35.793 }, 00:13:35.793 { 00:13:35.793 "subsystem": "vmd", 00:13:35.793 "config": [] 00:13:35.793 }, 00:13:35.793 { 00:13:35.793 "subsystem": "accel", 00:13:35.793 "config": [ 00:13:35.793 { 00:13:35.793 "method": "accel_set_options", 00:13:35.793 "params": { 00:13:35.793 "small_cache_size": 128, 00:13:35.793 "large_cache_size": 16, 00:13:35.793 "task_count": 2048, 00:13:35.793 "sequence_count": 2048, 00:13:35.793 "buf_count": 2048 00:13:35.793 } 00:13:35.793 } 00:13:35.793 ] 00:13:35.793 }, 00:13:35.793 { 00:13:35.793 "subsystem": "bdev", 00:13:35.794 "config": [ 00:13:35.794 { 00:13:35.794 "method": "bdev_set_options", 00:13:35.794 "params": { 00:13:35.794 "bdev_io_pool_size": 65535, 00:13:35.794 "bdev_io_cache_size": 256, 00:13:35.794 "bdev_auto_examine": true, 00:13:35.794 "iobuf_small_cache_size": 128, 00:13:35.794 "iobuf_large_cache_size": 16 00:13:35.794 } 00:13:35.794 }, 00:13:35.794 { 00:13:35.794 "method": "bdev_raid_set_options", 00:13:35.794 "params": { 00:13:35.794 "process_window_size_kb": 1024 00:13:35.794 } 00:13:35.794 }, 00:13:35.794 { 00:13:35.794 "method": "bdev_iscsi_set_options", 00:13:35.794 "params": { 00:13:35.794 "timeout_sec": 30 00:13:35.794 } 00:13:35.794 }, 00:13:35.794 { 00:13:35.794 "method": "bdev_nvme_set_options", 00:13:35.794 "params": { 00:13:35.794 "action_on_timeout": "none", 00:13:35.794 "timeout_us": 0, 00:13:35.794 "timeout_admin_us": 0, 00:13:35.794 "keep_alive_timeout_ms": 10000, 
00:13:35.794 "arbitration_burst": 0, 00:13:35.794 "low_priority_weight": 0, 00:13:35.794 "medium_priority_weight": 0, 00:13:35.794 "high_priority_weight": 0, 00:13:35.794 "nvme_adminq_poll_period_us": 10000, 00:13:35.794 "nvme_ioq_poll_period_us": 0, 00:13:35.794 "io_queue_requests": 512, 00:13:35.794 "delay_cmd_submit": true, 00:13:35.794 "transport_retry_count": 4, 00:13:35.794 "bdev_retry_count": 3, 00:13:35.794 "transport_ack_timeout": 0, 00:13:35.794 "ctrlr_loss_timeout_sec": 0, 00:13:35.794 "reconnect_delay_sec": 0, 00:13:35.794 "fast_io_fail_timeout_sec": 0, 00:13:35.794 "disable_auto_failback": false, 00:13:35.794 "generate_uuids": false, 00:13:35.794 "transport_tos": 0, 00:13:35.794 "nvme_error_stat": false, 00:13:35.794 "rdma_srq_size": 0, 00:13:35.794 "io_path_stat": false, 00:13:35.794 "allow_accel_sequence": false, 00:13:35.794 "rdma_max_cq_size": 0, 00:13:35.794 "rdma_cm_event_timeout_ms": 0, 00:13:35.794 "dhchap_digests": [ 00:13:35.794 "sha256", 00:13:35.794 "sha384", 00:13:35.794 "sha512" 00:13:35.794 ], 00:13:35.794 "dhchap_dhgroups": [ 00:13:35.794 "null", 00:13:35.794 "ffdhe2048", 00:13:35.794 "ffdhe3072", 00:13:35.794 "ffdhe4096", 00:13:35.794 "ffdhe6144", 00:13:35.794 "ffdhe8192" 00:13:35.794 ] 00:13:35.794 } 00:13:35.794 }, 00:13:35.794 { 00:13:35.794 "method": "bdev_nvme_attach_controller", 00:13:35.794 "params": { 00:13:35.794 "name": "TLSTEST", 00:13:35.794 "trtype": "TCP", 00:13:35.794 "adrfam": "IPv4", 00:13:35.794 "traddr": "10.0.0.2", 00:13:35.794 "trsvcid": "4420", 00:13:35.794 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:35.794 "prchk_reftag": false, 00:13:35.794 "prchk_guard": false, 00:13:35.794 "ctrlr_loss_timeout_sec": 0, 00:13:35.794 "reconnect_delay_sec": 0, 00:13:35.794 "fast_io_fail_timeout_sec": 0, 00:13:35.794 "psk": "/tmp/tmp.Bs9Dou4SoA", 00:13:35.794 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:35.794 "hdgst": false, 00:13:35.794 "ddgst": false 00:13:35.794 } 00:13:35.794 }, 00:13:35.794 { 00:13:35.794 "method": "bdev_nvme_set_hotplug", 00:13:35.794 "params": { 00:13:35.794 "period_us": 100000, 00:13:35.794 "enable": false 00:13:35.794 } 00:13:35.794 }, 00:13:35.794 { 00:13:35.794 "method": "bdev_wait_for_examine" 00:13:35.794 } 00:13:35.794 ] 00:13:35.794 }, 00:13:35.794 { 00:13:35.794 "subsystem": "nbd", 00:13:35.794 "config": [] 00:13:35.794 } 00:13:35.794 ] 00:13:35.794 }' 00:13:35.794 15:17:44 -- target/tls.sh@199 -- # killprocess 70761 00:13:35.794 15:17:44 -- common/autotest_common.sh@936 -- # '[' -z 70761 ']' 00:13:35.794 15:17:44 -- common/autotest_common.sh@940 -- # kill -0 70761 00:13:35.794 15:17:44 -- common/autotest_common.sh@941 -- # uname 00:13:35.794 15:17:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:35.794 15:17:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70761 00:13:35.794 killing process with pid 70761 00:13:35.794 Received shutdown signal, test time was about 10.000000 seconds 00:13:35.794 00:13:35.794 Latency(us) 00:13:35.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.794 =================================================================================================================== 00:13:35.794 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:35.794 15:17:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:35.794 15:17:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:35.794 15:17:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70761' 00:13:35.794 15:17:44 
-- common/autotest_common.sh@955 -- # kill 70761 00:13:35.794 [2024-04-24 15:17:44.988105] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:35.794 15:17:44 -- common/autotest_common.sh@960 -- # wait 70761 00:13:36.053 15:17:45 -- target/tls.sh@200 -- # killprocess 70701 00:13:36.053 15:17:45 -- common/autotest_common.sh@936 -- # '[' -z 70701 ']' 00:13:36.053 15:17:45 -- common/autotest_common.sh@940 -- # kill -0 70701 00:13:36.053 15:17:45 -- common/autotest_common.sh@941 -- # uname 00:13:36.053 15:17:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:36.053 15:17:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70701 00:13:36.053 killing process with pid 70701 00:13:36.053 15:17:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:36.053 15:17:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:36.053 15:17:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70701' 00:13:36.053 15:17:45 -- common/autotest_common.sh@955 -- # kill 70701 00:13:36.053 [2024-04-24 15:17:45.271192] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:36.053 15:17:45 -- common/autotest_common.sh@960 -- # wait 70701 00:13:36.312 15:17:45 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:36.312 15:17:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:36.312 15:17:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:36.312 15:17:45 -- common/autotest_common.sh@10 -- # set +x 00:13:36.312 15:17:45 -- target/tls.sh@203 -- # echo '{ 00:13:36.312 "subsystems": [ 00:13:36.312 { 00:13:36.312 "subsystem": "keyring", 00:13:36.312 "config": [] 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "subsystem": "iobuf", 00:13:36.312 "config": [ 00:13:36.312 { 00:13:36.312 "method": "iobuf_set_options", 00:13:36.312 "params": { 00:13:36.312 "small_pool_count": 8192, 00:13:36.312 "large_pool_count": 1024, 00:13:36.312 "small_bufsize": 8192, 00:13:36.312 "large_bufsize": 135168 00:13:36.312 } 00:13:36.312 } 00:13:36.312 ] 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "subsystem": "sock", 00:13:36.312 "config": [ 00:13:36.312 { 00:13:36.312 "method": "sock_impl_set_options", 00:13:36.312 "params": { 00:13:36.312 "impl_name": "uring", 00:13:36.312 "recv_buf_size": 2097152, 00:13:36.312 "send_buf_size": 2097152, 00:13:36.312 "enable_recv_pipe": true, 00:13:36.312 "enable_quickack": false, 00:13:36.312 "enable_placement_id": 0, 00:13:36.312 "enable_zerocopy_send_server": false, 00:13:36.312 "enable_zerocopy_send_client": false, 00:13:36.312 "zerocopy_threshold": 0, 00:13:36.312 "tls_version": 0, 00:13:36.312 "enable_ktls": false 00:13:36.312 } 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "method": "sock_impl_set_options", 00:13:36.312 "params": { 00:13:36.312 "impl_name": "posix", 00:13:36.312 "recv_buf_size": 2097152, 00:13:36.312 "send_buf_size": 2097152, 00:13:36.312 "enable_recv_pipe": true, 00:13:36.312 "enable_quickack": false, 00:13:36.312 "enable_placement_id": 0, 00:13:36.312 "enable_zerocopy_send_server": true, 00:13:36.312 "enable_zerocopy_send_client": false, 00:13:36.312 "zerocopy_threshold": 0, 00:13:36.312 "tls_version": 0, 00:13:36.312 "enable_ktls": false 00:13:36.312 } 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "method": "sock_impl_set_options", 00:13:36.312 "params": { 00:13:36.312 "impl_name": "ssl", 00:13:36.312 
"recv_buf_size": 4096, 00:13:36.312 "send_buf_size": 4096, 00:13:36.312 "enable_recv_pipe": true, 00:13:36.312 "enable_quickack": false, 00:13:36.312 "enable_placement_id": 0, 00:13:36.312 "enable_zerocopy_send_server": true, 00:13:36.312 "enable_zerocopy_send_client": false, 00:13:36.312 "zerocopy_threshold": 0, 00:13:36.312 "tls_version": 0, 00:13:36.312 "enable_ktls": false 00:13:36.312 } 00:13:36.312 } 00:13:36.312 ] 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "subsystem": "vmd", 00:13:36.312 "config": [] 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "subsystem": "accel", 00:13:36.312 "config": [ 00:13:36.312 { 00:13:36.312 "method": "accel_set_options", 00:13:36.312 "params": { 00:13:36.312 "small_cache_size": 128, 00:13:36.312 "large_cache_size": 16, 00:13:36.312 "task_count": 2048, 00:13:36.312 "sequence_count": 2048, 00:13:36.312 "buf_count": 2048 00:13:36.312 } 00:13:36.312 } 00:13:36.312 ] 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "subsystem": "bdev", 00:13:36.312 "config": [ 00:13:36.312 { 00:13:36.312 "method": "bdev_set_options", 00:13:36.312 "params": { 00:13:36.312 "bdev_io_pool_size": 65535, 00:13:36.312 "bdev_io_cache_size": 256, 00:13:36.312 "bdev_auto_examine": true, 00:13:36.312 "iobuf_small_cache_size": 128, 00:13:36.312 "iobuf_large_cache_size": 16 00:13:36.312 } 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "method": "bdev_raid_set_options", 00:13:36.312 "params": { 00:13:36.312 "process_window_size_kb": 1024 00:13:36.312 } 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "method": "bdev_iscsi_set_options", 00:13:36.312 "params": { 00:13:36.312 "timeout_sec": 30 00:13:36.312 } 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "method": "bdev_nvme_set_options", 00:13:36.312 "params": { 00:13:36.312 "action_on_timeout": "none", 00:13:36.312 "timeout_us": 0, 00:13:36.312 "timeout_admin_us": 0, 00:13:36.312 "keep_alive_timeout_ms": 10000, 00:13:36.312 "arbitration_burst": 0, 00:13:36.312 "low_priority_weight": 0, 00:13:36.312 "medium_priority_weight": 0, 00:13:36.312 "high_priority_weight": 0, 00:13:36.312 "nvme_adminq_poll_period_us": 10000, 00:13:36.312 "nvme_ioq_poll_period_us": 0, 00:13:36.312 "io_queue_requests": 0, 00:13:36.312 "delay_cmd_submit": true, 00:13:36.312 "transport_retry_count": 4, 00:13:36.312 "bdev_retry_count": 3, 00:13:36.312 "transport_ack_timeout": 0, 00:13:36.312 "ctrlr_loss_timeout_sec": 0, 00:13:36.312 "reconnect_delay_sec": 0, 00:13:36.312 "fast_io_fail_timeout_sec": 0, 00:13:36.312 "disable_auto_failback": false, 00:13:36.312 "generate_uuids": false, 00:13:36.312 "transport_tos": 0, 00:13:36.312 "nvme_error_stat": false, 00:13:36.312 "rdma_srq_size": 0, 00:13:36.312 "io_path_stat": false, 00:13:36.312 "allow_accel_sequence": false, 00:13:36.312 "rdma_max_cq_size": 0, 00:13:36.312 "rdma_cm_event_timeout_ms": 0, 00:13:36.312 "dhchap_digests": [ 00:13:36.312 "sha256", 00:13:36.312 "sha384", 00:13:36.312 "sha512" 00:13:36.312 ], 00:13:36.312 "dhchap_dhgroups": [ 00:13:36.312 "null", 00:13:36.312 "ffdhe2048", 00:13:36.312 "ffdhe3072", 00:13:36.312 "ffdhe4096", 00:13:36.312 "ffdhe6144", 00:13:36.312 "ffdhe8192" 00:13:36.312 ] 00:13:36.312 } 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "method": "bdev_nvme_set_hotplug", 00:13:36.312 "params": { 00:13:36.312 "period_us": 100000, 00:13:36.312 "enable": false 00:13:36.312 } 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "method": "bdev_malloc_create", 00:13:36.312 "params": { 00:13:36.312 "name": "malloc0", 00:13:36.312 "num_blocks": 8192, 00:13:36.312 "block_size": 4096, 00:13:36.312 "physical_block_size": 4096, 
00:13:36.312 "uuid": "8ced3558-52a3-4c8e-b869-6cfbc1d6e587", 00:13:36.312 "optimal_io_boundary": 0 00:13:36.312 } 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "method": "bdev_wait_for_examine" 00:13:36.312 } 00:13:36.312 ] 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "subsystem": "nbd", 00:13:36.312 "config": [] 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "subsystem": "scheduler", 00:13:36.312 "config": [ 00:13:36.312 { 00:13:36.312 "method": "framework_set_scheduler", 00:13:36.312 "params": { 00:13:36.312 "name": "static" 00:13:36.312 } 00:13:36.312 } 00:13:36.312 ] 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "subsystem": "nvmf", 00:13:36.312 "config": [ 00:13:36.312 { 00:13:36.312 "method": "nvmf_set_config", 00:13:36.312 "params": { 00:13:36.312 "discovery_filter": "match_any", 00:13:36.312 "admin_cmd_passthru": { 00:13:36.312 "identify_ctrlr": false 00:13:36.312 } 00:13:36.312 } 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "method": "nvmf_set_max_subsystems", 00:13:36.312 "params": { 00:13:36.312 "max_subsystems": 1024 00:13:36.312 } 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "method": "nvmf_set_crdt", 00:13:36.312 "params": { 00:13:36.312 "crdt1": 0, 00:13:36.312 "crdt2": 0, 00:13:36.312 "crdt3": 0 00:13:36.312 } 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "method": "nvmf_create_transport", 00:13:36.312 "params": { 00:13:36.312 "trtype": "TCP", 00:13:36.312 "max_queue_depth": 128, 00:13:36.312 "max_io_qpairs_per_ctrlr": 127, 00:13:36.312 "in_capsule_data_size": 4096, 00:13:36.312 "max_io_size": 131072, 00:13:36.312 "io_unit_size": 131072, 00:13:36.312 "max_aq_depth": 128, 00:13:36.312 "num_shared_buffers": 511, 00:13:36.312 "buf_cache_size": 4294967295, 00:13:36.312 "dif_insert_or_strip": false, 00:13:36.312 "zcopy": false, 00:13:36.312 "c2h_success": false, 00:13:36.312 "sock_priority": 0, 00:13:36.312 "abort_timeout_sec": 1, 00:13:36.312 "ack_timeout": 0 00:13:36.312 } 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "method": "nvmf_create_subsystem", 00:13:36.312 "params": { 00:13:36.312 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:36.312 "allow_any_host": false, 00:13:36.312 "serial_number": "SPDK00000000000001", 00:13:36.312 "model_number": "SPDK bdev Controller", 00:13:36.312 "max_namespaces": 10, 00:13:36.312 "min_cntlid": 1, 00:13:36.312 "max_cntlid": 65519, 00:13:36.312 "ana_reporting": false 00:13:36.312 } 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "method": "nvmf_subsystem_add_host", 00:13:36.312 "params": { 00:13:36.312 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:36.312 "host": "nqn.2016-06.io.spdk:host1", 00:13:36.312 "psk": "/tmp/tmp.Bs9Dou4SoA" 00:13:36.312 } 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "method": "nvmf_subsystem_add_ns", 00:13:36.312 "params": { 00:13:36.312 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:36.312 "namespace": { 00:13:36.312 "nsid": 1, 00:13:36.312 "bdev_name": "malloc0", 00:13:36.312 "nguid": "8CED355852A34C8EB8696CFBC1D6E587", 00:13:36.312 "uuid": "8ced3558-52a3-4c8e-b869-6cfbc1d6e587", 00:13:36.312 "no_auto_visible": false 00:13:36.312 } 00:13:36.312 } 00:13:36.312 }, 00:13:36.312 { 00:13:36.312 "method": "nvmf_subsystem_add_listener", 00:13:36.312 "params": { 00:13:36.312 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:36.313 "listen_address": { 00:13:36.313 "trtype": "TCP", 00:13:36.313 "adrfam": "IPv4", 00:13:36.313 "traddr": "10.0.0.2", 00:13:36.313 "trsvcid": "4420" 00:13:36.313 }, 00:13:36.313 "secure_channel": true 00:13:36.313 } 00:13:36.313 } 00:13:36.313 ] 00:13:36.313 } 00:13:36.313 ] 00:13:36.313 }' 00:13:36.313 15:17:45 -- nvmf/common.sh@470 
-- # nvmfpid=70804 00:13:36.313 15:17:45 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:36.313 15:17:45 -- nvmf/common.sh@471 -- # waitforlisten 70804 00:13:36.313 15:17:45 -- common/autotest_common.sh@817 -- # '[' -z 70804 ']' 00:13:36.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.313 15:17:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.313 15:17:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:36.313 15:17:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.313 15:17:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:36.313 15:17:45 -- common/autotest_common.sh@10 -- # set +x 00:13:36.571 [2024-04-24 15:17:45.600944] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:13:36.571 [2024-04-24 15:17:45.601016] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.571 [2024-04-24 15:17:45.734929] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.830 [2024-04-24 15:17:45.846129] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.830 [2024-04-24 15:17:45.846179] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.830 [2024-04-24 15:17:45.846191] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.830 [2024-04-24 15:17:45.846199] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.830 [2024-04-24 15:17:45.846205] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:36.830 [2024-04-24 15:17:45.846284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.089 [2024-04-24 15:17:46.081256] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.089 [2024-04-24 15:17:46.097230] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:37.089 [2024-04-24 15:17:46.113200] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:37.089 [2024-04-24 15:17:46.113385] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.657 15:17:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:37.657 15:17:46 -- common/autotest_common.sh@850 -- # return 0 00:13:37.657 15:17:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:37.657 15:17:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:37.657 15:17:46 -- common/autotest_common.sh@10 -- # set +x 00:13:37.657 15:17:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.657 15:17:46 -- target/tls.sh@207 -- # bdevperf_pid=70842 00:13:37.657 15:17:46 -- target/tls.sh@208 -- # waitforlisten 70842 /var/tmp/bdevperf.sock 00:13:37.657 15:17:46 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:37.657 15:17:46 -- target/tls.sh@204 -- # echo '{ 00:13:37.657 "subsystems": [ 00:13:37.657 { 00:13:37.657 "subsystem": "keyring", 00:13:37.657 "config": [] 00:13:37.657 }, 00:13:37.657 { 00:13:37.657 "subsystem": "iobuf", 00:13:37.657 "config": [ 00:13:37.657 { 00:13:37.657 "method": "iobuf_set_options", 00:13:37.657 "params": { 00:13:37.657 "small_pool_count": 8192, 00:13:37.657 "large_pool_count": 1024, 00:13:37.657 "small_bufsize": 8192, 00:13:37.657 "large_bufsize": 135168 00:13:37.657 } 00:13:37.657 } 00:13:37.657 ] 00:13:37.657 }, 00:13:37.657 { 00:13:37.657 "subsystem": "sock", 00:13:37.657 "config": [ 00:13:37.657 { 00:13:37.657 "method": "sock_impl_set_options", 00:13:37.657 "params": { 00:13:37.657 "impl_name": "uring", 00:13:37.657 "recv_buf_size": 2097152, 00:13:37.657 "send_buf_size": 2097152, 00:13:37.657 "enable_recv_pipe": true, 00:13:37.657 "enable_quickack": false, 00:13:37.657 "enable_placement_id": 0, 00:13:37.657 "enable_zerocopy_send_server": false, 00:13:37.657 "enable_zerocopy_send_client": false, 00:13:37.658 "zerocopy_threshold": 0, 00:13:37.658 "tls_version": 0, 00:13:37.658 "enable_ktls": false 00:13:37.658 } 00:13:37.658 }, 00:13:37.658 { 00:13:37.658 "method": "sock_impl_set_options", 00:13:37.658 "params": { 00:13:37.658 "impl_name": "posix", 00:13:37.658 "recv_buf_size": 2097152, 00:13:37.658 "send_buf_size": 2097152, 00:13:37.658 "enable_recv_pipe": true, 00:13:37.658 "enable_quickack": false, 00:13:37.658 "enable_placement_id": 0, 00:13:37.658 "enable_zerocopy_send_server": true, 00:13:37.658 "enable_zerocopy_send_client": false, 00:13:37.658 "zerocopy_threshold": 0, 00:13:37.658 "tls_version": 0, 00:13:37.658 "enable_ktls": false 00:13:37.658 } 00:13:37.658 }, 00:13:37.658 { 00:13:37.658 "method": "sock_impl_set_options", 00:13:37.658 "params": { 00:13:37.658 "impl_name": "ssl", 00:13:37.658 "recv_buf_size": 4096, 00:13:37.658 "send_buf_size": 4096, 00:13:37.658 "enable_recv_pipe": true, 00:13:37.658 "enable_quickack": false, 00:13:37.658 "enable_placement_id": 0, 00:13:37.658 "enable_zerocopy_send_server": true, 
00:13:37.658 "enable_zerocopy_send_client": false, 00:13:37.658 "zerocopy_threshold": 0, 00:13:37.658 "tls_version": 0, 00:13:37.658 "enable_ktls": false 00:13:37.658 } 00:13:37.658 } 00:13:37.658 ] 00:13:37.658 }, 00:13:37.658 { 00:13:37.658 "subsystem": "vmd", 00:13:37.658 "config": [] 00:13:37.658 }, 00:13:37.658 { 00:13:37.658 "subsystem": "accel", 00:13:37.658 "config": [ 00:13:37.658 { 00:13:37.658 "method": "accel_set_options", 00:13:37.658 "params": { 00:13:37.658 "small_cache_size": 128, 00:13:37.658 "large_cache_size": 16, 00:13:37.658 "task_count": 2048, 00:13:37.658 "sequence_count": 2048, 00:13:37.658 "buf_count": 2048 00:13:37.658 } 00:13:37.658 } 00:13:37.658 ] 00:13:37.658 }, 00:13:37.658 { 00:13:37.658 "subsystem": "bdev", 00:13:37.658 "config": [ 00:13:37.658 { 00:13:37.658 "method": "bdev_set_options", 00:13:37.658 "params": { 00:13:37.658 "bdev_io_pool_size": 65535, 00:13:37.658 "bdev_io_cache_size": 256, 00:13:37.658 "bdev_auto_examine": true, 00:13:37.658 "iobuf_small_cache_size": 128, 00:13:37.658 "iobuf_large_cache_size": 16 00:13:37.658 } 00:13:37.658 }, 00:13:37.658 { 00:13:37.658 "method": "bdev_raid_set_options", 00:13:37.658 "params": { 00:13:37.658 "process_window_size_kb": 1024 00:13:37.658 } 00:13:37.658 }, 00:13:37.658 { 00:13:37.658 "method": "bdev_iscsi_set_options", 00:13:37.658 "params": { 00:13:37.658 "timeout_sec": 30 00:13:37.658 } 00:13:37.658 }, 00:13:37.658 { 00:13:37.658 "method": "bdev_nvme_set_options", 00:13:37.658 "params": { 00:13:37.658 "action_on_timeout": "none", 00:13:37.658 "timeout_us": 0, 00:13:37.658 "timeout_admin_us": 0, 00:13:37.658 "keep_alive_timeout_ms": 10000, 00:13:37.658 "arbitration_burst": 0, 00:13:37.658 "low_priority_weight": 0, 00:13:37.658 "medium_priority_weight": 0, 00:13:37.658 "high_priority_weight": 0, 00:13:37.658 "nvme_adminq_poll_period_us": 10000, 00:13:37.658 "nvme_ioq_poll_period_us": 0, 00:13:37.658 "io_queue_requests": 512, 00:13:37.658 "delay_cmd_submit": true, 00:13:37.658 "transport_retry_count": 4, 00:13:37.658 "bdev_retry_count": 3, 00:13:37.658 "transport_ack_timeout": 0, 00:13:37.658 "ctrlr_loss_timeout_sec": 0, 00:13:37.658 "reconnect_delay_sec": 0, 00:13:37.658 "fast_io_fail_timeout_sec": 0, 00:13:37.658 "disable_auto_failback": false, 00:13:37.658 "generate_uuids": false, 00:13:37.658 "transport_tos": 0, 00:13:37.658 "nvme_error_stat": false, 00:13:37.658 "rdma_srq_size": 0, 00:13:37.658 "io_path_stat": false, 00:13:37.658 "allow_accel_sequence": false, 00:13:37.658 "rdma_max_cq_size": 0, 00:13:37.658 "rdma_cm_event_timeout_ms": 0, 00:13:37.658 "dhchap_digests": [ 00:13:37.658 "sha256", 00:13:37.658 "sha384", 00:13:37.658 "sha512" 00:13:37.658 ], 00:13:37.658 "dhchap_dhgroups": [ 00:13:37.658 "null", 00:13:37.658 "ffdhe2048", 00:13:37.658 "ffdhe3072", 00:13:37.658 "ffdhe4096", 00:13:37.658 "ffdhe6144", 00:13:37.658 "ffdhe8192" 00:13:37.658 ] 00:13:37.658 } 00:13:37.658 }, 00:13:37.658 { 00:13:37.658 "method": "bdev_nvme_attach_controller", 00:13:37.658 "params": { 00:13:37.658 "name": "TLSTEST", 00:13:37.658 "trtype": "TCP", 00:13:37.658 "adrfam": "IPv4", 00:13:37.658 "traddr": "10.0.0.2", 00:13:37.658 "trsvcid": "4420", 00:13:37.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.658 "prchk_reftag": false, 00:13:37.658 "prchk_guard": false, 00:13:37.658 "ctrlr_loss_timeout_sec": 0, 00:13:37.658 "reconnect_delay_sec": 0, 00:13:37.658 "fast_io_fail_timeout_sec": 0, 00:13:37.658 "psk": "/tmp/tmp.Bs9Dou4SoA", 00:13:37.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:37.658 "hdgst": false, 
00:13:37.658 "ddgst": false 00:13:37.658 } 00:13:37.658 }, 00:13:37.658 { 00:13:37.658 "method": "bdev_nvme_set_hotplug", 00:13:37.658 "params": { 00:13:37.658 "period_us": 100000, 00:13:37.658 "enable": false 00:13:37.658 } 00:13:37.658 }, 00:13:37.658 { 00:13:37.658 "method": "bdev_wait_for_examine" 00:13:37.658 } 00:13:37.658 ] 00:13:37.658 }, 00:13:37.658 { 00:13:37.658 "subsystem": "nbd", 00:13:37.658 "config": [] 00:13:37.658 } 00:13:37.658 ] 00:13:37.658 }' 00:13:37.658 15:17:46 -- common/autotest_common.sh@817 -- # '[' -z 70842 ']' 00:13:37.658 15:17:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:37.658 15:17:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:37.658 15:17:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:37.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:37.658 15:17:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:37.658 15:17:46 -- common/autotest_common.sh@10 -- # set +x 00:13:37.658 [2024-04-24 15:17:46.703976] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:13:37.658 [2024-04-24 15:17:46.704283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70842 ] 00:13:37.658 [2024-04-24 15:17:46.845801] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.923 [2024-04-24 15:17:46.974490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.923 [2024-04-24 15:17:47.149902] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:37.923 [2024-04-24 15:17:47.150261] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:38.514 15:17:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:38.514 15:17:47 -- common/autotest_common.sh@850 -- # return 0 00:13:38.514 15:17:47 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:38.774 Running I/O for 10 seconds... 
00:13:48.775 00:13:48.775 Latency(us) 00:13:48.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.775 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:48.775 Verification LBA range: start 0x0 length 0x2000 00:13:48.775 TLSTESTn1 : 10.02 4136.27 16.16 0.00 0.00 30883.36 7417.48 26691.03 00:13:48.775 =================================================================================================================== 00:13:48.775 Total : 4136.27 16.16 0.00 0.00 30883.36 7417.48 26691.03 00:13:48.775 0 00:13:48.775 15:17:57 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:48.775 15:17:57 -- target/tls.sh@214 -- # killprocess 70842 00:13:48.775 15:17:57 -- common/autotest_common.sh@936 -- # '[' -z 70842 ']' 00:13:48.775 15:17:57 -- common/autotest_common.sh@940 -- # kill -0 70842 00:13:48.775 15:17:57 -- common/autotest_common.sh@941 -- # uname 00:13:48.775 15:17:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:48.775 15:17:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70842 00:13:48.775 killing process with pid 70842 00:13:48.775 Received shutdown signal, test time was about 10.000000 seconds 00:13:48.775 00:13:48.775 Latency(us) 00:13:48.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.775 =================================================================================================================== 00:13:48.775 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:48.775 15:17:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:48.775 15:17:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:48.775 15:17:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70842' 00:13:48.775 15:17:57 -- common/autotest_common.sh@955 -- # kill 70842 00:13:48.775 [2024-04-24 15:17:57.838783] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:48.775 15:17:57 -- common/autotest_common.sh@960 -- # wait 70842 00:13:49.035 15:17:58 -- target/tls.sh@215 -- # killprocess 70804 00:13:49.035 15:17:58 -- common/autotest_common.sh@936 -- # '[' -z 70804 ']' 00:13:49.035 15:17:58 -- common/autotest_common.sh@940 -- # kill -0 70804 00:13:49.035 15:17:58 -- common/autotest_common.sh@941 -- # uname 00:13:49.035 15:17:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:49.035 15:17:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70804 00:13:49.035 killing process with pid 70804 00:13:49.035 15:17:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:49.035 15:17:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:49.035 15:17:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70804' 00:13:49.035 15:17:58 -- common/autotest_common.sh@955 -- # kill 70804 00:13:49.035 [2024-04-24 15:17:58.123623] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:49.035 15:17:58 -- common/autotest_common.sh@960 -- # wait 70804 00:13:49.294 15:17:58 -- target/tls.sh@218 -- # nvmfappstart 00:13:49.294 15:17:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:49.294 15:17:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:49.294 15:17:58 -- common/autotest_common.sh@10 -- # set +x 00:13:49.294 15:17:58 -- nvmf/common.sh@470 -- # 
nvmfpid=70979 00:13:49.294 15:17:58 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:49.294 15:17:58 -- nvmf/common.sh@471 -- # waitforlisten 70979 00:13:49.294 15:17:58 -- common/autotest_common.sh@817 -- # '[' -z 70979 ']' 00:13:49.294 15:17:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.294 15:17:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:49.294 15:17:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.294 15:17:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:49.294 15:17:58 -- common/autotest_common.sh@10 -- # set +x 00:13:49.294 [2024-04-24 15:17:58.451026] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:13:49.294 [2024-04-24 15:17:58.451136] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.553 [2024-04-24 15:17:58.591132] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.553 [2024-04-24 15:17:58.706364] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.553 [2024-04-24 15:17:58.706455] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.553 [2024-04-24 15:17:58.706484] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.553 [2024-04-24 15:17:58.706495] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.553 [2024-04-24 15:17:58.706504] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
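The setup_nvmf_tgt step that follows builds the TLS-capable target through plain rpc.py calls; condensed, the sequence is (subsystem, address and key file exactly as used in this job):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS; the target logs "TLS support is considered experimental"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # --psk pins the host to the PSK file; this path-based form is what trips the "PSK path" deprecation notice
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bs9Dou4SoA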
00:13:49.553 [2024-04-24 15:17:58.706545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.492 15:17:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:50.492 15:17:59 -- common/autotest_common.sh@850 -- # return 0 00:13:50.492 15:17:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:50.492 15:17:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:50.492 15:17:59 -- common/autotest_common.sh@10 -- # set +x 00:13:50.492 15:17:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.492 15:17:59 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Bs9Dou4SoA 00:13:50.492 15:17:59 -- target/tls.sh@49 -- # local key=/tmp/tmp.Bs9Dou4SoA 00:13:50.492 15:17:59 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:50.750 [2024-04-24 15:17:59.753694] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.750 15:17:59 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:51.007 15:18:00 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:51.266 [2024-04-24 15:18:00.289799] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:51.266 [2024-04-24 15:18:00.290046] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.266 15:18:00 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:51.525 malloc0 00:13:51.525 15:18:00 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:51.814 15:18:00 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bs9Dou4SoA 00:13:51.814 [2024-04-24 15:18:01.049232] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:52.073 15:18:01 -- target/tls.sh@222 -- # bdevperf_pid=71035 00:13:52.073 15:18:01 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:52.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:52.073 15:18:01 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:52.073 15:18:01 -- target/tls.sh@225 -- # waitforlisten 71035 /var/tmp/bdevperf.sock 00:13:52.073 15:18:01 -- common/autotest_common.sh@817 -- # '[' -z 71035 ']' 00:13:52.073 15:18:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:52.073 15:18:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:52.073 15:18:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:52.073 15:18:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:52.073 15:18:01 -- common/autotest_common.sh@10 -- # set +x 00:13:52.073 [2024-04-24 15:18:01.115871] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:13:52.073 [2024-04-24 15:18:01.116120] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71035 ] 00:13:52.073 [2024-04-24 15:18:01.249422] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.334 [2024-04-24 15:18:01.361341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.901 15:18:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:52.901 15:18:02 -- common/autotest_common.sh@850 -- # return 0 00:13:52.901 15:18:02 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Bs9Dou4SoA 00:13:53.159 15:18:02 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:53.418 [2024-04-24 15:18:02.589022] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:53.676 nvme0n1 00:13:53.676 15:18:02 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:53.676 Running I/O for 1 seconds... 00:13:54.624 00:13:54.624 Latency(us) 00:13:54.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.624 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:54.624 Verification LBA range: start 0x0 length 0x2000 00:13:54.625 nvme0n1 : 1.02 3942.46 15.40 0.00 0.00 32019.73 7804.74 21448.15 00:13:54.625 =================================================================================================================== 00:13:54.625 Total : 3942.46 15.40 0.00 0.00 32019.73 7804.74 21448.15 00:13:54.625 0 00:13:54.625 15:18:03 -- target/tls.sh@234 -- # killprocess 71035 00:13:54.625 15:18:03 -- common/autotest_common.sh@936 -- # '[' -z 71035 ']' 00:13:54.625 15:18:03 -- common/autotest_common.sh@940 -- # kill -0 71035 00:13:54.625 15:18:03 -- common/autotest_common.sh@941 -- # uname 00:13:54.625 15:18:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:54.625 15:18:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71035 00:13:54.625 killing process with pid 71035 00:13:54.625 Received shutdown signal, test time was about 1.000000 seconds 00:13:54.625 00:13:54.625 Latency(us) 00:13:54.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.625 =================================================================================================================== 00:13:54.625 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:54.625 15:18:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:54.625 15:18:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:54.625 15:18:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71035' 00:13:54.625 15:18:03 -- common/autotest_common.sh@955 -- # kill 71035 00:13:54.625 15:18:03 -- common/autotest_common.sh@960 -- # wait 71035 00:13:54.884 15:18:04 -- target/tls.sh@235 -- # killprocess 70979 00:13:54.884 15:18:04 -- common/autotest_common.sh@936 -- # '[' -z 70979 ']' 00:13:54.884 15:18:04 -- common/autotest_common.sh@940 -- # kill -0 70979 00:13:54.884 15:18:04 -- common/autotest_common.sh@941 -- # 
uname 00:13:54.884 15:18:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:54.885 15:18:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70979 00:13:55.143 killing process with pid 70979 00:13:55.143 15:18:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:55.143 15:18:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:55.143 15:18:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70979' 00:13:55.143 15:18:04 -- common/autotest_common.sh@955 -- # kill 70979 00:13:55.143 [2024-04-24 15:18:04.148240] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:55.143 15:18:04 -- common/autotest_common.sh@960 -- # wait 70979 00:13:55.402 15:18:04 -- target/tls.sh@238 -- # nvmfappstart 00:13:55.402 15:18:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:55.402 15:18:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:55.402 15:18:04 -- common/autotest_common.sh@10 -- # set +x 00:13:55.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.402 15:18:04 -- nvmf/common.sh@470 -- # nvmfpid=71086 00:13:55.402 15:18:04 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:55.402 15:18:04 -- nvmf/common.sh@471 -- # waitforlisten 71086 00:13:55.402 15:18:04 -- common/autotest_common.sh@817 -- # '[' -z 71086 ']' 00:13:55.402 15:18:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.402 15:18:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:55.402 15:18:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.402 15:18:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:55.402 15:18:04 -- common/autotest_common.sh@10 -- # set +x 00:13:55.402 [2024-04-24 15:18:04.491390] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:13:55.402 [2024-04-24 15:18:04.491782] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.402 [2024-04-24 15:18:04.624790] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.660 [2024-04-24 15:18:04.729903] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.660 [2024-04-24 15:18:04.730220] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.660 [2024-04-24 15:18:04.730363] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.660 [2024-04-24 15:18:04.730379] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.660 [2024-04-24 15:18:04.730386] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
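On the initiator side, the later bdevperf runs (pids 71035 and 71118) hand the PSK over through the keyring instead of the spdk_nvme_ctrlr_opts.psk path that drew a deprecation warning in the first run: the key file is registered once under a name, and the controller attach references that name. Condensed from the rpc.py calls in this log:

    # register the PSK file as keyring entry "key0" on the bdevperf instance
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Bs9Dou4SoA
    # attach the TLS controller, referring to the key by name
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1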
00:13:55.660 [2024-04-24 15:18:04.730417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.236 15:18:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:56.236 15:18:05 -- common/autotest_common.sh@850 -- # return 0 00:13:56.236 15:18:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:56.236 15:18:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:56.236 15:18:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.546 15:18:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.546 15:18:05 -- target/tls.sh@239 -- # rpc_cmd 00:13:56.546 15:18:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.546 15:18:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.547 [2024-04-24 15:18:05.486947] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.547 malloc0 00:13:56.547 [2024-04-24 15:18:05.519056] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:56.547 [2024-04-24 15:18:05.519475] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.547 15:18:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.547 15:18:05 -- target/tls.sh@252 -- # bdevperf_pid=71118 00:13:56.547 15:18:05 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:56.547 15:18:05 -- target/tls.sh@254 -- # waitforlisten 71118 /var/tmp/bdevperf.sock 00:13:56.547 15:18:05 -- common/autotest_common.sh@817 -- # '[' -z 71118 ']' 00:13:56.547 15:18:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:56.547 15:18:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:56.547 15:18:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:56.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:56.547 15:18:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:56.547 15:18:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.547 [2024-04-24 15:18:05.600391] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:13:56.547 [2024-04-24 15:18:05.600767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71118 ] 00:13:56.547 [2024-04-24 15:18:05.739874] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.831 [2024-04-24 15:18:05.863306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.398 15:18:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:57.398 15:18:06 -- common/autotest_common.sh@850 -- # return 0 00:13:57.398 15:18:06 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Bs9Dou4SoA 00:13:57.657 15:18:06 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:57.916 [2024-04-24 15:18:07.041026] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:57.916 nvme0n1 00:13:57.916 15:18:07 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:58.175 Running I/O for 1 seconds... 00:13:59.121 00:13:59.121 Latency(us) 00:13:59.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.121 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:59.121 Verification LBA range: start 0x0 length 0x2000 00:13:59.121 nvme0n1 : 1.02 4194.87 16.39 0.00 0.00 30211.59 6345.08 23354.65 00:13:59.121 =================================================================================================================== 00:13:59.121 Total : 4194.87 16.39 0.00 0.00 30211.59 6345.08 23354.65 00:13:59.121 0 00:13:59.121 15:18:08 -- target/tls.sh@263 -- # rpc_cmd save_config 00:13:59.121 15:18:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:59.121 15:18:08 -- common/autotest_common.sh@10 -- # set +x 00:13:59.380 15:18:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:59.380 15:18:08 -- target/tls.sh@263 -- # tgtcfg='{ 00:13:59.380 "subsystems": [ 00:13:59.380 { 00:13:59.380 "subsystem": "keyring", 00:13:59.380 "config": [ 00:13:59.380 { 00:13:59.380 "method": "keyring_file_add_key", 00:13:59.380 "params": { 00:13:59.380 "name": "key0", 00:13:59.380 "path": "/tmp/tmp.Bs9Dou4SoA" 00:13:59.380 } 00:13:59.380 } 00:13:59.380 ] 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "subsystem": "iobuf", 00:13:59.380 "config": [ 00:13:59.380 { 00:13:59.380 "method": "iobuf_set_options", 00:13:59.380 "params": { 00:13:59.380 "small_pool_count": 8192, 00:13:59.380 "large_pool_count": 1024, 00:13:59.380 "small_bufsize": 8192, 00:13:59.380 "large_bufsize": 135168 00:13:59.380 } 00:13:59.380 } 00:13:59.380 ] 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "subsystem": "sock", 00:13:59.380 "config": [ 00:13:59.380 { 00:13:59.380 "method": "sock_impl_set_options", 00:13:59.380 "params": { 00:13:59.380 "impl_name": "uring", 00:13:59.380 "recv_buf_size": 2097152, 00:13:59.380 "send_buf_size": 2097152, 00:13:59.380 "enable_recv_pipe": true, 00:13:59.380 "enable_quickack": false, 00:13:59.380 "enable_placement_id": 0, 00:13:59.380 "enable_zerocopy_send_server": false, 00:13:59.380 "enable_zerocopy_send_client": false, 00:13:59.380 "zerocopy_threshold": 0, 
00:13:59.380 "tls_version": 0, 00:13:59.380 "enable_ktls": false 00:13:59.380 } 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "method": "sock_impl_set_options", 00:13:59.380 "params": { 00:13:59.380 "impl_name": "posix", 00:13:59.380 "recv_buf_size": 2097152, 00:13:59.380 "send_buf_size": 2097152, 00:13:59.380 "enable_recv_pipe": true, 00:13:59.380 "enable_quickack": false, 00:13:59.380 "enable_placement_id": 0, 00:13:59.380 "enable_zerocopy_send_server": true, 00:13:59.380 "enable_zerocopy_send_client": false, 00:13:59.380 "zerocopy_threshold": 0, 00:13:59.380 "tls_version": 0, 00:13:59.380 "enable_ktls": false 00:13:59.380 } 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "method": "sock_impl_set_options", 00:13:59.380 "params": { 00:13:59.380 "impl_name": "ssl", 00:13:59.380 "recv_buf_size": 4096, 00:13:59.380 "send_buf_size": 4096, 00:13:59.380 "enable_recv_pipe": true, 00:13:59.380 "enable_quickack": false, 00:13:59.380 "enable_placement_id": 0, 00:13:59.380 "enable_zerocopy_send_server": true, 00:13:59.380 "enable_zerocopy_send_client": false, 00:13:59.380 "zerocopy_threshold": 0, 00:13:59.380 "tls_version": 0, 00:13:59.380 "enable_ktls": false 00:13:59.380 } 00:13:59.380 } 00:13:59.380 ] 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "subsystem": "vmd", 00:13:59.380 "config": [] 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "subsystem": "accel", 00:13:59.380 "config": [ 00:13:59.380 { 00:13:59.380 "method": "accel_set_options", 00:13:59.380 "params": { 00:13:59.380 "small_cache_size": 128, 00:13:59.380 "large_cache_size": 16, 00:13:59.380 "task_count": 2048, 00:13:59.380 "sequence_count": 2048, 00:13:59.380 "buf_count": 2048 00:13:59.380 } 00:13:59.380 } 00:13:59.380 ] 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "subsystem": "bdev", 00:13:59.380 "config": [ 00:13:59.380 { 00:13:59.380 "method": "bdev_set_options", 00:13:59.380 "params": { 00:13:59.380 "bdev_io_pool_size": 65535, 00:13:59.380 "bdev_io_cache_size": 256, 00:13:59.380 "bdev_auto_examine": true, 00:13:59.380 "iobuf_small_cache_size": 128, 00:13:59.380 "iobuf_large_cache_size": 16 00:13:59.380 } 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "method": "bdev_raid_set_options", 00:13:59.380 "params": { 00:13:59.380 "process_window_size_kb": 1024 00:13:59.380 } 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "method": "bdev_iscsi_set_options", 00:13:59.380 "params": { 00:13:59.380 "timeout_sec": 30 00:13:59.380 } 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "method": "bdev_nvme_set_options", 00:13:59.380 "params": { 00:13:59.380 "action_on_timeout": "none", 00:13:59.380 "timeout_us": 0, 00:13:59.380 "timeout_admin_us": 0, 00:13:59.380 "keep_alive_timeout_ms": 10000, 00:13:59.380 "arbitration_burst": 0, 00:13:59.380 "low_priority_weight": 0, 00:13:59.380 "medium_priority_weight": 0, 00:13:59.380 "high_priority_weight": 0, 00:13:59.380 "nvme_adminq_poll_period_us": 10000, 00:13:59.380 "nvme_ioq_poll_period_us": 0, 00:13:59.380 "io_queue_requests": 0, 00:13:59.380 "delay_cmd_submit": true, 00:13:59.380 "transport_retry_count": 4, 00:13:59.380 "bdev_retry_count": 3, 00:13:59.380 "transport_ack_timeout": 0, 00:13:59.380 "ctrlr_loss_timeout_sec": 0, 00:13:59.380 "reconnect_delay_sec": 0, 00:13:59.380 "fast_io_fail_timeout_sec": 0, 00:13:59.380 "disable_auto_failback": false, 00:13:59.380 "generate_uuids": false, 00:13:59.380 "transport_tos": 0, 00:13:59.380 "nvme_error_stat": false, 00:13:59.380 "rdma_srq_size": 0, 00:13:59.380 "io_path_stat": false, 00:13:59.380 "allow_accel_sequence": false, 00:13:59.380 "rdma_max_cq_size": 0, 00:13:59.380 
"rdma_cm_event_timeout_ms": 0, 00:13:59.380 "dhchap_digests": [ 00:13:59.380 "sha256", 00:13:59.380 "sha384", 00:13:59.380 "sha512" 00:13:59.380 ], 00:13:59.380 "dhchap_dhgroups": [ 00:13:59.380 "null", 00:13:59.380 "ffdhe2048", 00:13:59.380 "ffdhe3072", 00:13:59.380 "ffdhe4096", 00:13:59.380 "ffdhe6144", 00:13:59.380 "ffdhe8192" 00:13:59.380 ] 00:13:59.380 } 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "method": "bdev_nvme_set_hotplug", 00:13:59.380 "params": { 00:13:59.380 "period_us": 100000, 00:13:59.380 "enable": false 00:13:59.380 } 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "method": "bdev_malloc_create", 00:13:59.380 "params": { 00:13:59.380 "name": "malloc0", 00:13:59.380 "num_blocks": 8192, 00:13:59.380 "block_size": 4096, 00:13:59.380 "physical_block_size": 4096, 00:13:59.380 "uuid": "e5f459f0-7515-4a21-8da0-bcc270f32ddd", 00:13:59.380 "optimal_io_boundary": 0 00:13:59.380 } 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "method": "bdev_wait_for_examine" 00:13:59.380 } 00:13:59.380 ] 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "subsystem": "nbd", 00:13:59.380 "config": [] 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "subsystem": "scheduler", 00:13:59.380 "config": [ 00:13:59.380 { 00:13:59.380 "method": "framework_set_scheduler", 00:13:59.380 "params": { 00:13:59.380 "name": "static" 00:13:59.380 } 00:13:59.380 } 00:13:59.380 ] 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "subsystem": "nvmf", 00:13:59.380 "config": [ 00:13:59.380 { 00:13:59.380 "method": "nvmf_set_config", 00:13:59.380 "params": { 00:13:59.380 "discovery_filter": "match_any", 00:13:59.380 "admin_cmd_passthru": { 00:13:59.380 "identify_ctrlr": false 00:13:59.380 } 00:13:59.380 } 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "method": "nvmf_set_max_subsystems", 00:13:59.380 "params": { 00:13:59.380 "max_subsystems": 1024 00:13:59.380 } 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "method": "nvmf_set_crdt", 00:13:59.380 "params": { 00:13:59.380 "crdt1": 0, 00:13:59.380 "crdt2": 0, 00:13:59.380 "crdt3": 0 00:13:59.380 } 00:13:59.380 }, 00:13:59.380 { 00:13:59.380 "method": "nvmf_create_transport", 00:13:59.380 "params": { 00:13:59.380 "trtype": "TCP", 00:13:59.380 "max_queue_depth": 128, 00:13:59.380 "max_io_qpairs_per_ctrlr": 127, 00:13:59.380 "in_capsule_data_size": 4096, 00:13:59.380 "max_io_size": 131072, 00:13:59.380 "io_unit_size": 131072, 00:13:59.380 "max_aq_depth": 128, 00:13:59.380 "num_shared_buffers": 511, 00:13:59.381 "buf_cache_size": 4294967295, 00:13:59.381 "dif_insert_or_strip": false, 00:13:59.381 "zcopy": false, 00:13:59.381 "c2h_success": false, 00:13:59.381 "sock_priority": 0, 00:13:59.381 "abort_timeout_sec": 1, 00:13:59.381 "ack_timeout": 0 00:13:59.381 } 00:13:59.381 }, 00:13:59.381 { 00:13:59.381 "method": "nvmf_create_subsystem", 00:13:59.381 "params": { 00:13:59.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.381 "allow_any_host": false, 00:13:59.381 "serial_number": "00000000000000000000", 00:13:59.381 "model_number": "SPDK bdev Controller", 00:13:59.381 "max_namespaces": 32, 00:13:59.381 "min_cntlid": 1, 00:13:59.381 "max_cntlid": 65519, 00:13:59.381 "ana_reporting": false 00:13:59.381 } 00:13:59.381 }, 00:13:59.381 { 00:13:59.381 "method": "nvmf_subsystem_add_host", 00:13:59.381 "params": { 00:13:59.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.381 "host": "nqn.2016-06.io.spdk:host1", 00:13:59.381 "psk": "key0" 00:13:59.381 } 00:13:59.381 }, 00:13:59.381 { 00:13:59.381 "method": "nvmf_subsystem_add_ns", 00:13:59.381 "params": { 00:13:59.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:13:59.381 "namespace": { 00:13:59.381 "nsid": 1, 00:13:59.381 "bdev_name": "malloc0", 00:13:59.381 "nguid": "E5F459F075154A218DA0BCC270F32DDD", 00:13:59.381 "uuid": "e5f459f0-7515-4a21-8da0-bcc270f32ddd", 00:13:59.381 "no_auto_visible": false 00:13:59.381 } 00:13:59.381 } 00:13:59.381 }, 00:13:59.381 { 00:13:59.381 "method": "nvmf_subsystem_add_listener", 00:13:59.381 "params": { 00:13:59.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.381 "listen_address": { 00:13:59.381 "trtype": "TCP", 00:13:59.381 "adrfam": "IPv4", 00:13:59.381 "traddr": "10.0.0.2", 00:13:59.381 "trsvcid": "4420" 00:13:59.381 }, 00:13:59.381 "secure_channel": true 00:13:59.381 } 00:13:59.381 } 00:13:59.381 ] 00:13:59.381 } 00:13:59.381 ] 00:13:59.381 }' 00:13:59.381 15:18:08 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:59.640 15:18:08 -- target/tls.sh@264 -- # bperfcfg='{ 00:13:59.640 "subsystems": [ 00:13:59.640 { 00:13:59.640 "subsystem": "keyring", 00:13:59.640 "config": [ 00:13:59.640 { 00:13:59.640 "method": "keyring_file_add_key", 00:13:59.640 "params": { 00:13:59.640 "name": "key0", 00:13:59.640 "path": "/tmp/tmp.Bs9Dou4SoA" 00:13:59.640 } 00:13:59.640 } 00:13:59.640 ] 00:13:59.640 }, 00:13:59.640 { 00:13:59.640 "subsystem": "iobuf", 00:13:59.640 "config": [ 00:13:59.640 { 00:13:59.640 "method": "iobuf_set_options", 00:13:59.640 "params": { 00:13:59.640 "small_pool_count": 8192, 00:13:59.640 "large_pool_count": 1024, 00:13:59.640 "small_bufsize": 8192, 00:13:59.640 "large_bufsize": 135168 00:13:59.640 } 00:13:59.640 } 00:13:59.640 ] 00:13:59.640 }, 00:13:59.640 { 00:13:59.640 "subsystem": "sock", 00:13:59.640 "config": [ 00:13:59.640 { 00:13:59.640 "method": "sock_impl_set_options", 00:13:59.640 "params": { 00:13:59.640 "impl_name": "uring", 00:13:59.640 "recv_buf_size": 2097152, 00:13:59.640 "send_buf_size": 2097152, 00:13:59.640 "enable_recv_pipe": true, 00:13:59.640 "enable_quickack": false, 00:13:59.640 "enable_placement_id": 0, 00:13:59.640 "enable_zerocopy_send_server": false, 00:13:59.640 "enable_zerocopy_send_client": false, 00:13:59.640 "zerocopy_threshold": 0, 00:13:59.640 "tls_version": 0, 00:13:59.640 "enable_ktls": false 00:13:59.640 } 00:13:59.640 }, 00:13:59.640 { 00:13:59.640 "method": "sock_impl_set_options", 00:13:59.640 "params": { 00:13:59.640 "impl_name": "posix", 00:13:59.640 "recv_buf_size": 2097152, 00:13:59.640 "send_buf_size": 2097152, 00:13:59.640 "enable_recv_pipe": true, 00:13:59.640 "enable_quickack": false, 00:13:59.640 "enable_placement_id": 0, 00:13:59.640 "enable_zerocopy_send_server": true, 00:13:59.640 "enable_zerocopy_send_client": false, 00:13:59.640 "zerocopy_threshold": 0, 00:13:59.640 "tls_version": 0, 00:13:59.640 "enable_ktls": false 00:13:59.640 } 00:13:59.640 }, 00:13:59.640 { 00:13:59.640 "method": "sock_impl_set_options", 00:13:59.640 "params": { 00:13:59.640 "impl_name": "ssl", 00:13:59.640 "recv_buf_size": 4096, 00:13:59.640 "send_buf_size": 4096, 00:13:59.640 "enable_recv_pipe": true, 00:13:59.640 "enable_quickack": false, 00:13:59.640 "enable_placement_id": 0, 00:13:59.640 "enable_zerocopy_send_server": true, 00:13:59.640 "enable_zerocopy_send_client": false, 00:13:59.640 "zerocopy_threshold": 0, 00:13:59.640 "tls_version": 0, 00:13:59.640 "enable_ktls": false 00:13:59.640 } 00:13:59.640 } 00:13:59.640 ] 00:13:59.640 }, 00:13:59.640 { 00:13:59.640 "subsystem": "vmd", 00:13:59.640 "config": [] 00:13:59.640 }, 00:13:59.640 { 00:13:59.640 "subsystem": "accel", 00:13:59.640 "config": [ 
00:13:59.640 { 00:13:59.640 "method": "accel_set_options", 00:13:59.640 "params": { 00:13:59.640 "small_cache_size": 128, 00:13:59.640 "large_cache_size": 16, 00:13:59.640 "task_count": 2048, 00:13:59.640 "sequence_count": 2048, 00:13:59.640 "buf_count": 2048 00:13:59.640 } 00:13:59.640 } 00:13:59.640 ] 00:13:59.640 }, 00:13:59.640 { 00:13:59.640 "subsystem": "bdev", 00:13:59.640 "config": [ 00:13:59.640 { 00:13:59.640 "method": "bdev_set_options", 00:13:59.640 "params": { 00:13:59.640 "bdev_io_pool_size": 65535, 00:13:59.640 "bdev_io_cache_size": 256, 00:13:59.640 "bdev_auto_examine": true, 00:13:59.640 "iobuf_small_cache_size": 128, 00:13:59.640 "iobuf_large_cache_size": 16 00:13:59.640 } 00:13:59.640 }, 00:13:59.640 { 00:13:59.640 "method": "bdev_raid_set_options", 00:13:59.640 "params": { 00:13:59.640 "process_window_size_kb": 1024 00:13:59.640 } 00:13:59.640 }, 00:13:59.640 { 00:13:59.640 "method": "bdev_iscsi_set_options", 00:13:59.640 "params": { 00:13:59.640 "timeout_sec": 30 00:13:59.640 } 00:13:59.640 }, 00:13:59.640 { 00:13:59.640 "method": "bdev_nvme_set_options", 00:13:59.640 "params": { 00:13:59.640 "action_on_timeout": "none", 00:13:59.640 "timeout_us": 0, 00:13:59.640 "timeout_admin_us": 0, 00:13:59.640 "keep_alive_timeout_ms": 10000, 00:13:59.640 "arbitration_burst": 0, 00:13:59.640 "low_priority_weight": 0, 00:13:59.640 "medium_priority_weight": 0, 00:13:59.640 "high_priority_weight": 0, 00:13:59.640 "nvme_adminq_poll_period_us": 10000, 00:13:59.640 "nvme_ioq_poll_period_us": 0, 00:13:59.640 "io_queue_requests": 512, 00:13:59.640 "delay_cmd_submit": true, 00:13:59.640 "transport_retry_count": 4, 00:13:59.640 "bdev_retry_count": 3, 00:13:59.640 "transport_ack_timeout": 0, 00:13:59.640 "ctrlr_loss_timeout_sec": 0, 00:13:59.640 "reconnect_delay_sec": 0, 00:13:59.640 "fast_io_fail_timeout_sec": 0, 00:13:59.640 "disable_auto_failback": false, 00:13:59.641 "generate_uuids": false, 00:13:59.641 "transport_tos": 0, 00:13:59.641 "nvme_error_stat": false, 00:13:59.641 "rdma_srq_size": 0, 00:13:59.641 "io_path_stat": false, 00:13:59.641 "allow_accel_sequence": false, 00:13:59.641 "rdma_max_cq_size": 0, 00:13:59.641 "rdma_cm_event_timeout_ms": 0, 00:13:59.641 "dhchap_digests": [ 00:13:59.641 "sha256", 00:13:59.641 "sha384", 00:13:59.641 "sha512" 00:13:59.641 ], 00:13:59.641 "dhchap_dhgroups": [ 00:13:59.641 "null", 00:13:59.641 "ffdhe2048", 00:13:59.641 "ffdhe3072", 00:13:59.641 "ffdhe4096", 00:13:59.641 "ffdhe6144", 00:13:59.641 "ffdhe8192" 00:13:59.641 ] 00:13:59.641 } 00:13:59.641 }, 00:13:59.641 { 00:13:59.641 "method": "bdev_nvme_attach_controller", 00:13:59.641 "params": { 00:13:59.641 "name": "nvme0", 00:13:59.641 "trtype": "TCP", 00:13:59.641 "adrfam": "IPv4", 00:13:59.641 "traddr": "10.0.0.2", 00:13:59.641 "trsvcid": "4420", 00:13:59.641 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.641 "prchk_reftag": false, 00:13:59.641 "prchk_guard": false, 00:13:59.641 "ctrlr_loss_timeout_sec": 0, 00:13:59.641 "reconnect_delay_sec": 0, 00:13:59.641 "fast_io_fail_timeout_sec": 0, 00:13:59.641 "psk": "key0", 00:13:59.641 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:59.641 "hdgst": false, 00:13:59.641 "ddgst": false 00:13:59.641 } 00:13:59.641 }, 00:13:59.641 { 00:13:59.641 "method": "bdev_nvme_set_hotplug", 00:13:59.641 "params": { 00:13:59.641 "period_us": 100000, 00:13:59.641 "enable": false 00:13:59.641 } 00:13:59.641 }, 00:13:59.641 { 00:13:59.641 "method": "bdev_enable_histogram", 00:13:59.641 "params": { 00:13:59.641 "name": "nvme0n1", 00:13:59.641 "enable": true 00:13:59.641 } 
00:13:59.641 }, 00:13:59.641 { 00:13:59.641 "method": "bdev_wait_for_examine" 00:13:59.641 } 00:13:59.641 ] 00:13:59.641 }, 00:13:59.641 { 00:13:59.641 "subsystem": "nbd", 00:13:59.641 "config": [] 00:13:59.641 } 00:13:59.641 ] 00:13:59.641 }' 00:13:59.641 15:18:08 -- target/tls.sh@266 -- # killprocess 71118 00:13:59.641 15:18:08 -- common/autotest_common.sh@936 -- # '[' -z 71118 ']' 00:13:59.641 15:18:08 -- common/autotest_common.sh@940 -- # kill -0 71118 00:13:59.641 15:18:08 -- common/autotest_common.sh@941 -- # uname 00:13:59.641 15:18:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:59.641 15:18:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71118 00:13:59.641 killing process with pid 71118 00:13:59.641 Received shutdown signal, test time was about 1.000000 seconds 00:13:59.641 00:13:59.641 Latency(us) 00:13:59.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.641 =================================================================================================================== 00:13:59.641 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:59.641 15:18:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:59.641 15:18:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:59.641 15:18:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71118' 00:13:59.641 15:18:08 -- common/autotest_common.sh@955 -- # kill 71118 00:13:59.641 15:18:08 -- common/autotest_common.sh@960 -- # wait 71118 00:13:59.900 15:18:09 -- target/tls.sh@267 -- # killprocess 71086 00:13:59.900 15:18:09 -- common/autotest_common.sh@936 -- # '[' -z 71086 ']' 00:13:59.900 15:18:09 -- common/autotest_common.sh@940 -- # kill -0 71086 00:13:59.900 15:18:09 -- common/autotest_common.sh@941 -- # uname 00:13:59.900 15:18:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:59.900 15:18:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71086 00:13:59.900 killing process with pid 71086 00:13:59.900 15:18:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:59.900 15:18:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:59.900 15:18:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71086' 00:13:59.900 15:18:09 -- common/autotest_common.sh@955 -- # kill 71086 00:13:59.900 15:18:09 -- common/autotest_common.sh@960 -- # wait 71086 00:14:00.159 15:18:09 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:14:00.159 15:18:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:00.159 15:18:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:00.159 15:18:09 -- target/tls.sh@269 -- # echo '{ 00:14:00.159 "subsystems": [ 00:14:00.159 { 00:14:00.159 "subsystem": "keyring", 00:14:00.159 "config": [ 00:14:00.159 { 00:14:00.159 "method": "keyring_file_add_key", 00:14:00.159 "params": { 00:14:00.159 "name": "key0", 00:14:00.159 "path": "/tmp/tmp.Bs9Dou4SoA" 00:14:00.159 } 00:14:00.159 } 00:14:00.159 ] 00:14:00.159 }, 00:14:00.159 { 00:14:00.159 "subsystem": "iobuf", 00:14:00.159 "config": [ 00:14:00.159 { 00:14:00.159 "method": "iobuf_set_options", 00:14:00.159 "params": { 00:14:00.159 "small_pool_count": 8192, 00:14:00.159 "large_pool_count": 1024, 00:14:00.159 "small_bufsize": 8192, 00:14:00.159 "large_bufsize": 135168 00:14:00.159 } 00:14:00.159 } 00:14:00.159 ] 00:14:00.159 }, 00:14:00.159 { 00:14:00.159 "subsystem": "sock", 00:14:00.159 "config": [ 00:14:00.159 { 00:14:00.159 "method": 
"sock_impl_set_options", 00:14:00.159 "params": { 00:14:00.159 "impl_name": "uring", 00:14:00.159 "recv_buf_size": 2097152, 00:14:00.159 "send_buf_size": 2097152, 00:14:00.159 "enable_recv_pipe": true, 00:14:00.159 "enable_quickack": false, 00:14:00.159 "enable_placement_id": 0, 00:14:00.159 "enable_zerocopy_send_server": false, 00:14:00.159 "enable_zerocopy_send_client": false, 00:14:00.159 "zerocopy_threshold": 0, 00:14:00.159 "tls_version": 0, 00:14:00.159 "enable_ktls": false 00:14:00.159 } 00:14:00.159 }, 00:14:00.159 { 00:14:00.159 "method": "sock_impl_set_options", 00:14:00.159 "params": { 00:14:00.159 "impl_name": "posix", 00:14:00.159 "recv_buf_size": 2097152, 00:14:00.159 "send_buf_size": 2097152, 00:14:00.159 "enable_recv_pipe": true, 00:14:00.159 "enable_quickack": false, 00:14:00.159 "enable_placement_id": 0, 00:14:00.159 "enable_zerocopy_send_server": true, 00:14:00.159 "enable_zerocopy_send_client": false, 00:14:00.159 "zerocopy_threshold": 0, 00:14:00.159 "tls_version": 0, 00:14:00.159 "enable_ktls": false 00:14:00.159 } 00:14:00.159 }, 00:14:00.159 { 00:14:00.159 "method": "sock_impl_set_options", 00:14:00.159 "params": { 00:14:00.159 "impl_name": "ssl", 00:14:00.159 "recv_buf_size": 4096, 00:14:00.159 "send_buf_size": 4096, 00:14:00.159 "enable_recv_pipe": true, 00:14:00.159 "enable_quickack": false, 00:14:00.159 "enable_placement_id": 0, 00:14:00.159 "enable_zerocopy_send_server": true, 00:14:00.159 "enable_zerocopy_send_client": false, 00:14:00.159 "zerocopy_threshold": 0, 00:14:00.159 "tls_version": 0, 00:14:00.159 "enable_ktls": false 00:14:00.159 } 00:14:00.159 } 00:14:00.159 ] 00:14:00.159 }, 00:14:00.159 { 00:14:00.159 "subsystem": "vmd", 00:14:00.159 "config": [] 00:14:00.159 }, 00:14:00.159 { 00:14:00.159 "subsystem": "accel", 00:14:00.159 "config": [ 00:14:00.159 { 00:14:00.159 "method": "accel_set_options", 00:14:00.160 "params": { 00:14:00.160 "small_cache_size": 128, 00:14:00.160 "large_cache_size": 16, 00:14:00.160 "task_count": 2048, 00:14:00.160 "sequence_count": 2048, 00:14:00.160 "buf_count": 2048 00:14:00.160 } 00:14:00.160 } 00:14:00.160 ] 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "subsystem": "bdev", 00:14:00.160 "config": [ 00:14:00.160 { 00:14:00.160 "method": "bdev_set_options", 00:14:00.160 "params": { 00:14:00.160 "bdev_io_pool_size": 65535, 00:14:00.160 "bdev_io_cache_size": 256, 00:14:00.160 "bdev_auto_examine": true, 00:14:00.160 "iobuf_small_cache_size": 128, 00:14:00.160 "iobuf_large_cache_size": 16 00:14:00.160 } 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "method": "bdev_raid_set_options", 00:14:00.160 "params": { 00:14:00.160 "process_window_size_kb": 1024 00:14:00.160 } 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "method": "bdev_iscsi_set_options", 00:14:00.160 "params": { 00:14:00.160 "timeout_sec": 30 00:14:00.160 } 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "method": "bdev_nvme_set_options", 00:14:00.160 "params": { 00:14:00.160 "action_on_timeout": "none", 00:14:00.160 "timeout_us": 0, 00:14:00.160 "timeout_admin_us": 0, 00:14:00.160 "keep_alive_timeout_ms": 10000, 00:14:00.160 "arbitration_burst": 0, 00:14:00.160 "low_priority_weight": 0, 00:14:00.160 "medium_priority_weight": 0, 00:14:00.160 "high_priority_weight": 0, 00:14:00.160 "nvme_adminq_poll_period_us": 10000, 00:14:00.160 "nvme_ioq_poll_period_us": 0, 00:14:00.160 "io_queue_requests": 0, 00:14:00.160 "delay_cmd_submit": true, 00:14:00.160 "transport_retry_count": 4, 00:14:00.160 "bdev_retry_count": 3, 00:14:00.160 "transport_ack_timeout": 0, 00:14:00.160 
"ctrlr_loss_timeout_sec": 0, 00:14:00.160 "reconnect_delay_sec": 0, 00:14:00.160 "fast_io_fail_timeout_sec": 0, 00:14:00.160 "disable_auto_failback": false, 00:14:00.160 "generate_uuids": false, 00:14:00.160 "transport_tos": 0, 00:14:00.160 "nvme_error_stat": false, 00:14:00.160 "rdma_srq_size": 0, 00:14:00.160 "io_path_stat": false, 00:14:00.160 "allow_accel_sequence": false, 00:14:00.160 "rdma_max_cq_size": 0, 00:14:00.160 "rdma_cm_event_timeout_ms": 0, 00:14:00.160 "dhchap_digests": [ 00:14:00.160 "sha256", 00:14:00.160 "sha384", 00:14:00.160 "sha512" 00:14:00.160 ], 00:14:00.160 "dhchap_dhgroups": [ 00:14:00.160 "null", 00:14:00.160 "ffdhe2048", 00:14:00.160 "ffdhe3072", 00:14:00.160 "ffdhe4096", 00:14:00.160 "ffdhe6144", 00:14:00.160 "ffdhe8192" 00:14:00.160 ] 00:14:00.160 } 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "method": "bdev_nvme_set_hotplug", 00:14:00.160 "params": { 00:14:00.160 "period_us": 100000, 00:14:00.160 "enable": false 00:14:00.160 } 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "method": "bdev_malloc_create", 00:14:00.160 "params": { 00:14:00.160 "name": "malloc0", 00:14:00.160 "num_blocks": 8192, 00:14:00.160 "block_size": 4096, 00:14:00.160 "physical_block_size": 4096, 00:14:00.160 "uuid": "e5f459f0-7515-4a21-8da0-bcc270f32ddd", 00:14:00.160 "optimal_io_boundary": 0 00:14:00.160 } 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "method": "bdev_wait_for_examine" 00:14:00.160 } 00:14:00.160 ] 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "subsystem": "nbd", 00:14:00.160 "config": [] 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "subsystem": "scheduler", 00:14:00.160 "config": [ 00:14:00.160 { 00:14:00.160 "method": "framework_set_scheduler", 00:14:00.160 "params": { 00:14:00.160 "name": "static" 00:14:00.160 } 00:14:00.160 } 00:14:00.160 ] 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "subsystem": "nvmf", 00:14:00.160 "config": [ 00:14:00.160 { 00:14:00.160 "method": "nvmf_set_config", 00:14:00.160 "params": { 00:14:00.160 "discovery_filter": "match_any", 00:14:00.160 "admin_cmd_passthru": { 00:14:00.160 "identify_ctrlr": false 00:14:00.160 } 00:14:00.160 } 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "method": "nvmf_set_max_subsystems", 00:14:00.160 "params": { 00:14:00.160 "max_subsystems": 1024 00:14:00.160 } 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "method": "nvmf_set_crdt", 00:14:00.160 "params": { 00:14:00.160 "crdt1": 0, 00:14:00.160 "crdt2": 0, 00:14:00.160 "crdt3": 0 00:14:00.160 } 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "method": "nvmf_create_transport", 00:14:00.160 "params": { 00:14:00.160 "trtype": "TCP", 00:14:00.160 "max_queue_depth": 128, 00:14:00.160 "max_io_qpairs_per_ctrlr": 127, 00:14:00.160 "in_capsule_data_size": 4096, 00:14:00.160 "max_io_size": 131072, 00:14:00.160 "io_unit_size": 131072, 00:14:00.160 "max_aq_depth": 128, 00:14:00.160 "num_shared_buffers": 511, 00:14:00.160 "buf_cache_size": 4294967295, 00:14:00.160 "dif_insert_or_strip": false, 00:14:00.160 "zcopy": false, 00:14:00.160 "c2h_success": false, 00:14:00.160 "sock_priority": 0, 00:14:00.160 "abort_timeout_sec": 1, 00:14:00.160 "ack_timeout": 0 00:14:00.160 } 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "method": "nvmf_create_subsystem", 00:14:00.160 "params": { 00:14:00.160 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.160 "allow_any_host": false, 00:14:00.160 "serial_number": "00000000000000000000", 00:14:00.160 "model_number": "SPDK bdev Controller", 00:14:00.160 "max_namespaces": 32, 00:14:00.160 "min_cntlid": 1, 00:14:00.160 "max_cntlid": 65519, 00:14:00.160 "ana_reporting": false 
00:14:00.160 } 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "method": "nvmf_subsystem_add_host", 00:14:00.160 "params": { 00:14:00.160 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.160 "host": "nqn.2016-06.io.spdk:host1", 00:14:00.160 "psk": "key0" 00:14:00.160 } 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "method": "nvmf_subsystem_add_ns", 00:14:00.160 "params": { 00:14:00.160 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.160 "namespace": { 00:14:00.160 "nsid": 1, 00:14:00.160 "bdev_name": "malloc0", 00:14:00.160 "nguid": "E5F459F075154A218DA0BCC270F32DDD", 00:14:00.160 "uuid": "e5f459f0-7515-4a21-8da0-bcc270f32ddd", 00:14:00.160 "no_auto_visible": false 00:14:00.160 } 00:14:00.160 } 00:14:00.160 }, 00:14:00.160 { 00:14:00.160 "method": "nvmf_subsystem_add_listener", 00:14:00.160 "params": { 00:14:00.160 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.160 "listen_address": { 00:14:00.160 "trtype": "TCP", 00:14:00.160 "adrfam": "IPv4", 00:14:00.160 "traddr": "10.0.0.2", 00:14:00.160 "trsvcid": "4420" 00:14:00.160 }, 00:14:00.160 "secure_channel": true 00:14:00.160 } 00:14:00.160 } 00:14:00.160 ] 00:14:00.160 } 00:14:00.160 ] 00:14:00.160 }' 00:14:00.160 15:18:09 -- common/autotest_common.sh@10 -- # set +x 00:14:00.160 15:18:09 -- nvmf/common.sh@470 -- # nvmfpid=71179 00:14:00.160 15:18:09 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:00.160 15:18:09 -- nvmf/common.sh@471 -- # waitforlisten 71179 00:14:00.160 15:18:09 -- common/autotest_common.sh@817 -- # '[' -z 71179 ']' 00:14:00.160 15:18:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.160 15:18:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:00.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.160 15:18:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.160 15:18:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:00.160 15:18:09 -- common/autotest_common.sh@10 -- # set +x 00:14:00.160 [2024-04-24 15:18:09.389950] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:14:00.160 [2024-04-24 15:18:09.390073] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.419 [2024-04-24 15:18:09.532566] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.679 [2024-04-24 15:18:09.666898] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.679 [2024-04-24 15:18:09.666988] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.679 [2024-04-24 15:18:09.667006] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.679 [2024-04-24 15:18:09.667016] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.679 [2024-04-24 15:18:09.667026] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
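The target above (pid 71179) is not rebuilt call by call: nvmfappstart feeds it the JSON captured earlier with save_config over a file descriptor (-c /dev/fd/62 in the command line above). The same pattern, sketched with a temporary file instead of a descriptor:

    # capture the live configuration of the running target...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > tgt.json
    # ...and replay it wholesale on the next start, flags as in this job
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c tgt.json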
00:14:00.679 [2024-04-24 15:18:09.667131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.680 [2024-04-24 15:18:09.904423] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.938 [2024-04-24 15:18:09.936376] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:00.938 [2024-04-24 15:18:09.936706] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.204 15:18:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:01.204 15:18:10 -- common/autotest_common.sh@850 -- # return 0 00:14:01.204 15:18:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:01.204 15:18:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:01.204 15:18:10 -- common/autotest_common.sh@10 -- # set +x 00:14:01.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:01.204 15:18:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.204 15:18:10 -- target/tls.sh@272 -- # bdevperf_pid=71211 00:14:01.204 15:18:10 -- target/tls.sh@273 -- # waitforlisten 71211 /var/tmp/bdevperf.sock 00:14:01.204 15:18:10 -- common/autotest_common.sh@817 -- # '[' -z 71211 ']' 00:14:01.204 15:18:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:01.204 15:18:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:01.204 15:18:10 -- target/tls.sh@270 -- # echo '{ 00:14:01.204 "subsystems": [ 00:14:01.204 { 00:14:01.204 "subsystem": "keyring", 00:14:01.204 "config": [ 00:14:01.204 { 00:14:01.204 "method": "keyring_file_add_key", 00:14:01.204 "params": { 00:14:01.204 "name": "key0", 00:14:01.204 "path": "/tmp/tmp.Bs9Dou4SoA" 00:14:01.204 } 00:14:01.204 } 00:14:01.204 ] 00:14:01.204 }, 00:14:01.204 { 00:14:01.204 "subsystem": "iobuf", 00:14:01.204 "config": [ 00:14:01.204 { 00:14:01.204 "method": "iobuf_set_options", 00:14:01.204 "params": { 00:14:01.204 "small_pool_count": 8192, 00:14:01.204 "large_pool_count": 1024, 00:14:01.204 "small_bufsize": 8192, 00:14:01.204 "large_bufsize": 135168 00:14:01.204 } 00:14:01.204 } 00:14:01.204 ] 00:14:01.204 }, 00:14:01.204 { 00:14:01.204 "subsystem": "sock", 00:14:01.204 "config": [ 00:14:01.204 { 00:14:01.204 "method": "sock_impl_set_options", 00:14:01.204 "params": { 00:14:01.204 "impl_name": "uring", 00:14:01.204 "recv_buf_size": 2097152, 00:14:01.204 "send_buf_size": 2097152, 00:14:01.204 "enable_recv_pipe": true, 00:14:01.204 "enable_quickack": false, 00:14:01.204 "enable_placement_id": 0, 00:14:01.204 "enable_zerocopy_send_server": false, 00:14:01.204 "enable_zerocopy_send_client": false, 00:14:01.204 "zerocopy_threshold": 0, 00:14:01.204 "tls_version": 0, 00:14:01.204 "enable_ktls": false 00:14:01.204 } 00:14:01.204 }, 00:14:01.204 { 00:14:01.204 "method": "sock_impl_set_options", 00:14:01.204 "params": { 00:14:01.204 "impl_name": "posix", 00:14:01.204 "recv_buf_size": 2097152, 00:14:01.204 "send_buf_size": 2097152, 00:14:01.204 "enable_recv_pipe": true, 00:14:01.204 "enable_quickack": false, 00:14:01.204 "enable_placement_id": 0, 00:14:01.204 "enable_zerocopy_send_server": true, 00:14:01.204 "enable_zerocopy_send_client": false, 00:14:01.204 "zerocopy_threshold": 0, 00:14:01.204 "tls_version": 0, 00:14:01.204 "enable_ktls": false 00:14:01.204 } 00:14:01.204 }, 00:14:01.204 { 00:14:01.204 "method": "sock_impl_set_options", 00:14:01.204 "params": { 00:14:01.204 "impl_name": "ssl", 00:14:01.204 
"recv_buf_size": 4096, 00:14:01.204 "send_buf_size": 4096, 00:14:01.204 "enable_recv_pipe": true, 00:14:01.204 "enable_quickack": false, 00:14:01.204 "enable_placement_id": 0, 00:14:01.204 "enable_zerocopy_send_server": true, 00:14:01.204 "enable_zerocopy_send_client": false, 00:14:01.204 "zerocopy_threshold": 0, 00:14:01.204 "tls_version": 0, 00:14:01.204 "enable_ktls": false 00:14:01.204 } 00:14:01.204 } 00:14:01.204 ] 00:14:01.204 }, 00:14:01.204 { 00:14:01.204 "subsystem": "vmd", 00:14:01.204 "config": [] 00:14:01.204 }, 00:14:01.204 { 00:14:01.204 "subsystem": "accel", 00:14:01.204 "config": [ 00:14:01.204 { 00:14:01.205 "method": "accel_set_options", 00:14:01.205 "params": { 00:14:01.205 "small_cache_size": 128, 00:14:01.205 "large_cache_size": 16, 00:14:01.205 "task_count": 2048, 00:14:01.205 "sequence_count": 2048, 00:14:01.205 "buf_count": 2048 00:14:01.205 } 00:14:01.205 } 00:14:01.205 ] 00:14:01.205 }, 00:14:01.205 { 00:14:01.205 "subsystem": "bdev", 00:14:01.205 "config": [ 00:14:01.205 { 00:14:01.205 "method": "bdev_set_options", 00:14:01.205 "params": { 00:14:01.205 "bdev_io_pool_size": 65535, 00:14:01.205 "bdev_io_cache_size": 256, 00:14:01.205 "bdev_auto_examine": true, 00:14:01.205 "iobuf_small_cache_size": 128, 00:14:01.205 "iobuf_large_cache_size": 16 00:14:01.205 } 00:14:01.205 }, 00:14:01.205 { 00:14:01.205 "method": "bdev_raid_set_options", 00:14:01.205 "params": { 00:14:01.205 "process_window_size_kb": 1024 00:14:01.205 } 00:14:01.205 }, 00:14:01.205 { 00:14:01.205 "method": "bdev_iscsi_set_options", 00:14:01.205 "params": { 00:14:01.205 "timeout_sec": 30 00:14:01.205 } 00:14:01.205 }, 00:14:01.205 { 00:14:01.205 "method": "bdev_nvme_set_options", 00:14:01.205 "params": { 00:14:01.205 "action_on_timeout": "none", 00:14:01.205 "timeout_us": 0, 00:14:01.205 "timeout_admin_us": 0, 00:14:01.205 "keep_alive_timeout_ms": 10000, 00:14:01.205 "arbitration_burst": 0, 00:14:01.205 "low_priority_weight": 0, 00:14:01.205 "medium_priority_weight": 0, 00:14:01.205 "high_priority_weight": 0, 00:14:01.205 "nvme_adminq_poll_period_us": 10000, 00:14:01.205 "nvme_ioq_poll_period_us": 0, 00:14:01.205 "io_queue_requests": 512, 00:14:01.205 "delay_cmd_submit": true, 00:14:01.205 "transport_retry_count": 4, 00:14:01.205 "bdev_retry_count": 3, 00:14:01.205 "transport_ack_timeout": 0, 00:14:01.205 "ctrlr_loss_timeout_sec": 0, 00:14:01.205 "reconnect_delay_sec": 0, 00:14:01.205 "fast_io_fail_timeout_sec": 0, 00:14:01.205 "disable_auto_failback": false, 00:14:01.205 "generate_uuids": false, 00:14:01.205 "transport_tos": 0, 00:14:01.205 "nvme_error_stat": false, 00:14:01.205 "rdma_srq_size": 0, 00:14:01.205 "io_path_stat": false, 00:14:01.205 "allow_accel_sequence": false, 00:14:01.205 "rdma_max_cq_size": 0, 00:14:01.205 "rdma_cm_event_timeout_ms": 0, 00:14:01.205 "dhchap_digests": [ 00:14:01.205 "sha256", 00:14:01.205 "sha384", 00:14:01.205 "sha512" 00:14:01.205 ], 00:14:01.205 "dhchap_dhgroups": [ 00:14:01.205 "null", 00:14:01.205 "ffdhe2048", 00:14:01.205 "ffdhe3072", 00:14:01.205 "ffdhe4096", 00:14:01.205 "ffdhe6144", 00:14:01.205 "ffdhe8192" 00:14:01.205 ] 00:14:01.205 } 00:14:01.205 }, 00:14:01.205 { 00:14:01.205 "method": "bdev_nvme_attach_controller", 00:14:01.205 "params": { 00:14:01.205 "name": "nvme0", 00:14:01.205 "trtype": "TCP", 00:14:01.205 "adrfam": "IPv4", 00:14:01.205 "traddr": "10.0.0.2", 00:14:01.205 "trsvcid": "4420", 00:14:01.205 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.205 "prchk_reftag": false, 00:14:01.205 "prchk_guard": false, 00:14:01.205 
"ctrlr_loss_timeout_sec": 0, 00:14:01.205 "reconnect_delay_sec": 0, 00:14:01.205 "fast_io_fail_timeout_sec": 0, 00:14:01.205 "psk": "key0", 00:14:01.205 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:01.205 "hdgst": false, 00:14:01.205 "ddgst": false 00:14:01.205 } 00:14:01.205 }, 00:14:01.205 { 00:14:01.205 "method": "bdev_nvme_set_hotplug", 00:14:01.205 "params": { 00:14:01.205 "period_us": 100000, 00:14:01.205 "enable": false 00:14:01.205 } 00:14:01.205 }, 00:14:01.205 { 00:14:01.205 "method": "bdev_enable_histogram", 00:14:01.205 "params": { 00:14:01.205 "name": "nvme0n1", 00:14:01.205 "enable": true 00:14:01.205 } 00:14:01.205 }, 00:14:01.205 { 00:14:01.205 "method": "bdev_wait_for_examine" 00:14:01.205 } 00:14:01.205 ] 00:14:01.205 }, 00:14:01.205 { 00:14:01.205 "subsystem": "nbd", 00:14:01.205 "config": [] 00:14:01.205 } 00:14:01.205 ] 00:14:01.205 }' 00:14:01.205 15:18:10 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:01.205 15:18:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:01.205 15:18:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:01.205 15:18:10 -- common/autotest_common.sh@10 -- # set +x 00:14:01.466 [2024-04-24 15:18:10.444929] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:14:01.466 [2024-04-24 15:18:10.445016] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71211 ] 00:14:01.466 [2024-04-24 15:18:10.582941] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.466 [2024-04-24 15:18:10.705062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.725 [2024-04-24 15:18:10.885579] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:02.292 15:18:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:02.292 15:18:11 -- common/autotest_common.sh@850 -- # return 0 00:14:02.292 15:18:11 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:02.292 15:18:11 -- target/tls.sh@275 -- # jq -r '.[].name' 00:14:02.550 15:18:11 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.550 15:18:11 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:02.550 Running I/O for 1 seconds... 
00:14:03.926 00:14:03.926 Latency(us) 00:14:03.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.926 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:03.926 Verification LBA range: start 0x0 length 0x2000 00:14:03.926 nvme0n1 : 1.02 4044.05 15.80 0.00 0.00 31276.65 2502.28 21567.30 00:14:03.926 =================================================================================================================== 00:14:03.926 Total : 4044.05 15.80 0.00 0.00 31276.65 2502.28 21567.30 00:14:03.926 0 00:14:03.926 15:18:12 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:14:03.926 15:18:12 -- target/tls.sh@279 -- # cleanup 00:14:03.926 15:18:12 -- target/tls.sh@15 -- # process_shm --id 0 00:14:03.926 15:18:12 -- common/autotest_common.sh@794 -- # type=--id 00:14:03.926 15:18:12 -- common/autotest_common.sh@795 -- # id=0 00:14:03.926 15:18:12 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:14:03.926 15:18:12 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:03.926 15:18:12 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:14:03.926 15:18:12 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:14:03.926 15:18:12 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:14:03.926 15:18:12 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:03.926 nvmf_trace.0 00:14:03.926 15:18:12 -- common/autotest_common.sh@809 -- # return 0 00:14:03.926 15:18:12 -- target/tls.sh@16 -- # killprocess 71211 00:14:03.926 15:18:12 -- common/autotest_common.sh@936 -- # '[' -z 71211 ']' 00:14:03.927 15:18:12 -- common/autotest_common.sh@940 -- # kill -0 71211 00:14:03.927 15:18:12 -- common/autotest_common.sh@941 -- # uname 00:14:03.927 15:18:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:03.927 15:18:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71211 00:14:03.927 killing process with pid 71211 00:14:03.927 Received shutdown signal, test time was about 1.000000 seconds 00:14:03.927 00:14:03.927 Latency(us) 00:14:03.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.927 =================================================================================================================== 00:14:03.927 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:03.927 15:18:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:03.927 15:18:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:03.927 15:18:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71211' 00:14:03.927 15:18:12 -- common/autotest_common.sh@955 -- # kill 71211 00:14:03.927 15:18:12 -- common/autotest_common.sh@960 -- # wait 71211 00:14:03.927 15:18:13 -- target/tls.sh@17 -- # nvmftestfini 00:14:03.927 15:18:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:03.927 15:18:13 -- nvmf/common.sh@117 -- # sync 00:14:04.186 15:18:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:04.186 15:18:13 -- nvmf/common.sh@120 -- # set +e 00:14:04.186 15:18:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:04.186 15:18:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:04.186 rmmod nvme_tcp 00:14:04.186 rmmod nvme_fabrics 00:14:04.186 rmmod nvme_keyring 00:14:04.186 15:18:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:04.186 15:18:13 -- nvmf/common.sh@124 -- # set -e 00:14:04.186 15:18:13 -- 
nvmf/common.sh@125 -- # return 0 00:14:04.186 15:18:13 -- nvmf/common.sh@478 -- # '[' -n 71179 ']' 00:14:04.186 15:18:13 -- nvmf/common.sh@479 -- # killprocess 71179 00:14:04.186 15:18:13 -- common/autotest_common.sh@936 -- # '[' -z 71179 ']' 00:14:04.186 15:18:13 -- common/autotest_common.sh@940 -- # kill -0 71179 00:14:04.186 15:18:13 -- common/autotest_common.sh@941 -- # uname 00:14:04.186 15:18:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:04.186 15:18:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71179 00:14:04.186 killing process with pid 71179 00:14:04.186 15:18:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:04.186 15:18:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:04.186 15:18:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71179' 00:14:04.186 15:18:13 -- common/autotest_common.sh@955 -- # kill 71179 00:14:04.186 15:18:13 -- common/autotest_common.sh@960 -- # wait 71179 00:14:04.445 15:18:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:04.445 15:18:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:04.445 15:18:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:04.445 15:18:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:04.445 15:18:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:04.445 15:18:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.445 15:18:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.445 15:18:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.445 15:18:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:04.445 15:18:13 -- target/tls.sh@18 -- # rm -f /tmp/tmp.ZPnMUHTLuz /tmp/tmp.C9pO5u7Q9t /tmp/tmp.Bs9Dou4SoA 00:14:04.445 00:14:04.445 real 1m27.277s 00:14:04.445 user 2m18.979s 00:14:04.445 sys 0m27.706s 00:14:04.445 15:18:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:04.445 ************************************ 00:14:04.445 15:18:13 -- common/autotest_common.sh@10 -- # set +x 00:14:04.445 END TEST nvmf_tls 00:14:04.445 ************************************ 00:14:04.445 15:18:13 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:04.445 15:18:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:04.445 15:18:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:04.445 15:18:13 -- common/autotest_common.sh@10 -- # set +x 00:14:04.704 ************************************ 00:14:04.704 START TEST nvmf_fips 00:14:04.704 ************************************ 00:14:04.704 15:18:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:04.704 * Looking for test storage... 
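The teardown sequences in this log (here for the TLS test, later for the FIPS test) go through the autotest killprocess helper: it checks that the pid is non-empty, confirms the process is still alive with kill -0, reads the command name with ps (reactor_1 here, i.e. the bdevperf app) to decide whether a sudo path is needed, then kills and waits for the pid. A rough shell reconstruction of that pattern, inferred from the trace; the authoritative helper lives in common/autotest_common.sh and may differ in detail:

killprocess() {
    local pid=$1 process_name=
    [ -n "$pid" ] || return 1                # nothing to kill
    kill -0 "$pid" || return 1               # process already exited
    if [ "$(uname)" = Linux ]; then
        # command name of the pid; SPDK apps show up as reactor_N
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" = sudo ]; then
        # assumption: sudo-wrapped apps take a privileged kill path
        sudo kill "$pid"
    else
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true          # reap if it was our child, ignore the status
}

The same sequence is applied first to the bdevperf pid (71211) and then to the nvmf target pid (71179), after which nvmftestfini removes the nvme-tcp, nvme-fabrics and nvme-keyring modules and the three temporary key files under /tmp are deleted.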
00:14:04.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:04.704 15:18:13 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:04.704 15:18:13 -- nvmf/common.sh@7 -- # uname -s 00:14:04.704 15:18:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.704 15:18:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.704 15:18:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.704 15:18:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.704 15:18:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.704 15:18:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.704 15:18:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.704 15:18:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.704 15:18:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.704 15:18:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.704 15:18:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:14:04.704 15:18:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:14:04.704 15:18:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.704 15:18:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.704 15:18:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:04.704 15:18:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.704 15:18:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:04.704 15:18:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.704 15:18:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.705 15:18:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.705 15:18:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.705 15:18:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.705 15:18:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.705 15:18:13 -- paths/export.sh@5 -- # export PATH 00:14:04.705 15:18:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.705 15:18:13 -- nvmf/common.sh@47 -- # : 0 00:14:04.705 15:18:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:04.705 15:18:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:04.705 15:18:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.705 15:18:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.705 15:18:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.705 15:18:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:04.705 15:18:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:04.705 15:18:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:04.705 15:18:13 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:04.705 15:18:13 -- fips/fips.sh@89 -- # check_openssl_version 00:14:04.705 15:18:13 -- fips/fips.sh@83 -- # local target=3.0.0 00:14:04.705 15:18:13 -- fips/fips.sh@85 -- # openssl version 00:14:04.705 15:18:13 -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:04.705 15:18:13 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:04.705 15:18:13 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:04.705 15:18:13 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:04.705 15:18:13 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:04.705 15:18:13 -- scripts/common.sh@333 -- # IFS=.-: 00:14:04.705 15:18:13 -- scripts/common.sh@333 -- # read -ra ver1 00:14:04.705 15:18:13 -- scripts/common.sh@334 -- # IFS=.-: 00:14:04.705 15:18:13 -- scripts/common.sh@334 -- # read -ra ver2 00:14:04.705 15:18:13 -- scripts/common.sh@335 -- # local 'op=>=' 00:14:04.705 15:18:13 -- scripts/common.sh@337 -- # ver1_l=3 00:14:04.705 15:18:13 -- scripts/common.sh@338 -- # ver2_l=3 00:14:04.705 15:18:13 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:04.705 15:18:13 -- scripts/common.sh@341 -- # case "$op" in 00:14:04.705 15:18:13 -- scripts/common.sh@345 -- # : 1 00:14:04.705 15:18:13 -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:04.705 15:18:13 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:04.705 15:18:13 -- scripts/common.sh@362 -- # decimal 3 00:14:04.705 15:18:13 -- scripts/common.sh@350 -- # local d=3 00:14:04.705 15:18:13 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:04.705 15:18:13 -- scripts/common.sh@352 -- # echo 3 00:14:04.705 15:18:13 -- scripts/common.sh@362 -- # ver1[v]=3 00:14:04.705 15:18:13 -- scripts/common.sh@363 -- # decimal 3 00:14:04.705 15:18:13 -- scripts/common.sh@350 -- # local d=3 00:14:04.705 15:18:13 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:04.705 15:18:13 -- scripts/common.sh@352 -- # echo 3 00:14:04.705 15:18:13 -- scripts/common.sh@363 -- # ver2[v]=3 00:14:04.705 15:18:13 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:04.705 15:18:13 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:04.705 15:18:13 -- scripts/common.sh@361 -- # (( v++ )) 00:14:04.705 15:18:13 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:04.705 15:18:13 -- scripts/common.sh@362 -- # decimal 0 00:14:04.705 15:18:13 -- scripts/common.sh@350 -- # local d=0 00:14:04.705 15:18:13 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:04.705 15:18:13 -- scripts/common.sh@352 -- # echo 0 00:14:04.705 15:18:13 -- scripts/common.sh@362 -- # ver1[v]=0 00:14:04.705 15:18:13 -- scripts/common.sh@363 -- # decimal 0 00:14:04.705 15:18:13 -- scripts/common.sh@350 -- # local d=0 00:14:04.705 15:18:13 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:04.705 15:18:13 -- scripts/common.sh@352 -- # echo 0 00:14:04.705 15:18:13 -- scripts/common.sh@363 -- # ver2[v]=0 00:14:04.705 15:18:13 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:04.705 15:18:13 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:04.705 15:18:13 -- scripts/common.sh@361 -- # (( v++ )) 00:14:04.705 15:18:13 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:04.705 15:18:13 -- scripts/common.sh@362 -- # decimal 9 00:14:04.705 15:18:13 -- scripts/common.sh@350 -- # local d=9 00:14:04.705 15:18:13 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:04.705 15:18:13 -- scripts/common.sh@352 -- # echo 9 00:14:04.705 15:18:13 -- scripts/common.sh@362 -- # ver1[v]=9 00:14:04.705 15:18:13 -- scripts/common.sh@363 -- # decimal 0 00:14:04.705 15:18:13 -- scripts/common.sh@350 -- # local d=0 00:14:04.705 15:18:13 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:04.705 15:18:13 -- scripts/common.sh@352 -- # echo 0 00:14:04.705 15:18:13 -- scripts/common.sh@363 -- # ver2[v]=0 00:14:04.705 15:18:13 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:04.705 15:18:13 -- scripts/common.sh@364 -- # return 0 00:14:04.705 15:18:13 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:04.705 15:18:13 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:14:04.705 15:18:13 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:04.705 15:18:13 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:04.705 15:18:13 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:04.705 15:18:13 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:04.705 15:18:13 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:04.705 15:18:13 -- fips/fips.sh@113 -- # build_openssl_config 00:14:04.705 15:18:13 -- fips/fips.sh@37 -- # cat 00:14:04.705 15:18:13 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:14:04.705 15:18:13 -- fips/fips.sh@58 -- # cat - 00:14:04.705 15:18:13 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:04.705 15:18:13 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:04.705 15:18:13 -- fips/fips.sh@116 -- # mapfile -t providers 00:14:04.705 15:18:13 -- fips/fips.sh@116 -- # openssl list -providers 00:14:04.705 15:18:13 -- fips/fips.sh@116 -- # grep name 00:14:04.705 15:18:13 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:04.705 15:18:13 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:04.705 15:18:13 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:04.963 15:18:13 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:04.963 15:18:13 -- fips/fips.sh@127 -- # : 00:14:04.963 15:18:13 -- common/autotest_common.sh@638 -- # local es=0 00:14:04.963 15:18:13 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:04.963 15:18:13 -- common/autotest_common.sh@626 -- # local arg=openssl 00:14:04.963 15:18:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:04.963 15:18:13 -- common/autotest_common.sh@630 -- # type -t openssl 00:14:04.963 15:18:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:04.963 15:18:13 -- common/autotest_common.sh@632 -- # type -P openssl 00:14:04.963 15:18:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:04.963 15:18:13 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:14:04.963 15:18:13 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:14:04.963 15:18:13 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:14:04.963 Error setting digest 00:14:04.963 00A204FF437F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:04.963 00A204FF437F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:04.963 15:18:13 -- common/autotest_common.sh@641 -- # es=1 00:14:04.963 15:18:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:04.963 15:18:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:04.963 15:18:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:04.963 15:18:13 -- fips/fips.sh@130 -- # nvmftestinit 00:14:04.963 15:18:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:04.963 15:18:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.963 15:18:13 -- nvmf/common.sh@437 -- # prepare_net_devs 
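The preceding block is the fips.sh preflight: it parses `openssl version`, requires at least 3.0.0 via the cmp_versions helper, checks that fips.so exists under `openssl info -modulesdir`, probes `openssl fipsinstall -help` for the Red Hat "not enabled" wording, generates and exports OPENSSL_CONF=spdk_fips.conf, requires both a base and a fips provider in `openssl list -providers`, and finally proves the restrictions are live by watching an MD5 digest fail. A condensed shell sketch of the same gate (the version test is simplified here; the authoritative logic is in test/nvmf/fips/fips.sh and scripts/common.sh):

# Assumes an OpenSSL 3.x build and a spdk_fips.conf that activates the base + fips providers
[[ $(openssl version | awk '{print $2}') == 3.* ]] || { echo "need OpenSSL >= 3.0.0"; exit 1; }
[[ -f $(openssl info -modulesdir)/fips.so ]]       || { echo "fips.so module missing"; exit 1; }
export OPENSSL_CONF=spdk_fips.conf
openssl list -providers | grep -qi 'base' || exit 1   # both providers must be loaded
openssl list -providers | grep -qi 'fips' || exit 1
if echo test | openssl md5 2>/dev/null; then
    echo "MD5 still works - FIPS restrictions are not active" >&2
    exit 1
fi
echo "MD5 rejected (Error setting digest) - FIPS mode confirmed"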
00:14:04.963 15:18:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:04.963 15:18:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:04.963 15:18:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.963 15:18:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.963 15:18:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.963 15:18:14 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:04.963 15:18:14 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:04.963 15:18:14 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:04.963 15:18:14 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:04.963 15:18:14 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:04.963 15:18:14 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:04.963 15:18:14 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.963 15:18:14 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.963 15:18:14 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:04.963 15:18:14 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:04.963 15:18:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:04.963 15:18:14 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:04.963 15:18:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:04.964 15:18:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.964 15:18:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:04.964 15:18:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:04.964 15:18:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:04.964 15:18:14 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:04.964 15:18:14 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:04.964 15:18:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:04.964 Cannot find device "nvmf_tgt_br" 00:14:04.964 15:18:14 -- nvmf/common.sh@155 -- # true 00:14:04.964 15:18:14 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:04.964 Cannot find device "nvmf_tgt_br2" 00:14:04.964 15:18:14 -- nvmf/common.sh@156 -- # true 00:14:04.964 15:18:14 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:04.964 15:18:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:04.964 Cannot find device "nvmf_tgt_br" 00:14:04.964 15:18:14 -- nvmf/common.sh@158 -- # true 00:14:04.964 15:18:14 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:04.964 Cannot find device "nvmf_tgt_br2" 00:14:04.964 15:18:14 -- nvmf/common.sh@159 -- # true 00:14:04.964 15:18:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:04.964 15:18:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:04.964 15:18:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:04.964 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:04.964 15:18:14 -- nvmf/common.sh@162 -- # true 00:14:04.964 15:18:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:04.964 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:04.964 15:18:14 -- nvmf/common.sh@163 -- # true 00:14:04.964 15:18:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:04.964 15:18:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:04.964 15:18:14 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:04.964 15:18:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:04.964 15:18:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:04.964 15:18:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:04.964 15:18:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:05.222 15:18:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:05.222 15:18:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:05.222 15:18:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:05.222 15:18:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:05.222 15:18:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:05.222 15:18:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:05.222 15:18:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:05.222 15:18:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:05.222 15:18:14 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:05.222 15:18:14 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:05.222 15:18:14 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:05.222 15:18:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:05.222 15:18:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:05.222 15:18:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:05.222 15:18:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:05.222 15:18:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:05.222 15:18:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:05.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:05.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:14:05.222 00:14:05.222 --- 10.0.0.2 ping statistics --- 00:14:05.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.222 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:05.222 15:18:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:05.222 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:05.222 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:14:05.222 00:14:05.222 --- 10.0.0.3 ping statistics --- 00:14:05.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.222 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:05.222 15:18:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:05.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:05.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:14:05.222 00:14:05.222 --- 10.0.0.1 ping statistics --- 00:14:05.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.222 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:05.222 15:18:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.222 15:18:14 -- nvmf/common.sh@422 -- # return 0 00:14:05.222 15:18:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:05.222 15:18:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.222 15:18:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:05.222 15:18:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:05.222 15:18:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.222 15:18:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:05.222 15:18:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:05.222 15:18:14 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:05.222 15:18:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:05.222 15:18:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:05.222 15:18:14 -- common/autotest_common.sh@10 -- # set +x 00:14:05.222 15:18:14 -- nvmf/common.sh@470 -- # nvmfpid=71484 00:14:05.222 15:18:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:05.222 15:18:14 -- nvmf/common.sh@471 -- # waitforlisten 71484 00:14:05.222 15:18:14 -- common/autotest_common.sh@817 -- # '[' -z 71484 ']' 00:14:05.222 15:18:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.222 15:18:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:05.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.222 15:18:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.222 15:18:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:05.222 15:18:14 -- common/autotest_common.sh@10 -- # set +x 00:14:05.222 [2024-04-24 15:18:14.447583] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:14:05.223 [2024-04-24 15:18:14.447674] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.481 [2024-04-24 15:18:14.590232] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.481 [2024-04-24 15:18:14.720850] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.481 [2024-04-24 15:18:14.720952] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.481 [2024-04-24 15:18:14.720977] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.481 [2024-04-24 15:18:14.720987] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.481 [2024-04-24 15:18:14.720997] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
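The three pings close out nvmf_veth_init: nvmf_init_if stays in the root namespace with 10.0.0.1/24, nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, the bridge-side peers of the veth pairs are enslaved to nvmf_br, and iptables accepts TCP 4420 on the initiator interface. A condensed sketch of the same topology, with the second target interface and error handling omitted (the full helper is nvmf_veth_init in test/nvmf/common.sh):

# Target namespace and one initiator/target veth pair each, joined by a bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # root namespace -> target namespace must answer

After that, modprobe nvme-tcp loads the kernel initiator and the target is launched inside the namespace, which is exactly what the nvmfappstart lines above show (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2).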
00:14:05.481 [2024-04-24 15:18:14.721040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.422 15:18:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:06.422 15:18:15 -- common/autotest_common.sh@850 -- # return 0 00:14:06.422 15:18:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:06.422 15:18:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:06.422 15:18:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.422 15:18:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.422 15:18:15 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:06.422 15:18:15 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:06.422 15:18:15 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:06.422 15:18:15 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:06.422 15:18:15 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:06.422 15:18:15 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:06.422 15:18:15 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:06.422 15:18:15 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:06.422 [2024-04-24 15:18:15.663879] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.679 [2024-04-24 15:18:15.679801] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:06.679 [2024-04-24 15:18:15.680019] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.680 [2024-04-24 15:18:15.711585] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:06.680 malloc0 00:14:06.680 15:18:15 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:06.680 15:18:15 -- fips/fips.sh@147 -- # bdevperf_pid=71518 00:14:06.680 15:18:15 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:06.680 15:18:15 -- fips/fips.sh@148 -- # waitforlisten 71518 /var/tmp/bdevperf.sock 00:14:06.680 15:18:15 -- common/autotest_common.sh@817 -- # '[' -z 71518 ']' 00:14:06.680 15:18:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.680 15:18:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:06.680 15:18:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:06.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:06.680 15:18:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:06.680 15:18:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.680 [2024-04-24 15:18:15.818443] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:14:06.680 [2024-04-24 15:18:15.818555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71518 ] 00:14:06.938 [2024-04-24 15:18:15.960321] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.938 [2024-04-24 15:18:16.088358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.873 15:18:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:07.873 15:18:16 -- common/autotest_common.sh@850 -- # return 0 00:14:07.873 15:18:16 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:07.873 [2024-04-24 15:18:16.971782] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:07.873 [2024-04-24 15:18:16.971914] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:07.873 TLSTESTn1 00:14:07.873 15:18:17 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:08.132 Running I/O for 10 seconds... 00:14:18.238 00:14:18.238 Latency(us) 00:14:18.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.238 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:18.238 Verification LBA range: start 0x0 length 0x2000 00:14:18.238 TLSTESTn1 : 10.02 3997.07 15.61 0.00 0.00 31959.49 7626.01 22997.18 00:14:18.238 =================================================================================================================== 00:14:18.238 Total : 3997.07 15.61 0.00 0.00 31959.49 7626.01 22997.18 00:14:18.238 0 00:14:18.238 15:18:27 -- fips/fips.sh@1 -- # cleanup 00:14:18.238 15:18:27 -- fips/fips.sh@15 -- # process_shm --id 0 00:14:18.238 15:18:27 -- common/autotest_common.sh@794 -- # type=--id 00:14:18.238 15:18:27 -- common/autotest_common.sh@795 -- # id=0 00:14:18.238 15:18:27 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:14:18.238 15:18:27 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:18.238 15:18:27 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:14:18.238 15:18:27 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:14:18.238 15:18:27 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:14:18.238 15:18:27 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:18.238 nvmf_trace.0 00:14:18.238 15:18:27 -- common/autotest_common.sh@809 -- # return 0 00:14:18.238 15:18:27 -- fips/fips.sh@16 -- # killprocess 71518 00:14:18.238 15:18:27 -- common/autotest_common.sh@936 -- # '[' -z 71518 ']' 00:14:18.238 15:18:27 -- common/autotest_common.sh@940 -- # kill -0 71518 00:14:18.238 15:18:27 -- common/autotest_common.sh@941 -- # uname 00:14:18.238 15:18:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:18.238 15:18:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71518 00:14:18.238 15:18:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:18.238 
15:18:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:18.238 killing process with pid 71518 00:14:18.238 15:18:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71518' 00:14:18.238 Received shutdown signal, test time was about 10.000000 seconds 00:14:18.238 00:14:18.238 Latency(us) 00:14:18.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.238 =================================================================================================================== 00:14:18.238 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:18.238 15:18:27 -- common/autotest_common.sh@955 -- # kill 71518 00:14:18.238 [2024-04-24 15:18:27.339518] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:18.238 15:18:27 -- common/autotest_common.sh@960 -- # wait 71518 00:14:18.497 15:18:27 -- fips/fips.sh@17 -- # nvmftestfini 00:14:18.497 15:18:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:18.497 15:18:27 -- nvmf/common.sh@117 -- # sync 00:14:18.497 15:18:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:18.497 15:18:27 -- nvmf/common.sh@120 -- # set +e 00:14:18.497 15:18:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.497 15:18:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:18.497 rmmod nvme_tcp 00:14:18.497 rmmod nvme_fabrics 00:14:18.497 rmmod nvme_keyring 00:14:18.497 15:18:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:18.497 15:18:27 -- nvmf/common.sh@124 -- # set -e 00:14:18.497 15:18:27 -- nvmf/common.sh@125 -- # return 0 00:14:18.497 15:18:27 -- nvmf/common.sh@478 -- # '[' -n 71484 ']' 00:14:18.497 15:18:27 -- nvmf/common.sh@479 -- # killprocess 71484 00:14:18.497 15:18:27 -- common/autotest_common.sh@936 -- # '[' -z 71484 ']' 00:14:18.497 15:18:27 -- common/autotest_common.sh@940 -- # kill -0 71484 00:14:18.497 15:18:27 -- common/autotest_common.sh@941 -- # uname 00:14:18.497 15:18:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:18.497 15:18:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71484 00:14:18.497 15:18:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:18.497 15:18:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:18.497 15:18:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71484' 00:14:18.497 killing process with pid 71484 00:14:18.497 15:18:27 -- common/autotest_common.sh@955 -- # kill 71484 00:14:18.497 [2024-04-24 15:18:27.736036] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:18.497 15:18:27 -- common/autotest_common.sh@960 -- # wait 71484 00:14:19.063 15:18:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:19.063 15:18:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:19.063 15:18:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:19.064 15:18:28 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.064 15:18:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:19.064 15:18:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.064 15:18:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.064 15:18:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.064 15:18:28 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:19.064 15:18:28 -- fips/fips.sh@18 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:19.064 00:14:19.064 real 0m14.325s 00:14:19.064 user 0m19.434s 00:14:19.064 sys 0m5.807s 00:14:19.064 15:18:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:19.064 15:18:28 -- common/autotest_common.sh@10 -- # set +x 00:14:19.064 ************************************ 00:14:19.064 END TEST nvmf_fips 00:14:19.064 ************************************ 00:14:19.064 15:18:28 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:14:19.064 15:18:28 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:14:19.064 15:18:28 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:14:19.064 15:18:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:19.064 15:18:28 -- common/autotest_common.sh@10 -- # set +x 00:14:19.064 15:18:28 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:14:19.064 15:18:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:19.064 15:18:28 -- common/autotest_common.sh@10 -- # set +x 00:14:19.064 15:18:28 -- nvmf/nvmf.sh@88 -- # [[ 1 -eq 0 ]] 00:14:19.064 15:18:28 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:19.064 15:18:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:19.064 15:18:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:19.064 15:18:28 -- common/autotest_common.sh@10 -- # set +x 00:14:19.064 ************************************ 00:14:19.064 START TEST nvmf_identify 00:14:19.064 ************************************ 00:14:19.064 15:18:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:19.064 * Looking for test storage... 00:14:19.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:19.064 15:18:28 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:19.064 15:18:28 -- nvmf/common.sh@7 -- # uname -s 00:14:19.064 15:18:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.064 15:18:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.064 15:18:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.064 15:18:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.064 15:18:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.064 15:18:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.064 15:18:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.064 15:18:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.064 15:18:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.064 15:18:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.322 15:18:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:14:19.322 15:18:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:14:19.322 15:18:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.322 15:18:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.322 15:18:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:19.322 15:18:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.322 15:18:28 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:19.322 15:18:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.322 15:18:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.322 15:18:28 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.322 15:18:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.322 15:18:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.323 15:18:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.323 15:18:28 -- paths/export.sh@5 -- # export PATH 00:14:19.323 15:18:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.323 15:18:28 -- nvmf/common.sh@47 -- # : 0 00:14:19.323 15:18:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.323 15:18:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.323 15:18:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.323 15:18:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.323 15:18:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.323 15:18:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.323 15:18:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.323 15:18:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.323 15:18:28 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:19.323 15:18:28 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:19.323 15:18:28 -- host/identify.sh@14 -- # nvmftestinit 00:14:19.323 15:18:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:19.323 15:18:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.323 15:18:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:19.323 15:18:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:19.323 15:18:28 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:14:19.323 15:18:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.323 15:18:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.323 15:18:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.323 15:18:28 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:19.323 15:18:28 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:19.323 15:18:28 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:19.323 15:18:28 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:19.323 15:18:28 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:19.323 15:18:28 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:19.323 15:18:28 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.323 15:18:28 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.323 15:18:28 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:19.323 15:18:28 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:19.323 15:18:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:19.323 15:18:28 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:19.323 15:18:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:19.323 15:18:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.323 15:18:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:19.323 15:18:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:19.323 15:18:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:19.323 15:18:28 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:19.323 15:18:28 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:19.323 15:18:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:19.323 Cannot find device "nvmf_tgt_br" 00:14:19.323 15:18:28 -- nvmf/common.sh@155 -- # true 00:14:19.323 15:18:28 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:19.323 Cannot find device "nvmf_tgt_br2" 00:14:19.323 15:18:28 -- nvmf/common.sh@156 -- # true 00:14:19.323 15:18:28 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:19.323 15:18:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:19.323 Cannot find device "nvmf_tgt_br" 00:14:19.323 15:18:28 -- nvmf/common.sh@158 -- # true 00:14:19.323 15:18:28 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:19.323 Cannot find device "nvmf_tgt_br2" 00:14:19.323 15:18:28 -- nvmf/common.sh@159 -- # true 00:14:19.323 15:18:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:19.323 15:18:28 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:19.323 15:18:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:19.323 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:19.323 15:18:28 -- nvmf/common.sh@162 -- # true 00:14:19.323 15:18:28 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:19.323 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:19.323 15:18:28 -- nvmf/common.sh@163 -- # true 00:14:19.323 15:18:28 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:19.323 15:18:28 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:19.323 15:18:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:19.323 15:18:28 -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:19.323 15:18:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:19.323 15:18:28 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:19.323 15:18:28 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:19.323 15:18:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:19.323 15:18:28 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:19.582 15:18:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:19.582 15:18:28 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:19.582 15:18:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:19.582 15:18:28 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:19.582 15:18:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:19.582 15:18:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:19.582 15:18:28 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:19.582 15:18:28 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:19.582 15:18:28 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:19.582 15:18:28 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:19.582 15:18:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:19.582 15:18:28 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:19.582 15:18:28 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:19.582 15:18:28 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:19.582 15:18:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:19.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:14:19.582 00:14:19.582 --- 10.0.0.2 ping statistics --- 00:14:19.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.582 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:14:19.582 15:18:28 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:19.582 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:19.582 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:14:19.582 00:14:19.582 --- 10.0.0.3 ping statistics --- 00:14:19.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.583 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:19.583 15:18:28 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:19.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:19.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:14:19.583 00:14:19.583 --- 10.0.0.1 ping statistics --- 00:14:19.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.583 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:19.583 15:18:28 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.583 15:18:28 -- nvmf/common.sh@422 -- # return 0 00:14:19.583 15:18:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:19.583 15:18:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.583 15:18:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:19.583 15:18:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:19.583 15:18:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.583 15:18:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:19.583 15:18:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:19.583 15:18:28 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:19.583 15:18:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:19.583 15:18:28 -- common/autotest_common.sh@10 -- # set +x 00:14:19.583 15:18:28 -- host/identify.sh@19 -- # nvmfpid=71867 00:14:19.583 15:18:28 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:19.583 15:18:28 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:19.583 15:18:28 -- host/identify.sh@23 -- # waitforlisten 71867 00:14:19.583 15:18:28 -- common/autotest_common.sh@817 -- # '[' -z 71867 ']' 00:14:19.583 15:18:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.583 15:18:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:19.583 15:18:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.583 15:18:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:19.583 15:18:28 -- common/autotest_common.sh@10 -- # set +x 00:14:19.583 [2024-04-24 15:18:28.737354] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:14:19.583 [2024-04-24 15:18:28.737452] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.841 [2024-04-24 15:18:28.873967] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:19.841 [2024-04-24 15:18:29.018265] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.841 [2024-04-24 15:18:29.018365] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.841 [2024-04-24 15:18:29.018397] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.841 [2024-04-24 15:18:29.018412] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.841 [2024-04-24 15:18:29.018444] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
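The nvmf target for the identify test is now up inside nvmf_tgt_ns_spdk (pid 71867, four cores via -m 0xF), and the RPCs below create the TCP transport, a 64 MB Malloc0 bdev, the nqn.2016-06.io.spdk:cnode1 subsystem with that namespace, and 10.0.0.2:4420 listeners for both cnode1 and the discovery subsystem. For orientation only, this is how a host in the root namespace could reach that target with nvme-cli once those RPCs have completed; these commands are illustrative and are not executed in this run:

nvme discover -t tcp -a 10.0.0.2 -s 4420            # discovery log should list cnode1 plus the discovery subsystem
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
             -n nqn.2016-06.io.spdk:cnode1           # attaches Malloc0 as a local NVMe namespace
nvme list                                            # the new /dev/nvmeXnY should show up here
nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # detach again when done

The test itself stays in user space instead and points spdk_nvme_identify at the discovery subsystem with a transport ID string (trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery), as shown further below.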
00:14:19.841 [2024-04-24 15:18:29.018626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.841 [2024-04-24 15:18:29.019511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.841 [2024-04-24 15:18:29.019411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.841 [2024-04-24 15:18:29.019497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.777 15:18:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:20.777 15:18:29 -- common/autotest_common.sh@850 -- # return 0 00:14:20.777 15:18:29 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:20.777 15:18:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.777 15:18:29 -- common/autotest_common.sh@10 -- # set +x 00:14:20.777 [2024-04-24 15:18:29.732808] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.777 15:18:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.777 15:18:29 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:20.777 15:18:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:20.777 15:18:29 -- common/autotest_common.sh@10 -- # set +x 00:14:20.777 15:18:29 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:20.777 15:18:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.777 15:18:29 -- common/autotest_common.sh@10 -- # set +x 00:14:20.777 Malloc0 00:14:20.777 15:18:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.777 15:18:29 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:20.777 15:18:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.777 15:18:29 -- common/autotest_common.sh@10 -- # set +x 00:14:20.777 15:18:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.777 15:18:29 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:20.777 15:18:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.777 15:18:29 -- common/autotest_common.sh@10 -- # set +x 00:14:20.777 15:18:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.777 15:18:29 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.777 15:18:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.777 15:18:29 -- common/autotest_common.sh@10 -- # set +x 00:14:20.777 [2024-04-24 15:18:29.827097] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.777 15:18:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.777 15:18:29 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:20.777 15:18:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.778 15:18:29 -- common/autotest_common.sh@10 -- # set +x 00:14:20.778 15:18:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.778 15:18:29 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:20.778 15:18:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.778 15:18:29 -- common/autotest_common.sh@10 -- # set +x 00:14:20.778 [2024-04-24 15:18:29.842837] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:14:20.778 [ 
00:14:20.778 { 00:14:20.778 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:20.778 "subtype": "Discovery", 00:14:20.778 "listen_addresses": [ 00:14:20.778 { 00:14:20.778 "transport": "TCP", 00:14:20.778 "trtype": "TCP", 00:14:20.778 "adrfam": "IPv4", 00:14:20.778 "traddr": "10.0.0.2", 00:14:20.778 "trsvcid": "4420" 00:14:20.778 } 00:14:20.778 ], 00:14:20.778 "allow_any_host": true, 00:14:20.778 "hosts": [] 00:14:20.778 }, 00:14:20.778 { 00:14:20.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.778 "subtype": "NVMe", 00:14:20.778 "listen_addresses": [ 00:14:20.778 { 00:14:20.778 "transport": "TCP", 00:14:20.778 "trtype": "TCP", 00:14:20.778 "adrfam": "IPv4", 00:14:20.778 "traddr": "10.0.0.2", 00:14:20.778 "trsvcid": "4420" 00:14:20.778 } 00:14:20.778 ], 00:14:20.778 "allow_any_host": true, 00:14:20.778 "hosts": [], 00:14:20.778 "serial_number": "SPDK00000000000001", 00:14:20.778 "model_number": "SPDK bdev Controller", 00:14:20.778 "max_namespaces": 32, 00:14:20.778 "min_cntlid": 1, 00:14:20.778 "max_cntlid": 65519, 00:14:20.778 "namespaces": [ 00:14:20.778 { 00:14:20.778 "nsid": 1, 00:14:20.778 "bdev_name": "Malloc0", 00:14:20.778 "name": "Malloc0", 00:14:20.778 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:20.778 "eui64": "ABCDEF0123456789", 00:14:20.778 "uuid": "48580ffc-c6a1-4a70-a18f-0795d121f3d9" 00:14:20.778 } 00:14:20.778 ] 00:14:20.778 } 00:14:20.778 ] 00:14:20.778 15:18:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.778 15:18:29 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:20.778 [2024-04-24 15:18:29.879566] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:14:20.778 [2024-04-24 15:18:29.879627] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71902 ] 00:14:21.043 [2024-04-24 15:18:30.024648] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:21.043 [2024-04-24 15:18:30.024728] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:21.043 [2024-04-24 15:18:30.024736] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:21.043 [2024-04-24 15:18:30.024751] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:21.043 [2024-04-24 15:18:30.024767] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:21.043 [2024-04-24 15:18:30.024923] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:21.043 [2024-04-24 15:18:30.024982] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1fa1300 0 00:14:21.043 [2024-04-24 15:18:30.037500] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:21.043 [2024-04-24 15:18:30.037528] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:21.043 [2024-04-24 15:18:30.037534] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:21.043 [2024-04-24 15:18:30.037538] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:21.043 [2024-04-24 15:18:30.037589] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.043 [2024-04-24 15:18:30.037597] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.043 [2024-04-24 15:18:30.037601] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1300) 00:14:21.044 [2024-04-24 15:18:30.037616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:21.044 [2024-04-24 15:18:30.037663] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe99c0, cid 0, qid 0 00:14:21.044 [2024-04-24 15:18:30.045451] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.044 [2024-04-24 15:18:30.045471] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.044 [2024-04-24 15:18:30.045477] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.045482] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe99c0) on tqpair=0x1fa1300 00:14:21.044 [2024-04-24 15:18:30.045495] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:21.044 [2024-04-24 15:18:30.045504] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:21.044 [2024-04-24 15:18:30.045510] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:21.044 [2024-04-24 15:18:30.045529] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.045534] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.044 [2024-04-24 
15:18:30.045538] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1300) 00:14:21.044 [2024-04-24 15:18:30.045548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.044 [2024-04-24 15:18:30.045574] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe99c0, cid 0, qid 0 00:14:21.044 [2024-04-24 15:18:30.045663] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.044 [2024-04-24 15:18:30.045670] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.044 [2024-04-24 15:18:30.045674] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.045678] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe99c0) on tqpair=0x1fa1300 00:14:21.044 [2024-04-24 15:18:30.045690] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:21.044 [2024-04-24 15:18:30.045699] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:21.044 [2024-04-24 15:18:30.045706] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.045711] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.045715] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1300) 00:14:21.044 [2024-04-24 15:18:30.045723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.044 [2024-04-24 15:18:30.045743] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe99c0, cid 0, qid 0 00:14:21.044 [2024-04-24 15:18:30.045808] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.044 [2024-04-24 15:18:30.045821] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.044 [2024-04-24 15:18:30.045825] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.045829] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe99c0) on tqpair=0x1fa1300 00:14:21.044 [2024-04-24 15:18:30.045836] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:21.044 [2024-04-24 15:18:30.045845] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:21.044 [2024-04-24 15:18:30.045853] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.045857] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.045861] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1300) 00:14:21.044 [2024-04-24 15:18:30.045869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.044 [2024-04-24 15:18:30.045887] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe99c0, cid 0, qid 0 00:14:21.044 [2024-04-24 15:18:30.045948] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.044 [2024-04-24 15:18:30.045955] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.044 [2024-04-24 15:18:30.045958] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.045963] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe99c0) on tqpair=0x1fa1300 00:14:21.044 [2024-04-24 15:18:30.045969] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:21.044 [2024-04-24 15:18:30.045979] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.045984] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.045988] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1300) 00:14:21.044 [2024-04-24 15:18:30.045996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.044 [2024-04-24 15:18:30.046013] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe99c0, cid 0, qid 0 00:14:21.044 [2024-04-24 15:18:30.046081] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.044 [2024-04-24 15:18:30.046087] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.044 [2024-04-24 15:18:30.046091] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.046095] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe99c0) on tqpair=0x1fa1300 00:14:21.044 [2024-04-24 15:18:30.046101] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:21.044 [2024-04-24 15:18:30.046106] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:21.044 [2024-04-24 15:18:30.046114] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:21.044 [2024-04-24 15:18:30.046220] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:21.044 [2024-04-24 15:18:30.046226] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:21.044 [2024-04-24 15:18:30.046236] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.046240] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.046244] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1300) 00:14:21.044 [2024-04-24 15:18:30.046252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.044 [2024-04-24 15:18:30.046270] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe99c0, cid 0, qid 0 00:14:21.044 [2024-04-24 15:18:30.046332] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.044 [2024-04-24 15:18:30.046346] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.044 [2024-04-24 15:18:30.046351] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:14:21.044 [2024-04-24 15:18:30.046356] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe99c0) on tqpair=0x1fa1300 00:14:21.044 [2024-04-24 15:18:30.046362] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:21.044 [2024-04-24 15:18:30.046373] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.046378] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.046382] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1300) 00:14:21.044 [2024-04-24 15:18:30.046390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.044 [2024-04-24 15:18:30.046408] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe99c0, cid 0, qid 0 00:14:21.044 [2024-04-24 15:18:30.046500] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.044 [2024-04-24 15:18:30.046509] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.044 [2024-04-24 15:18:30.046513] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.046517] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe99c0) on tqpair=0x1fa1300 00:14:21.044 [2024-04-24 15:18:30.046523] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:21.044 [2024-04-24 15:18:30.046529] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:21.044 [2024-04-24 15:18:30.046537] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:21.044 [2024-04-24 15:18:30.046548] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:21.044 [2024-04-24 15:18:30.046559] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.046564] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1300) 00:14:21.044 [2024-04-24 15:18:30.046572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.044 [2024-04-24 15:18:30.046593] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe99c0, cid 0, qid 0 00:14:21.044 [2024-04-24 15:18:30.046711] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.044 [2024-04-24 15:18:30.046722] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.044 [2024-04-24 15:18:30.046727] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.046731] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fa1300): datao=0, datal=4096, cccid=0 00:14:21.044 [2024-04-24 15:18:30.046736] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fe99c0) on tqpair(0x1fa1300): expected_datao=0, payload_size=4096 00:14:21.044 [2024-04-24 15:18:30.046741] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:14:21.044 [2024-04-24 15:18:30.046750] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.046755] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.044 [2024-04-24 15:18:30.046764] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.045 [2024-04-24 15:18:30.046771] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.045 [2024-04-24 15:18:30.046774] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.046778] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe99c0) on tqpair=0x1fa1300 00:14:21.045 [2024-04-24 15:18:30.046789] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:21.045 [2024-04-24 15:18:30.046794] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:21.045 [2024-04-24 15:18:30.046799] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:21.045 [2024-04-24 15:18:30.046809] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:21.045 [2024-04-24 15:18:30.046815] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:21.045 [2024-04-24 15:18:30.046821] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:21.045 [2024-04-24 15:18:30.046830] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:21.045 [2024-04-24 15:18:30.046838] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.046842] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.046846] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1300) 00:14:21.045 [2024-04-24 15:18:30.046855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.045 [2024-04-24 15:18:30.046875] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe99c0, cid 0, qid 0 00:14:21.045 [2024-04-24 15:18:30.046949] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.045 [2024-04-24 15:18:30.046956] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.045 [2024-04-24 15:18:30.046960] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.046964] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe99c0) on tqpair=0x1fa1300 00:14:21.045 [2024-04-24 15:18:30.046974] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.046978] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.046982] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1300) 00:14:21.045 [2024-04-24 15:18:30.046989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.045 [2024-04-24 15:18:30.046996] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047000] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047004] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1fa1300) 00:14:21.045 [2024-04-24 15:18:30.047010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.045 [2024-04-24 15:18:30.047017] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047021] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047025] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1fa1300) 00:14:21.045 [2024-04-24 15:18:30.047031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.045 [2024-04-24 15:18:30.047038] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047042] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047045] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fa1300) 00:14:21.045 [2024-04-24 15:18:30.047051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.045 [2024-04-24 15:18:30.047057] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:21.045 [2024-04-24 15:18:30.047070] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:21.045 [2024-04-24 15:18:30.047078] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047082] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fa1300) 00:14:21.045 [2024-04-24 15:18:30.047089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.045 [2024-04-24 15:18:30.047109] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe99c0, cid 0, qid 0 00:14:21.045 [2024-04-24 15:18:30.047116] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe9b20, cid 1, qid 0 00:14:21.045 [2024-04-24 15:18:30.047121] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe9c80, cid 2, qid 0 00:14:21.045 [2024-04-24 15:18:30.047126] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe9de0, cid 3, qid 0 00:14:21.045 [2024-04-24 15:18:30.047131] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe9f40, cid 4, qid 0 00:14:21.045 [2024-04-24 15:18:30.047267] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.045 [2024-04-24 15:18:30.047274] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.045 [2024-04-24 15:18:30.047278] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047282] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe9f40) on tqpair=0x1fa1300 00:14:21.045 [2024-04-24 15:18:30.047288] 
nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:21.045 [2024-04-24 15:18:30.047294] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:21.045 [2024-04-24 15:18:30.047306] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047311] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fa1300) 00:14:21.045 [2024-04-24 15:18:30.047318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.045 [2024-04-24 15:18:30.047336] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe9f40, cid 4, qid 0 00:14:21.045 [2024-04-24 15:18:30.047419] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.045 [2024-04-24 15:18:30.047425] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.045 [2024-04-24 15:18:30.047441] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047446] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fa1300): datao=0, datal=4096, cccid=4 00:14:21.045 [2024-04-24 15:18:30.047450] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fe9f40) on tqpair(0x1fa1300): expected_datao=0, payload_size=4096 00:14:21.045 [2024-04-24 15:18:30.047455] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047463] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047467] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047489] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.045 [2024-04-24 15:18:30.047499] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.045 [2024-04-24 15:18:30.047503] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047507] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe9f40) on tqpair=0x1fa1300 00:14:21.045 [2024-04-24 15:18:30.047523] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:21.045 [2024-04-24 15:18:30.047546] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047551] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fa1300) 00:14:21.045 [2024-04-24 15:18:30.047559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.045 [2024-04-24 15:18:30.047567] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047571] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047575] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fa1300) 00:14:21.045 [2024-04-24 15:18:30.047581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.045 [2024-04-24 15:18:30.047611] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x1fe9f40, cid 4, qid 0 00:14:21.045 [2024-04-24 15:18:30.047619] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fea0a0, cid 5, qid 0 00:14:21.045 [2024-04-24 15:18:30.047755] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.045 [2024-04-24 15:18:30.047770] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.045 [2024-04-24 15:18:30.047775] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047779] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fa1300): datao=0, datal=1024, cccid=4 00:14:21.045 [2024-04-24 15:18:30.047784] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fe9f40) on tqpair(0x1fa1300): expected_datao=0, payload_size=1024 00:14:21.045 [2024-04-24 15:18:30.047789] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047796] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047800] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047807] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.045 [2024-04-24 15:18:30.047813] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.045 [2024-04-24 15:18:30.047816] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047820] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fea0a0) on tqpair=0x1fa1300 00:14:21.045 [2024-04-24 15:18:30.047839] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.045 [2024-04-24 15:18:30.047847] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.045 [2024-04-24 15:18:30.047851] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047855] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe9f40) on tqpair=0x1fa1300 00:14:21.045 [2024-04-24 15:18:30.047873] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.045 [2024-04-24 15:18:30.047878] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fa1300) 00:14:21.045 [2024-04-24 15:18:30.047886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.045 [2024-04-24 15:18:30.047911] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe9f40, cid 4, qid 0 00:14:21.045 [2024-04-24 15:18:30.048000] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.045 [2024-04-24 15:18:30.048007] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.046 [2024-04-24 15:18:30.048011] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:21.046 [2024-04-24 15:18:30.048015] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fa1300): datao=0, datal=3072, cccid=4 00:14:21.046 [2024-04-24 15:18:30.048020] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fe9f40) on tqpair(0x1fa1300): expected_datao=0, payload_size=3072 00:14:21.046 [2024-04-24 15:18:30.048025] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.046 [2024-04-24 15:18:30.048032] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.046 [2024-04-24 15:18:30.048036] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.046 [2024-04-24 15:18:30.048044] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.046 [2024-04-24 15:18:30.048050] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.046 [2024-04-24 15:18:30.048054] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.046 [2024-04-24 15:18:30.048058] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe9f40) on tqpair=0x1fa1300 00:14:21.046 [2024-04-24 15:18:30.048069] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.046 [2024-04-24 15:18:30.048074] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fa1300) 00:14:21.046 [2024-04-24 15:18:30.048082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.046 [2024-04-24 15:18:30.048104] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe9f40, cid 4, qid 0 00:14:21.046 [2024-04-24 15:18:30.048196] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.046 [2024-04-24 15:18:30.048207] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.046 [2024-04-24 15:18:30.048212] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:21.046 [2024-04-24 15:18:30.048215] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fa1300): datao=0, datal=8, cccid=4 00:14:21.046 [2024-04-24 15:18:30.048220] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fe9f40) on tqpair(0x1fa1300): expected_datao=0, payload_size=8 00:14:21.046 [2024-04-24 15:18:30.048225] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.046 [2024-04-24 15:18:30.048232] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.046 [2024-04-24 15:18:30.048236] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.046 [2024-04-24 15:18:30.048251] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.046 [2024-04-24 15:18:30.048258] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.046 [2024-04-24 15:18:30.048272] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.046 [2024-04-24 15:18:30.048277] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe9f40) on tqpair=0x1fa1300 00:14:21.046 ===================================================== 00:14:21.046 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:21.046 ===================================================== 00:14:21.046 Controller Capabilities/Features 00:14:21.046 ================================ 00:14:21.046 Vendor ID: 0000 00:14:21.046 Subsystem Vendor ID: 0000 00:14:21.046 Serial Number: .................... 00:14:21.046 Model Number: ........................................ 
00:14:21.046 Firmware Version: 24.05 00:14:21.046 Recommended Arb Burst: 0 00:14:21.046 IEEE OUI Identifier: 00 00 00 00:14:21.046 Multi-path I/O 00:14:21.046 May have multiple subsystem ports: No 00:14:21.046 May have multiple controllers: No 00:14:21.046 Associated with SR-IOV VF: No 00:14:21.046 Max Data Transfer Size: 131072 00:14:21.046 Max Number of Namespaces: 0 00:14:21.046 Max Number of I/O Queues: 1024 00:14:21.046 NVMe Specification Version (VS): 1.3 00:14:21.046 NVMe Specification Version (Identify): 1.3 00:14:21.046 Maximum Queue Entries: 128 00:14:21.046 Contiguous Queues Required: Yes 00:14:21.046 Arbitration Mechanisms Supported 00:14:21.046 Weighted Round Robin: Not Supported 00:14:21.046 Vendor Specific: Not Supported 00:14:21.046 Reset Timeout: 15000 ms 00:14:21.046 Doorbell Stride: 4 bytes 00:14:21.046 NVM Subsystem Reset: Not Supported 00:14:21.046 Command Sets Supported 00:14:21.046 NVM Command Set: Supported 00:14:21.046 Boot Partition: Not Supported 00:14:21.046 Memory Page Size Minimum: 4096 bytes 00:14:21.046 Memory Page Size Maximum: 4096 bytes 00:14:21.046 Persistent Memory Region: Not Supported 00:14:21.046 Optional Asynchronous Events Supported 00:14:21.046 Namespace Attribute Notices: Not Supported 00:14:21.046 Firmware Activation Notices: Not Supported 00:14:21.046 ANA Change Notices: Not Supported 00:14:21.046 PLE Aggregate Log Change Notices: Not Supported 00:14:21.046 LBA Status Info Alert Notices: Not Supported 00:14:21.046 EGE Aggregate Log Change Notices: Not Supported 00:14:21.046 Normal NVM Subsystem Shutdown event: Not Supported 00:14:21.046 Zone Descriptor Change Notices: Not Supported 00:14:21.046 Discovery Log Change Notices: Supported 00:14:21.046 Controller Attributes 00:14:21.046 128-bit Host Identifier: Not Supported 00:14:21.046 Non-Operational Permissive Mode: Not Supported 00:14:21.046 NVM Sets: Not Supported 00:14:21.046 Read Recovery Levels: Not Supported 00:14:21.046 Endurance Groups: Not Supported 00:14:21.046 Predictable Latency Mode: Not Supported 00:14:21.046 Traffic Based Keep ALive: Not Supported 00:14:21.046 Namespace Granularity: Not Supported 00:14:21.046 SQ Associations: Not Supported 00:14:21.046 UUID List: Not Supported 00:14:21.046 Multi-Domain Subsystem: Not Supported 00:14:21.046 Fixed Capacity Management: Not Supported 00:14:21.046 Variable Capacity Management: Not Supported 00:14:21.046 Delete Endurance Group: Not Supported 00:14:21.046 Delete NVM Set: Not Supported 00:14:21.046 Extended LBA Formats Supported: Not Supported 00:14:21.046 Flexible Data Placement Supported: Not Supported 00:14:21.046 00:14:21.046 Controller Memory Buffer Support 00:14:21.046 ================================ 00:14:21.046 Supported: No 00:14:21.046 00:14:21.046 Persistent Memory Region Support 00:14:21.046 ================================ 00:14:21.046 Supported: No 00:14:21.046 00:14:21.046 Admin Command Set Attributes 00:14:21.046 ============================ 00:14:21.046 Security Send/Receive: Not Supported 00:14:21.046 Format NVM: Not Supported 00:14:21.046 Firmware Activate/Download: Not Supported 00:14:21.046 Namespace Management: Not Supported 00:14:21.046 Device Self-Test: Not Supported 00:14:21.046 Directives: Not Supported 00:14:21.046 NVMe-MI: Not Supported 00:14:21.046 Virtualization Management: Not Supported 00:14:21.046 Doorbell Buffer Config: Not Supported 00:14:21.046 Get LBA Status Capability: Not Supported 00:14:21.046 Command & Feature Lockdown Capability: Not Supported 00:14:21.046 Abort Command Limit: 1 00:14:21.046 Async 
Event Request Limit: 4 00:14:21.046 Number of Firmware Slots: N/A 00:14:21.046 Firmware Slot 1 Read-Only: N/A 00:14:21.046 Firmware Activation Without Reset: N/A 00:14:21.046 Multiple Update Detection Support: N/A 00:14:21.046 Firmware Update Granularity: No Information Provided 00:14:21.046 Per-Namespace SMART Log: No 00:14:21.046 Asymmetric Namespace Access Log Page: Not Supported 00:14:21.046 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:21.046 Command Effects Log Page: Not Supported 00:14:21.046 Get Log Page Extended Data: Supported 00:14:21.046 Telemetry Log Pages: Not Supported 00:14:21.046 Persistent Event Log Pages: Not Supported 00:14:21.046 Supported Log Pages Log Page: May Support 00:14:21.046 Commands Supported & Effects Log Page: Not Supported 00:14:21.046 Feature Identifiers & Effects Log Page:May Support 00:14:21.046 NVMe-MI Commands & Effects Log Page: May Support 00:14:21.046 Data Area 4 for Telemetry Log: Not Supported 00:14:21.046 Error Log Page Entries Supported: 128 00:14:21.046 Keep Alive: Not Supported 00:14:21.046 00:14:21.046 NVM Command Set Attributes 00:14:21.046 ========================== 00:14:21.046 Submission Queue Entry Size 00:14:21.046 Max: 1 00:14:21.046 Min: 1 00:14:21.046 Completion Queue Entry Size 00:14:21.046 Max: 1 00:14:21.046 Min: 1 00:14:21.046 Number of Namespaces: 0 00:14:21.046 Compare Command: Not Supported 00:14:21.046 Write Uncorrectable Command: Not Supported 00:14:21.046 Dataset Management Command: Not Supported 00:14:21.046 Write Zeroes Command: Not Supported 00:14:21.046 Set Features Save Field: Not Supported 00:14:21.046 Reservations: Not Supported 00:14:21.046 Timestamp: Not Supported 00:14:21.046 Copy: Not Supported 00:14:21.046 Volatile Write Cache: Not Present 00:14:21.047 Atomic Write Unit (Normal): 1 00:14:21.047 Atomic Write Unit (PFail): 1 00:14:21.047 Atomic Compare & Write Unit: 1 00:14:21.047 Fused Compare & Write: Supported 00:14:21.047 Scatter-Gather List 00:14:21.047 SGL Command Set: Supported 00:14:21.047 SGL Keyed: Supported 00:14:21.047 SGL Bit Bucket Descriptor: Not Supported 00:14:21.047 SGL Metadata Pointer: Not Supported 00:14:21.047 Oversized SGL: Not Supported 00:14:21.047 SGL Metadata Address: Not Supported 00:14:21.047 SGL Offset: Supported 00:14:21.047 Transport SGL Data Block: Not Supported 00:14:21.047 Replay Protected Memory Block: Not Supported 00:14:21.047 00:14:21.047 Firmware Slot Information 00:14:21.047 ========================= 00:14:21.047 Active slot: 0 00:14:21.047 00:14:21.047 00:14:21.047 Error Log 00:14:21.047 ========= 00:14:21.047 00:14:21.047 Active Namespaces 00:14:21.047 ================= 00:14:21.047 Discovery Log Page 00:14:21.047 ================== 00:14:21.047 Generation Counter: 2 00:14:21.047 Number of Records: 2 00:14:21.047 Record Format: 0 00:14:21.047 00:14:21.047 Discovery Log Entry 0 00:14:21.047 ---------------------- 00:14:21.047 Transport Type: 3 (TCP) 00:14:21.047 Address Family: 1 (IPv4) 00:14:21.047 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:21.047 Entry Flags: 00:14:21.047 Duplicate Returned Information: 1 00:14:21.047 Explicit Persistent Connection Support for Discovery: 1 00:14:21.047 Transport Requirements: 00:14:21.047 Secure Channel: Not Required 00:14:21.047 Port ID: 0 (0x0000) 00:14:21.047 Controller ID: 65535 (0xffff) 00:14:21.047 Admin Max SQ Size: 128 00:14:21.047 Transport Service Identifier: 4420 00:14:21.047 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:21.047 Transport Address: 10.0.0.2 00:14:21.047 
Discovery Log Entry 1 00:14:21.047 ---------------------- 00:14:21.047 Transport Type: 3 (TCP) 00:14:21.047 Address Family: 1 (IPv4) 00:14:21.047 Subsystem Type: 2 (NVM Subsystem) 00:14:21.047 Entry Flags: 00:14:21.047 Duplicate Returned Information: 0 00:14:21.047 Explicit Persistent Connection Support for Discovery: 0 00:14:21.047 Transport Requirements: 00:14:21.047 Secure Channel: Not Required 00:14:21.047 Port ID: 0 (0x0000) 00:14:21.047 Controller ID: 65535 (0xffff) 00:14:21.047 Admin Max SQ Size: 128 00:14:21.047 Transport Service Identifier: 4420 00:14:21.047 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:21.047 Transport Address: 10.0.0.2 [2024-04-24 15:18:30.048375] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:21.047 [2024-04-24 15:18:30.048390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.047 [2024-04-24 15:18:30.048398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.047 [2024-04-24 15:18:30.048404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.047 [2024-04-24 15:18:30.048411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.047 [2024-04-24 15:18:30.048420] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.048424] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.048441] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fa1300) 00:14:21.047 [2024-04-24 15:18:30.048451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.047 [2024-04-24 15:18:30.048474] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe9de0, cid 3, qid 0 00:14:21.047 [2024-04-24 15:18:30.048534] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.047 [2024-04-24 15:18:30.048541] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.047 [2024-04-24 15:18:30.048545] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.048549] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe9de0) on tqpair=0x1fa1300 00:14:21.047 [2024-04-24 15:18:30.048564] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.048569] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.048573] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fa1300) 00:14:21.047 [2024-04-24 15:18:30.048580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.047 [2024-04-24 15:18:30.048602] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe9de0, cid 3, qid 0 00:14:21.047 [2024-04-24 15:18:30.048686] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.047 [2024-04-24 15:18:30.048693] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.047 [2024-04-24 15:18:30.048697] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.048701] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe9de0) on tqpair=0x1fa1300 00:14:21.047 [2024-04-24 15:18:30.048708] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:21.047 [2024-04-24 15:18:30.048713] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:21.047 [2024-04-24 15:18:30.048723] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.048728] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.048732] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fa1300) 00:14:21.047 [2024-04-24 15:18:30.048739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.047 [2024-04-24 15:18:30.048756] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe9de0, cid 3, qid 0 00:14:21.047 [2024-04-24 15:18:30.048822] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.047 [2024-04-24 15:18:30.048834] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.047 [2024-04-24 15:18:30.048838] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.048843] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe9de0) on tqpair=0x1fa1300 00:14:21.047 [2024-04-24 15:18:30.048855] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.048860] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.048863] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fa1300) 00:14:21.047 [2024-04-24 15:18:30.048871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.047 [2024-04-24 15:18:30.048889] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe9de0, cid 3, qid 0 00:14:21.047 [2024-04-24 15:18:30.048955] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.047 [2024-04-24 15:18:30.048965] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.047 [2024-04-24 15:18:30.048970] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.048974] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe9de0) on tqpair=0x1fa1300 00:14:21.047 [2024-04-24 15:18:30.048985] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.048990] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.048994] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fa1300) 00:14:21.047 [2024-04-24 15:18:30.049002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.047 [2024-04-24 15:18:30.049019] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe9de0, cid 3, qid 0 00:14:21.047 [2024-04-24 15:18:30.049076] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.047 [2024-04-24 
15:18:30.049082] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.047 [2024-04-24 15:18:30.049086] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.049090] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe9de0) on tqpair=0x1fa1300 00:14:21.047 [2024-04-24 15:18:30.049101] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.049106] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.049110] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fa1300) 00:14:21.047 [2024-04-24 15:18:30.049118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.047 [2024-04-24 15:18:30.049134] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe9de0, cid 3, qid 0 00:14:21.047 [2024-04-24 15:18:30.049197] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.047 [2024-04-24 15:18:30.049204] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.047 [2024-04-24 15:18:30.049207] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.049211] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe9de0) on tqpair=0x1fa1300 00:14:21.047 [2024-04-24 15:18:30.049223] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.049227] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.047 [2024-04-24 15:18:30.049231] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fa1300) 00:14:21.047 [2024-04-24 15:18:30.049239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.047 [2024-04-24 15:18:30.049256] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe9de0, cid 3, qid 0 00:14:21.047 [2024-04-24 15:18:30.049310] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.047 [2024-04-24 15:18:30.049317] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.047 [2024-04-24 15:18:30.049321] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.049325] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe9de0) on tqpair=0x1fa1300 00:14:21.048 [2024-04-24 15:18:30.049336] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.049341] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.049345] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fa1300) 00:14:21.048 [2024-04-24 15:18:30.049353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.048 [2024-04-24 15:18:30.049369] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe9de0, cid 3, qid 0 00:14:21.048 [2024-04-24 15:18:30.053445] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.048 [2024-04-24 15:18:30.053464] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.048 [2024-04-24 15:18:30.053469] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
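The debug stream above comes from the first spdk_nvme_identify run (-L all enables every SPDK log flag in a debug build), covering the TCP connect, the fabrics property and identify commands, the discovery log page reads, and the controller teardown that completes just below. A sketch of the same two invocations outside the test harness (binary path as used on this builder; substitute the local build/bin directory):

    IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    # query the discovery subsystem with full debug logging, as in host/identify.sh@39
    $IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
    # repeat against the NVM subsystem created earlier, as in host/identify.sh@45
    $IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all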
00:14:21.048 [2024-04-24 15:18:30.053474] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe9de0) on tqpair=0x1fa1300 00:14:21.048 [2024-04-24 15:18:30.053489] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.053494] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.053498] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fa1300) 00:14:21.048 [2024-04-24 15:18:30.053507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.048 [2024-04-24 15:18:30.053531] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fe9de0, cid 3, qid 0 00:14:21.048 [2024-04-24 15:18:30.053596] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.048 [2024-04-24 15:18:30.053603] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.048 [2024-04-24 15:18:30.053606] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.053611] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fe9de0) on tqpair=0x1fa1300 00:14:21.048 [2024-04-24 15:18:30.053620] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:14:21.048 00:14:21.048 15:18:30 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:21.048 [2024-04-24 15:18:30.095876] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:14:21.048 [2024-04-24 15:18:30.095951] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71909 ] 00:14:21.048 [2024-04-24 15:18:30.247709] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:21.048 [2024-04-24 15:18:30.247797] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:21.048 [2024-04-24 15:18:30.247805] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:21.048 [2024-04-24 15:18:30.247820] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:21.048 [2024-04-24 15:18:30.247835] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:21.048 [2024-04-24 15:18:30.248022] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:21.048 [2024-04-24 15:18:30.248081] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2487300 0 00:14:21.048 [2024-04-24 15:18:30.255451] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:21.048 [2024-04-24 15:18:30.255474] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:21.048 [2024-04-24 15:18:30.255480] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:21.048 [2024-04-24 15:18:30.255484] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:21.048 [2024-04-24 15:18:30.255531] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.255539] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.255544] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2487300) 00:14:21.048 [2024-04-24 15:18:30.255558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:21.048 [2024-04-24 15:18:30.255590] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cf9c0, cid 0, qid 0 00:14:21.048 [2024-04-24 15:18:30.262448] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.048 [2024-04-24 15:18:30.262469] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.048 [2024-04-24 15:18:30.262475] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.262480] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cf9c0) on tqpair=0x2487300 00:14:21.048 [2024-04-24 15:18:30.262492] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:21.048 [2024-04-24 15:18:30.262500] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:21.048 [2024-04-24 15:18:30.262507] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:21.048 [2024-04-24 15:18:30.262525] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.262531] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.262535] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2487300) 00:14:21.048 [2024-04-24 15:18:30.262544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.048 [2024-04-24 15:18:30.262572] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cf9c0, cid 0, qid 0 00:14:21.048 [2024-04-24 15:18:30.262658] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.048 [2024-04-24 15:18:30.262665] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.048 [2024-04-24 15:18:30.262668] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.262673] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cf9c0) on tqpair=0x2487300 00:14:21.048 [2024-04-24 15:18:30.262684] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:21.048 [2024-04-24 15:18:30.262692] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:21.048 [2024-04-24 15:18:30.262700] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.262705] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.262709] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2487300) 00:14:21.048 [2024-04-24 15:18:30.262717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.048 [2024-04-24 15:18:30.262737] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cf9c0, cid 0, qid 0 00:14:21.048 [2024-04-24 15:18:30.262821] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.048 [2024-04-24 15:18:30.262828] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.048 [2024-04-24 15:18:30.262832] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.262836] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cf9c0) on tqpair=0x2487300 00:14:21.048 [2024-04-24 15:18:30.262843] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:21.048 [2024-04-24 15:18:30.262852] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:21.048 [2024-04-24 15:18:30.262860] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.262865] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.262869] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2487300) 00:14:21.048 [2024-04-24 15:18:30.262876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.048 [2024-04-24 15:18:30.262895] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cf9c0, cid 0, qid 0 00:14:21.048 [2024-04-24 15:18:30.262973] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.048 [2024-04-24 15:18:30.262980] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.048 [2024-04-24 15:18:30.262983] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.048 [2024-04-24 15:18:30.262988] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cf9c0) on tqpair=0x2487300 00:14:21.049 [2024-04-24 15:18:30.262995] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:21.049 [2024-04-24 15:18:30.263006] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.263011] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.263015] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2487300) 00:14:21.049 [2024-04-24 15:18:30.263022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.049 [2024-04-24 15:18:30.263040] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cf9c0, cid 0, qid 0 00:14:21.049 [2024-04-24 15:18:30.263111] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.049 [2024-04-24 15:18:30.263118] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.049 [2024-04-24 15:18:30.263121] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.263125] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cf9c0) on tqpair=0x2487300 00:14:21.049 [2024-04-24 15:18:30.263132] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:21.049 [2024-04-24 15:18:30.263137] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:21.049 [2024-04-24 15:18:30.263145] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:21.049 [2024-04-24 15:18:30.263252] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:21.049 [2024-04-24 15:18:30.263262] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:21.049 [2024-04-24 15:18:30.263272] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.263277] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.263281] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2487300) 00:14:21.049 [2024-04-24 15:18:30.263289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.049 [2024-04-24 15:18:30.263308] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cf9c0, cid 0, qid 0 00:14:21.049 [2024-04-24 15:18:30.263385] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.049 [2024-04-24 15:18:30.263392] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.049 [2024-04-24 15:18:30.263396] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.263400] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cf9c0) on tqpair=0x2487300 00:14:21.049 [2024-04-24 15:18:30.263407] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:21.049 [2024-04-24 15:18:30.263417] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.263422] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.263426] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2487300) 00:14:21.049 [2024-04-24 15:18:30.263447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.049 [2024-04-24 15:18:30.263468] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cf9c0, cid 0, qid 0 00:14:21.049 [2024-04-24 15:18:30.263543] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.049 [2024-04-24 15:18:30.263550] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.049 [2024-04-24 15:18:30.263554] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.263558] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cf9c0) on tqpair=0x2487300 00:14:21.049 [2024-04-24 15:18:30.263564] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:21.049 [2024-04-24 15:18:30.263570] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:21.049 [2024-04-24 15:18:30.263578] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:21.049 [2024-04-24 15:18:30.263589] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:21.049 [2024-04-24 15:18:30.263599] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.263604] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2487300) 00:14:21.049 [2024-04-24 15:18:30.263612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.049 [2024-04-24 15:18:30.263631] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cf9c0, cid 0, qid 0 00:14:21.049 [2024-04-24 15:18:30.263769] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.049 [2024-04-24 15:18:30.263784] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.049 [2024-04-24 15:18:30.263789] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.263793] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2487300): datao=0, datal=4096, cccid=0 00:14:21.049 [2024-04-24 15:18:30.263799] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24cf9c0) on tqpair(0x2487300): expected_datao=0, payload_size=4096 00:14:21.049 [2024-04-24 15:18:30.263804] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.263813] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.263818] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.263827] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.049 [2024-04-24 15:18:30.263833] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.049 [2024-04-24 15:18:30.263837] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.263841] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cf9c0) on tqpair=0x2487300 00:14:21.049 [2024-04-24 15:18:30.263851] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:21.049 [2024-04-24 15:18:30.263856] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:21.049 [2024-04-24 15:18:30.263861] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:21.049 [2024-04-24 15:18:30.263871] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:21.049 [2024-04-24 15:18:30.263876] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:21.049 [2024-04-24 15:18:30.263882] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:21.049 [2024-04-24 15:18:30.263891] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:21.049 [2024-04-24 15:18:30.263899] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.263904] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.263908] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2487300) 00:14:21.049 [2024-04-24 15:18:30.263916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.049 [2024-04-24 15:18:30.263937] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cf9c0, cid 0, qid 0 00:14:21.049 [2024-04-24 15:18:30.264017] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.049 [2024-04-24 15:18:30.264024] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.049 [2024-04-24 15:18:30.264028] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.264032] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cf9c0) on tqpair=0x2487300 00:14:21.049 [2024-04-24 15:18:30.264041] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.264046] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.264050] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2487300) 00:14:21.049 [2024-04-24 15:18:30.264057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.049 [2024-04-24 15:18:30.264064] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.264068] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.264072] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2487300) 00:14:21.049 [2024-04-24 15:18:30.264078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.049 [2024-04-24 15:18:30.264085] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.264089] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.264093] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2487300) 00:14:21.049 [2024-04-24 15:18:30.264099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.049 [2024-04-24 15:18:30.264106] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.264110] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.049 [2024-04-24 15:18:30.264113] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.049 [2024-04-24 15:18:30.264119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.049 [2024-04-24 15:18:30.264125] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:21.049 [2024-04-24 15:18:30.264138] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:21.049 [2024-04-24 15:18:30.264146] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:14:21.049 [2024-04-24 15:18:30.264150] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2487300) 00:14:21.049 [2024-04-24 15:18:30.264157] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.049 [2024-04-24 15:18:30.264178] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cf9c0, cid 0, qid 0 00:14:21.049 [2024-04-24 15:18:30.264186] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfb20, cid 1, qid 0 00:14:21.049 [2024-04-24 15:18:30.264191] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfc80, cid 2, qid 0 00:14:21.049 [2024-04-24 15:18:30.264196] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.049 [2024-04-24 15:18:30.264202] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cff40, cid 4, qid 0 00:14:21.049 [2024-04-24 15:18:30.264350] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.049 [2024-04-24 15:18:30.264359] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.049 [2024-04-24 15:18:30.264363] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.264367] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cff40) on tqpair=0x2487300 00:14:21.050 [2024-04-24 15:18:30.264374] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:21.050 [2024-04-24 15:18:30.264380] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:21.050 [2024-04-24 15:18:30.264388] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:21.050 [2024-04-24 15:18:30.264396] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:21.050 [2024-04-24 15:18:30.264403] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.264407] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.264411] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2487300) 00:14:21.050 [2024-04-24 15:18:30.264419] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.050 [2024-04-24 15:18:30.264465] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cff40, cid 4, qid 0 00:14:21.050 [2024-04-24 15:18:30.264548] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.050 [2024-04-24 15:18:30.264555] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.050 [2024-04-24 15:18:30.264559] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.264563] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cff40) on tqpair=0x2487300 00:14:21.050 [2024-04-24 15:18:30.264615] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:21.050 [2024-04-24 15:18:30.264626] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:21.050 [2024-04-24 15:18:30.264635] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.264639] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2487300) 00:14:21.050 [2024-04-24 15:18:30.264647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.050 [2024-04-24 15:18:30.264666] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cff40, cid 4, qid 0 00:14:21.050 [2024-04-24 15:18:30.264764] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.050 [2024-04-24 15:18:30.264771] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.050 [2024-04-24 15:18:30.264775] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.264779] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2487300): datao=0, datal=4096, cccid=4 00:14:21.050 [2024-04-24 15:18:30.264784] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24cff40) on tqpair(0x2487300): expected_datao=0, payload_size=4096 00:14:21.050 [2024-04-24 15:18:30.264789] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.264797] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.264801] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.264810] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.050 [2024-04-24 15:18:30.264816] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.050 [2024-04-24 15:18:30.264820] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.264824] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cff40) on tqpair=0x2487300 00:14:21.050 [2024-04-24 15:18:30.264836] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:21.050 [2024-04-24 15:18:30.264851] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:21.050 [2024-04-24 15:18:30.264862] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:21.050 [2024-04-24 15:18:30.264870] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.264875] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2487300) 00:14:21.050 [2024-04-24 15:18:30.264883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.050 [2024-04-24 15:18:30.264903] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cff40, cid 4, qid 0 00:14:21.050 [2024-04-24 15:18:30.265018] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.050 [2024-04-24 15:18:30.265025] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.050 [2024-04-24 15:18:30.265029] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
00:14:21.050 [2024-04-24 15:18:30.265033] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2487300): datao=0, datal=4096, cccid=4 00:14:21.050 [2024-04-24 15:18:30.265038] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24cff40) on tqpair(0x2487300): expected_datao=0, payload_size=4096 00:14:21.050 [2024-04-24 15:18:30.265042] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.265050] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.265054] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.265066] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.050 [2024-04-24 15:18:30.265073] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.050 [2024-04-24 15:18:30.265076] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.265080] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cff40) on tqpair=0x2487300 00:14:21.050 [2024-04-24 15:18:30.265097] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:21.050 [2024-04-24 15:18:30.265109] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:21.050 [2024-04-24 15:18:30.265118] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.265122] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2487300) 00:14:21.050 [2024-04-24 15:18:30.265130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.050 [2024-04-24 15:18:30.265150] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cff40, cid 4, qid 0 00:14:21.050 [2024-04-24 15:18:30.265252] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.050 [2024-04-24 15:18:30.265269] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.050 [2024-04-24 15:18:30.265274] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.265278] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2487300): datao=0, datal=4096, cccid=4 00:14:21.050 [2024-04-24 15:18:30.265283] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24cff40) on tqpair(0x2487300): expected_datao=0, payload_size=4096 00:14:21.050 [2024-04-24 15:18:30.265288] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.265295] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.265299] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.265308] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.050 [2024-04-24 15:18:30.265314] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.050 [2024-04-24 15:18:30.265318] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.265322] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cff40) on tqpair=0x2487300 00:14:21.050 [2024-04-24 
15:18:30.265332] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:21.050 [2024-04-24 15:18:30.265342] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:21.050 [2024-04-24 15:18:30.265353] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:21.050 [2024-04-24 15:18:30.265360] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:21.050 [2024-04-24 15:18:30.265365] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:21.050 [2024-04-24 15:18:30.265371] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:21.050 [2024-04-24 15:18:30.265376] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:21.050 [2024-04-24 15:18:30.265381] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:21.050 [2024-04-24 15:18:30.265398] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.265403] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2487300) 00:14:21.050 [2024-04-24 15:18:30.265411] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.050 [2024-04-24 15:18:30.265419] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.265423] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.265441] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2487300) 00:14:21.050 [2024-04-24 15:18:30.265449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.050 [2024-04-24 15:18:30.265477] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cff40, cid 4, qid 0 00:14:21.050 [2024-04-24 15:18:30.265485] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d00a0, cid 5, qid 0 00:14:21.050 [2024-04-24 15:18:30.265584] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.050 [2024-04-24 15:18:30.265599] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.050 [2024-04-24 15:18:30.265603] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.265608] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cff40) on tqpair=0x2487300 00:14:21.050 [2024-04-24 15:18:30.265616] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.050 [2024-04-24 15:18:30.265623] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.050 [2024-04-24 15:18:30.265626] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.265630] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d00a0) on tqpair=0x2487300 00:14:21.050 [2024-04-24 
15:18:30.265642] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.050 [2024-04-24 15:18:30.265647] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2487300) 00:14:21.050 [2024-04-24 15:18:30.265655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.051 [2024-04-24 15:18:30.265674] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d00a0, cid 5, qid 0 00:14:21.051 [2024-04-24 15:18:30.265748] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.051 [2024-04-24 15:18:30.265759] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.051 [2024-04-24 15:18:30.265763] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.265767] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d00a0) on tqpair=0x2487300 00:14:21.051 [2024-04-24 15:18:30.265779] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.265784] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2487300) 00:14:21.051 [2024-04-24 15:18:30.265792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.051 [2024-04-24 15:18:30.265810] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d00a0, cid 5, qid 0 00:14:21.051 [2024-04-24 15:18:30.265884] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.051 [2024-04-24 15:18:30.265891] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.051 [2024-04-24 15:18:30.265895] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.265899] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d00a0) on tqpair=0x2487300 00:14:21.051 [2024-04-24 15:18:30.265910] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.265915] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2487300) 00:14:21.051 [2024-04-24 15:18:30.265923] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.051 [2024-04-24 15:18:30.265940] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d00a0, cid 5, qid 0 00:14:21.051 [2024-04-24 15:18:30.266012] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.051 [2024-04-24 15:18:30.266019] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.051 [2024-04-24 15:18:30.266023] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.266027] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d00a0) on tqpair=0x2487300 00:14:21.051 [2024-04-24 15:18:30.266042] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.266047] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2487300) 00:14:21.051 [2024-04-24 15:18:30.266055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:21.051 [2024-04-24 15:18:30.266063] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.266068] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2487300) 00:14:21.051 [2024-04-24 15:18:30.266074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.051 [2024-04-24 15:18:30.266083] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.266087] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2487300) 00:14:21.051 [2024-04-24 15:18:30.266094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.051 [2024-04-24 15:18:30.266102] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.266107] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2487300) 00:14:21.051 [2024-04-24 15:18:30.266113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.051 [2024-04-24 15:18:30.266133] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d00a0, cid 5, qid 0 00:14:21.051 [2024-04-24 15:18:30.266141] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cff40, cid 4, qid 0 00:14:21.051 [2024-04-24 15:18:30.266146] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d0200, cid 6, qid 0 00:14:21.051 [2024-04-24 15:18:30.266151] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d0360, cid 7, qid 0 00:14:21.051 [2024-04-24 15:18:30.266328] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.051 [2024-04-24 15:18:30.266341] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.051 [2024-04-24 15:18:30.266346] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.266350] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2487300): datao=0, datal=8192, cccid=5 00:14:21.051 [2024-04-24 15:18:30.266355] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d00a0) on tqpair(0x2487300): expected_datao=0, payload_size=8192 00:14:21.051 [2024-04-24 15:18:30.266360] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.266378] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.266383] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.266389] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.051 [2024-04-24 15:18:30.266395] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.051 [2024-04-24 15:18:30.266399] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.266402] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2487300): datao=0, datal=512, cccid=4 00:14:21.051 [2024-04-24 15:18:30.266407] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24cff40) on tqpair(0x2487300): expected_datao=0, payload_size=512 00:14:21.051 
[2024-04-24 15:18:30.266412] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.266418] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.266422] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.269473] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.051 [2024-04-24 15:18:30.269506] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.051 [2024-04-24 15:18:30.269511] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.269515] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2487300): datao=0, datal=512, cccid=6 00:14:21.051 [2024-04-24 15:18:30.269520] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d0200) on tqpair(0x2487300): expected_datao=0, payload_size=512 00:14:21.051 [2024-04-24 15:18:30.269525] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.269532] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.269536] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.269542] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:21.051 [2024-04-24 15:18:30.269547] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:21.051 [2024-04-24 15:18:30.269551] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.269555] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2487300): datao=0, datal=4096, cccid=7 00:14:21.051 [2024-04-24 15:18:30.269559] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d0360) on tqpair(0x2487300): expected_datao=0, payload_size=4096 00:14:21.051 [2024-04-24 15:18:30.269564] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.269571] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.269575] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.269584] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.051 [2024-04-24 15:18:30.269590] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.051 [2024-04-24 15:18:30.269594] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.269598] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d00a0) on tqpair=0x2487300 00:14:21.051 [2024-04-24 15:18:30.269633] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.051 [2024-04-24 15:18:30.269641] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.051 [2024-04-24 15:18:30.269645] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.051 [2024-04-24 15:18:30.269649] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cff40) on tqpair=0x2487300 00:14:21.051 [2024-04-24 15:18:30.269661] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.051 [2024-04-24 15:18:30.269667] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.051 [2024-04-24 15:18:30.269671] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.051 ===================================================== 
00:14:21.051 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:21.051 ===================================================== 00:14:21.051 Controller Capabilities/Features 00:14:21.051 ================================ 00:14:21.051 Vendor ID: 8086 00:14:21.051 Subsystem Vendor ID: 8086 00:14:21.051 Serial Number: SPDK00000000000001 00:14:21.051 Model Number: SPDK bdev Controller 00:14:21.051 Firmware Version: 24.05 00:14:21.051 Recommended Arb Burst: 6 00:14:21.051 IEEE OUI Identifier: e4 d2 5c 00:14:21.051 Multi-path I/O 00:14:21.051 May have multiple subsystem ports: Yes 00:14:21.051 May have multiple controllers: Yes 00:14:21.051 Associated with SR-IOV VF: No 00:14:21.051 Max Data Transfer Size: 131072 00:14:21.051 Max Number of Namespaces: 32 00:14:21.051 Max Number of I/O Queues: 127 00:14:21.051 NVMe Specification Version (VS): 1.3 00:14:21.051 NVMe Specification Version (Identify): 1.3 00:14:21.051 Maximum Queue Entries: 128 00:14:21.051 Contiguous Queues Required: Yes 00:14:21.051 Arbitration Mechanisms Supported 00:14:21.051 Weighted Round Robin: Not Supported 00:14:21.051 Vendor Specific: Not Supported 00:14:21.051 Reset Timeout: 15000 ms 00:14:21.051 Doorbell Stride: 4 bytes 00:14:21.051 NVM Subsystem Reset: Not Supported 00:14:21.051 Command Sets Supported 00:14:21.051 NVM Command Set: Supported 00:14:21.051 Boot Partition: Not Supported 00:14:21.051 Memory Page Size Minimum: 4096 bytes 00:14:21.051 Memory Page Size Maximum: 4096 bytes 00:14:21.051 Persistent Memory Region: Not Supported 00:14:21.051 Optional Asynchronous Events Supported 00:14:21.051 Namespace Attribute Notices: Supported 00:14:21.051 Firmware Activation Notices: Not Supported 00:14:21.051 ANA Change Notices: Not Supported 00:14:21.051 PLE Aggregate Log Change Notices: Not Supported 00:14:21.051 LBA Status Info Alert Notices: Not Supported 00:14:21.051 EGE Aggregate Log Change Notices: Not Supported 00:14:21.051 Normal NVM Subsystem Shutdown event: Not Supported 00:14:21.051 Zone Descriptor Change Notices: Not Supported 00:14:21.051 Discovery Log Change Notices: Not Supported 00:14:21.051 Controller Attributes 00:14:21.051 128-bit Host Identifier: Supported 00:14:21.052 Non-Operational Permissive Mode: Not Supported 00:14:21.052 NVM Sets: Not Supported 00:14:21.052 Read Recovery Levels: Not Supported 00:14:21.052 Endurance Groups: Not Supported 00:14:21.052 Predictable Latency Mode: Not Supported 00:14:21.052 Traffic Based Keep ALive: Not Supported 00:14:21.052 Namespace Granularity: Not Supported 00:14:21.052 SQ Associations: Not Supported 00:14:21.052 UUID List: Not Supported 00:14:21.052 Multi-Domain Subsystem: Not Supported 00:14:21.052 Fixed Capacity Management: Not Supported 00:14:21.052 Variable Capacity Management: Not Supported 00:14:21.052 Delete Endurance Group: Not Supported 00:14:21.052 Delete NVM Set: Not Supported 00:14:21.052 Extended LBA Formats Supported: Not Supported 00:14:21.052 Flexible Data Placement Supported: Not Supported 00:14:21.052 00:14:21.052 Controller Memory Buffer Support 00:14:21.052 ================================ 00:14:21.052 Supported: No 00:14:21.052 00:14:21.052 Persistent Memory Region Support 00:14:21.052 ================================ 00:14:21.052 Supported: No 00:14:21.052 00:14:21.052 Admin Command Set Attributes 00:14:21.052 ============================ 00:14:21.052 Security Send/Receive: Not Supported 00:14:21.052 Format NVM: Not Supported 00:14:21.052 Firmware Activate/Download: Not Supported 00:14:21.052 Namespace Management: Not 
Supported 00:14:21.052 Device Self-Test: Not Supported 00:14:21.052 Directives: Not Supported 00:14:21.052 NVMe-MI: Not Supported 00:14:21.052 Virtualization Management: Not Supported 00:14:21.052 Doorbell Buffer Config: Not Supported 00:14:21.052 Get LBA Status Capability: Not Supported 00:14:21.052 Command & Feature Lockdown Capability: Not Supported 00:14:21.052 Abort Command Limit: 4 00:14:21.052 Async Event Request Limit: 4 00:14:21.052 Number of Firmware Slots: N/A 00:14:21.052 Firmware Slot 1 Read-Only: N/A 00:14:21.052 Firmware Activation Without Reset: [2024-04-24 15:18:30.269675] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d0200) on tqpair=0x2487300 00:14:21.052 [2024-04-24 15:18:30.269684] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.052 [2024-04-24 15:18:30.269690] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.052 [2024-04-24 15:18:30.269693] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.052 [2024-04-24 15:18:30.269697] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d0360) on tqpair=0x2487300 00:14:21.052 N/A 00:14:21.052 Multiple Update Detection Support: N/A 00:14:21.052 Firmware Update Granularity: No Information Provided 00:14:21.052 Per-Namespace SMART Log: No 00:14:21.052 Asymmetric Namespace Access Log Page: Not Supported 00:14:21.052 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:21.052 Command Effects Log Page: Supported 00:14:21.052 Get Log Page Extended Data: Supported 00:14:21.052 Telemetry Log Pages: Not Supported 00:14:21.052 Persistent Event Log Pages: Not Supported 00:14:21.052 Supported Log Pages Log Page: May Support 00:14:21.052 Commands Supported & Effects Log Page: Not Supported 00:14:21.052 Feature Identifiers & Effects Log Page:May Support 00:14:21.052 NVMe-MI Commands & Effects Log Page: May Support 00:14:21.052 Data Area 4 for Telemetry Log: Not Supported 00:14:21.052 Error Log Page Entries Supported: 128 00:14:21.052 Keep Alive: Supported 00:14:21.052 Keep Alive Granularity: 10000 ms 00:14:21.052 00:14:21.052 NVM Command Set Attributes 00:14:21.052 ========================== 00:14:21.052 Submission Queue Entry Size 00:14:21.052 Max: 64 00:14:21.052 Min: 64 00:14:21.052 Completion Queue Entry Size 00:14:21.052 Max: 16 00:14:21.052 Min: 16 00:14:21.052 Number of Namespaces: 32 00:14:21.052 Compare Command: Supported 00:14:21.052 Write Uncorrectable Command: Not Supported 00:14:21.052 Dataset Management Command: Supported 00:14:21.052 Write Zeroes Command: Supported 00:14:21.052 Set Features Save Field: Not Supported 00:14:21.052 Reservations: Supported 00:14:21.052 Timestamp: Not Supported 00:14:21.052 Copy: Supported 00:14:21.052 Volatile Write Cache: Present 00:14:21.052 Atomic Write Unit (Normal): 1 00:14:21.052 Atomic Write Unit (PFail): 1 00:14:21.052 Atomic Compare & Write Unit: 1 00:14:21.052 Fused Compare & Write: Supported 00:14:21.052 Scatter-Gather List 00:14:21.052 SGL Command Set: Supported 00:14:21.052 SGL Keyed: Supported 00:14:21.052 SGL Bit Bucket Descriptor: Not Supported 00:14:21.052 SGL Metadata Pointer: Not Supported 00:14:21.052 Oversized SGL: Not Supported 00:14:21.052 SGL Metadata Address: Not Supported 00:14:21.052 SGL Offset: Supported 00:14:21.052 Transport SGL Data Block: Not Supported 00:14:21.052 Replay Protected Memory Block: Not Supported 00:14:21.052 00:14:21.052 Firmware Slot Information 00:14:21.052 ========================= 00:14:21.052 Active slot: 1 00:14:21.052 Slot 1 
Firmware Revision: 24.05 00:14:21.052 00:14:21.052 00:14:21.052 Commands Supported and Effects 00:14:21.052 ============================== 00:14:21.052 Admin Commands 00:14:21.052 -------------- 00:14:21.052 Get Log Page (02h): Supported 00:14:21.052 Identify (06h): Supported 00:14:21.052 Abort (08h): Supported 00:14:21.052 Set Features (09h): Supported 00:14:21.052 Get Features (0Ah): Supported 00:14:21.052 Asynchronous Event Request (0Ch): Supported 00:14:21.052 Keep Alive (18h): Supported 00:14:21.052 I/O Commands 00:14:21.052 ------------ 00:14:21.052 Flush (00h): Supported LBA-Change 00:14:21.052 Write (01h): Supported LBA-Change 00:14:21.052 Read (02h): Supported 00:14:21.052 Compare (05h): Supported 00:14:21.052 Write Zeroes (08h): Supported LBA-Change 00:14:21.052 Dataset Management (09h): Supported LBA-Change 00:14:21.052 Copy (19h): Supported LBA-Change 00:14:21.052 Unknown (79h): Supported LBA-Change 00:14:21.052 Unknown (7Ah): Supported 00:14:21.052 00:14:21.052 Error Log 00:14:21.052 ========= 00:14:21.052 00:14:21.052 Arbitration 00:14:21.052 =========== 00:14:21.052 Arbitration Burst: 1 00:14:21.052 00:14:21.052 Power Management 00:14:21.052 ================ 00:14:21.052 Number of Power States: 1 00:14:21.052 Current Power State: Power State #0 00:14:21.052 Power State #0: 00:14:21.052 Max Power: 0.00 W 00:14:21.052 Non-Operational State: Operational 00:14:21.052 Entry Latency: Not Reported 00:14:21.052 Exit Latency: Not Reported 00:14:21.052 Relative Read Throughput: 0 00:14:21.052 Relative Read Latency: 0 00:14:21.052 Relative Write Throughput: 0 00:14:21.052 Relative Write Latency: 0 00:14:21.052 Idle Power: Not Reported 00:14:21.052 Active Power: Not Reported 00:14:21.052 Non-Operational Permissive Mode: Not Supported 00:14:21.052 00:14:21.052 Health Information 00:14:21.052 ================== 00:14:21.052 Critical Warnings: 00:14:21.052 Available Spare Space: OK 00:14:21.052 Temperature: OK 00:14:21.052 Device Reliability: OK 00:14:21.052 Read Only: No 00:14:21.052 Volatile Memory Backup: OK 00:14:21.052 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:21.052 Temperature Threshold: [2024-04-24 15:18:30.269813] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.052 [2024-04-24 15:18:30.269820] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2487300) 00:14:21.052 [2024-04-24 15:18:30.269829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.052 [2024-04-24 15:18:30.269867] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d0360, cid 7, qid 0 00:14:21.052 [2024-04-24 15:18:30.269947] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.052 [2024-04-24 15:18:30.269954] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.052 [2024-04-24 15:18:30.269957] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.052 [2024-04-24 15:18:30.269961] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24d0360) on tqpair=0x2487300 00:14:21.052 [2024-04-24 15:18:30.269995] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:21.052 [2024-04-24 15:18:30.270008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.052 [2024-04-24 15:18:30.270015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.052 [2024-04-24 15:18:30.270022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.052 [2024-04-24 15:18:30.270028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.052 [2024-04-24 15:18:30.270037] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.052 [2024-04-24 15:18:30.270041] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.052 [2024-04-24 15:18:30.270045] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.052 [2024-04-24 15:18:30.270053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.052 [2024-04-24 15:18:30.270075] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.052 [2024-04-24 15:18:30.270150] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.052 [2024-04-24 15:18:30.270157] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.052 [2024-04-24 15:18:30.270161] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270165] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.053 [2024-04-24 15:18:30.270173] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270178] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270182] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.053 [2024-04-24 15:18:30.270189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.053 [2024-04-24 15:18:30.270210] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.053 [2024-04-24 15:18:30.270308] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.053 [2024-04-24 15:18:30.270324] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.053 [2024-04-24 15:18:30.270328] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270333] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.053 [2024-04-24 15:18:30.270339] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:21.053 [2024-04-24 15:18:30.270344] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:21.053 [2024-04-24 15:18:30.270355] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270359] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270363] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.053 [2024-04-24 15:18:30.270371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.053 [2024-04-24 
15:18:30.270390] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.053 [2024-04-24 15:18:30.270478] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.053 [2024-04-24 15:18:30.270487] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.053 [2024-04-24 15:18:30.270491] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270495] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.053 [2024-04-24 15:18:30.270508] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270513] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270517] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.053 [2024-04-24 15:18:30.270525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.053 [2024-04-24 15:18:30.270545] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.053 [2024-04-24 15:18:30.270621] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.053 [2024-04-24 15:18:30.270628] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.053 [2024-04-24 15:18:30.270632] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270636] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.053 [2024-04-24 15:18:30.270647] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270652] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270656] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.053 [2024-04-24 15:18:30.270663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.053 [2024-04-24 15:18:30.270681] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.053 [2024-04-24 15:18:30.270751] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.053 [2024-04-24 15:18:30.270758] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.053 [2024-04-24 15:18:30.270762] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270766] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.053 [2024-04-24 15:18:30.270780] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270785] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270788] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.053 [2024-04-24 15:18:30.270796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.053 [2024-04-24 15:18:30.270813] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.053 [2024-04-24 15:18:30.270909] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:14:21.053 [2024-04-24 15:18:30.270923] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.053 [2024-04-24 15:18:30.270927] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270932] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.053 [2024-04-24 15:18:30.270943] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270948] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.270952] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.053 [2024-04-24 15:18:30.270959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.053 [2024-04-24 15:18:30.270978] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.053 [2024-04-24 15:18:30.271050] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.053 [2024-04-24 15:18:30.271056] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.053 [2024-04-24 15:18:30.271060] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.271064] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.053 [2024-04-24 15:18:30.271075] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.271079] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.271083] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.053 [2024-04-24 15:18:30.271090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.053 [2024-04-24 15:18:30.271108] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.053 [2024-04-24 15:18:30.271185] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.053 [2024-04-24 15:18:30.271198] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.053 [2024-04-24 15:18:30.271203] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.271207] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.053 [2024-04-24 15:18:30.271219] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.271223] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.271227] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.053 [2024-04-24 15:18:30.271235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.053 [2024-04-24 15:18:30.271254] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.053 [2024-04-24 15:18:30.271326] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.053 [2024-04-24 15:18:30.271333] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.053 [2024-04-24 15:18:30.271336] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.271340] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.053 [2024-04-24 15:18:30.271351] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.271356] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.271360] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.053 [2024-04-24 15:18:30.271367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.053 [2024-04-24 15:18:30.271384] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.053 [2024-04-24 15:18:30.271488] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.053 [2024-04-24 15:18:30.271502] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.053 [2024-04-24 15:18:30.271507] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.271511] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.053 [2024-04-24 15:18:30.271523] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.271528] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.053 [2024-04-24 15:18:30.271532] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.054 [2024-04-24 15:18:30.271540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.054 [2024-04-24 15:18:30.271560] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.054 [2024-04-24 15:18:30.271626] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.054 [2024-04-24 15:18:30.271637] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.054 [2024-04-24 15:18:30.271641] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.271646] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.054 [2024-04-24 15:18:30.271658] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.271663] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.271667] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.054 [2024-04-24 15:18:30.271674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.054 [2024-04-24 15:18:30.271693] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.054 [2024-04-24 15:18:30.271754] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.054 [2024-04-24 15:18:30.271760] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.054 [2024-04-24 15:18:30.271764] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.271768] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on 
tqpair=0x2487300 00:14:21.054 [2024-04-24 15:18:30.271780] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.271784] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.271788] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.054 [2024-04-24 15:18:30.271811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.054 [2024-04-24 15:18:30.271828] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.054 [2024-04-24 15:18:30.271895] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.054 [2024-04-24 15:18:30.271901] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.054 [2024-04-24 15:18:30.271905] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.271909] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.054 [2024-04-24 15:18:30.271920] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.271924] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.271928] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.054 [2024-04-24 15:18:30.271935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.054 [2024-04-24 15:18:30.271953] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.054 [2024-04-24 15:18:30.272014] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.054 [2024-04-24 15:18:30.272020] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.054 [2024-04-24 15:18:30.272024] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272028] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.054 [2024-04-24 15:18:30.272039] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272043] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272047] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.054 [2024-04-24 15:18:30.272054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.054 [2024-04-24 15:18:30.272072] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.054 [2024-04-24 15:18:30.272137] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.054 [2024-04-24 15:18:30.272144] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.054 [2024-04-24 15:18:30.272147] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272151] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.054 [2024-04-24 15:18:30.272162] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272167] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272171] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.054 [2024-04-24 15:18:30.272178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.054 [2024-04-24 15:18:30.272195] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.054 [2024-04-24 15:18:30.272271] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.054 [2024-04-24 15:18:30.272318] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.054 [2024-04-24 15:18:30.272324] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272328] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.054 [2024-04-24 15:18:30.272342] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272347] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272351] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.054 [2024-04-24 15:18:30.272359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.054 [2024-04-24 15:18:30.272382] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.054 [2024-04-24 15:18:30.272468] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.054 [2024-04-24 15:18:30.272477] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.054 [2024-04-24 15:18:30.272480] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272484] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.054 [2024-04-24 15:18:30.272496] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272501] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272505] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.054 [2024-04-24 15:18:30.272513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.054 [2024-04-24 15:18:30.272534] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.054 [2024-04-24 15:18:30.272614] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.054 [2024-04-24 15:18:30.272621] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.054 [2024-04-24 15:18:30.272624] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272628] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.054 [2024-04-24 15:18:30.272640] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272645] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272648] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 
00:14:21.054 [2024-04-24 15:18:30.272656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.054 [2024-04-24 15:18:30.272674] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.054 [2024-04-24 15:18:30.272754] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.054 [2024-04-24 15:18:30.272760] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.054 [2024-04-24 15:18:30.272764] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272783] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.054 [2024-04-24 15:18:30.272794] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272798] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272802] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.054 [2024-04-24 15:18:30.272809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.054 [2024-04-24 15:18:30.272827] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.054 [2024-04-24 15:18:30.272897] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.054 [2024-04-24 15:18:30.272904] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.054 [2024-04-24 15:18:30.272908] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272912] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.054 [2024-04-24 15:18:30.272923] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272928] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.272931] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.054 [2024-04-24 15:18:30.272939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.054 [2024-04-24 15:18:30.272956] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.054 [2024-04-24 15:18:30.273022] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.054 [2024-04-24 15:18:30.273033] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.054 [2024-04-24 15:18:30.273037] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.273041] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.054 [2024-04-24 15:18:30.273053] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.273057] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.273061] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.054 [2024-04-24 15:18:30.273068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.054 
[2024-04-24 15:18:30.273087] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.054 [2024-04-24 15:18:30.273159] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.054 [2024-04-24 15:18:30.273170] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.054 [2024-04-24 15:18:30.273174] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.054 [2024-04-24 15:18:30.273178] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.054 [2024-04-24 15:18:30.273190] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.055 [2024-04-24 15:18:30.273194] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.055 [2024-04-24 15:18:30.273198] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.055 [2024-04-24 15:18:30.273205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.055 [2024-04-24 15:18:30.273223] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.055 [2024-04-24 15:18:30.273298] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.055 [2024-04-24 15:18:30.273308] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.055 [2024-04-24 15:18:30.273313] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.055 [2024-04-24 15:18:30.273317] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.055 [2024-04-24 15:18:30.273328] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.055 [2024-04-24 15:18:30.273333] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.055 [2024-04-24 15:18:30.273336] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.055 [2024-04-24 15:18:30.273344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.055 [2024-04-24 15:18:30.273362] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.055 [2024-04-24 15:18:30.277468] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.055 [2024-04-24 15:18:30.277488] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.055 [2024-04-24 15:18:30.277492] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.055 [2024-04-24 15:18:30.277497] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.055 [2024-04-24 15:18:30.277513] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:21.055 [2024-04-24 15:18:30.277519] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:21.055 [2024-04-24 15:18:30.277523] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2487300) 00:14:21.055 [2024-04-24 15:18:30.277531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.055 [2024-04-24 15:18:30.277557] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24cfde0, cid 3, qid 0 00:14:21.055 [2024-04-24 15:18:30.277634] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:21.055 [2024-04-24 15:18:30.277641] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:21.055 [2024-04-24 15:18:30.277645] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:21.055 [2024-04-24 15:18:30.277649] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24cfde0) on tqpair=0x2487300 00:14:21.055 [2024-04-24 15:18:30.277658] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:14:21.330 0 Kelvin (-273 Celsius) 00:14:21.330 Available Spare: 0% 00:14:21.330 Available Spare Threshold: 0% 00:14:21.330 Life Percentage Used: 0% 00:14:21.330 Data Units Read: 0 00:14:21.330 Data Units Written: 0 00:14:21.330 Host Read Commands: 0 00:14:21.330 Host Write Commands: 0 00:14:21.330 Controller Busy Time: 0 minutes 00:14:21.330 Power Cycles: 0 00:14:21.330 Power On Hours: 0 hours 00:14:21.330 Unsafe Shutdowns: 0 00:14:21.330 Unrecoverable Media Errors: 0 00:14:21.330 Lifetime Error Log Entries: 0 00:14:21.330 Warning Temperature Time: 0 minutes 00:14:21.330 Critical Temperature Time: 0 minutes 00:14:21.330 00:14:21.330 Number of Queues 00:14:21.330 ================ 00:14:21.330 Number of I/O Submission Queues: 127 00:14:21.330 Number of I/O Completion Queues: 127 00:14:21.330 00:14:21.330 Active Namespaces 00:14:21.330 ================= 00:14:21.330 Namespace ID:1 00:14:21.330 Error Recovery Timeout: Unlimited 00:14:21.330 Command Set Identifier: NVM (00h) 00:14:21.330 Deallocate: Supported 00:14:21.330 Deallocated/Unwritten Error: Not Supported 00:14:21.330 Deallocated Read Value: Unknown 00:14:21.330 Deallocate in Write Zeroes: Not Supported 00:14:21.330 Deallocated Guard Field: 0xFFFF 00:14:21.330 Flush: Supported 00:14:21.330 Reservation: Supported 00:14:21.330 Namespace Sharing Capabilities: Multiple Controllers 00:14:21.330 Size (in LBAs): 131072 (0GiB) 00:14:21.330 Capacity (in LBAs): 131072 (0GiB) 00:14:21.330 Utilization (in LBAs): 131072 (0GiB) 00:14:21.330 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:21.330 EUI64: ABCDEF0123456789 00:14:21.330 UUID: 48580ffc-c6a1-4a70-a18f-0795d121f3d9 00:14:21.330 Thin Provisioning: Not Supported 00:14:21.330 Per-NS Atomic Units: Yes 00:14:21.330 Atomic Boundary Size (Normal): 0 00:14:21.330 Atomic Boundary Size (PFail): 0 00:14:21.330 Atomic Boundary Offset: 0 00:14:21.330 Maximum Single Source Range Length: 65535 00:14:21.330 Maximum Copy Length: 65535 00:14:21.330 Maximum Source Range Count: 1 00:14:21.330 NGUID/EUI64 Never Reused: No 00:14:21.330 Namespace Write Protected: No 00:14:21.330 Number of LBA Formats: 1 00:14:21.330 Current LBA Format: LBA Format #00 00:14:21.330 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:21.330 00:14:21.330 15:18:30 -- host/identify.sh@51 -- # sync 00:14:21.330 15:18:30 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.330 15:18:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:21.330 15:18:30 -- common/autotest_common.sh@10 -- # set +x 00:14:21.330 15:18:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:21.330 15:18:30 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:21.330 15:18:30 -- host/identify.sh@56 -- # nvmftestfini 00:14:21.331 15:18:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:21.331 15:18:30 -- nvmf/common.sh@117 -- # sync 00:14:21.331 15:18:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:21.331 15:18:30 -- 
nvmf/common.sh@120 -- # set +e 00:14:21.331 15:18:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.331 15:18:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:21.331 rmmod nvme_tcp 00:14:21.331 rmmod nvme_fabrics 00:14:21.331 rmmod nvme_keyring 00:14:21.331 15:18:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.331 15:18:30 -- nvmf/common.sh@124 -- # set -e 00:14:21.331 15:18:30 -- nvmf/common.sh@125 -- # return 0 00:14:21.331 15:18:30 -- nvmf/common.sh@478 -- # '[' -n 71867 ']' 00:14:21.331 15:18:30 -- nvmf/common.sh@479 -- # killprocess 71867 00:14:21.331 15:18:30 -- common/autotest_common.sh@936 -- # '[' -z 71867 ']' 00:14:21.331 15:18:30 -- common/autotest_common.sh@940 -- # kill -0 71867 00:14:21.331 15:18:30 -- common/autotest_common.sh@941 -- # uname 00:14:21.331 15:18:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:21.331 15:18:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71867 00:14:21.331 15:18:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:21.331 15:18:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:21.331 15:18:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71867' 00:14:21.331 killing process with pid 71867 00:14:21.331 15:18:30 -- common/autotest_common.sh@955 -- # kill 71867 00:14:21.331 [2024-04-24 15:18:30.418831] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:21.331 15:18:30 -- common/autotest_common.sh@960 -- # wait 71867 00:14:21.616 15:18:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:21.616 15:18:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:21.616 15:18:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:21.616 15:18:30 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.616 15:18:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:21.616 15:18:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.616 15:18:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.616 15:18:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.616 15:18:30 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:21.616 00:14:21.616 real 0m2.529s 00:14:21.616 user 0m6.838s 00:14:21.616 sys 0m0.633s 00:14:21.616 15:18:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:21.616 15:18:30 -- common/autotest_common.sh@10 -- # set +x 00:14:21.616 ************************************ 00:14:21.616 END TEST nvmf_identify 00:14:21.616 ************************************ 00:14:21.616 15:18:30 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:21.616 15:18:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:21.616 15:18:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:21.616 15:18:30 -- common/autotest_common.sh@10 -- # set +x 00:14:21.616 ************************************ 00:14:21.616 START TEST nvmf_perf 00:14:21.616 ************************************ 00:14:21.616 15:18:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:21.875 * Looking for test storage... 
00:14:21.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:21.875 15:18:30 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:21.875 15:18:30 -- nvmf/common.sh@7 -- # uname -s 00:14:21.875 15:18:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.875 15:18:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.875 15:18:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.875 15:18:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.875 15:18:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.876 15:18:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.876 15:18:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.876 15:18:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.876 15:18:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.876 15:18:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.876 15:18:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:14:21.876 15:18:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:14:21.876 15:18:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.876 15:18:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.876 15:18:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:21.876 15:18:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.876 15:18:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:21.876 15:18:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.876 15:18:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.876 15:18:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.876 15:18:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.876 15:18:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.876 15:18:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.876 15:18:30 -- paths/export.sh@5 -- # export PATH 00:14:21.876 15:18:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.876 15:18:30 -- nvmf/common.sh@47 -- # : 0 00:14:21.876 15:18:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:21.876 15:18:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:21.876 15:18:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.876 15:18:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.876 15:18:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.876 15:18:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:21.876 15:18:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:21.876 15:18:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:21.876 15:18:30 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:21.876 15:18:30 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:21.876 15:18:30 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:21.876 15:18:30 -- host/perf.sh@17 -- # nvmftestinit 00:14:21.876 15:18:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:21.876 15:18:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.876 15:18:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:21.876 15:18:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:21.876 15:18:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:21.876 15:18:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.876 15:18:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.876 15:18:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.876 15:18:30 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:21.876 15:18:30 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:21.876 15:18:30 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:21.876 15:18:30 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:21.876 15:18:30 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:21.876 15:18:30 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:21.876 15:18:30 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.876 15:18:30 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.876 15:18:30 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:21.876 15:18:30 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:21.876 15:18:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:21.876 15:18:30 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:21.876 15:18:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:21.876 15:18:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.876 15:18:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:21.876 15:18:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:21.876 15:18:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:21.876 15:18:30 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:21.876 15:18:30 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:21.876 15:18:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:21.876 Cannot find device "nvmf_tgt_br" 00:14:21.876 15:18:30 -- nvmf/common.sh@155 -- # true 00:14:21.876 15:18:30 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:21.876 Cannot find device "nvmf_tgt_br2" 00:14:21.876 15:18:31 -- nvmf/common.sh@156 -- # true 00:14:21.876 15:18:31 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:21.876 15:18:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:21.876 Cannot find device "nvmf_tgt_br" 00:14:21.876 15:18:31 -- nvmf/common.sh@158 -- # true 00:14:21.876 15:18:31 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:21.876 Cannot find device "nvmf_tgt_br2" 00:14:21.876 15:18:31 -- nvmf/common.sh@159 -- # true 00:14:21.876 15:18:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:21.876 15:18:31 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:21.876 15:18:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:21.876 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:21.876 15:18:31 -- nvmf/common.sh@162 -- # true 00:14:21.876 15:18:31 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:21.876 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:21.876 15:18:31 -- nvmf/common.sh@163 -- # true 00:14:21.876 15:18:31 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:21.876 15:18:31 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:21.876 15:18:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:22.135 15:18:31 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:22.135 15:18:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:22.135 15:18:31 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:22.135 15:18:31 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:22.135 15:18:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:22.135 15:18:31 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:22.135 15:18:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:22.135 15:18:31 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:22.135 15:18:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:22.135 15:18:31 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:22.135 15:18:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:22.135 15:18:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:14:22.135 15:18:31 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:22.135 15:18:31 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:22.135 15:18:31 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:22.135 15:18:31 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:22.135 15:18:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:22.135 15:18:31 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:22.135 15:18:31 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:22.135 15:18:31 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:22.135 15:18:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:22.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:22.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:14:22.135 00:14:22.135 --- 10.0.0.2 ping statistics --- 00:14:22.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.135 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:14:22.135 15:18:31 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:22.135 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:22.135 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:14:22.135 00:14:22.135 --- 10.0.0.3 ping statistics --- 00:14:22.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.135 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:22.135 15:18:31 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:22.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:22.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:22.135 00:14:22.135 --- 10.0.0.1 ping statistics --- 00:14:22.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.135 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:22.135 15:18:31 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:22.135 15:18:31 -- nvmf/common.sh@422 -- # return 0 00:14:22.135 15:18:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:22.135 15:18:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:22.135 15:18:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:22.135 15:18:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:22.135 15:18:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:22.135 15:18:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:22.135 15:18:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:22.135 15:18:31 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:22.135 15:18:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:22.135 15:18:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:22.135 15:18:31 -- common/autotest_common.sh@10 -- # set +x 00:14:22.135 15:18:31 -- nvmf/common.sh@470 -- # nvmfpid=72078 00:14:22.135 15:18:31 -- nvmf/common.sh@471 -- # waitforlisten 72078 00:14:22.135 15:18:31 -- common/autotest_common.sh@817 -- # '[' -z 72078 ']' 00:14:22.135 15:18:31 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:22.135 15:18:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.135 15:18:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:22.135 15:18:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:22.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.135 15:18:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:22.135 15:18:31 -- common/autotest_common.sh@10 -- # set +x 00:14:22.135 [2024-04-24 15:18:31.370972] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:14:22.135 [2024-04-24 15:18:31.371109] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.394 [2024-04-24 15:18:31.509416] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:22.394 [2024-04-24 15:18:31.636309] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.394 [2024-04-24 15:18:31.636361] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.394 [2024-04-24 15:18:31.636372] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.394 [2024-04-24 15:18:31.636388] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.394 [2024-04-24 15:18:31.636395] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.394 [2024-04-24 15:18:31.636564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.394 [2024-04-24 15:18:31.636765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.394 [2024-04-24 15:18:31.637311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:22.394 [2024-04-24 15:18:31.637319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.330 15:18:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:23.330 15:18:32 -- common/autotest_common.sh@850 -- # return 0 00:14:23.330 15:18:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:23.330 15:18:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:23.330 15:18:32 -- common/autotest_common.sh@10 -- # set +x 00:14:23.330 15:18:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.330 15:18:32 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:23.330 15:18:32 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:23.603 15:18:32 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:23.603 15:18:32 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:24.176 15:18:33 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:24.176 15:18:33 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:24.176 15:18:33 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:24.176 15:18:33 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:14:24.176 15:18:33 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:24.176 15:18:33 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:24.176 15:18:33 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:24.434 [2024-04-24 15:18:33.622058] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.434 15:18:33 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:24.693 15:18:33 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:24.693 15:18:33 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:24.955 15:18:34 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:24.955 15:18:34 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:25.214 15:18:34 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.473 [2024-04-24 15:18:34.672465] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.473 15:18:34 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:25.732 15:18:34 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:25.732 15:18:34 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:25.732 15:18:34 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:25.732 15:18:34 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:27.119 Initializing NVMe Controllers 00:14:27.119 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:27.119 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:27.119 Initialization complete. Launching workers. 00:14:27.119 ======================================================== 00:14:27.119 Latency(us) 00:14:27.119 Device Information : IOPS MiB/s Average min max 00:14:27.119 PCIE (0000:00:10.0) NSID 1 from core 0: 24288.00 94.88 1316.68 316.83 6739.33 00:14:27.119 ======================================================== 00:14:27.119 Total : 24288.00 94.88 1316.68 316.83 6739.33 00:14:27.119 00:14:27.119 15:18:36 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:28.497 Initializing NVMe Controllers 00:14:28.497 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:28.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:28.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:28.497 Initialization complete. Launching workers. 
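The target bring-up captured above reduces to a handful of rpc.py calls against the nvmf_tgt that was started earlier. A minimal sketch, assuming the same repository layout, a target already listening on the default /var/tmp/spdk.sock, and the addresses and NQN used in this run ($rpc is shell shorthand introduced here only; the Nvme0n1 bdev exists only when a local NVMe device was picked up by gen_nvme.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # transport first, then the subsystem, its namespaces, and the TCP listeners
  $rpc nvmf_create_transport -t tcp -o
  $rpc bdev_malloc_create 64 512                                              # creates Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420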
00:14:28.497 ======================================================== 00:14:28.497 Latency(us) 00:14:28.497 Device Information : IOPS MiB/s Average min max 00:14:28.497 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3373.87 13.18 296.13 106.86 4359.18 00:14:28.497 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.99 0.49 7999.31 6041.98 12009.08 00:14:28.497 ======================================================== 00:14:28.497 Total : 3499.86 13.67 573.44 106.86 12009.08 00:14:28.497 00:14:28.497 15:18:37 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:29.989 Initializing NVMe Controllers 00:14:29.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:29.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:29.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:29.989 Initialization complete. Launching workers. 00:14:29.989 ======================================================== 00:14:29.989 Latency(us) 00:14:29.989 Device Information : IOPS MiB/s Average min max 00:14:29.989 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8451.23 33.01 3786.75 525.43 11042.38 00:14:29.989 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3944.51 15.41 8124.25 5340.24 16925.93 00:14:29.989 ======================================================== 00:14:29.989 Total : 12395.73 48.42 5167.01 525.43 16925.93 00:14:29.989 00:14:29.989 15:18:38 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:29.989 15:18:38 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:32.519 Initializing NVMe Controllers 00:14:32.519 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:32.519 Controller IO queue size 128, less than required. 00:14:32.519 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:32.519 Controller IO queue size 128, less than required. 00:14:32.519 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:32.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:32.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:32.519 Initialization complete. Launching workers. 
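Each fabric run above is a single spdk_nvme_perf invocation; only queue depth (-q), I/O size (-o), workload mix (-w/-M), duration (-t) and extra flags change between runs, and the TCP target is selected entirely by the -r transport ID string. A hand-run equivalent of the 4 KiB 50/50 random read/write case, sketched on the assumption that the target set up above is still listening on 10.0.0.2:4420:

  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  # -q queue depth, -o I/O size in bytes, -w workload, -M read percentage, -t runtime in seconds
  $perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'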
00:14:32.519 ======================================================== 00:14:32.519 Latency(us) 00:14:32.519 Device Information : IOPS MiB/s Average min max 00:14:32.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1679.11 419.78 77517.73 57413.39 160510.87 00:14:32.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 662.76 165.69 205545.24 84374.78 341293.77 00:14:32.519 ======================================================== 00:14:32.519 Total : 2341.87 585.47 113749.87 57413.39 341293.77 00:14:32.519 00:14:32.519 15:18:41 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:32.519 No valid NVMe controllers or AIO or URING devices found 00:14:32.519 Initializing NVMe Controllers 00:14:32.519 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:32.519 Controller IO queue size 128, less than required. 00:14:32.519 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:32.519 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:32.519 Controller IO queue size 128, less than required. 00:14:32.519 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:32.519 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:32.519 WARNING: Some requested NVMe devices were skipped 00:14:32.519 15:18:41 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:35.065 Initializing NVMe Controllers 00:14:35.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:35.065 Controller IO queue size 128, less than required. 00:14:35.065 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:35.065 Controller IO queue size 128, less than required. 00:14:35.065 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:35.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:35.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:35.065 Initialization complete. Launching workers. 
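The "No valid NVMe controllers" result from the -o 36964 run above is expected rather than a failure; the requested I/O size divides into neither namespace's sector size:

  36964 = 72 * 512  + 100   (not a multiple of the 512-byte sectors of nsid 1)
  36964 =  9 * 4096 + 100   (not a multiple of the 4096-byte sectors of nsid 2)

Both namespaces are therefore removed from the test and no controllers remain to exercise.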
00:14:35.065 00:14:35.065 ==================== 00:14:35.065 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:35.065 TCP transport: 00:14:35.065 polls: 7096 00:14:35.065 idle_polls: 0 00:14:35.065 sock_completions: 7096 00:14:35.065 nvme_completions: 6593 00:14:35.065 submitted_requests: 10006 00:14:35.065 queued_requests: 1 00:14:35.065 00:14:35.065 ==================== 00:14:35.065 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:35.065 TCP transport: 00:14:35.065 polls: 7622 00:14:35.065 idle_polls: 0 00:14:35.065 sock_completions: 7622 00:14:35.065 nvme_completions: 6287 00:14:35.065 submitted_requests: 9352 00:14:35.065 queued_requests: 1 00:14:35.065 ======================================================== 00:14:35.065 Latency(us) 00:14:35.065 Device Information : IOPS MiB/s Average min max 00:14:35.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1647.89 411.97 79383.68 48621.82 136273.50 00:14:35.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1571.39 392.85 81392.06 37474.69 136256.66 00:14:35.065 ======================================================== 00:14:35.065 Total : 3219.28 804.82 80364.01 37474.69 136273.50 00:14:35.065 00:14:35.065 15:18:44 -- host/perf.sh@66 -- # sync 00:14:35.065 15:18:44 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:35.324 15:18:44 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:35.324 15:18:44 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:35.324 15:18:44 -- host/perf.sh@114 -- # nvmftestfini 00:14:35.324 15:18:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:35.324 15:18:44 -- nvmf/common.sh@117 -- # sync 00:14:35.324 15:18:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:35.324 15:18:44 -- nvmf/common.sh@120 -- # set +e 00:14:35.324 15:18:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:35.324 15:18:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:35.324 rmmod nvme_tcp 00:14:35.324 rmmod nvme_fabrics 00:14:35.324 rmmod nvme_keyring 00:14:35.324 15:18:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:35.324 15:18:44 -- nvmf/common.sh@124 -- # set -e 00:14:35.324 15:18:44 -- nvmf/common.sh@125 -- # return 0 00:14:35.324 15:18:44 -- nvmf/common.sh@478 -- # '[' -n 72078 ']' 00:14:35.324 15:18:44 -- nvmf/common.sh@479 -- # killprocess 72078 00:14:35.324 15:18:44 -- common/autotest_common.sh@936 -- # '[' -z 72078 ']' 00:14:35.324 15:18:44 -- common/autotest_common.sh@940 -- # kill -0 72078 00:14:35.324 15:18:44 -- common/autotest_common.sh@941 -- # uname 00:14:35.324 15:18:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:35.324 15:18:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72078 00:14:35.324 killing process with pid 72078 00:14:35.324 15:18:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:35.324 15:18:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:35.324 15:18:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72078' 00:14:35.324 15:18:44 -- common/autotest_common.sh@955 -- # kill 72078 00:14:35.324 15:18:44 -- common/autotest_common.sh@960 -- # wait 72078 00:14:36.259 15:18:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:36.259 15:18:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:36.259 15:18:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:36.259 15:18:45 -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:36.259 15:18:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:36.259 15:18:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.259 15:18:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.259 15:18:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.259 15:18:45 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:36.259 ************************************ 00:14:36.259 END TEST nvmf_perf 00:14:36.259 ************************************ 00:14:36.259 00:14:36.259 real 0m14.417s 00:14:36.259 user 0m52.583s 00:14:36.259 sys 0m3.962s 00:14:36.259 15:18:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:36.259 15:18:45 -- common/autotest_common.sh@10 -- # set +x 00:14:36.259 15:18:45 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:36.259 15:18:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:36.259 15:18:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:36.259 15:18:45 -- common/autotest_common.sh@10 -- # set +x 00:14:36.259 ************************************ 00:14:36.259 START TEST nvmf_fio_host 00:14:36.259 ************************************ 00:14:36.259 15:18:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:36.259 * Looking for test storage... 00:14:36.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:36.259 15:18:45 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:36.259 15:18:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.259 15:18:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.259 15:18:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.259 15:18:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.259 15:18:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.259 15:18:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.259 15:18:45 -- paths/export.sh@5 -- # export PATH 00:14:36.259 15:18:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.259 15:18:45 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:36.259 15:18:45 -- nvmf/common.sh@7 -- # uname -s 00:14:36.259 15:18:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.259 15:18:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.259 15:18:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.259 15:18:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.259 15:18:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.259 15:18:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.259 15:18:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.259 15:18:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.259 15:18:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.259 15:18:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.259 15:18:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:14:36.259 15:18:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:14:36.259 15:18:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.259 15:18:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.259 15:18:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:36.259 15:18:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:36.259 15:18:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:36.259 15:18:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.259 15:18:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.259 15:18:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.259 15:18:45 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.260 15:18:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.260 15:18:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.260 15:18:45 -- paths/export.sh@5 -- # export PATH 00:14:36.260 15:18:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.260 15:18:45 -- nvmf/common.sh@47 -- # : 0 00:14:36.260 15:18:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:36.260 15:18:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:36.260 15:18:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:36.260 15:18:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.260 15:18:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.260 15:18:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:36.260 15:18:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:36.260 15:18:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:36.260 15:18:45 -- host/fio.sh@12 -- # nvmftestinit 00:14:36.260 15:18:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:36.260 15:18:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.260 15:18:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:36.260 15:18:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:36.260 15:18:45 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:14:36.260 15:18:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.260 15:18:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.260 15:18:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.260 15:18:45 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:36.260 15:18:45 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:36.260 15:18:45 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:36.260 15:18:45 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:36.260 15:18:45 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:36.260 15:18:45 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:36.260 15:18:45 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.260 15:18:45 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.260 15:18:45 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:36.260 15:18:45 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:36.260 15:18:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:36.260 15:18:45 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:36.260 15:18:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:36.260 15:18:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.260 15:18:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:36.260 15:18:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:36.260 15:18:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:36.260 15:18:45 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:36.260 15:18:45 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:36.518 15:18:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:36.518 Cannot find device "nvmf_tgt_br" 00:14:36.518 15:18:45 -- nvmf/common.sh@155 -- # true 00:14:36.518 15:18:45 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:36.518 Cannot find device "nvmf_tgt_br2" 00:14:36.518 15:18:45 -- nvmf/common.sh@156 -- # true 00:14:36.518 15:18:45 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:36.518 15:18:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:36.518 Cannot find device "nvmf_tgt_br" 00:14:36.518 15:18:45 -- nvmf/common.sh@158 -- # true 00:14:36.518 15:18:45 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:36.518 Cannot find device "nvmf_tgt_br2" 00:14:36.518 15:18:45 -- nvmf/common.sh@159 -- # true 00:14:36.518 15:18:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:36.518 15:18:45 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:36.518 15:18:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:36.518 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:36.518 15:18:45 -- nvmf/common.sh@162 -- # true 00:14:36.518 15:18:45 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:36.518 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:36.518 15:18:45 -- nvmf/common.sh@163 -- # true 00:14:36.518 15:18:45 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:36.518 15:18:45 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:36.518 15:18:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 
00:14:36.518 15:18:45 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:36.518 15:18:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:36.518 15:18:45 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:36.519 15:18:45 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:36.519 15:18:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:36.519 15:18:45 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:36.519 15:18:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:36.519 15:18:45 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:36.519 15:18:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:36.519 15:18:45 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:36.519 15:18:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:36.519 15:18:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:36.519 15:18:45 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:36.519 15:18:45 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:36.519 15:18:45 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:36.519 15:18:45 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:36.519 15:18:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:36.777 15:18:45 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:36.777 15:18:45 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:36.777 15:18:45 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:36.777 15:18:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:36.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:14:36.777 00:14:36.777 --- 10.0.0.2 ping statistics --- 00:14:36.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.777 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:14:36.777 15:18:45 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:36.777 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:36.777 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:14:36.777 00:14:36.777 --- 10.0.0.3 ping statistics --- 00:14:36.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.777 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:36.777 15:18:45 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:36.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:36.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:36.777 00:14:36.777 --- 10.0.0.1 ping statistics --- 00:14:36.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.777 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:36.777 15:18:45 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.777 15:18:45 -- nvmf/common.sh@422 -- # return 0 00:14:36.777 15:18:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:36.777 15:18:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.777 15:18:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:36.777 15:18:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:36.777 15:18:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.777 15:18:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:36.777 15:18:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:36.777 15:18:45 -- host/fio.sh@14 -- # [[ y != y ]] 00:14:36.777 15:18:45 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:14:36.777 15:18:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:36.777 15:18:45 -- common/autotest_common.sh@10 -- # set +x 00:14:36.777 15:18:45 -- host/fio.sh@22 -- # nvmfpid=72486 00:14:36.777 15:18:45 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:36.777 15:18:45 -- host/fio.sh@26 -- # waitforlisten 72486 00:14:36.777 15:18:45 -- common/autotest_common.sh@817 -- # '[' -z 72486 ']' 00:14:36.777 15:18:45 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:36.777 15:18:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.777 15:18:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:36.777 15:18:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.777 15:18:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:36.777 15:18:45 -- common/autotest_common.sh@10 -- # set +x 00:14:36.777 [2024-04-24 15:18:45.881930] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:14:36.777 [2024-04-24 15:18:45.882057] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.777 [2024-04-24 15:18:46.020099] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:37.035 [2024-04-24 15:18:46.136667] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.035 [2024-04-24 15:18:46.136726] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.035 [2024-04-24 15:18:46.136746] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.035 [2024-04-24 15:18:46.136755] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.035 [2024-04-24 15:18:46.136762] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
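The trace above is nvmf_veth_init building the test network before the target starts: the target-side interfaces live in a network namespace (nvmf_tgt_ns_spdk, addresses 10.0.0.2 and 10.0.0.3), the initiator stays in the root namespace on 10.0.0.1, the root-side veth peers are joined by the nvmf_br bridge, and connectivity is verified with single pings. A minimal sketch of that topology, reconstructed from the ip/iptables commands visible in the log (stale-device cleanup and error handling omitted):

```bash
#!/usr/bin/env bash
# Sketch of the veth/netns topology nvmf_veth_init builds, per the trace above.
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: one end stays in the root namespace, the peer gets bridged.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target-side ends into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator at 10.0.0.1, target listeners at 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the root-namespace ends together and open TCP/4420 on the initiator side.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity check, mirroring the pings in the log.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
```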
00:14:37.035 [2024-04-24 15:18:46.136914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.035 [2024-04-24 15:18:46.137514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.035 [2024-04-24 15:18:46.137595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:37.035 [2024-04-24 15:18:46.137599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.027 15:18:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:38.027 15:18:46 -- common/autotest_common.sh@850 -- # return 0 00:14:38.027 15:18:46 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:38.027 15:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.027 15:18:46 -- common/autotest_common.sh@10 -- # set +x 00:14:38.027 [2024-04-24 15:18:46.889734] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.027 15:18:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.027 15:18:46 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:14:38.027 15:18:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:38.027 15:18:46 -- common/autotest_common.sh@10 -- # set +x 00:14:38.027 15:18:46 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:38.027 15:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.028 15:18:46 -- common/autotest_common.sh@10 -- # set +x 00:14:38.028 Malloc1 00:14:38.028 15:18:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.028 15:18:46 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:38.028 15:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.028 15:18:46 -- common/autotest_common.sh@10 -- # set +x 00:14:38.028 15:18:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.028 15:18:46 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:38.028 15:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.028 15:18:46 -- common/autotest_common.sh@10 -- # set +x 00:14:38.028 15:18:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.028 15:18:46 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.028 15:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.028 15:18:46 -- common/autotest_common.sh@10 -- # set +x 00:14:38.028 [2024-04-24 15:18:46.996305] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.028 15:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.028 15:18:47 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:38.028 15:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.028 15:18:47 -- common/autotest_common.sh@10 -- # set +x 00:14:38.028 15:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.028 15:18:47 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:38.028 15:18:47 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:38.028 15:18:47 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
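From here host/fio.sh provisions the target entirely over JSON-RPC and then points fio's SPDK NVMe ioengine at the exported namespace; the --filename string carries the transport address instead of a device path. A condensed sketch of that sequence, using the calls shown in the trace (rpc.py stands in for the full scripts/rpc.py path, and the repo paths are abbreviated):

```bash
# Target-side provisioning over JSON-RPC, mirroring the rpc_cmd calls in the log.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Initiator side: fio with the SPDK NVMe plugin preloaded. The --filename string
# encodes the NVMe-oF connection parameters rather than naming a block device.
LD_PRELOAD=/path/to/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /path/to/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096
```

The second fio pass, driven from mock_sgl_config.fio later in the trace, reuses the same connection string and only swaps the job file.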
00:14:38.028 15:18:47 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:14:38.028 15:18:47 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:38.028 15:18:47 -- common/autotest_common.sh@1325 -- # local sanitizers 00:14:38.028 15:18:47 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:38.028 15:18:47 -- common/autotest_common.sh@1327 -- # shift 00:14:38.028 15:18:47 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:14:38.028 15:18:47 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:14:38.028 15:18:47 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:38.028 15:18:47 -- common/autotest_common.sh@1331 -- # grep libasan 00:14:38.028 15:18:47 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:14:38.028 15:18:47 -- common/autotest_common.sh@1331 -- # asan_lib= 00:14:38.028 15:18:47 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:14:38.028 15:18:47 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:14:38.028 15:18:47 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:38.028 15:18:47 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:14:38.028 15:18:47 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:14:38.028 15:18:47 -- common/autotest_common.sh@1331 -- # asan_lib= 00:14:38.028 15:18:47 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:14:38.028 15:18:47 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:38.028 15:18:47 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:38.028 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:38.028 fio-3.35 00:14:38.028 Starting 1 thread 00:14:40.569 00:14:40.569 test: (groupid=0, jobs=1): err= 0: pid=72552: Wed Apr 24 15:18:49 2024 00:14:40.569 read: IOPS=8735, BW=34.1MiB/s (35.8MB/s)(68.5MiB/2007msec) 00:14:40.569 slat (usec): min=2, max=286, avg= 2.49, stdev= 2.85 00:14:40.569 clat (usec): min=2296, max=13389, avg=7622.48, stdev=522.36 00:14:40.569 lat (usec): min=2332, max=13392, avg=7624.97, stdev=522.07 00:14:40.569 clat percentiles (usec): 00:14:40.569 | 1.00th=[ 6521], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7242], 00:14:40.569 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7701], 00:14:40.569 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8225], 95.00th=[ 8455], 00:14:40.569 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[11994], 99.95th=[12780], 00:14:40.569 | 99.99th=[13304] 00:14:40.569 bw ( KiB/s): min=34003, max=35384, per=99.93%, avg=34918.75, stdev=621.75, samples=4 00:14:40.569 iops : min= 8500, max= 8846, avg=8729.50, stdev=155.81, samples=4 00:14:40.569 write: IOPS=8732, BW=34.1MiB/s (35.8MB/s)(68.5MiB/2007msec); 0 zone resets 00:14:40.569 slat (usec): min=2, max=248, avg= 2.60, stdev= 2.08 00:14:40.569 clat (usec): min=2148, max=12751, avg=6961.05, stdev=477.95 00:14:40.569 lat (usec): min=2160, max=12753, avg=6963.66, stdev=477.79 00:14:40.569 clat percentiles (usec): 00:14:40.569 | 1.00th=[ 5932], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6652], 00:14:40.569 | 30.00th=[ 6783], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7046], 00:14:40.569 | 70.00th=[ 
7177], 80.00th=[ 7308], 90.00th=[ 7504], 95.00th=[ 7635], 00:14:40.569 | 99.00th=[ 8029], 99.50th=[ 8356], 99.90th=[11076], 99.95th=[11863], 00:14:40.569 | 99.99th=[12780] 00:14:40.569 bw ( KiB/s): min=34512, max=35336, per=99.94%, avg=34912.50, stdev=338.48, samples=4 00:14:40.569 iops : min= 8628, max= 8834, avg=8728.00, stdev=84.65, samples=4 00:14:40.569 lat (msec) : 4=0.12%, 10=99.69%, 20=0.20% 00:14:40.569 cpu : usr=69.04%, sys=22.58%, ctx=7, majf=0, minf=6 00:14:40.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:40.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:40.569 issued rwts: total=17533,17527,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.569 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:40.569 00:14:40.569 Run status group 0 (all jobs): 00:14:40.569 READ: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.5MiB (71.8MB), run=2007-2007msec 00:14:40.569 WRITE: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.5MiB (71.8MB), run=2007-2007msec 00:14:40.569 15:18:49 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:40.569 15:18:49 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:40.569 15:18:49 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:14:40.569 15:18:49 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:40.569 15:18:49 -- common/autotest_common.sh@1325 -- # local sanitizers 00:14:40.569 15:18:49 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:40.569 15:18:49 -- common/autotest_common.sh@1327 -- # shift 00:14:40.569 15:18:49 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:14:40.569 15:18:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:14:40.569 15:18:49 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:40.569 15:18:49 -- common/autotest_common.sh@1331 -- # grep libasan 00:14:40.569 15:18:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:14:40.569 15:18:49 -- common/autotest_common.sh@1331 -- # asan_lib= 00:14:40.569 15:18:49 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:14:40.569 15:18:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:14:40.569 15:18:49 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:40.569 15:18:49 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:14:40.569 15:18:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:14:40.569 15:18:49 -- common/autotest_common.sh@1331 -- # asan_lib= 00:14:40.569 15:18:49 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:14:40.569 15:18:49 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:40.569 15:18:49 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:40.569 test: (g=0): rw=randrw, 
bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:40.569 fio-3.35 00:14:40.569 Starting 1 thread 00:14:43.099 00:14:43.099 test: (groupid=0, jobs=1): err= 0: pid=72595: Wed Apr 24 15:18:51 2024 00:14:43.099 read: IOPS=8154, BW=127MiB/s (134MB/s)(255MiB/2005msec) 00:14:43.099 slat (usec): min=3, max=126, avg= 3.74, stdev= 1.76 00:14:43.099 clat (usec): min=2179, max=16536, avg=8662.83, stdev=2426.97 00:14:43.099 lat (usec): min=2182, max=16540, avg=8666.57, stdev=2427.04 00:14:43.100 clat percentiles (usec): 00:14:43.100 | 1.00th=[ 4113], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6456], 00:14:43.100 | 30.00th=[ 7242], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 9110], 00:14:43.100 | 70.00th=[10028], 80.00th=[10814], 90.00th=[11863], 95.00th=[12780], 00:14:43.100 | 99.00th=[15139], 99.50th=[15664], 99.90th=[16057], 99.95th=[16188], 00:14:43.100 | 99.99th=[16319] 00:14:43.100 bw ( KiB/s): min=57888, max=74368, per=51.24%, avg=66856.00, stdev=8640.65, samples=4 00:14:43.100 iops : min= 3618, max= 4648, avg=4178.50, stdev=540.04, samples=4 00:14:43.100 write: IOPS=4821, BW=75.3MiB/s (79.0MB/s)(137MiB/1817msec); 0 zone resets 00:14:43.100 slat (usec): min=33, max=172, avg=37.84, stdev= 5.29 00:14:43.100 clat (usec): min=3170, max=20850, avg=12277.44, stdev=2241.47 00:14:43.100 lat (usec): min=3207, max=20887, avg=12315.27, stdev=2241.23 00:14:43.100 clat percentiles (usec): 00:14:43.100 | 1.00th=[ 7767], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10421], 00:14:43.100 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12125], 60.00th=[12649], 00:14:43.100 | 70.00th=[13173], 80.00th=[14091], 90.00th=[15401], 95.00th=[16188], 00:14:43.100 | 99.00th=[18220], 99.50th=[19268], 99.90th=[20055], 99.95th=[20055], 00:14:43.100 | 99.99th=[20841] 00:14:43.100 bw ( KiB/s): min=58880, max=77760, per=90.18%, avg=69560.00, stdev=9342.63, samples=4 00:14:43.100 iops : min= 3680, max= 4860, avg=4347.50, stdev=583.91, samples=4 00:14:43.100 lat (msec) : 4=0.47%, 10=50.01%, 20=49.49%, 50=0.03% 00:14:43.100 cpu : usr=82.04%, sys=13.87%, ctx=18, majf=0, minf=25 00:14:43.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:14:43.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:43.100 issued rwts: total=16349,8760,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:43.100 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:43.100 00:14:43.100 Run status group 0 (all jobs): 00:14:43.100 READ: bw=127MiB/s (134MB/s), 127MiB/s-127MiB/s (134MB/s-134MB/s), io=255MiB (268MB), run=2005-2005msec 00:14:43.100 WRITE: bw=75.3MiB/s (79.0MB/s), 75.3MiB/s-75.3MiB/s (79.0MB/s-79.0MB/s), io=137MiB (144MB), run=1817-1817msec 00:14:43.100 15:18:51 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.100 15:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.100 15:18:51 -- common/autotest_common.sh@10 -- # set +x 00:14:43.100 15:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.100 15:18:51 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:14:43.100 15:18:51 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:14:43.100 15:18:51 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:14:43.100 15:18:51 -- host/fio.sh@84 -- # nvmftestfini 00:14:43.100 15:18:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:43.100 15:18:51 -- nvmf/common.sh@117 -- # sync 00:14:43.100 15:18:51 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:43.100 15:18:51 -- nvmf/common.sh@120 -- # set +e 00:14:43.100 15:18:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:43.100 15:18:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:43.100 rmmod nvme_tcp 00:14:43.100 rmmod nvme_fabrics 00:14:43.100 rmmod nvme_keyring 00:14:43.100 15:18:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:43.100 15:18:52 -- nvmf/common.sh@124 -- # set -e 00:14:43.100 15:18:52 -- nvmf/common.sh@125 -- # return 0 00:14:43.100 15:18:52 -- nvmf/common.sh@478 -- # '[' -n 72486 ']' 00:14:43.100 15:18:52 -- nvmf/common.sh@479 -- # killprocess 72486 00:14:43.100 15:18:52 -- common/autotest_common.sh@936 -- # '[' -z 72486 ']' 00:14:43.100 15:18:52 -- common/autotest_common.sh@940 -- # kill -0 72486 00:14:43.100 15:18:52 -- common/autotest_common.sh@941 -- # uname 00:14:43.100 15:18:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:43.100 15:18:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72486 00:14:43.100 15:18:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:43.100 15:18:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:43.100 killing process with pid 72486 00:14:43.100 15:18:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72486' 00:14:43.100 15:18:52 -- common/autotest_common.sh@955 -- # kill 72486 00:14:43.100 15:18:52 -- common/autotest_common.sh@960 -- # wait 72486 00:14:43.357 15:18:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:43.357 15:18:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:43.357 15:18:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:43.357 15:18:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:43.357 15:18:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:43.357 15:18:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.357 15:18:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.357 15:18:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.357 15:18:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:43.357 00:14:43.357 real 0m7.030s 00:14:43.357 user 0m27.425s 00:14:43.357 sys 0m2.143s 00:14:43.357 15:18:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:43.357 ************************************ 00:14:43.357 END TEST nvmf_fio_host 00:14:43.357 ************************************ 00:14:43.357 15:18:52 -- common/autotest_common.sh@10 -- # set +x 00:14:43.357 15:18:52 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:43.357 15:18:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:43.357 15:18:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:43.357 15:18:52 -- common/autotest_common.sh@10 -- # set +x 00:14:43.357 ************************************ 00:14:43.357 START TEST nvmf_failover 00:14:43.357 ************************************ 00:14:43.357 15:18:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:43.357 * Looking for test storage... 
00:14:43.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:43.357 15:18:52 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:43.357 15:18:52 -- nvmf/common.sh@7 -- # uname -s 00:14:43.615 15:18:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.615 15:18:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.615 15:18:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.615 15:18:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.615 15:18:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.615 15:18:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.615 15:18:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.615 15:18:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.615 15:18:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.615 15:18:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.615 15:18:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:14:43.615 15:18:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:14:43.615 15:18:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.615 15:18:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.615 15:18:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:43.615 15:18:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.615 15:18:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:43.615 15:18:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.615 15:18:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.615 15:18:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.615 15:18:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.615 15:18:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.615 15:18:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.615 15:18:52 -- paths/export.sh@5 -- # export PATH 00:14:43.615 15:18:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.615 15:18:52 -- nvmf/common.sh@47 -- # : 0 00:14:43.615 15:18:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:43.615 15:18:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:43.615 15:18:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.615 15:18:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.615 15:18:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.615 15:18:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:43.615 15:18:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:43.615 15:18:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:43.615 15:18:52 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:43.615 15:18:52 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:43.615 15:18:52 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:43.615 15:18:52 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:43.615 15:18:52 -- host/failover.sh@18 -- # nvmftestinit 00:14:43.615 15:18:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:43.615 15:18:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.615 15:18:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:43.615 15:18:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:43.615 15:18:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:43.615 15:18:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.615 15:18:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.615 15:18:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.615 15:18:52 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:43.615 15:18:52 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:43.615 15:18:52 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:43.615 15:18:52 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:43.615 15:18:52 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:43.615 15:18:52 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:43.615 15:18:52 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.615 15:18:52 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.615 15:18:52 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:43.616 15:18:52 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:43.616 15:18:52 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:43.616 15:18:52 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:43.616 15:18:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:43.616 15:18:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.616 15:18:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:43.616 15:18:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:43.616 15:18:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:43.616 15:18:52 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:43.616 15:18:52 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:43.616 15:18:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:43.616 Cannot find device "nvmf_tgt_br" 00:14:43.616 15:18:52 -- nvmf/common.sh@155 -- # true 00:14:43.616 15:18:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:43.616 Cannot find device "nvmf_tgt_br2" 00:14:43.616 15:18:52 -- nvmf/common.sh@156 -- # true 00:14:43.616 15:18:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:43.616 15:18:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:43.616 Cannot find device "nvmf_tgt_br" 00:14:43.616 15:18:52 -- nvmf/common.sh@158 -- # true 00:14:43.616 15:18:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:43.616 Cannot find device "nvmf_tgt_br2" 00:14:43.616 15:18:52 -- nvmf/common.sh@159 -- # true 00:14:43.616 15:18:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:43.616 15:18:52 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:43.616 15:18:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:43.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:43.616 15:18:52 -- nvmf/common.sh@162 -- # true 00:14:43.616 15:18:52 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:43.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:43.616 15:18:52 -- nvmf/common.sh@163 -- # true 00:14:43.616 15:18:52 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:43.616 15:18:52 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:43.616 15:18:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:43.616 15:18:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:43.616 15:18:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:43.616 15:18:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:43.616 15:18:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:43.616 15:18:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:43.616 15:18:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:43.616 15:18:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:43.616 15:18:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:43.616 15:18:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:43.616 15:18:52 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:43.616 15:18:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:14:43.874 15:18:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:43.874 15:18:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:43.874 15:18:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:43.874 15:18:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:43.874 15:18:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:43.874 15:18:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:43.874 15:18:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:43.874 15:18:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:43.874 15:18:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:43.874 15:18:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:43.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:14:43.875 00:14:43.875 --- 10.0.0.2 ping statistics --- 00:14:43.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.875 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:14:43.875 15:18:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:43.875 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:43.875 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:14:43.875 00:14:43.875 --- 10.0.0.3 ping statistics --- 00:14:43.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.875 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:43.875 15:18:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:43.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:43.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:14:43.875 00:14:43.875 --- 10.0.0.1 ping statistics --- 00:14:43.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.875 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:43.875 15:18:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.875 15:18:52 -- nvmf/common.sh@422 -- # return 0 00:14:43.875 15:18:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:43.875 15:18:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.875 15:18:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:43.875 15:18:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:43.875 15:18:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.875 15:18:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:43.875 15:18:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:43.875 15:18:52 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:43.875 15:18:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:43.875 15:18:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:43.875 15:18:52 -- common/autotest_common.sh@10 -- # set +x 00:14:43.875 15:18:52 -- nvmf/common.sh@470 -- # nvmfpid=72810 00:14:43.875 15:18:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:43.875 15:18:52 -- nvmf/common.sh@471 -- # waitforlisten 72810 00:14:43.875 15:18:52 -- common/autotest_common.sh@817 -- # '[' -z 72810 ']' 00:14:43.875 15:18:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.875 15:18:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:43.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.875 15:18:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.875 15:18:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:43.875 15:18:52 -- common/autotest_common.sh@10 -- # set +x 00:14:43.875 [2024-04-24 15:18:53.009129] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:14:43.875 [2024-04-24 15:18:53.009648] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.132 [2024-04-24 15:18:53.143758] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:44.132 [2024-04-24 15:18:53.260333] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.132 [2024-04-24 15:18:53.260441] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.132 [2024-04-24 15:18:53.260454] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.132 [2024-04-24 15:18:53.260463] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.132 [2024-04-24 15:18:53.260470] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
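For the failover test the target is launched inside the same namespace but pinned to cores 1-3 with -m 0xE, which leaves core 0 free for the bdevperf initiator started later in the trace with -c 0x1. A sketch of the launch as it appears above (repo path abbreviated):

```bash
# nvmf_tgt for the failover test: shm id 0, all tracepoint groups, cores 1-3 only.
ip netns exec nvmf_tgt_ns_spdk \
    /path/to/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
```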
00:14:44.132 [2024-04-24 15:18:53.260594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.132 [2024-04-24 15:18:53.261467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:44.132 [2024-04-24 15:18:53.261504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.067 15:18:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:45.067 15:18:54 -- common/autotest_common.sh@850 -- # return 0 00:14:45.067 15:18:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:45.067 15:18:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:45.067 15:18:54 -- common/autotest_common.sh@10 -- # set +x 00:14:45.067 15:18:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.067 15:18:54 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:45.327 [2024-04-24 15:18:54.314789] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.327 15:18:54 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:45.586 Malloc0 00:14:45.586 15:18:54 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:45.845 15:18:54 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:46.102 15:18:55 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:46.360 [2024-04-24 15:18:55.349269] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.360 15:18:55 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:46.360 [2024-04-24 15:18:55.569368] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:46.360 15:18:55 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:46.618 [2024-04-24 15:18:55.789616] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:46.618 15:18:55 -- host/failover.sh@31 -- # bdevperf_pid=72869 00:14:46.618 15:18:55 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:46.618 15:18:55 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:46.618 15:18:55 -- host/failover.sh@34 -- # waitforlisten 72869 /var/tmp/bdevperf.sock 00:14:46.618 15:18:55 -- common/autotest_common.sh@817 -- # '[' -z 72869 ']' 00:14:46.618 15:18:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:46.618 15:18:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:46.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:46.618 15:18:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
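Compared with the fio host test, the failover setup exposes the same subsystem on three TCP listeners (4420, 4421 and 4422) so paths can be torn down and restored under load, and the initiator is bdevperf started idle on its own RPC socket. A sketch of that provisioning, condensed from the rpc.py calls above (the three listener adds are collapsed into a loop; paths abbreviated):

```bash
# Target: one subsystem, one Malloc namespace, three listeners to fail over between.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done

# Initiator: bdevperf waits on its own RPC socket (-z) to be configured and driven;
# 128 queue depth, 4 KiB verify workload, 15 second run, remaining flags as in the trace.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
bdevperf_pid=$!
```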
00:14:46.618 15:18:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:46.618 15:18:55 -- common/autotest_common.sh@10 -- # set +x 00:14:47.992 15:18:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:47.992 15:18:56 -- common/autotest_common.sh@850 -- # return 0 00:14:47.992 15:18:56 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:47.992 NVMe0n1 00:14:47.993 15:18:57 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:48.250 00:14:48.250 15:18:57 -- host/failover.sh@39 -- # run_test_pid=72897 00:14:48.250 15:18:57 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:48.250 15:18:57 -- host/failover.sh@41 -- # sleep 1 00:14:49.625 15:18:58 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.625 [2024-04-24 15:18:58.742675] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ae640 is same with the state(5) to be set 00:14:49.625 [2024-04-24 15:18:58.742738] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ae640 is same with the state(5) to be set 00:14:49.625 [2024-04-24 15:18:58.742750] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ae640 is same with the state(5) to be set 00:14:49.625 [2024-04-24 15:18:58.742759] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ae640 is same with the state(5) to be set 00:14:49.625 [2024-04-24 15:18:58.742768] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ae640 is same with the state(5) to be set 00:14:49.625 [2024-04-24 15:18:58.742777] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ae640 is same with the state(5) to be set 00:14:49.625 [2024-04-24 15:18:58.742786] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ae640 is same with the state(5) to be set 00:14:49.625 [2024-04-24 15:18:58.742795] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ae640 is same with the state(5) to be set 00:14:49.625 [2024-04-24 15:18:58.742803] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ae640 is same with the state(5) to be set 00:14:49.625 [2024-04-24 15:18:58.742812] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ae640 is same with the state(5) to be set 00:14:49.625 [2024-04-24 15:18:58.742820] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ae640 is same with the state(5) to be set 00:14:49.625 15:18:58 -- host/failover.sh@45 -- # sleep 3 00:14:52.909 15:19:01 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:52.909 00:14:52.909 15:19:02 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:53.167 15:19:02 -- host/failover.sh@50 -- # sleep 3 
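With bdevperf attached to the subsystem over two paths and the verify workload started, the script forces failovers by removing and re-adding listeners underneath the active connection; the nvmf_tcp_qpair_set_recv_state messages above appear immediately after the first listener is removed. A sketch of the whole sequence, combining the commands above with the listener shuffling that continues in the trace below (bdevperf RPCs go to /var/tmp/bdevperf.sock, target RPCs to the default socket; paths abbreviated):

```bash
# Attach NVMe0 over the primary path, then add 4421 as an alternate path.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Kick off the 15 second verify workload in the background.
/path/to/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!

# Force failovers while the workload runs: drop 4420, add a third path on 4422,
# drop 4421, re-add 4420, drop 4422, then wait for the workload to finish.
sleep 1
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
wait "$run_test_pid"
```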
00:14:56.450 15:19:05 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.450 [2024-04-24 15:19:05.624290] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.450 15:19:05 -- host/failover.sh@55 -- # sleep 1 00:14:57.826 15:19:06 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:57.826 15:19:06 -- host/failover.sh@59 -- # wait 72897 00:15:04.392 0 00:15:04.392 15:19:12 -- host/failover.sh@61 -- # killprocess 72869 00:15:04.392 15:19:12 -- common/autotest_common.sh@936 -- # '[' -z 72869 ']' 00:15:04.392 15:19:12 -- common/autotest_common.sh@940 -- # kill -0 72869 00:15:04.392 15:19:12 -- common/autotest_common.sh@941 -- # uname 00:15:04.392 15:19:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:04.392 15:19:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72869 00:15:04.392 killing process with pid 72869 00:15:04.392 15:19:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:04.392 15:19:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:04.392 15:19:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72869' 00:15:04.392 15:19:12 -- common/autotest_common.sh@955 -- # kill 72869 00:15:04.392 15:19:12 -- common/autotest_common.sh@960 -- # wait 72869 00:15:04.392 15:19:12 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:04.392 [2024-04-24 15:18:55.854048] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:15:04.392 [2024-04-24 15:18:55.854172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72869 ] 00:15:04.392 [2024-04-24 15:18:55.989642] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.392 [2024-04-24 15:18:56.101416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.392 Running I/O for 15 seconds... 
00:15:04.392 [2024-04-24 15:18:58.742879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.392 [2024-04-24 15:18:58.742934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.742963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.392 [2024-04-24 15:18:58.742979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.742996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.392 [2024-04-24 15:18:58.743011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.392 [2024-04-24 15:18:58.743041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.392 [2024-04-24 15:18:58.743071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.392 [2024-04-24 15:18:58.743100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.392 [2024-04-24 15:18:58.743130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.392 [2024-04-24 15:18:58.743159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.392 [2024-04-24 15:18:58.743188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.392 [2024-04-24 15:18:58.743217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743233] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.392 [2024-04-24 15:18:58.743246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.392 [2024-04-24 15:18:58.743309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.392 [2024-04-24 15:18:58.743339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.392 [2024-04-24 15:18:58.743368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.392 [2024-04-24 15:18:58.743397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.392 [2024-04-24 15:18:58.743438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.392 [2024-04-24 15:18:58.743474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.392 [2024-04-24 15:18:58.743504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.392 [2024-04-24 15:18:58.743533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.392 [2024-04-24 15:18:58.743563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743579] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.392 [2024-04-24 15:18:58.743593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.392 [2024-04-24 15:18:58.743621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.392 [2024-04-24 15:18:58.743651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.392 [2024-04-24 15:18:58.743688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.392 [2024-04-24 15:18:58.743718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.392 [2024-04-24 15:18:58.743748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.392 [2024-04-24 15:18:58.743776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.392 [2024-04-24 15:18:58.743806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.392 [2024-04-24 15:18:58.743835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.392 [2024-04-24 15:18:58.743864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.392 [2024-04-24 15:18:58.743879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:67880 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.393 [2024-04-24 15:18:58.743893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.743908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.393 [2024-04-24 15:18:58.743922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.743937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.393 [2024-04-24 15:18:58.743953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.743969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.393 [2024-04-24 15:18:58.743983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.743998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.393 [2024-04-24 15:18:58.744012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.393 [2024-04-24 15:18:58.744041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:04.393 [2024-04-24 15:18:58.744197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744524] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.393 [2024-04-24 15:18:58.744583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.393 [2024-04-24 15:18:58.744612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.393 [2024-04-24 15:18:58.744643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.393 [2024-04-24 15:18:58.744674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.393 [2024-04-24 15:18:58.744704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.393 [2024-04-24 15:18:58.744733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.393 [2024-04-24 15:18:58.744762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.393 [2024-04-24 15:18:58.744791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.744974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.744990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.745004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.745020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.745033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.745049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:68632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.745062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.393 [2024-04-24 15:18:58.745078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.393 [2024-04-24 15:18:58.745092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.394 [2024-04-24 15:18:58.745129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.394 [2024-04-24 15:18:58.745159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.394 [2024-04-24 15:18:58.745188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.394 [2024-04-24 15:18:58.745217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.394 [2024-04-24 15:18:58.745253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.394 [2024-04-24 15:18:58.745282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:68000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:68016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 
[2024-04-24 15:18:58.745455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:68032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:68056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:68064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:68072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:68104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:68112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:68120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:68128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:68136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:68144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:68152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:68160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.745976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.745995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.746010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:68176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.394 [2024-04-24 15:18:58.746030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.746046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.394 [2024-04-24 15:18:58.746060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.746075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:78 nsid:1 lba:68704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.394 [2024-04-24 15:18:58.746089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.746109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.394 [2024-04-24 15:18:58.746123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.746138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.394 [2024-04-24 15:18:58.746152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.746167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.394 [2024-04-24 15:18:58.746181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.746197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.394 [2024-04-24 15:18:58.746210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.746226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.394 [2024-04-24 15:18:58.746240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.746255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.394 [2024-04-24 15:18:58.746268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.746284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.394 [2024-04-24 15:18:58.746298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.394 [2024-04-24 15:18:58.746314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.394 [2024-04-24 15:18:58.746327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.395 [2024-04-24 15:18:58.746356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68784 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:04.395 [2024-04-24 15:18:58.746385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:68184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:18:58.746421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:68192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:18:58.746461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:18:58.746494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:18:58.746524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:68216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:18:58.746553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:68224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:18:58.746582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:68232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:18:58.746616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:68240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:18:58.746646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:68248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:18:58.746674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 
[2024-04-24 15:18:58.746703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:68264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:18:58.746732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:18:58.746761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:68280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:18:58.746798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:68288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:18:58.746827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:68296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:18:58.746856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb91960 is same with the state(5) to be set 00:15:04.395 [2024-04-24 15:18:58.746887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:04.395 [2024-04-24 15:18:58.746898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:04.395 [2024-04-24 15:18:58.746909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68304 len:8 PRP1 0x0 PRP2 0x0 00:15:04.395 [2024-04-24 15:18:58.746922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.746985] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb91960 was disconnected and freed. reset controller. 
00:15:04.395 [2024-04-24 15:18:58.747002] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:04.395 [2024-04-24 15:18:58.747059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.395 [2024-04-24 15:18:58.747080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.747097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.395 [2024-04-24 15:18:58.747111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.747125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.395 [2024-04-24 15:18:58.747138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.747153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.395 [2024-04-24 15:18:58.747171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:18:58.747185] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:04.395 [2024-04-24 15:18:58.750970] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:04.395 [2024-04-24 15:18:58.751007] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2b1d0 (9): Bad file descriptor 00:15:04.395 [2024-04-24 15:18:58.784312] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
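[editor's note] The records above show the bdev_nvme failover path in action: once the first listener (10.0.0.2:4420) goes away, every queued I/O on the TCP qpair is aborted with "ABORTED - SQ DELETION", the qpair is disconnected and freed, and the driver retries the controller against the alternate listener (10.0.0.2:4421) before logging "Resetting controller successful." The exact test script driving this run is not shown here, but a minimal sketch of how a comparable two-listener target and failover-capable initiator can be set up with the stock SPDK rpc.py commands is given below; the bdev names Nvme0/Malloc0, the malloc size, and the serial number are illustrative assumptions, while the NQN and the two trids are taken from the log itself.
+ # target side: one subsystem, backed by an illustrative malloc bdev, listening on both ports
+ scripts/rpc.py nvmf_create_transport -t tcp
+ scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
+ scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
+ scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
+ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
+ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
+ # initiator side: attaching a second trid under the same bdev name registers it as an
+ # alternate path, so tearing down 4420 triggers the failover/reset sequence seen above
+ scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
+ scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1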
00:15:04.395 [2024-04-24 15:19:02.349836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.395 [2024-04-24 15:19:02.349908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:19:02.349939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.395 [2024-04-24 15:19:02.349976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:19:02.349995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.395 [2024-04-24 15:19:02.350009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:19:02.350024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.395 [2024-04-24 15:19:02.350038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:19:02.350053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.395 [2024-04-24 15:19:02.350067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:19:02.350082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.395 [2024-04-24 15:19:02.350095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:19:02.350111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.395 [2024-04-24 15:19:02.350125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:19:02.350140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.395 [2024-04-24 15:19:02.350154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:19:02.350169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:73440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:19:02.350183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:19:02.350199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:19:02.350212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:19:02.350228] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:19:02.350242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:19:02.350257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:19:02.350271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:19:02.350286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:19:02.350300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-04-24 15:19:02.350315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.395 [2024-04-24 15:19:02.350328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:73488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.350365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.350394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.396 [2024-04-24 15:19:02.350423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.396 [2024-04-24 15:19:02.350471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.396 [2024-04-24 15:19:02.350501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.396 [2024-04-24 15:19:02.350530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350545] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.396 [2024-04-24 15:19:02.350559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.396 [2024-04-24 15:19:02.350589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.396 [2024-04-24 15:19:02.350617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.396 [2024-04-24 15:19:02.350648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.350678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.350707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.350735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:73528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.350777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.350806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.350835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73552 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.350864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:73560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.350893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.350922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.350952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:73584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.350981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.350996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.351010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.351025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.351038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.351054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.351067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.351083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.351096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.351111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.396 [2024-04-24 15:19:02.351132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.351148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:04.396 [2024-04-24 15:19:02.351162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.351177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.396 [2024-04-24 15:19:02.351191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.351206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.396 [2024-04-24 15:19:02.351220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.351235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.396 [2024-04-24 15:19:02.351249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.351264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.396 [2024-04-24 15:19:02.351278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.396 [2024-04-24 15:19:02.351293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.396 [2024-04-24 15:19:02.351306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.351335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.351364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:73632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.351393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.351423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.351465] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.351494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.351531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.351560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.351589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.351619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.351648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.351678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.351708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.351736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.351765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:73736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.351795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.351823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.351853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.351887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.351918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.351947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.351976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.351992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.352006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.352021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.352035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.352051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.352065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.352080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.352094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.352110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.352124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.352139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.352153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.352168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.352182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.352197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.352211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.352226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.352240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.352255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.352275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.352291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.352304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.352320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.397 [2024-04-24 15:19:02.352334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.352349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:73760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.352362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.352391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.352410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.352426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.352451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.352467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.352481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.352497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.352511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.397 [2024-04-24 15:19:02.352526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.397 [2024-04-24 15:19:02.352540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.352555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.398 [2024-04-24 15:19:02.352576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.352593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.398 [2024-04-24 15:19:02.352607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.352622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.352636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.352651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.352665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.352688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.352702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 
15:19:02.352718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.352731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.352748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.352762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.352777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.352791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.352806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.352820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.352835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.352849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.352866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.352879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.352894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.352908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.352924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.352937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.352952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.352965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.352981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.352995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.353023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.353063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.353094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.398 [2024-04-24 15:19:02.353122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.398 [2024-04-24 15:19:02.353152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.398 [2024-04-24 15:19:02.353180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.398 [2024-04-24 15:19:02.353209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.398 [2024-04-24 15:19:02.353239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.398 [2024-04-24 15:19:02.353268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.398 [2024-04-24 15:19:02.353297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.398 [2024-04-24 15:19:02.353326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.353355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.353384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.353413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.353462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.353492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.353521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.353555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.398 [2024-04-24 15:19:02.353584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.398 [2024-04-24 15:19:02.353613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:73896 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.398 [2024-04-24 15:19:02.353642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.398 [2024-04-24 15:19:02.353671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.398 [2024-04-24 15:19:02.353700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.398 [2024-04-24 15:19:02.353729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.398 [2024-04-24 15:19:02.353745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.398 [2024-04-24 15:19:02.353759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:02.353774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.399 [2024-04-24 15:19:02.353788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:02.353803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb95730 is same with the state(5) to be set 00:15:04.399 [2024-04-24 15:19:02.353819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:04.399 [2024-04-24 15:19:02.353837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:04.399 [2024-04-24 15:19:02.353848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73944 len:8 PRP1 0x0 PRP2 0x0 00:15:04.399 [2024-04-24 15:19:02.353862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:02.353920] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb95730 was disconnected and freed. reset controller. 
00:15:04.399 [2024-04-24 15:19:02.353938] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:15:04.399 [2024-04-24 15:19:02.353991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:15:04.399 [2024-04-24 15:19:02.354012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:04.399 [2024-04-24 15:19:02.354028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:15:04.399 [2024-04-24 15:19:02.354041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:04.399 [2024-04-24 15:19:02.354056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:15:04.399 [2024-04-24 15:19:02.354069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:04.399 [2024-04-24 15:19:02.354083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:15:04.399 [2024-04-24 15:19:02.354097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:04.399 [2024-04-24 15:19:02.354115] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:15:04.399 [2024-04-24 15:19:02.357898] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:04.399 [2024-04-24 15:19:02.357938] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2b1d0 (9): Bad file descriptor
00:15:04.399 [2024-04-24 15:19:02.398760] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
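The wall of ABORTED - SQ DELETION completions above is the expected side effect of bdev_nvme deleting the old submission queues while it fails over from 10.0.0.2:4421 to 10.0.0.2:4422; the lines that carry the signal are the failover/reset notices. A minimal, hypothetical sketch for summarizing a saved copy of this console output (the console.log path and the script itself are illustrative assumptions, not something this job produces):

  #!/usr/bin/env bash
  # Summarize bdev_nvme failover noise in a saved console log (path is an assumed example).
  log=${1:-console.log}
  # Count individual aborted completions; grep -o counts occurrences even when
  # several log entries were folded onto one physical line.
  printf 'Aborted completions (SQ DELETION): '
  grep -o 'ABORTED - SQ DELETION' "$log" | wc -l
  # Print only the controller state-machine events: failover start, failed state,
  # disconnect/reset, and reset completion.
  grep -E 'bdev_nvme_failover_trid|nvme_ctrlr_fail|nvme_ctrlr_disconnect|_bdev_nvme_reset_ctrlr_complete' "$log"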
00:15:04.399 [2024-04-24 15:19:06.887686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.399 [2024-04-24 15:19:06.887762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.887783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.399 [2024-04-24 15:19:06.887797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.887812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.399 [2024-04-24 15:19:06.887825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.887839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.399 [2024-04-24 15:19:06.887853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.887866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2b1d0 is same with the state(5) to be set 00:15:04.399 [2024-04-24 15:19:06.887934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.399 [2024-04-24 15:19:06.887980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.399 [2024-04-24 15:19:06.888019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.399 [2024-04-24 15:19:06.888048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.399 [2024-04-24 15:19:06.888077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.399 [2024-04-24 15:19:06.888105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.399 [2024-04-24 15:19:06.888133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.399 [2024-04-24 15:19:06.888163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.399 [2024-04-24 15:19:06.888191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.399 [2024-04-24 15:19:06.888749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.399 [2024-04-24 15:19:06.888771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.888787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.400 [2024-04-24 15:19:06.888801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.888817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.400 [2024-04-24 15:19:06.888830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.888845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.888860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.888875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.888889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.888905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.888918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.888934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.888947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.888963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.888977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.888992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 
15:19:06.889079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:04.400 [2024-04-24 15:19:06.889566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.400 [2024-04-24 15:19:06.889595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.400 [2024-04-24 15:19:06.889625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.400 [2024-04-24 15:19:06.889653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.400 [2024-04-24 15:19:06.889682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400 [2024-04-24 15:19:06.889697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:43 nsid:1 lba:18704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:04.400 [2024-04-24 15:19:06.889711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.400
[... further nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs in the same pattern: every remaining queued WRITE (lba 18712-19016) and READ (lba 18256-18496) on qid:1 completed with ABORTED - SQ DELETION (00/08) status while the controller reset was in progress ...]
00:15:04.402 [2024-04-24 15:19:06.891887] nvme_qpair.c:
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:04.402 [2024-04-24 15:19:06.891900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:04.402 [2024-04-24 15:19:06.891912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18504 len:8 PRP1 0x0 PRP2 0x0 00:15:04.402 [2024-04-24 15:19:06.891925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.402 [2024-04-24 15:19:06.891982] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb81350 was disconnected and freed. reset controller. 00:15:04.402 [2024-04-24 15:19:06.891999] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:04.402 [2024-04-24 15:19:06.892018] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:04.402 [2024-04-24 15:19:06.895795] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:04.402 [2024-04-24 15:19:06.895833] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2b1d0 (9): Bad file descriptor 00:15:04.402 [2024-04-24 15:19:06.934076] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:04.402 00:15:04.402 Latency(us) 00:15:04.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.402 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:04.402 Verification LBA range: start 0x0 length 0x4000 00:15:04.402 NVMe0n1 : 15.01 8827.52 34.48 235.41 0.00 14091.37 625.57 17635.14 00:15:04.402 =================================================================================================================== 00:15:04.402 Total : 8827.52 34.48 235.41 0.00 14091.37 625.57 17635.14 00:15:04.402 Received shutdown signal, test time was about 15.000000 seconds 00:15:04.402 00:15:04.402 Latency(us) 00:15:04.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.402 =================================================================================================================== 00:15:04.402 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:04.402 15:19:12 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:04.402 15:19:12 -- host/failover.sh@65 -- # count=3 00:15:04.402 15:19:12 -- host/failover.sh@67 -- # (( count != 3 )) 00:15:04.402 15:19:12 -- host/failover.sh@73 -- # bdevperf_pid=73071 00:15:04.402 15:19:12 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:04.402 15:19:12 -- host/failover.sh@75 -- # waitforlisten 73071 /var/tmp/bdevperf.sock 00:15:04.402 15:19:12 -- common/autotest_common.sh@817 -- # '[' -z 73071 ']' 00:15:04.402 15:19:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:04.402 15:19:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:04.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:04.402 15:19:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
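At this point failover.sh starts a fresh bdevperf instance in wait mode (-z) with its own RPC socket and only then configures it over that socket; the waitforlisten helper traced above is what polls for the socket. A condensed sketch of that launch-and-wait pattern, using the paths and flags shown in the trace (the retry loop and the rpc_get_methods probe stand in for the helper and are illustrative choices):

    # Start bdevperf paused (-z): it will not run I/O until told to over RPC.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bdevperf.sock
    "$SPDK_DIR/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!

    # Poll until the process answers on its UNIX domain RPC socket.
    for _ in $(seq 1 100); do
        "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
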
00:15:04.402 15:19:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:04.402 15:19:12 -- common/autotest_common.sh@10 -- # set +x 00:15:04.661 15:19:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:04.661 15:19:13 -- common/autotest_common.sh@850 -- # return 0 00:15:04.661 15:19:13 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:04.919 [2024-04-24 15:19:14.149740] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:05.177 15:19:14 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:05.177 [2024-04-24 15:19:14.418000] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:05.435 15:19:14 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:05.693 NVMe0n1 00:15:05.693 15:19:14 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:05.952 00:15:05.952 15:19:15 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:06.210 00:15:06.210 15:19:15 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:06.210 15:19:15 -- host/failover.sh@82 -- # grep -q NVMe0 00:15:06.468 15:19:15 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:06.726 15:19:15 -- host/failover.sh@87 -- # sleep 3 00:15:10.008 15:19:18 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:10.008 15:19:18 -- host/failover.sh@88 -- # grep -q NVMe0 00:15:10.008 15:19:19 -- host/failover.sh@90 -- # run_test_pid=73153 00:15:10.009 15:19:19 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:10.009 15:19:19 -- host/failover.sh@92 -- # wait 73153 00:15:11.420 0 00:15:11.420 15:19:20 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:11.420 [2024-04-24 15:19:12.957710] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:15:11.420 [2024-04-24 15:19:12.957832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73071 ] 00:15:11.420 [2024-04-24 15:19:13.096467] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.420 [2024-04-24 15:19:13.204382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.420 [2024-04-24 15:19:15.807040] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:11.420 [2024-04-24 15:19:15.807195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.420 [2024-04-24 15:19:15.807221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.420 [2024-04-24 15:19:15.807240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.420 [2024-04-24 15:19:15.807255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.420 [2024-04-24 15:19:15.807269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.420 [2024-04-24 15:19:15.807283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.420 [2024-04-24 15:19:15.807297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.420 [2024-04-24 15:19:15.807311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.420 [2024-04-24 15:19:15.807326] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:11.420 [2024-04-24 15:19:15.807388] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:11.420 [2024-04-24 15:19:15.807421] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ab1d0 (9): Bad file descriptor 00:15:11.420 [2024-04-24 15:19:15.818614] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:11.420 Running I/O for 1 seconds... 
00:15:11.420 00:15:11.420 Latency(us) 00:15:11.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.421 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:11.421 Verification LBA range: start 0x0 length 0x4000 00:15:11.421 NVMe0n1 : 1.02 6803.13 26.57 0.00 0.00 18738.72 2278.87 15192.44 00:15:11.421 =================================================================================================================== 00:15:11.421 Total : 6803.13 26.57 0.00 0.00 18738.72 2278.87 15192.44 00:15:11.421 15:19:20 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:11.421 15:19:20 -- host/failover.sh@95 -- # grep -q NVMe0 00:15:11.421 15:19:20 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:11.679 15:19:20 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:11.679 15:19:20 -- host/failover.sh@99 -- # grep -q NVMe0 00:15:11.936 15:19:21 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:12.194 15:19:21 -- host/failover.sh@101 -- # sleep 3 00:15:15.479 15:19:24 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:15.479 15:19:24 -- host/failover.sh@103 -- # grep -q NVMe0 00:15:15.479 15:19:24 -- host/failover.sh@108 -- # killprocess 73071 00:15:15.479 15:19:24 -- common/autotest_common.sh@936 -- # '[' -z 73071 ']' 00:15:15.479 15:19:24 -- common/autotest_common.sh@940 -- # kill -0 73071 00:15:15.479 15:19:24 -- common/autotest_common.sh@941 -- # uname 00:15:15.479 15:19:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:15.479 15:19:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73071 00:15:15.479 15:19:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:15.479 15:19:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:15.480 killing process with pid 73071 00:15:15.480 15:19:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73071' 00:15:15.480 15:19:24 -- common/autotest_common.sh@955 -- # kill 73071 00:15:15.480 15:19:24 -- common/autotest_common.sh@960 -- # wait 73071 00:15:15.738 15:19:24 -- host/failover.sh@110 -- # sync 00:15:15.738 15:19:24 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.996 15:19:25 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:15.996 15:19:25 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:15.996 15:19:25 -- host/failover.sh@116 -- # nvmftestfini 00:15:15.996 15:19:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:15.996 15:19:25 -- nvmf/common.sh@117 -- # sync 00:15:15.996 15:19:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:15.996 15:19:25 -- nvmf/common.sh@120 -- # set +e 00:15:15.996 15:19:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:15.996 15:19:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:15.996 rmmod nvme_tcp 00:15:15.996 rmmod nvme_fabrics 00:15:15.996 rmmod nvme_keyring 00:15:15.996 15:19:25 -- nvmf/common.sh@123 
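The sequence traced above is the core of the failover exercise: the target exposes nqn.2016-06.io.spdk:cnode1 on two extra ports, bdevperf attaches the same controller over all three paths, and the test then detaches the active path, waits, and checks that the NVMe0 controller survived the failover; the earlier pass finally requires exactly three successful controller resets. A minimal sketch of those steps, assuming the bdevperf instance and target from the trace (the loop structure and the explicit try.txt path are illustrative):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # Target side: expose the subsystem on two additional TCP ports.
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

    # bdevperf side: attach the same controller over all three paths.
    for port in 4420 4421 4422; do
        "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
    done

    # Force a failover by dropping the active path, give the bdev module time
    # to reconnect, then confirm the controller is still registered.
    "$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
    sleep 3
    "$RPC" -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0

    # The first pass of the test requires exactly three successful resets.
    count=$(grep -c 'Resetting controller successful' \
        "$SPDK_DIR/test/nvmf/host/try.txt")
    (( count == 3 ))
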
-- # modprobe -v -r nvme-fabrics 00:15:15.996 15:19:25 -- nvmf/common.sh@124 -- # set -e 00:15:15.996 15:19:25 -- nvmf/common.sh@125 -- # return 0 00:15:15.996 15:19:25 -- nvmf/common.sh@478 -- # '[' -n 72810 ']' 00:15:15.996 15:19:25 -- nvmf/common.sh@479 -- # killprocess 72810 00:15:15.996 15:19:25 -- common/autotest_common.sh@936 -- # '[' -z 72810 ']' 00:15:15.996 15:19:25 -- common/autotest_common.sh@940 -- # kill -0 72810 00:15:15.996 15:19:25 -- common/autotest_common.sh@941 -- # uname 00:15:15.996 15:19:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:15.996 15:19:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72810 00:15:15.996 15:19:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:15.996 15:19:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:15.996 killing process with pid 72810 00:15:15.996 15:19:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72810' 00:15:15.996 15:19:25 -- common/autotest_common.sh@955 -- # kill 72810 00:15:15.996 15:19:25 -- common/autotest_common.sh@960 -- # wait 72810 00:15:16.254 15:19:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:16.254 15:19:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:16.254 15:19:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:16.254 15:19:25 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.254 15:19:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:16.254 15:19:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.254 15:19:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.254 15:19:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.513 15:19:25 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:16.513 00:15:16.513 real 0m33.012s 00:15:16.513 user 2m8.010s 00:15:16.513 sys 0m5.525s 00:15:16.513 15:19:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:16.513 15:19:25 -- common/autotest_common.sh@10 -- # set +x 00:15:16.513 ************************************ 00:15:16.513 END TEST nvmf_failover 00:15:16.513 ************************************ 00:15:16.513 15:19:25 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:16.513 15:19:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:16.513 15:19:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:16.513 15:19:25 -- common/autotest_common.sh@10 -- # set +x 00:15:16.513 ************************************ 00:15:16.513 START TEST nvmf_discovery 00:15:16.513 ************************************ 00:15:16.513 15:19:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:16.513 * Looking for test storage... 
00:15:16.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:16.513 15:19:25 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:16.513 15:19:25 -- nvmf/common.sh@7 -- # uname -s 00:15:16.513 15:19:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.513 15:19:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.513 15:19:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.513 15:19:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.513 15:19:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.513 15:19:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.513 15:19:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.513 15:19:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.513 15:19:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.513 15:19:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.513 15:19:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:15:16.513 15:19:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:15:16.513 15:19:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.513 15:19:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.513 15:19:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:16.513 15:19:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.513 15:19:25 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.513 15:19:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.513 15:19:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.513 15:19:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.513 15:19:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.513 15:19:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.513 15:19:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.513 15:19:25 -- paths/export.sh@5 -- # export PATH 00:15:16.513 15:19:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.513 15:19:25 -- nvmf/common.sh@47 -- # : 0 00:15:16.513 15:19:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.513 15:19:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.513 15:19:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.513 15:19:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.513 15:19:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.513 15:19:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.513 15:19:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.513 15:19:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.513 15:19:25 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:16.513 15:19:25 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:16.513 15:19:25 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:16.513 15:19:25 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:16.513 15:19:25 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:16.513 15:19:25 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:16.513 15:19:25 -- host/discovery.sh@25 -- # nvmftestinit 00:15:16.513 15:19:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:16.513 15:19:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.513 15:19:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:16.513 15:19:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:16.513 15:19:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:16.513 15:19:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.513 15:19:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.513 15:19:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.784 15:19:25 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:16.784 15:19:25 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:16.784 15:19:25 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:16.784 15:19:25 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:16.784 15:19:25 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:16.785 15:19:25 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:16.785 15:19:25 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.785 15:19:25 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.785 15:19:25 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:16.785 15:19:25 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:16.785 15:19:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:16.785 15:19:25 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:16.785 15:19:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:16.785 15:19:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.785 15:19:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:16.785 15:19:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:16.785 15:19:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:16.785 15:19:25 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:16.785 15:19:25 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:16.785 15:19:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:16.785 Cannot find device "nvmf_tgt_br" 00:15:16.785 15:19:25 -- nvmf/common.sh@155 -- # true 00:15:16.785 15:19:25 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.785 Cannot find device "nvmf_tgt_br2" 00:15:16.785 15:19:25 -- nvmf/common.sh@156 -- # true 00:15:16.785 15:19:25 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:16.785 15:19:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:16.785 Cannot find device "nvmf_tgt_br" 00:15:16.785 15:19:25 -- nvmf/common.sh@158 -- # true 00:15:16.785 15:19:25 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:16.785 Cannot find device "nvmf_tgt_br2" 00:15:16.785 15:19:25 -- nvmf/common.sh@159 -- # true 00:15:16.785 15:19:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:16.785 15:19:25 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:16.785 15:19:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.785 15:19:25 -- nvmf/common.sh@162 -- # true 00:15:16.785 15:19:25 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.785 15:19:25 -- nvmf/common.sh@163 -- # true 00:15:16.785 15:19:25 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:16.785 15:19:25 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:16.785 15:19:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:16.785 15:19:25 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:16.785 15:19:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:16.785 15:19:25 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:16.785 15:19:25 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:16.785 15:19:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:16.785 15:19:25 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:16.785 15:19:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:16.785 15:19:25 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:16.785 15:19:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:16.785 15:19:25 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:16.785 15:19:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:16.785 15:19:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:16.785 15:19:26 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:16.785 15:19:26 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:17.056 15:19:26 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:17.056 15:19:26 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.056 15:19:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.056 15:19:26 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.056 15:19:26 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.056 15:19:26 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.056 15:19:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:17.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:15:17.056 00:15:17.056 --- 10.0.0.2 ping statistics --- 00:15:17.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.056 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:15:17.056 15:19:26 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:17.056 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:17.056 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:15:17.056 00:15:17.056 --- 10.0.0.3 ping statistics --- 00:15:17.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.056 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:17.056 15:19:26 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:17.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:15:17.056 00:15:17.056 --- 10.0.0.1 ping statistics --- 00:15:17.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.056 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:17.056 15:19:26 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.056 15:19:26 -- nvmf/common.sh@422 -- # return 0 00:15:17.056 15:19:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:17.056 15:19:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.056 15:19:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:17.056 15:19:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:17.056 15:19:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.056 15:19:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:17.056 15:19:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:17.056 15:19:26 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:17.056 15:19:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:17.056 15:19:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:17.056 15:19:26 -- common/autotest_common.sh@10 -- # set +x 00:15:17.056 15:19:26 -- nvmf/common.sh@470 -- # nvmfpid=73427 00:15:17.056 15:19:26 -- nvmf/common.sh@471 -- # waitforlisten 73427 00:15:17.056 15:19:26 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:17.056 15:19:26 -- common/autotest_common.sh@817 -- # '[' -z 73427 ']' 00:15:17.056 15:19:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.056 15:19:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:17.056 15:19:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.056 15:19:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:17.056 15:19:26 -- common/autotest_common.sh@10 -- # set +x 00:15:17.056 [2024-04-24 15:19:26.175077] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:15:17.056 [2024-04-24 15:19:26.175190] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.314 [2024-04-24 15:19:26.317147] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.314 [2024-04-24 15:19:26.434159] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.314 [2024-04-24 15:19:26.434216] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.314 [2024-04-24 15:19:26.434227] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.314 [2024-04-24 15:19:26.434235] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.314 [2024-04-24 15:19:26.434242] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
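The pings above verify the virtual topology that nvmf_veth_init just built: the target runs in its own network namespace, reachable as 10.0.0.2/10.0.0.3, while the initiator stays in the root namespace as 10.0.0.1, with the root-side veth ends bridged together. A condensed sketch of the same setup, using the interface names and addresses from the trace (run as root; error handling and teardown of stale devices are left out):

    # Network namespace for the target; the initiator stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk

    # One veth pair for the initiator and one per target interface.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: 10.0.0.1 initiator, 10.0.0.2 and 10.0.0.3 target.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up on both sides.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the root-namespace ends together and let NVMe/TCP traffic in.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity checks, matching the ping output above.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
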
00:15:17.314 [2024-04-24 15:19:26.434276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.249 15:19:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:18.249 15:19:27 -- common/autotest_common.sh@850 -- # return 0 00:15:18.249 15:19:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:18.249 15:19:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:18.249 15:19:27 -- common/autotest_common.sh@10 -- # set +x 00:15:18.249 15:19:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.249 15:19:27 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:18.249 15:19:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.249 15:19:27 -- common/autotest_common.sh@10 -- # set +x 00:15:18.249 [2024-04-24 15:19:27.219674] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.249 15:19:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.249 15:19:27 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:18.249 15:19:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.249 15:19:27 -- common/autotest_common.sh@10 -- # set +x 00:15:18.249 [2024-04-24 15:19:27.227765] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:18.249 15:19:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.249 15:19:27 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:18.249 15:19:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.249 15:19:27 -- common/autotest_common.sh@10 -- # set +x 00:15:18.249 null0 00:15:18.249 15:19:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.249 15:19:27 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:18.249 15:19:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.249 15:19:27 -- common/autotest_common.sh@10 -- # set +x 00:15:18.249 null1 00:15:18.249 15:19:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.249 15:19:27 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:18.249 15:19:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.249 15:19:27 -- common/autotest_common.sh@10 -- # set +x 00:15:18.249 15:19:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.249 15:19:27 -- host/discovery.sh@45 -- # hostpid=73459 00:15:18.249 15:19:27 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:18.249 15:19:27 -- host/discovery.sh@46 -- # waitforlisten 73459 /tmp/host.sock 00:15:18.250 15:19:27 -- common/autotest_common.sh@817 -- # '[' -z 73459 ']' 00:15:18.250 15:19:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:15:18.250 15:19:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:18.250 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:18.250 15:19:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:18.250 15:19:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:18.250 15:19:27 -- common/autotest_common.sh@10 -- # set +x 00:15:18.250 [2024-04-24 15:19:27.311685] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
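The discovery test is configured in two halves: the target (the nvmf_tgt started inside the namespace, answering on the default /var/tmp/spdk.sock) gets a TCP transport, a listener on the well-known discovery port 8009 and two null bdevs, and a second nvmf_tgt instance then acts as the host, running the discovery service over /tmp/host.sock. A condensed sketch of the same configuration, with the RPC names and arguments taken from the trace (the shell variables and the backgrounding are illustrative; the rpc_cmd wrapper in the scripts resolves to plain rpc.py calls like these):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"
    HOST_SOCK=/tmp/host.sock
    DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
    HOST_NQN=nqn.2021-12.io.spdk:test

    # Target side (default RPC socket /var/tmp/spdk.sock).
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" nvmf_subsystem_add_listener "$DISCOVERY_NQN" -t tcp -a 10.0.0.2 -s 8009
    "$RPC" bdev_null_create null0 1000 512
    "$RPC" bdev_null_create null1 1000 512
    "$RPC" bdev_wait_for_examine

    # Host side: a second nvmf_tgt on core 0 with its own RPC socket.
    "$SPDK_DIR/build/bin/nvmf_tgt" -m 0x1 -r "$HOST_SOCK" &
    hostpid=$!

    # Once the host app is listening, start the discovery service against the
    # target's discovery subsystem and inspect what it has found so far.
    "$RPC" -s "$HOST_SOCK" log_set_flag bdev_nvme
    "$RPC" -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q "$HOST_NQN"
    "$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name'
    "$RPC" -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name'
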
00:15:18.250 [2024-04-24 15:19:27.311811] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73459 ] 00:15:18.250 [2024-04-24 15:19:27.451292] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.510 [2024-04-24 15:19:27.581594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.083 15:19:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:19.083 15:19:28 -- common/autotest_common.sh@850 -- # return 0 00:15:19.083 15:19:28 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:19.083 15:19:28 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:19.083 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.083 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.083 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.084 15:19:28 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:19.084 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.084 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.084 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.084 15:19:28 -- host/discovery.sh@72 -- # notify_id=0 00:15:19.084 15:19:28 -- host/discovery.sh@83 -- # get_subsystem_names 00:15:19.084 15:19:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:19.084 15:19:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:19.084 15:19:28 -- host/discovery.sh@59 -- # sort 00:15:19.084 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.084 15:19:28 -- host/discovery.sh@59 -- # xargs 00:15:19.084 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.344 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.344 15:19:28 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:19.344 15:19:28 -- host/discovery.sh@84 -- # get_bdev_list 00:15:19.344 15:19:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:19.344 15:19:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:19.344 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.344 15:19:28 -- host/discovery.sh@55 -- # sort 00:15:19.344 15:19:28 -- host/discovery.sh@55 -- # xargs 00:15:19.344 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.344 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.344 15:19:28 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:19.344 15:19:28 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:19.344 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.344 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.344 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.344 15:19:28 -- host/discovery.sh@87 -- # get_subsystem_names 00:15:19.344 15:19:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:19.344 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.344 15:19:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:19.344 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.344 15:19:28 -- host/discovery.sh@59 
-- # sort 00:15:19.344 15:19:28 -- host/discovery.sh@59 -- # xargs 00:15:19.344 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.344 15:19:28 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:19.344 15:19:28 -- host/discovery.sh@88 -- # get_bdev_list 00:15:19.344 15:19:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:19.344 15:19:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:19.344 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.344 15:19:28 -- host/discovery.sh@55 -- # sort 00:15:19.344 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.344 15:19:28 -- host/discovery.sh@55 -- # xargs 00:15:19.344 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.344 15:19:28 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:19.344 15:19:28 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:19.344 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.344 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.344 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.344 15:19:28 -- host/discovery.sh@91 -- # get_subsystem_names 00:15:19.344 15:19:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:19.344 15:19:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:19.344 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.344 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.344 15:19:28 -- host/discovery.sh@59 -- # sort 00:15:19.345 15:19:28 -- host/discovery.sh@59 -- # xargs 00:15:19.345 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.603 15:19:28 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:19.603 15:19:28 -- host/discovery.sh@92 -- # get_bdev_list 00:15:19.603 15:19:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:19.603 15:19:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:19.603 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.603 15:19:28 -- host/discovery.sh@55 -- # sort 00:15:19.603 15:19:28 -- host/discovery.sh@55 -- # xargs 00:15:19.603 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.603 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.603 15:19:28 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:19.603 15:19:28 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:19.603 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.603 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.603 [2024-04-24 15:19:28.680309] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.603 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.603 15:19:28 -- host/discovery.sh@97 -- # get_subsystem_names 00:15:19.603 15:19:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:19.603 15:19:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:19.603 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.603 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.603 15:19:28 -- host/discovery.sh@59 -- # sort 00:15:19.603 15:19:28 -- host/discovery.sh@59 -- # xargs 00:15:19.603 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.603 15:19:28 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:15:19.603 15:19:28 
-- host/discovery.sh@98 -- # get_bdev_list 00:15:19.603 15:19:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:19.603 15:19:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:19.603 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.603 15:19:28 -- host/discovery.sh@55 -- # xargs 00:15:19.603 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.603 15:19:28 -- host/discovery.sh@55 -- # sort 00:15:19.603 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.603 15:19:28 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:19.603 15:19:28 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:19.603 15:19:28 -- host/discovery.sh@79 -- # expected_count=0 00:15:19.603 15:19:28 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:19.603 15:19:28 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:19.603 15:19:28 -- common/autotest_common.sh@901 -- # local max=10 00:15:19.603 15:19:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:19.603 15:19:28 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:19.603 15:19:28 -- common/autotest_common.sh@903 -- # get_notification_count 00:15:19.603 15:19:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:19.603 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.603 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.603 15:19:28 -- host/discovery.sh@74 -- # jq '. | length' 00:15:19.603 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.603 15:19:28 -- host/discovery.sh@74 -- # notification_count=0 00:15:19.603 15:19:28 -- host/discovery.sh@75 -- # notify_id=0 00:15:19.603 15:19:28 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:15:19.864 15:19:28 -- common/autotest_common.sh@904 -- # return 0 00:15:19.864 15:19:28 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:19.864 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.864 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.864 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.864 15:19:28 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:19.864 15:19:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:19.864 15:19:28 -- common/autotest_common.sh@901 -- # local max=10 00:15:19.864 15:19:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:19.864 15:19:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:19.864 15:19:28 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:15:19.864 15:19:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:19.864 15:19:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.864 15:19:28 -- host/discovery.sh@59 -- # sort 00:15:19.864 15:19:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.864 15:19:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:19.864 15:19:28 -- host/discovery.sh@59 -- # xargs 00:15:19.864 15:19:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.864 15:19:28 -- common/autotest_common.sh@903 -- 
# [[ '' == \n\v\m\e\0 ]] 00:15:19.864 15:19:28 -- common/autotest_common.sh@906 -- # sleep 1 00:15:20.123 [2024-04-24 15:19:29.322823] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:20.123 [2024-04-24 15:19:29.322875] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:20.123 [2024-04-24 15:19:29.322912] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:20.123 [2024-04-24 15:19:29.328877] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:20.381 [2024-04-24 15:19:29.385309] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:20.382 [2024-04-24 15:19:29.385341] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:20.950 15:19:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:20.950 15:19:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:20.950 15:19:29 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:15:20.950 15:19:29 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:20.950 15:19:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:20.950 15:19:29 -- common/autotest_common.sh@10 -- # set +x 00:15:20.950 15:19:29 -- host/discovery.sh@59 -- # sort 00:15:20.950 15:19:29 -- host/discovery.sh@59 -- # xargs 00:15:20.950 15:19:29 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:20.950 15:19:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:20.950 15:19:29 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.950 15:19:29 -- common/autotest_common.sh@904 -- # return 0 00:15:20.950 15:19:29 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:20.950 15:19:29 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:20.950 15:19:29 -- common/autotest_common.sh@901 -- # local max=10 00:15:20.950 15:19:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:20.950 15:19:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:20.950 15:19:29 -- common/autotest_common.sh@903 -- # get_bdev_list 00:15:20.950 15:19:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:20.950 15:19:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:20.950 15:19:29 -- common/autotest_common.sh@10 -- # set +x 00:15:20.950 15:19:29 -- host/discovery.sh@55 -- # sort 00:15:20.950 15:19:29 -- host/discovery.sh@55 -- # xargs 00:15:20.950 15:19:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:20.950 15:19:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:20.950 15:19:30 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:20.950 15:19:30 -- common/autotest_common.sh@904 -- # return 0 00:15:20.950 15:19:30 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:20.950 15:19:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:20.950 15:19:30 -- common/autotest_common.sh@901 -- # local max=10 00:15:20.950 15:19:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:20.950 15:19:30 
-- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:20.950 15:19:30 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:15:20.950 15:19:30 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:20.950 15:19:30 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:20.950 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:20.950 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:20.950 15:19:30 -- host/discovery.sh@63 -- # sort -n 00:15:20.950 15:19:30 -- host/discovery.sh@63 -- # xargs 00:15:20.950 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:20.950 15:19:30 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:15:20.950 15:19:30 -- common/autotest_common.sh@904 -- # return 0 00:15:20.950 15:19:30 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:20.950 15:19:30 -- host/discovery.sh@79 -- # expected_count=1 00:15:20.950 15:19:30 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:20.950 15:19:30 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:20.950 15:19:30 -- common/autotest_common.sh@901 -- # local max=10 00:15:20.950 15:19:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:20.950 15:19:30 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:20.950 15:19:30 -- common/autotest_common.sh@903 -- # get_notification_count 00:15:20.950 15:19:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:20.950 15:19:30 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:20.950 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:20.950 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:20.950 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:20.950 15:19:30 -- host/discovery.sh@74 -- # notification_count=1 00:15:20.950 15:19:30 -- host/discovery.sh@75 -- # notify_id=1 00:15:20.950 15:19:30 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:15:20.950 15:19:30 -- common/autotest_common.sh@904 -- # return 0 00:15:20.950 15:19:30 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:20.950 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:20.950 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:20.950 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:20.950 15:19:30 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:20.950 15:19:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:20.950 15:19:30 -- common/autotest_common.sh@901 -- # local max=10 00:15:20.950 15:19:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:20.950 15:19:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:20.950 15:19:30 -- common/autotest_common.sh@903 -- # get_bdev_list 00:15:20.950 15:19:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:20.950 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:20.950 15:19:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:20.950 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:20.950 15:19:30 -- host/discovery.sh@55 -- # sort 00:15:20.950 15:19:30 -- host/discovery.sh@55 -- # xargs 00:15:20.950 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.212 15:19:30 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:21.212 15:19:30 -- common/autotest_common.sh@904 -- # return 0 00:15:21.212 15:19:30 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:21.212 15:19:30 -- host/discovery.sh@79 -- # expected_count=1 00:15:21.212 15:19:30 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:21.212 15:19:30 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:21.212 15:19:30 -- common/autotest_common.sh@901 -- # local max=10 00:15:21.212 15:19:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:21.212 15:19:30 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:21.212 15:19:30 -- common/autotest_common.sh@903 -- # get_notification_count 00:15:21.212 15:19:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:21.212 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.212 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:21.212 15:19:30 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:21.212 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.212 15:19:30 -- host/discovery.sh@74 -- # notification_count=1 00:15:21.212 15:19:30 -- host/discovery.sh@75 -- # notify_id=2 00:15:21.212 15:19:30 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:15:21.212 15:19:30 -- common/autotest_common.sh@904 -- # return 0 00:15:21.212 15:19:30 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:21.212 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.212 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:21.212 [2024-04-24 15:19:30.270595] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:21.212 [2024-04-24 15:19:30.271258] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:21.212 [2024-04-24 15:19:30.271300] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:21.212 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.212 15:19:30 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:21.212 15:19:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:21.212 15:19:30 -- common/autotest_common.sh@901 -- # local max=10 00:15:21.212 15:19:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:21.212 15:19:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:21.212 [2024-04-24 15:19:30.277242] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:21.212 15:19:30 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:15:21.212 15:19:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:21.212 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.212 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:21.212 15:19:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:21.212 15:19:30 -- host/discovery.sh@59 -- # sort 00:15:21.212 15:19:30 -- host/discovery.sh@59 -- # xargs 00:15:21.212 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.212 15:19:30 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.212 15:19:30 -- common/autotest_common.sh@904 -- # return 0 00:15:21.212 15:19:30 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:21.212 15:19:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:21.212 15:19:30 -- common/autotest_common.sh@901 -- # local max=10 00:15:21.212 15:19:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:21.212 15:19:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:21.212 15:19:30 -- common/autotest_common.sh@903 -- # get_bdev_list 00:15:21.212 15:19:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:21.212 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.212 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:21.212 15:19:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:21.212 15:19:30 -- host/discovery.sh@55 -- # sort 00:15:21.212 15:19:30 -- host/discovery.sh@55 -- # xargs 00:15:21.212 
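Stripped of the xtrace noise, the discovery flow traced above boils down to a short JSON-RPC sequence: host/discovery.sh starts a discovery controller on the host application (RPC socket /tmp/host.sock) against the target's discovery port 8009, populates the target with a subsystem, namespace, listener and allowed host, and then polls with a bounded retry loop (the waitforcondition helper from autotest_common.sh) until the discovery AER has propagated the change. A minimal sketch of that sequence, using only the RPCs, NQNs and addresses that appear in this run (rpc_cmd is the test wrapper around scripts/rpc.py shown in the trace):

  # Host side: open a discovery controller against the target's discovery service.
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # Target side: create the subsystem that the discovery log page will advertise.
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  # Poll (bounded, in the spirit of waitforcondition) until the host's discovery
  # poller has attached the advertised path as controller "nvme0".
  for ((i = 0; i < 10; i++)); do
      [[ "$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | xargs)" == nvme0 ]] && break
      sleep 1
  done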
[2024-04-24 15:19:30.336554] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:21.212 [2024-04-24 15:19:30.336577] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:21.212 [2024-04-24 15:19:30.336584] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:21.212 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.212 15:19:30 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:21.212 15:19:30 -- common/autotest_common.sh@904 -- # return 0 00:15:21.212 15:19:30 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:21.212 15:19:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:21.212 15:19:30 -- common/autotest_common.sh@901 -- # local max=10 00:15:21.212 15:19:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:21.212 15:19:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:21.212 15:19:30 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:15:21.212 15:19:30 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:21.212 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.212 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:21.212 15:19:30 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:21.212 15:19:30 -- host/discovery.sh@63 -- # sort -n 00:15:21.212 15:19:30 -- host/discovery.sh@63 -- # xargs 00:15:21.212 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.212 15:19:30 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:21.212 15:19:30 -- common/autotest_common.sh@904 -- # return 0 00:15:21.212 15:19:30 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:21.212 15:19:30 -- host/discovery.sh@79 -- # expected_count=0 00:15:21.212 15:19:30 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:21.212 15:19:30 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:21.212 15:19:30 -- common/autotest_common.sh@901 -- # local max=10 00:15:21.212 15:19:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:21.212 15:19:30 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:21.212 15:19:30 -- common/autotest_common.sh@903 -- # get_notification_count 00:15:21.212 15:19:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:21.212 15:19:30 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:21.212 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.212 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:21.212 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.471 15:19:30 -- host/discovery.sh@74 -- # notification_count=0 00:15:21.471 15:19:30 -- host/discovery.sh@75 -- # notify_id=2 00:15:21.471 15:19:30 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:15:21.471 15:19:30 -- common/autotest_common.sh@904 -- # return 0 00:15:21.471 15:19:30 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:21.471 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.471 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:21.471 [2024-04-24 15:19:30.503617] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:21.471 [2024-04-24 15:19:30.503660] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:21.471 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.471 15:19:30 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:21.471 15:19:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:21.471 15:19:30 -- common/autotest_common.sh@901 -- # local max=10 00:15:21.471 15:19:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:21.471 15:19:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:21.471 [2024-04-24 15:19:30.509611] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:21.471 [2024-04-24 15:19:30.509650] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:21.471 [2024-04-24 15:19:30.509763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.471 [2024-04-24 15:19:30.509799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.471 [2024-04-24 15:19:30.509813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.471 [2024-04-24 15:19:30.509822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.471 [2024-04-24 15:19:30.509833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.471 [2024-04-24 15:19:30.509842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.471 [2024-04-24 15:19:30.509853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.471 [2024-04-24 15:19:30.509862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.471 [2024-04-24 15:19:30.509872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4cefa0 is same with the state(5) to be 
set 00:15:21.471 15:19:30 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:15:21.471 15:19:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:21.471 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.471 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:21.471 15:19:30 -- host/discovery.sh@59 -- # sort 00:15:21.471 15:19:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:21.471 15:19:30 -- host/discovery.sh@59 -- # xargs 00:15:21.471 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.471 15:19:30 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.471 15:19:30 -- common/autotest_common.sh@904 -- # return 0 00:15:21.471 15:19:30 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:21.471 15:19:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:21.471 15:19:30 -- common/autotest_common.sh@901 -- # local max=10 00:15:21.471 15:19:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:21.471 15:19:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:21.471 15:19:30 -- common/autotest_common.sh@903 -- # get_bdev_list 00:15:21.471 15:19:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:21.471 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.471 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:21.471 15:19:30 -- host/discovery.sh@55 -- # sort 00:15:21.471 15:19:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:21.471 15:19:30 -- host/discovery.sh@55 -- # xargs 00:15:21.471 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.471 15:19:30 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:21.471 15:19:30 -- common/autotest_common.sh@904 -- # return 0 00:15:21.471 15:19:30 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:21.471 15:19:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:21.471 15:19:30 -- common/autotest_common.sh@901 -- # local max=10 00:15:21.471 15:19:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:21.471 15:19:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:21.471 15:19:30 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:15:21.471 15:19:30 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:21.471 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.471 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:21.471 15:19:30 -- host/discovery.sh@63 -- # xargs 00:15:21.471 15:19:30 -- host/discovery.sh@63 -- # sort -n 00:15:21.471 15:19:30 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:21.471 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.471 15:19:30 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:15:21.471 15:19:30 -- common/autotest_common.sh@904 -- # return 0 00:15:21.471 15:19:30 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:21.471 15:19:30 -- host/discovery.sh@79 -- # expected_count=0 00:15:21.471 15:19:30 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && 
((notification_count == expected_count))' 00:15:21.471 15:19:30 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:21.471 15:19:30 -- common/autotest_common.sh@901 -- # local max=10 00:15:21.471 15:19:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:21.471 15:19:30 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:21.471 15:19:30 -- common/autotest_common.sh@903 -- # get_notification_count 00:15:21.472 15:19:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:21.472 15:19:30 -- host/discovery.sh@74 -- # jq '. | length' 00:15:21.472 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.472 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:21.472 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.472 15:19:30 -- host/discovery.sh@74 -- # notification_count=0 00:15:21.472 15:19:30 -- host/discovery.sh@75 -- # notify_id=2 00:15:21.472 15:19:30 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:15:21.472 15:19:30 -- common/autotest_common.sh@904 -- # return 0 00:15:21.472 15:19:30 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:21.472 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.472 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:21.730 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.730 15:19:30 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:21.730 15:19:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:21.730 15:19:30 -- common/autotest_common.sh@901 -- # local max=10 00:15:21.730 15:19:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:21.730 15:19:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:21.730 15:19:30 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:15:21.730 15:19:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:21.730 15:19:30 -- host/discovery.sh@59 -- # sort 00:15:21.730 15:19:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:21.730 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.730 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:21.730 15:19:30 -- host/discovery.sh@59 -- # xargs 00:15:21.730 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.730 15:19:30 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:15:21.730 15:19:30 -- common/autotest_common.sh@904 -- # return 0 00:15:21.730 15:19:30 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:21.730 15:19:30 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:21.730 15:19:30 -- common/autotest_common.sh@901 -- # local max=10 00:15:21.730 15:19:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:21.730 15:19:30 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:21.730 15:19:30 -- common/autotest_common.sh@903 -- # get_bdev_list 00:15:21.730 15:19:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:21.730 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.730 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:21.730 15:19:30 -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:15:21.730 15:19:30 -- host/discovery.sh@55 -- # xargs 00:15:21.730 15:19:30 -- host/discovery.sh@55 -- # sort 00:15:21.730 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.730 15:19:30 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:15:21.730 15:19:30 -- common/autotest_common.sh@904 -- # return 0 00:15:21.730 15:19:30 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:21.730 15:19:30 -- host/discovery.sh@79 -- # expected_count=2 00:15:21.730 15:19:30 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:21.730 15:19:30 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:21.730 15:19:30 -- common/autotest_common.sh@901 -- # local max=10 00:15:21.730 15:19:30 -- common/autotest_common.sh@902 -- # (( max-- )) 00:15:21.730 15:19:30 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:21.730 15:19:30 -- common/autotest_common.sh@903 -- # get_notification_count 00:15:21.730 15:19:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:21.730 15:19:30 -- host/discovery.sh@74 -- # jq '. | length' 00:15:21.730 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.730 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:21.730 15:19:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.730 15:19:30 -- host/discovery.sh@74 -- # notification_count=2 00:15:21.730 15:19:30 -- host/discovery.sh@75 -- # notify_id=4 00:15:21.730 15:19:30 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:15:21.730 15:19:30 -- common/autotest_common.sh@904 -- # return 0 00:15:21.730 15:19:30 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:21.730 15:19:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.730 15:19:30 -- common/autotest_common.sh@10 -- # set +x 00:15:22.674 [2024-04-24 15:19:31.914592] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:22.674 [2024-04-24 15:19:31.914624] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:22.674 [2024-04-24 15:19:31.914645] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:22.933 [2024-04-24 15:19:31.920629] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:22.933 [2024-04-24 15:19:31.980268] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:22.933 [2024-04-24 15:19:31.980349] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:22.933 15:19:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:22.933 15:19:31 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:22.933 15:19:31 -- common/autotest_common.sh@638 -- # local es=0 00:15:22.933 15:19:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery 
-b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:22.933 15:19:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:15:22.933 15:19:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:22.933 15:19:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:15:22.933 15:19:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:22.933 15:19:31 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:22.933 15:19:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:22.933 15:19:31 -- common/autotest_common.sh@10 -- # set +x 00:15:22.933 request: 00:15:22.933 { 00:15:22.933 "name": "nvme", 00:15:22.933 "trtype": "tcp", 00:15:22.933 "traddr": "10.0.0.2", 00:15:22.933 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:22.933 "adrfam": "ipv4", 00:15:22.933 "trsvcid": "8009", 00:15:22.933 "wait_for_attach": true, 00:15:22.933 "method": "bdev_nvme_start_discovery", 00:15:22.933 "req_id": 1 00:15:22.933 } 00:15:22.933 Got JSON-RPC error response 00:15:22.933 response: 00:15:22.933 { 00:15:22.933 "code": -17, 00:15:22.933 "message": "File exists" 00:15:22.933 } 00:15:22.933 15:19:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:22.933 15:19:31 -- common/autotest_common.sh@641 -- # es=1 00:15:22.933 15:19:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:22.933 15:19:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:22.933 15:19:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:22.933 15:19:32 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:22.933 15:19:32 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:22.933 15:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:22.933 15:19:32 -- common/autotest_common.sh@10 -- # set +x 00:15:22.933 15:19:32 -- host/discovery.sh@67 -- # sort 00:15:22.933 15:19:32 -- host/discovery.sh@67 -- # xargs 00:15:22.933 15:19:32 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:22.933 15:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:22.933 15:19:32 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:22.933 15:19:32 -- host/discovery.sh@146 -- # get_bdev_list 00:15:22.933 15:19:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:22.933 15:19:32 -- host/discovery.sh@55 -- # sort 00:15:22.933 15:19:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:22.933 15:19:32 -- host/discovery.sh@55 -- # xargs 00:15:22.933 15:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:22.933 15:19:32 -- common/autotest_common.sh@10 -- # set +x 00:15:22.933 15:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:22.933 15:19:32 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:22.933 15:19:32 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:22.933 15:19:32 -- common/autotest_common.sh@638 -- # local es=0 00:15:22.933 15:19:32 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:22.933 15:19:32 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:15:22.933 15:19:32 -- common/autotest_common.sh@630 -- # case "$(type 
-t "$arg")" in 00:15:22.933 15:19:32 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:15:22.933 15:19:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:22.933 15:19:32 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:22.933 15:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:22.933 15:19:32 -- common/autotest_common.sh@10 -- # set +x 00:15:22.933 request: 00:15:22.933 { 00:15:22.933 "name": "nvme_second", 00:15:22.933 "trtype": "tcp", 00:15:22.933 "traddr": "10.0.0.2", 00:15:22.933 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:22.933 "adrfam": "ipv4", 00:15:22.933 "trsvcid": "8009", 00:15:22.933 "wait_for_attach": true, 00:15:22.933 "method": "bdev_nvme_start_discovery", 00:15:22.933 "req_id": 1 00:15:22.933 } 00:15:22.933 Got JSON-RPC error response 00:15:22.933 response: 00:15:22.933 { 00:15:22.933 "code": -17, 00:15:22.933 "message": "File exists" 00:15:22.933 } 00:15:22.933 15:19:32 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:22.933 15:19:32 -- common/autotest_common.sh@641 -- # es=1 00:15:22.933 15:19:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:22.933 15:19:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:22.933 15:19:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:22.933 15:19:32 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:22.933 15:19:32 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:22.933 15:19:32 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:22.933 15:19:32 -- host/discovery.sh@67 -- # sort 00:15:22.933 15:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:22.933 15:19:32 -- common/autotest_common.sh@10 -- # set +x 00:15:22.933 15:19:32 -- host/discovery.sh@67 -- # xargs 00:15:22.933 15:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:23.192 15:19:32 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:23.192 15:19:32 -- host/discovery.sh@152 -- # get_bdev_list 00:15:23.192 15:19:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:23.192 15:19:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:23.192 15:19:32 -- host/discovery.sh@55 -- # xargs 00:15:23.192 15:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:23.192 15:19:32 -- host/discovery.sh@55 -- # sort 00:15:23.192 15:19:32 -- common/autotest_common.sh@10 -- # set +x 00:15:23.192 15:19:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:23.192 15:19:32 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:23.192 15:19:32 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:23.192 15:19:32 -- common/autotest_common.sh@638 -- # local es=0 00:15:23.192 15:19:32 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:23.192 15:19:32 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:15:23.192 15:19:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:23.192 15:19:32 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:15:23.192 15:19:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:23.192 15:19:32 
-- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:23.192 15:19:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:23.192 15:19:32 -- common/autotest_common.sh@10 -- # set +x 00:15:24.205 [2024-04-24 15:19:33.250120] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:24.205 [2024-04-24 15:19:33.250270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:24.205 [2024-04-24 15:19:33.250315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:24.205 [2024-04-24 15:19:33.250331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4c8d10 with addr=10.0.0.2, port=8010 00:15:24.205 [2024-04-24 15:19:33.250354] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:24.205 [2024-04-24 15:19:33.250364] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:24.205 [2024-04-24 15:19:33.250374] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:25.140 [2024-04-24 15:19:34.250296] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:25.140 [2024-04-24 15:19:34.250531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:25.140 [2024-04-24 15:19:34.250583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:25.140 [2024-04-24 15:19:34.250601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x55e290 with addr=10.0.0.2, port=8010 00:15:25.140 [2024-04-24 15:19:34.250635] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:25.140 [2024-04-24 15:19:34.250648] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:25.140 [2024-04-24 15:19:34.250662] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:26.073 [2024-04-24 15:19:35.249946] bdev_nvme.c:6949:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:15:26.073 request: 00:15:26.073 { 00:15:26.073 "name": "nvme_second", 00:15:26.073 "trtype": "tcp", 00:15:26.073 "traddr": "10.0.0.2", 00:15:26.073 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:26.073 "adrfam": "ipv4", 00:15:26.073 "trsvcid": "8010", 00:15:26.073 "attach_timeout_ms": 3000, 00:15:26.073 "method": "bdev_nvme_start_discovery", 00:15:26.073 "req_id": 1 00:15:26.073 } 00:15:26.073 Got JSON-RPC error response 00:15:26.073 response: 00:15:26.073 { 00:15:26.073 "code": -110, 00:15:26.073 "message": "Connection timed out" 00:15:26.073 } 00:15:26.073 15:19:35 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:26.073 15:19:35 -- common/autotest_common.sh@641 -- # es=1 00:15:26.073 15:19:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:26.073 15:19:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:26.073 15:19:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:26.073 15:19:35 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:26.073 15:19:35 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:26.073 15:19:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.073 15:19:35 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:26.073 15:19:35 -- common/autotest_common.sh@10 -- # set +x 00:15:26.073 15:19:35 
-- host/discovery.sh@67 -- # xargs 00:15:26.073 15:19:35 -- host/discovery.sh@67 -- # sort 00:15:26.073 15:19:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.073 15:19:35 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:26.073 15:19:35 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:26.073 15:19:35 -- host/discovery.sh@161 -- # kill 73459 00:15:26.073 15:19:35 -- host/discovery.sh@162 -- # nvmftestfini 00:15:26.073 15:19:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:26.073 15:19:35 -- nvmf/common.sh@117 -- # sync 00:15:26.332 15:19:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:26.332 15:19:35 -- nvmf/common.sh@120 -- # set +e 00:15:26.332 15:19:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:26.332 15:19:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:26.332 rmmod nvme_tcp 00:15:26.332 rmmod nvme_fabrics 00:15:26.332 rmmod nvme_keyring 00:15:26.332 15:19:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:26.332 15:19:35 -- nvmf/common.sh@124 -- # set -e 00:15:26.332 15:19:35 -- nvmf/common.sh@125 -- # return 0 00:15:26.332 15:19:35 -- nvmf/common.sh@478 -- # '[' -n 73427 ']' 00:15:26.332 15:19:35 -- nvmf/common.sh@479 -- # killprocess 73427 00:15:26.332 15:19:35 -- common/autotest_common.sh@936 -- # '[' -z 73427 ']' 00:15:26.332 15:19:35 -- common/autotest_common.sh@940 -- # kill -0 73427 00:15:26.332 15:19:35 -- common/autotest_common.sh@941 -- # uname 00:15:26.332 15:19:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:26.332 15:19:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73427 00:15:26.332 killing process with pid 73427 00:15:26.332 15:19:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:26.332 15:19:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:26.332 15:19:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73427' 00:15:26.332 15:19:35 -- common/autotest_common.sh@955 -- # kill 73427 00:15:26.332 15:19:35 -- common/autotest_common.sh@960 -- # wait 73427 00:15:26.590 15:19:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:26.590 15:19:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:26.590 15:19:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:26.590 15:19:35 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:26.590 15:19:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:26.590 15:19:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.590 15:19:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.590 15:19:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.590 15:19:35 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:26.590 00:15:26.590 real 0m10.100s 00:15:26.590 user 0m19.460s 00:15:26.590 sys 0m1.982s 00:15:26.590 ************************************ 00:15:26.590 END TEST nvmf_discovery 00:15:26.590 ************************************ 00:15:26.590 15:19:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:26.590 15:19:35 -- common/autotest_common.sh@10 -- # set +x 00:15:26.590 15:19:35 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:26.590 15:19:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:26.590 15:19:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:26.590 15:19:35 -- common/autotest_common.sh@10 -- # set +x 00:15:26.848 
************************************ 00:15:26.848 START TEST nvmf_discovery_remove_ifc 00:15:26.848 ************************************ 00:15:26.848 15:19:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:26.848 * Looking for test storage... 00:15:26.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:26.848 15:19:35 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:26.848 15:19:35 -- nvmf/common.sh@7 -- # uname -s 00:15:26.848 15:19:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.848 15:19:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.848 15:19:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.848 15:19:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.848 15:19:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.848 15:19:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.848 15:19:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.848 15:19:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.848 15:19:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.848 15:19:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.848 15:19:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:15:26.848 15:19:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:15:26.848 15:19:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.848 15:19:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.848 15:19:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:26.848 15:19:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.848 15:19:35 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:26.848 15:19:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.848 15:19:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.848 15:19:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.848 15:19:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.848 15:19:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.849 15:19:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.849 15:19:35 -- paths/export.sh@5 -- # export PATH 00:15:26.849 15:19:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.849 15:19:35 -- nvmf/common.sh@47 -- # : 0 00:15:26.849 15:19:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:26.849 15:19:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:26.849 15:19:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.849 15:19:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.849 15:19:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.849 15:19:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:26.849 15:19:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:26.849 15:19:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:26.849 15:19:35 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:15:26.849 15:19:35 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:15:26.849 15:19:35 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:15:26.849 15:19:35 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:26.849 15:19:35 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:15:26.849 15:19:35 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:15:26.849 15:19:35 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:15:26.849 15:19:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:26.849 15:19:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.849 15:19:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:26.849 15:19:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:26.849 15:19:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:26.849 15:19:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.849 15:19:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.849 15:19:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.849 15:19:35 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:26.849 15:19:35 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:26.849 15:19:35 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:26.849 15:19:35 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:26.849 15:19:35 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:26.849 15:19:35 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:26.849 15:19:35 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:26.849 15:19:35 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:26.849 15:19:35 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:26.849 15:19:35 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:26.849 15:19:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:26.849 15:19:35 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:26.849 15:19:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:26.849 15:19:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:26.849 15:19:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:26.849 15:19:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:26.849 15:19:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:26.849 15:19:35 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:26.849 15:19:35 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:26.849 15:19:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:26.849 Cannot find device "nvmf_tgt_br" 00:15:26.849 15:19:36 -- nvmf/common.sh@155 -- # true 00:15:26.849 15:19:36 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:26.849 Cannot find device "nvmf_tgt_br2" 00:15:26.849 15:19:36 -- nvmf/common.sh@156 -- # true 00:15:26.849 15:19:36 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:26.849 15:19:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:26.849 Cannot find device "nvmf_tgt_br" 00:15:26.849 15:19:36 -- nvmf/common.sh@158 -- # true 00:15:26.849 15:19:36 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:26.849 Cannot find device "nvmf_tgt_br2" 00:15:26.849 15:19:36 -- nvmf/common.sh@159 -- # true 00:15:26.849 15:19:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:26.849 15:19:36 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:27.107 15:19:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.107 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.107 15:19:36 -- nvmf/common.sh@162 -- # true 00:15:27.107 15:19:36 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.107 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.107 15:19:36 -- nvmf/common.sh@163 -- # true 00:15:27.107 15:19:36 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:27.107 15:19:36 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:27.107 15:19:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:27.107 15:19:36 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:27.107 15:19:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:27.107 15:19:36 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:27.107 15:19:36 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:27.107 15:19:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:27.107 15:19:36 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:27.107 15:19:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:27.107 15:19:36 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:27.107 15:19:36 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:27.107 15:19:36 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:27.107 15:19:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:27.107 15:19:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:27.107 15:19:36 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:27.107 15:19:36 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:27.107 15:19:36 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:27.107 15:19:36 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:27.107 15:19:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:27.107 15:19:36 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:27.107 15:19:36 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:27.107 15:19:36 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:27.107 15:19:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:27.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:27.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:15:27.107 00:15:27.107 --- 10.0.0.2 ping statistics --- 00:15:27.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.107 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:15:27.107 15:19:36 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:27.107 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:27.107 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:15:27.107 00:15:27.107 --- 10.0.0.3 ping statistics --- 00:15:27.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.108 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:27.108 15:19:36 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:27.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:27.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:15:27.108 00:15:27.108 --- 10.0.0.1 ping statistics --- 00:15:27.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.108 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:27.108 15:19:36 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:27.108 15:19:36 -- nvmf/common.sh@422 -- # return 0 00:15:27.108 15:19:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:27.108 15:19:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:27.108 15:19:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:27.108 15:19:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:27.108 15:19:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:27.108 15:19:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:27.108 15:19:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:27.108 15:19:36 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:15:27.108 15:19:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:27.108 15:19:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:27.108 15:19:36 -- common/autotest_common.sh@10 -- # set +x 00:15:27.108 15:19:36 -- nvmf/common.sh@470 -- # nvmfpid=73925 00:15:27.108 15:19:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:27.108 15:19:36 -- nvmf/common.sh@471 -- # waitforlisten 73925 00:15:27.108 15:19:36 -- common/autotest_common.sh@817 -- # '[' -z 73925 ']' 00:15:27.108 15:19:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.108 15:19:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:27.108 15:19:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.108 15:19:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:27.108 15:19:36 -- common/autotest_common.sh@10 -- # set +x 00:15:27.366 [2024-04-24 15:19:36.379245] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:15:27.366 [2024-04-24 15:19:36.379347] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.366 [2024-04-24 15:19:36.515376] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.624 [2024-04-24 15:19:36.628851] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.624 [2024-04-24 15:19:36.628925] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.624 [2024-04-24 15:19:36.628936] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.624 [2024-04-24 15:19:36.628945] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.624 [2024-04-24 15:19:36.628952] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
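Note: the nvmf_veth_init trace above is the topology the rest of this run relies on: the target side lives in the nvmf_tgt_ns_spdk network namespace on 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and every host-side veth peer is enslaved to the nvmf_br bridge with TCP port 4420 opened in iptables. Condensed to the commands actually visible in the trace (a sketch of what nvmf/common.sh does here, not a drop-in replacement for it):

# target-side interfaces move into a dedicated namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addressing: initiator 10.0.0.1, target 10.0.0.2 / 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the host-side peers together and open the NVMe/TCP port
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # connectivity check, as in the ping output above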
00:15:27.624 [2024-04-24 15:19:36.628987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.191 15:19:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:28.191 15:19:37 -- common/autotest_common.sh@850 -- # return 0 00:15:28.191 15:19:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:28.191 15:19:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:28.191 15:19:37 -- common/autotest_common.sh@10 -- # set +x 00:15:28.191 15:19:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.191 15:19:37 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:28.191 15:19:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:28.191 15:19:37 -- common/autotest_common.sh@10 -- # set +x 00:15:28.191 [2024-04-24 15:19:37.343314] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.191 [2024-04-24 15:19:37.351451] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:28.191 null0 00:15:28.191 [2024-04-24 15:19:37.383366] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.191 15:19:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:28.191 15:19:37 -- host/discovery_remove_ifc.sh@59 -- # hostpid=73956 00:15:28.191 15:19:37 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 73956 /tmp/host.sock 00:15:28.191 15:19:37 -- common/autotest_common.sh@817 -- # '[' -z 73956 ']' 00:15:28.191 15:19:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:15:28.191 15:19:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:28.191 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:28.191 15:19:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:28.191 15:19:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:28.191 15:19:37 -- common/autotest_common.sh@10 -- # set +x 00:15:28.191 15:19:37 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:28.449 [2024-04-24 15:19:37.451585] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:15:28.449 [2024-04-24 15:19:37.451670] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73956 ] 00:15:28.449 [2024-04-24 15:19:37.585022] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.707 [2024-04-24 15:19:37.704744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.276 15:19:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:29.276 15:19:38 -- common/autotest_common.sh@850 -- # return 0 00:15:29.276 15:19:38 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:29.276 15:19:38 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:29.276 15:19:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:29.276 15:19:38 -- common/autotest_common.sh@10 -- # set +x 00:15:29.276 15:19:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:29.276 15:19:38 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:29.276 15:19:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:29.276 15:19:38 -- common/autotest_common.sh@10 -- # set +x 00:15:29.535 15:19:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:29.535 15:19:38 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:29.535 15:19:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:29.535 15:19:38 -- common/autotest_common.sh@10 -- # set +x 00:15:30.469 [2024-04-24 15:19:39.566479] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:30.469 [2024-04-24 15:19:39.566526] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:30.469 [2024-04-24 15:19:39.566546] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:30.469 [2024-04-24 15:19:39.572548] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:30.469 [2024-04-24 15:19:39.628977] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:30.469 [2024-04-24 15:19:39.629057] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:30.469 [2024-04-24 15:19:39.629087] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:30.469 [2024-04-24 15:19:39.629104] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:30.469 [2024-04-24 15:19:39.629132] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:30.469 15:19:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.469 15:19:39 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:30.469 15:19:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:30.469 15:19:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:30.469 15:19:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:30.469 15:19:39 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:30.469 15:19:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:30.469 15:19:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:30.469 [2024-04-24 15:19:39.635248] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf59090 was disconnected and freed. delete nvme_qpair. 00:15:30.469 15:19:39 -- common/autotest_common.sh@10 -- # set +x 00:15:30.469 15:19:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.469 15:19:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:30.469 15:19:39 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:15:30.469 15:19:39 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:30.469 15:19:39 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:30.469 15:19:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:30.469 15:19:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:30.469 15:19:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:30.469 15:19:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:30.469 15:19:39 -- common/autotest_common.sh@10 -- # set +x 00:15:30.469 15:19:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:30.469 15:19:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:30.728 15:19:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.728 15:19:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:30.728 15:19:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:31.705 15:19:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:31.705 15:19:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:31.705 15:19:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:31.705 15:19:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:31.705 15:19:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:31.705 15:19:40 -- common/autotest_common.sh@10 -- # set +x 00:15:31.705 15:19:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:31.705 15:19:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:31.705 15:19:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:31.705 15:19:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:32.670 15:19:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:32.670 15:19:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:32.670 15:19:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:32.670 15:19:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:32.670 15:19:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:32.670 15:19:41 -- common/autotest_common.sh@10 -- # set +x 00:15:32.670 15:19:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:32.670 15:19:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:32.670 15:19:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:32.670 15:19:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:34.049 15:19:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:34.049 15:19:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.049 15:19:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.049 15:19:42 -- common/autotest_common.sh@10 -- # set +x 
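Note: the alternating rpc_cmd bdev_get_bdevs / sleep 1 entries around this point are the test's polling helpers (referenced as host/discovery_remove_ifc.sh@29-@34 in the trace): get_bdev_list flattens the bdev names into one sorted line, and wait_for_bdev spins until that line matches the expected value, first "nvme0n1" after the discovery attach and then the empty string once the target interface has been taken down. A simplified sketch of the same idea; the trace's rpc_cmd wrapper is shown as a direct scripts/rpc.py call, which is an assumption about the wrapper, and the real helpers are not reproduced verbatim:

get_bdev_list() {
    # names of all bdevs on the host app listening on /tmp/host.sock, one sorted line
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1   # after bdev_nvme_start_discovery has attached the namespace
wait_for_bdev ''        # after the target interface has been removed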
00:15:34.049 15:19:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:34.049 15:19:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:34.049 15:19:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:34.049 15:19:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.049 15:19:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:34.049 15:19:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:34.995 15:19:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:34.995 15:19:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.995 15:19:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.995 15:19:43 -- common/autotest_common.sh@10 -- # set +x 00:15:34.995 15:19:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:34.995 15:19:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:34.995 15:19:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:34.995 15:19:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.995 15:19:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:34.995 15:19:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:35.929 15:19:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:35.929 15:19:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.929 15:19:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:35.930 15:19:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:35.930 15:19:45 -- common/autotest_common.sh@10 -- # set +x 00:15:35.930 15:19:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:35.930 15:19:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:35.930 15:19:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:35.930 [2024-04-24 15:19:45.056406] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:35.930 [2024-04-24 15:19:45.056486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.930 [2024-04-24 15:19:45.056502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.930 [2024-04-24 15:19:45.056516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.930 [2024-04-24 15:19:45.056526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.930 [2024-04-24 15:19:45.056544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.930 [2024-04-24 15:19:45.056555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.930 [2024-04-24 15:19:45.056565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.930 [2024-04-24 15:19:45.056575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.930 [2024-04-24 15:19:45.056586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:35.930 [2024-04-24 15:19:45.056595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.930 [2024-04-24 15:19:45.056605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7f70 is same with the state(5) to be set 00:15:35.930 15:19:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:35.930 15:19:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:35.930 [2024-04-24 15:19:45.066399] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7f70 (9): Bad file descriptor 00:15:35.930 [2024-04-24 15:19:45.076426] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:36.864 15:19:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:36.864 15:19:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:36.864 15:19:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.864 15:19:46 -- common/autotest_common.sh@10 -- # set +x 00:15:36.864 15:19:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:36.864 15:19:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:36.864 15:19:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:36.864 [2024-04-24 15:19:46.080558] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:38.236 [2024-04-24 15:19:47.104511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:15:39.171 [2024-04-24 15:19:48.128591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:15:39.171 [2024-04-24 15:19:48.128731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7f70 with addr=10.0.0.2, port=4420 00:15:39.171 [2024-04-24 15:19:48.128766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7f70 is same with the state(5) to be set 00:15:39.171 [2024-04-24 15:19:48.129692] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7f70 (9): Bad file descriptor 00:15:39.171 [2024-04-24 15:19:48.129776] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
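Note: the connect() errno 110 (ETIMEDOUT) messages and the failed reset above are the point of this test: host/discovery_remove_ifc.sh@75-@76 deleted 10.0.0.2/24 from nvmf_tgt_if and brought the interface down, so every reconnect attempt to 10.0.0.2:4420 times out until the controller-loss timeout expires and bdev_nvme gives up on the controller, which is what will empty the bdev list below. The timeouts driving that behaviour were set when the discovery service was attached earlier in this trace; the same call written as a direct scripts/rpc.py invocation (an assumption, the trace uses the rpc_cmd wrapper):

scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach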
00:15:39.171 [2024-04-24 15:19:48.129830] bdev_nvme.c:6657:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:15:39.171 [2024-04-24 15:19:48.129898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.171 [2024-04-24 15:19:48.129934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-04-24 15:19:48.129960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.171 [2024-04-24 15:19:48.129981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-04-24 15:19:48.130003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.171 [2024-04-24 15:19:48.130023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-04-24 15:19:48.130045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.171 [2024-04-24 15:19:48.130066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-04-24 15:19:48.130089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.171 [2024-04-24 15:19:48.130109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.171 [2024-04-24 15:19:48.130130] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
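Note: once the controller reaches the failed state above, bdev_nvme drops the discovery entry and the nvme0n1 bdev, so get_bdev_list goes empty. The test then restores the target address and expects the still-running discovery service to attach a fresh controller, which shows up as nvme1/nvme1n1 further down. The restore step, as it appears verbatim in the trace that follows:

ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# then poll until the bdev list reads nvme1n1 again (wait_for_bdev nvme1n1)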
00:15:39.171 [2024-04-24 15:19:48.130190] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7830 (9): Bad file descriptor 00:15:39.171 [2024-04-24 15:19:48.131197] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:39.171 [2024-04-24 15:19:48.131267] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:15:39.171 15:19:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.171 15:19:48 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:39.171 15:19:48 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:40.105 15:19:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:40.105 15:19:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:40.105 15:19:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:40.105 15:19:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:40.105 15:19:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.105 15:19:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:40.105 15:19:49 -- common/autotest_common.sh@10 -- # set +x 00:15:40.105 15:19:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.105 15:19:49 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:40.105 15:19:49 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:40.105 15:19:49 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:40.105 15:19:49 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:40.105 15:19:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:40.105 15:19:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:40.105 15:19:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.105 15:19:49 -- common/autotest_common.sh@10 -- # set +x 00:15:40.105 15:19:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:40.105 15:19:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:40.105 15:19:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:40.105 15:19:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.105 15:19:49 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:40.105 15:19:49 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:41.040 [2024-04-24 15:19:50.142502] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:41.040 [2024-04-24 15:19:50.142552] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:41.040 [2024-04-24 15:19:50.142601] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:41.040 [2024-04-24 15:19:50.148547] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:15:41.040 [2024-04-24 15:19:50.203930] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:41.040 [2024-04-24 15:19:50.204001] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:41.040 [2024-04-24 15:19:50.204025] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:41.040 [2024-04-24 15:19:50.204040] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:15:41.040 [2024-04-24 15:19:50.204049] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:41.040 [2024-04-24 15:19:50.210851] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf665a0 was disconnected and freed. delete nvme_qpair. 00:15:41.299 15:19:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:41.299 15:19:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:41.299 15:19:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:41.299 15:19:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:41.299 15:19:50 -- common/autotest_common.sh@10 -- # set +x 00:15:41.299 15:19:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:41.299 15:19:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:41.299 15:19:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:41.299 15:19:50 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:15:41.299 15:19:50 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:15:41.299 15:19:50 -- host/discovery_remove_ifc.sh@90 -- # killprocess 73956 00:15:41.299 15:19:50 -- common/autotest_common.sh@936 -- # '[' -z 73956 ']' 00:15:41.299 15:19:50 -- common/autotest_common.sh@940 -- # kill -0 73956 00:15:41.299 15:19:50 -- common/autotest_common.sh@941 -- # uname 00:15:41.299 15:19:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:41.299 15:19:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73956 00:15:41.299 killing process with pid 73956 00:15:41.299 15:19:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:41.299 15:19:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:41.299 15:19:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73956' 00:15:41.299 15:19:50 -- common/autotest_common.sh@955 -- # kill 73956 00:15:41.299 15:19:50 -- common/autotest_common.sh@960 -- # wait 73956 00:15:41.558 15:19:50 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:15:41.558 15:19:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:41.558 15:19:50 -- nvmf/common.sh@117 -- # sync 00:15:41.558 15:19:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:41.558 15:19:50 -- nvmf/common.sh@120 -- # set +e 00:15:41.558 15:19:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:41.558 15:19:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:41.558 rmmod nvme_tcp 00:15:41.558 rmmod nvme_fabrics 00:15:41.558 rmmod nvme_keyring 00:15:41.558 15:19:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:41.558 15:19:50 -- nvmf/common.sh@124 -- # set -e 00:15:41.558 15:19:50 -- nvmf/common.sh@125 -- # return 0 00:15:41.558 15:19:50 -- nvmf/common.sh@478 -- # '[' -n 73925 ']' 00:15:41.558 15:19:50 -- nvmf/common.sh@479 -- # killprocess 73925 00:15:41.558 15:19:50 -- common/autotest_common.sh@936 -- # '[' -z 73925 ']' 00:15:41.558 15:19:50 -- common/autotest_common.sh@940 -- # kill -0 73925 00:15:41.558 15:19:50 -- common/autotest_common.sh@941 -- # uname 00:15:41.558 15:19:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:41.558 15:19:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73925 00:15:41.558 killing process with pid 73925 00:15:41.558 15:19:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:41.558 15:19:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
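Note: everything from the trap reset above to the end of this test is cleanup: the host application (pid 73956, the /tmp/host.sock process) is killed and reaped, and nvmftestfini unloads the kernel NVMe/TCP modules, stops the in-namespace target (pid 73925) and flushes the initiator address. Condensed from the trace, with the literal pids replaced by the variables the scripts use (a sketch, not the exact helper bodies):

kill "$hostpid" && wait "$hostpid"   # hostpid=73956 in this run
sync
modprobe -v -r nvme-tcp              # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as logged
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # nvmfpid=73925, the nvmf_tgt inside nvmf_tgt_ns_spdk
ip -4 addr flush nvmf_init_if        # nvmf_tcp_fini, followed by removal of the spdk netns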
00:15:41.558 15:19:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73925' 00:15:41.558 15:19:50 -- common/autotest_common.sh@955 -- # kill 73925 00:15:41.558 15:19:50 -- common/autotest_common.sh@960 -- # wait 73925 00:15:41.816 15:19:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:41.816 15:19:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:41.816 15:19:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:41.816 15:19:50 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.816 15:19:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.816 15:19:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.816 15:19:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.816 15:19:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.816 15:19:51 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:41.816 00:15:41.816 real 0m15.185s 00:15:41.816 user 0m24.324s 00:15:41.816 sys 0m2.589s 00:15:41.816 15:19:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:41.816 ************************************ 00:15:41.816 END TEST nvmf_discovery_remove_ifc 00:15:41.816 15:19:51 -- common/autotest_common.sh@10 -- # set +x 00:15:41.816 ************************************ 00:15:42.076 15:19:51 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:42.076 15:19:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:42.076 15:19:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:42.076 15:19:51 -- common/autotest_common.sh@10 -- # set +x 00:15:42.076 ************************************ 00:15:42.076 START TEST nvmf_identify_kernel_target 00:15:42.076 ************************************ 00:15:42.076 15:19:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:42.076 * Looking for test storage... 
00:15:42.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:42.076 15:19:51 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:42.076 15:19:51 -- nvmf/common.sh@7 -- # uname -s 00:15:42.076 15:19:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.076 15:19:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.076 15:19:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.076 15:19:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.076 15:19:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.076 15:19:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.076 15:19:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.076 15:19:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.076 15:19:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.076 15:19:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.076 15:19:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:15:42.076 15:19:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:15:42.076 15:19:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.076 15:19:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.076 15:19:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:42.076 15:19:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.076 15:19:51 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:42.076 15:19:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.076 15:19:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.076 15:19:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.076 15:19:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.076 15:19:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.076 15:19:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.076 15:19:51 -- paths/export.sh@5 -- # export PATH 00:15:42.076 15:19:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.076 15:19:51 -- nvmf/common.sh@47 -- # : 0 00:15:42.076 15:19:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:42.076 15:19:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:42.076 15:19:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.076 15:19:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.076 15:19:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.076 15:19:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:42.076 15:19:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:42.076 15:19:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:42.076 15:19:51 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:15:42.076 15:19:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:42.076 15:19:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.076 15:19:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:42.076 15:19:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:42.076 15:19:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:42.076 15:19:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.076 15:19:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.076 15:19:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.076 15:19:51 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:42.076 15:19:51 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:42.076 15:19:51 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:42.076 15:19:51 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:42.076 15:19:51 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:42.076 15:19:51 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:42.076 15:19:51 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.076 15:19:51 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.076 15:19:51 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:42.076 15:19:51 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:42.076 15:19:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:42.076 15:19:51 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:42.076 15:19:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:42.076 15:19:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:15:42.076 15:19:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:42.076 15:19:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:42.076 15:19:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:42.076 15:19:51 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:42.076 15:19:51 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:42.076 15:19:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:42.076 Cannot find device "nvmf_tgt_br" 00:15:42.076 15:19:51 -- nvmf/common.sh@155 -- # true 00:15:42.076 15:19:51 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:42.076 Cannot find device "nvmf_tgt_br2" 00:15:42.076 15:19:51 -- nvmf/common.sh@156 -- # true 00:15:42.076 15:19:51 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:42.334 15:19:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:42.334 Cannot find device "nvmf_tgt_br" 00:15:42.334 15:19:51 -- nvmf/common.sh@158 -- # true 00:15:42.334 15:19:51 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:42.334 Cannot find device "nvmf_tgt_br2" 00:15:42.334 15:19:51 -- nvmf/common.sh@159 -- # true 00:15:42.334 15:19:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:42.334 15:19:51 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:42.334 15:19:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:42.334 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.334 15:19:51 -- nvmf/common.sh@162 -- # true 00:15:42.334 15:19:51 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:42.334 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.334 15:19:51 -- nvmf/common.sh@163 -- # true 00:15:42.334 15:19:51 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:42.334 15:19:51 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:42.334 15:19:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:42.334 15:19:51 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:42.334 15:19:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:42.334 15:19:51 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:42.334 15:19:51 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:42.334 15:19:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:42.334 15:19:51 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:42.334 15:19:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:42.334 15:19:51 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:42.334 15:19:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:42.334 15:19:51 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:42.334 15:19:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:42.334 15:19:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:42.334 15:19:51 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:42.334 15:19:51 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:42.334 15:19:51 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:42.334 15:19:51 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:42.334 15:19:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:42.334 15:19:51 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:42.334 15:19:51 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:42.592 15:19:51 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:42.592 15:19:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:42.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:15:42.592 00:15:42.592 --- 10.0.0.2 ping statistics --- 00:15:42.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.592 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:15:42.592 15:19:51 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:42.592 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:42.592 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:15:42.592 00:15:42.592 --- 10.0.0.3 ping statistics --- 00:15:42.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.592 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:42.592 15:19:51 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:42.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:42.592 00:15:42.592 --- 10.0.0.1 ping statistics --- 00:15:42.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.592 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:42.592 15:19:51 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.592 15:19:51 -- nvmf/common.sh@422 -- # return 0 00:15:42.592 15:19:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:42.592 15:19:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.592 15:19:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:42.592 15:19:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:42.592 15:19:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.593 15:19:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:42.593 15:19:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:42.593 15:19:51 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:15:42.593 15:19:51 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:15:42.593 15:19:51 -- nvmf/common.sh@717 -- # local ip 00:15:42.593 15:19:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:42.593 15:19:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:42.593 15:19:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:42.593 15:19:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:42.593 15:19:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:42.593 15:19:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:42.593 15:19:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:42.593 15:19:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:42.593 15:19:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:42.593 15:19:51 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:15:42.593 15:19:51 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:15:42.593 15:19:51 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:15:42.593 15:19:51 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:15:42.593 15:19:51 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:42.593 15:19:51 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:42.593 15:19:51 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:15:42.593 15:19:51 -- nvmf/common.sh@628 -- # local block nvme 00:15:42.593 15:19:51 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:15:42.593 15:19:51 -- nvmf/common.sh@631 -- # modprobe nvmet 00:15:42.593 15:19:51 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:15:42.593 15:19:51 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:42.851 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:42.851 Waiting for block devices as requested 00:15:42.851 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:43.155 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:43.155 15:19:52 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:15:43.155 15:19:52 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:43.155 15:19:52 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:15:43.155 15:19:52 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:15:43.155 15:19:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:43.155 15:19:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:43.155 15:19:52 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:15:43.155 15:19:52 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:15:43.155 15:19:52 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:43.155 No valid GPT data, bailing 00:15:43.155 15:19:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:43.155 15:19:52 -- scripts/common.sh@391 -- # pt= 00:15:43.155 15:19:52 -- scripts/common.sh@392 -- # return 1 00:15:43.155 15:19:52 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:15:43.155 15:19:52 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:15:43.155 15:19:52 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:43.155 15:19:52 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:15:43.155 15:19:52 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:15:43.155 15:19:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:43.155 15:19:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:43.155 15:19:52 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:15:43.155 15:19:52 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:15:43.155 15:19:52 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:43.155 No valid GPT data, bailing 00:15:43.155 15:19:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:43.155 15:19:52 -- scripts/common.sh@391 -- # pt= 00:15:43.155 15:19:52 -- scripts/common.sh@392 -- # return 1 00:15:43.155 15:19:52 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:15:43.155 15:19:52 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:15:43.155 15:19:52 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:43.155 15:19:52 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:15:43.156 15:19:52 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:15:43.156 15:19:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:43.156 15:19:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:43.156 15:19:52 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:15:43.156 15:19:52 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:15:43.156 15:19:52 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:43.415 No valid GPT data, bailing 00:15:43.415 15:19:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:43.415 15:19:52 -- scripts/common.sh@391 -- # pt= 00:15:43.415 15:19:52 -- scripts/common.sh@392 -- # return 1 00:15:43.415 15:19:52 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:15:43.415 15:19:52 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:15:43.415 15:19:52 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:43.415 15:19:52 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:15:43.415 15:19:52 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:15:43.415 15:19:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:43.415 15:19:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:43.415 15:19:52 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:15:43.415 15:19:52 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:15:43.415 15:19:52 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:43.415 No valid GPT data, bailing 00:15:43.415 15:19:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:43.415 15:19:52 -- scripts/common.sh@391 -- # pt= 00:15:43.415 15:19:52 -- scripts/common.sh@392 -- # return 1 00:15:43.415 15:19:52 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:15:43.415 15:19:52 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:15:43.415 15:19:52 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:43.415 15:19:52 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:43.415 15:19:52 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:15:43.415 15:19:52 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:15:43.415 15:19:52 -- nvmf/common.sh@656 -- # echo 1 00:15:43.415 15:19:52 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:15:43.415 15:19:52 -- nvmf/common.sh@658 -- # echo 1 00:15:43.415 15:19:52 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:15:43.415 15:19:52 -- nvmf/common.sh@661 -- # echo tcp 00:15:43.415 15:19:52 -- nvmf/common.sh@662 -- # echo 4420 00:15:43.415 15:19:52 -- nvmf/common.sh@663 -- # echo ipv4 00:15:43.415 15:19:52 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:15:43.415 15:19:52 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab --hostid=bfc76e61-7421-4d42-8df7-48fb051b4cab -a 10.0.0.1 -t tcp -s 4420 00:15:43.415 00:15:43.415 Discovery Log Number of Records 2, Generation counter 2 00:15:43.415 =====Discovery Log Entry 0====== 00:15:43.415 trtype: tcp 00:15:43.415 adrfam: ipv4 00:15:43.415 subtype: current discovery subsystem 00:15:43.415 treq: not specified, sq flow control disable supported 00:15:43.415 portid: 1 00:15:43.415 trsvcid: 4420 00:15:43.415 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:43.415 traddr: 10.0.0.1 00:15:43.415 eflags: none 00:15:43.415 sectype: none 00:15:43.415 =====Discovery Log Entry 1====== 00:15:43.415 trtype: tcp 00:15:43.415 adrfam: ipv4 00:15:43.415 subtype: nvme subsystem 00:15:43.415 treq: not specified, sq flow control disable supported 00:15:43.415 portid: 1 00:15:43.415 trsvcid: 4420 00:15:43.415 subnqn: nqn.2016-06.io.spdk:testnqn 00:15:43.415 traddr: 10.0.0.1 00:15:43.415 eflags: none 00:15:43.415 sectype: none 00:15:43.416 15:19:52 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:15:43.416 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:15:43.676 ===================================================== 00:15:43.676 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:43.676 ===================================================== 00:15:43.676 Controller Capabilities/Features 00:15:43.676 ================================ 00:15:43.676 Vendor ID: 0000 00:15:43.676 Subsystem Vendor ID: 0000 00:15:43.676 Serial Number: 1deba1e849d6d31ba165 00:15:43.676 Model Number: Linux 00:15:43.676 Firmware Version: 6.7.0-68 00:15:43.676 Recommended Arb Burst: 0 00:15:43.676 IEEE OUI Identifier: 00 00 00 00:15:43.676 Multi-path I/O 00:15:43.676 May have multiple subsystem ports: No 00:15:43.676 May have multiple controllers: No 00:15:43.676 Associated with SR-IOV VF: No 00:15:43.676 Max Data Transfer Size: Unlimited 00:15:43.676 Max Number of Namespaces: 0 00:15:43.676 Max Number of I/O Queues: 1024 00:15:43.676 NVMe Specification Version (VS): 1.3 00:15:43.676 NVMe Specification Version (Identify): 1.3 00:15:43.676 Maximum Queue Entries: 1024 00:15:43.676 Contiguous Queues Required: No 00:15:43.676 Arbitration Mechanisms Supported 00:15:43.676 Weighted Round Robin: Not Supported 00:15:43.676 Vendor Specific: Not Supported 00:15:43.676 Reset Timeout: 7500 ms 00:15:43.676 Doorbell Stride: 4 bytes 00:15:43.676 NVM Subsystem Reset: Not Supported 00:15:43.676 Command Sets Supported 00:15:43.676 NVM Command Set: Supported 00:15:43.676 Boot Partition: Not Supported 00:15:43.676 Memory Page Size Minimum: 4096 bytes 00:15:43.676 Memory Page Size Maximum: 4096 bytes 00:15:43.676 Persistent Memory Region: Not Supported 00:15:43.676 Optional Asynchronous Events Supported 00:15:43.676 Namespace Attribute Notices: Not Supported 00:15:43.676 Firmware Activation Notices: Not Supported 00:15:43.676 ANA Change Notices: Not Supported 00:15:43.676 PLE Aggregate Log Change Notices: Not Supported 00:15:43.676 LBA Status Info Alert Notices: Not Supported 00:15:43.676 EGE Aggregate Log Change Notices: Not Supported 00:15:43.676 Normal NVM Subsystem Shutdown event: Not Supported 00:15:43.676 Zone Descriptor Change Notices: Not Supported 00:15:43.676 Discovery Log Change Notices: Supported 00:15:43.676 Controller Attributes 00:15:43.677 128-bit Host Identifier: Not Supported 00:15:43.677 Non-Operational Permissive Mode: Not Supported 00:15:43.677 NVM Sets: Not Supported 00:15:43.677 Read Recovery Levels: Not Supported 00:15:43.677 Endurance Groups: Not Supported 00:15:43.677 Predictable Latency Mode: Not Supported 00:15:43.677 Traffic Based Keep ALive: Not Supported 00:15:43.677 Namespace Granularity: Not Supported 00:15:43.677 SQ Associations: Not Supported 00:15:43.677 UUID List: Not Supported 00:15:43.677 Multi-Domain Subsystem: Not Supported 00:15:43.677 Fixed Capacity Management: Not Supported 
00:15:43.677 Variable Capacity Management: Not Supported 00:15:43.677 Delete Endurance Group: Not Supported 00:15:43.677 Delete NVM Set: Not Supported 00:15:43.677 Extended LBA Formats Supported: Not Supported 00:15:43.677 Flexible Data Placement Supported: Not Supported 00:15:43.677 00:15:43.677 Controller Memory Buffer Support 00:15:43.677 ================================ 00:15:43.677 Supported: No 00:15:43.677 00:15:43.677 Persistent Memory Region Support 00:15:43.677 ================================ 00:15:43.677 Supported: No 00:15:43.677 00:15:43.677 Admin Command Set Attributes 00:15:43.677 ============================ 00:15:43.677 Security Send/Receive: Not Supported 00:15:43.677 Format NVM: Not Supported 00:15:43.677 Firmware Activate/Download: Not Supported 00:15:43.677 Namespace Management: Not Supported 00:15:43.677 Device Self-Test: Not Supported 00:15:43.677 Directives: Not Supported 00:15:43.677 NVMe-MI: Not Supported 00:15:43.677 Virtualization Management: Not Supported 00:15:43.677 Doorbell Buffer Config: Not Supported 00:15:43.677 Get LBA Status Capability: Not Supported 00:15:43.677 Command & Feature Lockdown Capability: Not Supported 00:15:43.677 Abort Command Limit: 1 00:15:43.677 Async Event Request Limit: 1 00:15:43.677 Number of Firmware Slots: N/A 00:15:43.677 Firmware Slot 1 Read-Only: N/A 00:15:43.677 Firmware Activation Without Reset: N/A 00:15:43.677 Multiple Update Detection Support: N/A 00:15:43.677 Firmware Update Granularity: No Information Provided 00:15:43.677 Per-Namespace SMART Log: No 00:15:43.677 Asymmetric Namespace Access Log Page: Not Supported 00:15:43.677 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:43.677 Command Effects Log Page: Not Supported 00:15:43.677 Get Log Page Extended Data: Supported 00:15:43.677 Telemetry Log Pages: Not Supported 00:15:43.677 Persistent Event Log Pages: Not Supported 00:15:43.677 Supported Log Pages Log Page: May Support 00:15:43.677 Commands Supported & Effects Log Page: Not Supported 00:15:43.677 Feature Identifiers & Effects Log Page:May Support 00:15:43.677 NVMe-MI Commands & Effects Log Page: May Support 00:15:43.677 Data Area 4 for Telemetry Log: Not Supported 00:15:43.677 Error Log Page Entries Supported: 1 00:15:43.677 Keep Alive: Not Supported 00:15:43.677 00:15:43.677 NVM Command Set Attributes 00:15:43.677 ========================== 00:15:43.677 Submission Queue Entry Size 00:15:43.677 Max: 1 00:15:43.677 Min: 1 00:15:43.677 Completion Queue Entry Size 00:15:43.677 Max: 1 00:15:43.677 Min: 1 00:15:43.677 Number of Namespaces: 0 00:15:43.677 Compare Command: Not Supported 00:15:43.677 Write Uncorrectable Command: Not Supported 00:15:43.677 Dataset Management Command: Not Supported 00:15:43.677 Write Zeroes Command: Not Supported 00:15:43.677 Set Features Save Field: Not Supported 00:15:43.677 Reservations: Not Supported 00:15:43.677 Timestamp: Not Supported 00:15:43.677 Copy: Not Supported 00:15:43.677 Volatile Write Cache: Not Present 00:15:43.677 Atomic Write Unit (Normal): 1 00:15:43.677 Atomic Write Unit (PFail): 1 00:15:43.677 Atomic Compare & Write Unit: 1 00:15:43.677 Fused Compare & Write: Not Supported 00:15:43.677 Scatter-Gather List 00:15:43.677 SGL Command Set: Supported 00:15:43.677 SGL Keyed: Not Supported 00:15:43.677 SGL Bit Bucket Descriptor: Not Supported 00:15:43.677 SGL Metadata Pointer: Not Supported 00:15:43.677 Oversized SGL: Not Supported 00:15:43.677 SGL Metadata Address: Not Supported 00:15:43.677 SGL Offset: Supported 00:15:43.677 Transport SGL Data Block: Not 
Supported 00:15:43.677 Replay Protected Memory Block: Not Supported 00:15:43.677 00:15:43.677 Firmware Slot Information 00:15:43.677 ========================= 00:15:43.677 Active slot: 0 00:15:43.677 00:15:43.677 00:15:43.677 Error Log 00:15:43.677 ========= 00:15:43.677 00:15:43.677 Active Namespaces 00:15:43.677 ================= 00:15:43.677 Discovery Log Page 00:15:43.677 ================== 00:15:43.677 Generation Counter: 2 00:15:43.677 Number of Records: 2 00:15:43.677 Record Format: 0 00:15:43.677 00:15:43.677 Discovery Log Entry 0 00:15:43.677 ---------------------- 00:15:43.677 Transport Type: 3 (TCP) 00:15:43.677 Address Family: 1 (IPv4) 00:15:43.677 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:43.677 Entry Flags: 00:15:43.677 Duplicate Returned Information: 0 00:15:43.677 Explicit Persistent Connection Support for Discovery: 0 00:15:43.677 Transport Requirements: 00:15:43.677 Secure Channel: Not Specified 00:15:43.677 Port ID: 1 (0x0001) 00:15:43.677 Controller ID: 65535 (0xffff) 00:15:43.677 Admin Max SQ Size: 32 00:15:43.677 Transport Service Identifier: 4420 00:15:43.677 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:43.677 Transport Address: 10.0.0.1 00:15:43.677 Discovery Log Entry 1 00:15:43.677 ---------------------- 00:15:43.677 Transport Type: 3 (TCP) 00:15:43.677 Address Family: 1 (IPv4) 00:15:43.677 Subsystem Type: 2 (NVM Subsystem) 00:15:43.677 Entry Flags: 00:15:43.677 Duplicate Returned Information: 0 00:15:43.677 Explicit Persistent Connection Support for Discovery: 0 00:15:43.677 Transport Requirements: 00:15:43.677 Secure Channel: Not Specified 00:15:43.677 Port ID: 1 (0x0001) 00:15:43.677 Controller ID: 65535 (0xffff) 00:15:43.677 Admin Max SQ Size: 32 00:15:43.677 Transport Service Identifier: 4420 00:15:43.677 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:15:43.677 Transport Address: 10.0.0.1 00:15:43.677 15:19:52 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:15:43.954 get_feature(0x01) failed 00:15:43.954 get_feature(0x02) failed 00:15:43.954 get_feature(0x04) failed 00:15:43.954 ===================================================== 00:15:43.954 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:15:43.954 ===================================================== 00:15:43.954 Controller Capabilities/Features 00:15:43.954 ================================ 00:15:43.954 Vendor ID: 0000 00:15:43.954 Subsystem Vendor ID: 0000 00:15:43.954 Serial Number: 98a65554e2d2fbed5da7 00:15:43.954 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:15:43.954 Firmware Version: 6.7.0-68 00:15:43.954 Recommended Arb Burst: 6 00:15:43.954 IEEE OUI Identifier: 00 00 00 00:15:43.954 Multi-path I/O 00:15:43.954 May have multiple subsystem ports: Yes 00:15:43.954 May have multiple controllers: Yes 00:15:43.954 Associated with SR-IOV VF: No 00:15:43.954 Max Data Transfer Size: Unlimited 00:15:43.954 Max Number of Namespaces: 1024 00:15:43.954 Max Number of I/O Queues: 128 00:15:43.954 NVMe Specification Version (VS): 1.3 00:15:43.954 NVMe Specification Version (Identify): 1.3 00:15:43.954 Maximum Queue Entries: 1024 00:15:43.954 Contiguous Queues Required: No 00:15:43.954 Arbitration Mechanisms Supported 00:15:43.954 Weighted Round Robin: Not Supported 00:15:43.954 Vendor Specific: Not Supported 00:15:43.954 Reset Timeout: 7500 ms 00:15:43.954 Doorbell Stride: 4 bytes 
00:15:43.954 NVM Subsystem Reset: Not Supported 00:15:43.954 Command Sets Supported 00:15:43.954 NVM Command Set: Supported 00:15:43.954 Boot Partition: Not Supported 00:15:43.954 Memory Page Size Minimum: 4096 bytes 00:15:43.954 Memory Page Size Maximum: 4096 bytes 00:15:43.954 Persistent Memory Region: Not Supported 00:15:43.954 Optional Asynchronous Events Supported 00:15:43.954 Namespace Attribute Notices: Supported 00:15:43.954 Firmware Activation Notices: Not Supported 00:15:43.954 ANA Change Notices: Supported 00:15:43.954 PLE Aggregate Log Change Notices: Not Supported 00:15:43.954 LBA Status Info Alert Notices: Not Supported 00:15:43.954 EGE Aggregate Log Change Notices: Not Supported 00:15:43.954 Normal NVM Subsystem Shutdown event: Not Supported 00:15:43.954 Zone Descriptor Change Notices: Not Supported 00:15:43.954 Discovery Log Change Notices: Not Supported 00:15:43.954 Controller Attributes 00:15:43.954 128-bit Host Identifier: Supported 00:15:43.954 Non-Operational Permissive Mode: Not Supported 00:15:43.954 NVM Sets: Not Supported 00:15:43.954 Read Recovery Levels: Not Supported 00:15:43.954 Endurance Groups: Not Supported 00:15:43.955 Predictable Latency Mode: Not Supported 00:15:43.955 Traffic Based Keep ALive: Supported 00:15:43.955 Namespace Granularity: Not Supported 00:15:43.955 SQ Associations: Not Supported 00:15:43.955 UUID List: Not Supported 00:15:43.955 Multi-Domain Subsystem: Not Supported 00:15:43.955 Fixed Capacity Management: Not Supported 00:15:43.955 Variable Capacity Management: Not Supported 00:15:43.955 Delete Endurance Group: Not Supported 00:15:43.955 Delete NVM Set: Not Supported 00:15:43.955 Extended LBA Formats Supported: Not Supported 00:15:43.955 Flexible Data Placement Supported: Not Supported 00:15:43.955 00:15:43.955 Controller Memory Buffer Support 00:15:43.955 ================================ 00:15:43.955 Supported: No 00:15:43.955 00:15:43.955 Persistent Memory Region Support 00:15:43.955 ================================ 00:15:43.955 Supported: No 00:15:43.955 00:15:43.955 Admin Command Set Attributes 00:15:43.955 ============================ 00:15:43.955 Security Send/Receive: Not Supported 00:15:43.955 Format NVM: Not Supported 00:15:43.955 Firmware Activate/Download: Not Supported 00:15:43.955 Namespace Management: Not Supported 00:15:43.955 Device Self-Test: Not Supported 00:15:43.955 Directives: Not Supported 00:15:43.955 NVMe-MI: Not Supported 00:15:43.955 Virtualization Management: Not Supported 00:15:43.955 Doorbell Buffer Config: Not Supported 00:15:43.955 Get LBA Status Capability: Not Supported 00:15:43.955 Command & Feature Lockdown Capability: Not Supported 00:15:43.955 Abort Command Limit: 4 00:15:43.955 Async Event Request Limit: 4 00:15:43.955 Number of Firmware Slots: N/A 00:15:43.955 Firmware Slot 1 Read-Only: N/A 00:15:43.955 Firmware Activation Without Reset: N/A 00:15:43.955 Multiple Update Detection Support: N/A 00:15:43.955 Firmware Update Granularity: No Information Provided 00:15:43.955 Per-Namespace SMART Log: Yes 00:15:43.955 Asymmetric Namespace Access Log Page: Supported 00:15:43.955 ANA Transition Time : 10 sec 00:15:43.955 00:15:43.955 Asymmetric Namespace Access Capabilities 00:15:43.955 ANA Optimized State : Supported 00:15:43.955 ANA Non-Optimized State : Supported 00:15:43.955 ANA Inaccessible State : Supported 00:15:43.955 ANA Persistent Loss State : Supported 00:15:43.955 ANA Change State : Supported 00:15:43.955 ANAGRPID is not changed : No 00:15:43.955 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
00:15:43.955 00:15:43.955 ANA Group Identifier Maximum : 128 00:15:43.955 Number of ANA Group Identifiers : 128 00:15:43.955 Max Number of Allowed Namespaces : 1024 00:15:43.955 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:15:43.955 Command Effects Log Page: Supported 00:15:43.955 Get Log Page Extended Data: Supported 00:15:43.955 Telemetry Log Pages: Not Supported 00:15:43.955 Persistent Event Log Pages: Not Supported 00:15:43.955 Supported Log Pages Log Page: May Support 00:15:43.955 Commands Supported & Effects Log Page: Not Supported 00:15:43.955 Feature Identifiers & Effects Log Page:May Support 00:15:43.955 NVMe-MI Commands & Effects Log Page: May Support 00:15:43.955 Data Area 4 for Telemetry Log: Not Supported 00:15:43.955 Error Log Page Entries Supported: 128 00:15:43.955 Keep Alive: Supported 00:15:43.955 Keep Alive Granularity: 1000 ms 00:15:43.955 00:15:43.955 NVM Command Set Attributes 00:15:43.955 ========================== 00:15:43.955 Submission Queue Entry Size 00:15:43.955 Max: 64 00:15:43.955 Min: 64 00:15:43.955 Completion Queue Entry Size 00:15:43.955 Max: 16 00:15:43.955 Min: 16 00:15:43.955 Number of Namespaces: 1024 00:15:43.955 Compare Command: Not Supported 00:15:43.955 Write Uncorrectable Command: Not Supported 00:15:43.955 Dataset Management Command: Supported 00:15:43.955 Write Zeroes Command: Supported 00:15:43.955 Set Features Save Field: Not Supported 00:15:43.955 Reservations: Not Supported 00:15:43.955 Timestamp: Not Supported 00:15:43.955 Copy: Not Supported 00:15:43.955 Volatile Write Cache: Present 00:15:43.955 Atomic Write Unit (Normal): 1 00:15:43.955 Atomic Write Unit (PFail): 1 00:15:43.955 Atomic Compare & Write Unit: 1 00:15:43.955 Fused Compare & Write: Not Supported 00:15:43.955 Scatter-Gather List 00:15:43.955 SGL Command Set: Supported 00:15:43.955 SGL Keyed: Not Supported 00:15:43.955 SGL Bit Bucket Descriptor: Not Supported 00:15:43.955 SGL Metadata Pointer: Not Supported 00:15:43.955 Oversized SGL: Not Supported 00:15:43.955 SGL Metadata Address: Not Supported 00:15:43.955 SGL Offset: Supported 00:15:43.955 Transport SGL Data Block: Not Supported 00:15:43.955 Replay Protected Memory Block: Not Supported 00:15:43.955 00:15:43.955 Firmware Slot Information 00:15:43.955 ========================= 00:15:43.955 Active slot: 0 00:15:43.955 00:15:43.955 Asymmetric Namespace Access 00:15:43.955 =========================== 00:15:43.955 Change Count : 0 00:15:43.955 Number of ANA Group Descriptors : 1 00:15:43.955 ANA Group Descriptor : 0 00:15:43.955 ANA Group ID : 1 00:15:43.955 Number of NSID Values : 1 00:15:43.955 Change Count : 0 00:15:43.955 ANA State : 1 00:15:43.955 Namespace Identifier : 1 00:15:43.955 00:15:43.955 Commands Supported and Effects 00:15:43.955 ============================== 00:15:43.955 Admin Commands 00:15:43.955 -------------- 00:15:43.955 Get Log Page (02h): Supported 00:15:43.955 Identify (06h): Supported 00:15:43.955 Abort (08h): Supported 00:15:43.955 Set Features (09h): Supported 00:15:43.955 Get Features (0Ah): Supported 00:15:43.955 Asynchronous Event Request (0Ch): Supported 00:15:43.955 Keep Alive (18h): Supported 00:15:43.955 I/O Commands 00:15:43.955 ------------ 00:15:43.955 Flush (00h): Supported 00:15:43.955 Write (01h): Supported LBA-Change 00:15:43.955 Read (02h): Supported 00:15:43.955 Write Zeroes (08h): Supported LBA-Change 00:15:43.955 Dataset Management (09h): Supported 00:15:43.955 00:15:43.955 Error Log 00:15:43.955 ========= 00:15:43.955 Entry: 0 00:15:43.955 Error Count: 0x3 00:15:43.955 Submission 
Queue Id: 0x0 00:15:43.955 Command Id: 0x5 00:15:43.955 Phase Bit: 0 00:15:43.955 Status Code: 0x2 00:15:43.955 Status Code Type: 0x0 00:15:43.955 Do Not Retry: 1 00:15:43.955 Error Location: 0x28 00:15:43.955 LBA: 0x0 00:15:43.955 Namespace: 0x0 00:15:43.955 Vendor Log Page: 0x0 00:15:43.955 ----------- 00:15:43.955 Entry: 1 00:15:43.955 Error Count: 0x2 00:15:43.955 Submission Queue Id: 0x0 00:15:43.955 Command Id: 0x5 00:15:43.955 Phase Bit: 0 00:15:43.955 Status Code: 0x2 00:15:43.955 Status Code Type: 0x0 00:15:43.955 Do Not Retry: 1 00:15:43.955 Error Location: 0x28 00:15:43.955 LBA: 0x0 00:15:43.955 Namespace: 0x0 00:15:43.955 Vendor Log Page: 0x0 00:15:43.955 ----------- 00:15:43.955 Entry: 2 00:15:43.955 Error Count: 0x1 00:15:43.955 Submission Queue Id: 0x0 00:15:43.955 Command Id: 0x4 00:15:43.955 Phase Bit: 0 00:15:43.955 Status Code: 0x2 00:15:43.955 Status Code Type: 0x0 00:15:43.955 Do Not Retry: 1 00:15:43.955 Error Location: 0x28 00:15:43.955 LBA: 0x0 00:15:43.955 Namespace: 0x0 00:15:43.955 Vendor Log Page: 0x0 00:15:43.955 00:15:43.955 Number of Queues 00:15:43.955 ================ 00:15:43.955 Number of I/O Submission Queues: 128 00:15:43.955 Number of I/O Completion Queues: 128 00:15:43.955 00:15:43.955 ZNS Specific Controller Data 00:15:43.955 ============================ 00:15:43.955 Zone Append Size Limit: 0 00:15:43.955 00:15:43.955 00:15:43.955 Active Namespaces 00:15:43.955 ================= 00:15:43.955 get_feature(0x05) failed 00:15:43.955 Namespace ID:1 00:15:43.955 Command Set Identifier: NVM (00h) 00:15:43.955 Deallocate: Supported 00:15:43.955 Deallocated/Unwritten Error: Not Supported 00:15:43.955 Deallocated Read Value: Unknown 00:15:43.955 Deallocate in Write Zeroes: Not Supported 00:15:43.955 Deallocated Guard Field: 0xFFFF 00:15:43.955 Flush: Supported 00:15:43.955 Reservation: Not Supported 00:15:43.955 Namespace Sharing Capabilities: Multiple Controllers 00:15:43.955 Size (in LBAs): 1310720 (5GiB) 00:15:43.955 Capacity (in LBAs): 1310720 (5GiB) 00:15:43.955 Utilization (in LBAs): 1310720 (5GiB) 00:15:43.956 UUID: 801ef479-daa6-41b4-88ee-92eb72cd34bc 00:15:43.956 Thin Provisioning: Not Supported 00:15:43.956 Per-NS Atomic Units: Yes 00:15:43.956 Atomic Boundary Size (Normal): 0 00:15:43.956 Atomic Boundary Size (PFail): 0 00:15:43.956 Atomic Boundary Offset: 0 00:15:43.956 NGUID/EUI64 Never Reused: No 00:15:43.956 ANA group ID: 1 00:15:43.956 Namespace Write Protected: No 00:15:43.956 Number of LBA Formats: 1 00:15:43.956 Current LBA Format: LBA Format #00 00:15:43.956 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:15:43.956 00:15:43.956 15:19:52 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:15:43.956 15:19:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:43.956 15:19:52 -- nvmf/common.sh@117 -- # sync 00:15:43.956 15:19:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:43.956 15:19:52 -- nvmf/common.sh@120 -- # set +e 00:15:43.956 15:19:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:43.956 15:19:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:43.956 rmmod nvme_tcp 00:15:43.956 rmmod nvme_fabrics 00:15:43.956 15:19:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:43.956 15:19:53 -- nvmf/common.sh@124 -- # set -e 00:15:43.956 15:19:53 -- nvmf/common.sh@125 -- # return 0 00:15:43.956 15:19:53 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:43.956 15:19:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:43.956 15:19:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:43.956 15:19:53 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:43.956 15:19:53 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.956 15:19:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:43.956 15:19:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.956 15:19:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.956 15:19:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.956 15:19:53 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:43.956 15:19:53 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:15:43.956 15:19:53 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:15:43.956 15:19:53 -- nvmf/common.sh@675 -- # echo 0 00:15:43.956 15:19:53 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:43.956 15:19:53 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:43.956 15:19:53 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:15:43.956 15:19:53 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:43.956 15:19:53 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:15:43.956 15:19:53 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:15:43.956 15:19:53 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:44.890 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:44.890 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:44.890 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:44.890 00:15:44.890 real 0m2.846s 00:15:44.890 user 0m0.962s 00:15:44.890 sys 0m1.359s 00:15:44.890 15:19:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:44.890 15:19:54 -- common/autotest_common.sh@10 -- # set +x 00:15:44.890 ************************************ 00:15:44.890 END TEST nvmf_identify_kernel_target 00:15:44.890 ************************************ 00:15:44.890 15:19:54 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:44.890 15:19:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:44.890 15:19:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:44.890 15:19:54 -- common/autotest_common.sh@10 -- # set +x 00:15:44.890 ************************************ 00:15:44.890 START TEST nvmf_auth 00:15:44.890 ************************************ 00:15:44.890 15:19:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:45.149 * Looking for test storage... 
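The clean_kernel_target trace above tears the kernel nvmet target down by walking its configfs tree in reverse order of creation. A minimal standalone sketch of the same teardown, reusing the subsystem NQN and port number from this run (root required; the redirect target of the "echo 0" is not visible in the xtrace, clean_kernel_target writes it to the namespace enable attribute):

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > "$subsys/namespaces/1/enable"                                          # stop exposing the namespace
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn   # detach the subsystem from port 1
  rmdir "$subsys/namespaces/1"                                                    # remove namespace, port, subsystem
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet                                                     # unload the kernel target modules, as the trace does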
00:15:45.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:45.149 15:19:54 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:45.149 15:19:54 -- nvmf/common.sh@7 -- # uname -s 00:15:45.149 15:19:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.149 15:19:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.149 15:19:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.149 15:19:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.149 15:19:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.149 15:19:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.149 15:19:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.149 15:19:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.149 15:19:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.149 15:19:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.150 15:19:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:15:45.150 15:19:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:15:45.150 15:19:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.150 15:19:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.150 15:19:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:45.150 15:19:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.150 15:19:54 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:45.150 15:19:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.150 15:19:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.150 15:19:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.150 15:19:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.150 15:19:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.150 15:19:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.150 15:19:54 -- paths/export.sh@5 -- # export PATH 00:15:45.150 15:19:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.150 15:19:54 -- nvmf/common.sh@47 -- # : 0 00:15:45.150 15:19:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.150 15:19:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.150 15:19:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.150 15:19:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.150 15:19:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.150 15:19:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.150 15:19:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.150 15:19:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.150 15:19:54 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:45.150 15:19:54 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:45.150 15:19:54 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:15:45.150 15:19:54 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:15:45.150 15:19:54 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:45.150 15:19:54 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:45.150 15:19:54 -- host/auth.sh@21 -- # keys=() 00:15:45.150 15:19:54 -- host/auth.sh@77 -- # nvmftestinit 00:15:45.150 15:19:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:45.150 15:19:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.150 15:19:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:45.150 15:19:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:45.150 15:19:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:45.150 15:19:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.150 15:19:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.150 15:19:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.150 15:19:54 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:45.150 15:19:54 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:45.150 15:19:54 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:45.150 15:19:54 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:45.150 15:19:54 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:45.150 15:19:54 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:45.150 15:19:54 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.150 15:19:54 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:45.150 15:19:54 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:45.150 15:19:54 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:45.150 15:19:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:45.150 15:19:54 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:45.150 15:19:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:45.150 15:19:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.150 15:19:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:45.150 15:19:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:45.150 15:19:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:45.150 15:19:54 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:45.150 15:19:54 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:45.150 15:19:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:45.150 Cannot find device "nvmf_tgt_br" 00:15:45.150 15:19:54 -- nvmf/common.sh@155 -- # true 00:15:45.150 15:19:54 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:45.150 Cannot find device "nvmf_tgt_br2" 00:15:45.150 15:19:54 -- nvmf/common.sh@156 -- # true 00:15:45.150 15:19:54 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:45.150 15:19:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:45.150 Cannot find device "nvmf_tgt_br" 00:15:45.150 15:19:54 -- nvmf/common.sh@158 -- # true 00:15:45.150 15:19:54 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:45.150 Cannot find device "nvmf_tgt_br2" 00:15:45.150 15:19:54 -- nvmf/common.sh@159 -- # true 00:15:45.150 15:19:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:45.150 15:19:54 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:45.150 15:19:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:45.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.150 15:19:54 -- nvmf/common.sh@162 -- # true 00:15:45.150 15:19:54 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:45.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.150 15:19:54 -- nvmf/common.sh@163 -- # true 00:15:45.150 15:19:54 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:45.150 15:19:54 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:45.150 15:19:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:45.150 15:19:54 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:45.409 15:19:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:45.409 15:19:54 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:45.409 15:19:54 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:45.409 15:19:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:45.409 15:19:54 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:45.409 15:19:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:45.409 15:19:54 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:45.409 15:19:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:45.409 15:19:54 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:45.409 15:19:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:45.409 15:19:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:45.409 15:19:54 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:45.409 15:19:54 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:45.409 15:19:54 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:45.409 15:19:54 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:45.409 15:19:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:45.409 15:19:54 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:45.409 15:19:54 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:45.409 15:19:54 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:45.409 15:19:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:45.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:45.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:15:45.409 00:15:45.409 --- 10.0.0.2 ping statistics --- 00:15:45.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.409 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:45.409 15:19:54 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:45.409 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:45.409 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:15:45.409 00:15:45.409 --- 10.0.0.3 ping statistics --- 00:15:45.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.409 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:45.409 15:19:54 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:45.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:45.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:45.409 00:15:45.409 --- 10.0.0.1 ping statistics --- 00:15:45.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.409 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:45.409 15:19:54 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:45.409 15:19:54 -- nvmf/common.sh@422 -- # return 0 00:15:45.409 15:19:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:45.409 15:19:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:45.409 15:19:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:45.409 15:19:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:45.409 15:19:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:45.409 15:19:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:45.409 15:19:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:45.409 15:19:54 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:15:45.409 15:19:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:45.409 15:19:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:45.409 15:19:54 -- common/autotest_common.sh@10 -- # set +x 00:15:45.409 15:19:54 -- nvmf/common.sh@470 -- # nvmfpid=74856 00:15:45.409 15:19:54 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:15:45.409 15:19:54 -- nvmf/common.sh@471 -- # waitforlisten 74856 00:15:45.409 15:19:54 -- common/autotest_common.sh@817 -- # '[' -z 74856 ']' 00:15:45.409 15:19:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.409 15:19:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:45.409 15:19:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
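nvmf_veth_init above is what builds the test network the rest of this job uses: a target network namespace, veth pairs bridged on the host side, 10.0.0.0/24 addressing, and an iptables rule opening TCP/4420 on the initiator interface, verified by the pings just printed. A condensed sketch of the same topology (interface names and addresses copied from this run, the second target interface on 10.0.0.3 omitted; root required):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br               # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                 # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up               # host-side bridge joining both pairs
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP reach the initiator interface
  ping -c 1 10.0.0.2                                                      # same reachability check as in the trace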
00:15:45.409 15:19:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:45.409 15:19:54 -- common/autotest_common.sh@10 -- # set +x 00:15:46.342 15:19:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:46.342 15:19:55 -- common/autotest_common.sh@850 -- # return 0 00:15:46.342 15:19:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:46.342 15:19:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:46.342 15:19:55 -- common/autotest_common.sh@10 -- # set +x 00:15:46.600 15:19:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.600 15:19:55 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:15:46.600 15:19:55 -- host/auth.sh@81 -- # gen_key null 32 00:15:46.600 15:19:55 -- host/auth.sh@53 -- # local digest len file key 00:15:46.600 15:19:55 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:46.600 15:19:55 -- host/auth.sh@54 -- # local -A digests 00:15:46.600 15:19:55 -- host/auth.sh@56 -- # digest=null 00:15:46.600 15:19:55 -- host/auth.sh@56 -- # len=32 00:15:46.600 15:19:55 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:46.600 15:19:55 -- host/auth.sh@57 -- # key=50c23c6fe8b84ec71e773753dc119953 00:15:46.600 15:19:55 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:15:46.600 15:19:55 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.6zK 00:15:46.600 15:19:55 -- host/auth.sh@59 -- # format_dhchap_key 50c23c6fe8b84ec71e773753dc119953 0 00:15:46.600 15:19:55 -- nvmf/common.sh@708 -- # format_key DHHC-1 50c23c6fe8b84ec71e773753dc119953 0 00:15:46.600 15:19:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:15:46.600 15:19:55 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:15:46.600 15:19:55 -- nvmf/common.sh@693 -- # key=50c23c6fe8b84ec71e773753dc119953 00:15:46.600 15:19:55 -- nvmf/common.sh@693 -- # digest=0 00:15:46.600 15:19:55 -- nvmf/common.sh@694 -- # python - 00:15:46.600 15:19:55 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.6zK 00:15:46.600 15:19:55 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.6zK 00:15:46.600 15:19:55 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.6zK 00:15:46.600 15:19:55 -- host/auth.sh@82 -- # gen_key null 48 00:15:46.600 15:19:55 -- host/auth.sh@53 -- # local digest len file key 00:15:46.600 15:19:55 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:46.600 15:19:55 -- host/auth.sh@54 -- # local -A digests 00:15:46.600 15:19:55 -- host/auth.sh@56 -- # digest=null 00:15:46.600 15:19:55 -- host/auth.sh@56 -- # len=48 00:15:46.600 15:19:55 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:46.600 15:19:55 -- host/auth.sh@57 -- # key=65db4f770422c01c042a04ae20b8d570c7db938b59ca2237 00:15:46.600 15:19:55 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:15:46.600 15:19:55 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.VKa 00:15:46.600 15:19:55 -- host/auth.sh@59 -- # format_dhchap_key 65db4f770422c01c042a04ae20b8d570c7db938b59ca2237 0 00:15:46.600 15:19:55 -- nvmf/common.sh@708 -- # format_key DHHC-1 65db4f770422c01c042a04ae20b8d570c7db938b59ca2237 0 00:15:46.600 15:19:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:15:46.600 15:19:55 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:15:46.600 15:19:55 -- nvmf/common.sh@693 -- # key=65db4f770422c01c042a04ae20b8d570c7db938b59ca2237 00:15:46.600 15:19:55 -- nvmf/common.sh@693 -- # digest=0 00:15:46.600 
15:19:55 -- nvmf/common.sh@694 -- # python - 00:15:46.600 15:19:55 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.VKa 00:15:46.600 15:19:55 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.VKa 00:15:46.600 15:19:55 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.VKa 00:15:46.600 15:19:55 -- host/auth.sh@83 -- # gen_key sha256 32 00:15:46.600 15:19:55 -- host/auth.sh@53 -- # local digest len file key 00:15:46.600 15:19:55 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:46.600 15:19:55 -- host/auth.sh@54 -- # local -A digests 00:15:46.600 15:19:55 -- host/auth.sh@56 -- # digest=sha256 00:15:46.600 15:19:55 -- host/auth.sh@56 -- # len=32 00:15:46.600 15:19:55 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:46.600 15:19:55 -- host/auth.sh@57 -- # key=7b335ca9357101222ff8723e00198232 00:15:46.600 15:19:55 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:15:46.600 15:19:55 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.OOn 00:15:46.600 15:19:55 -- host/auth.sh@59 -- # format_dhchap_key 7b335ca9357101222ff8723e00198232 1 00:15:46.600 15:19:55 -- nvmf/common.sh@708 -- # format_key DHHC-1 7b335ca9357101222ff8723e00198232 1 00:15:46.601 15:19:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:15:46.601 15:19:55 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:15:46.601 15:19:55 -- nvmf/common.sh@693 -- # key=7b335ca9357101222ff8723e00198232 00:15:46.601 15:19:55 -- nvmf/common.sh@693 -- # digest=1 00:15:46.601 15:19:55 -- nvmf/common.sh@694 -- # python - 00:15:46.601 15:19:55 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.OOn 00:15:46.601 15:19:55 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.OOn 00:15:46.601 15:19:55 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.OOn 00:15:46.601 15:19:55 -- host/auth.sh@84 -- # gen_key sha384 48 00:15:46.601 15:19:55 -- host/auth.sh@53 -- # local digest len file key 00:15:46.601 15:19:55 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:46.601 15:19:55 -- host/auth.sh@54 -- # local -A digests 00:15:46.601 15:19:55 -- host/auth.sh@56 -- # digest=sha384 00:15:46.601 15:19:55 -- host/auth.sh@56 -- # len=48 00:15:46.601 15:19:55 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:46.601 15:19:55 -- host/auth.sh@57 -- # key=ae13cdeff9cbccc36b095b131ef1626e4e8ef2ebaa86354c 00:15:46.601 15:19:55 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:15:46.601 15:19:55 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.sjr 00:15:46.601 15:19:55 -- host/auth.sh@59 -- # format_dhchap_key ae13cdeff9cbccc36b095b131ef1626e4e8ef2ebaa86354c 2 00:15:46.601 15:19:55 -- nvmf/common.sh@708 -- # format_key DHHC-1 ae13cdeff9cbccc36b095b131ef1626e4e8ef2ebaa86354c 2 00:15:46.601 15:19:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:15:46.601 15:19:55 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:15:46.601 15:19:55 -- nvmf/common.sh@693 -- # key=ae13cdeff9cbccc36b095b131ef1626e4e8ef2ebaa86354c 00:15:46.601 15:19:55 -- nvmf/common.sh@693 -- # digest=2 00:15:46.601 15:19:55 -- nvmf/common.sh@694 -- # python - 00:15:46.859 15:19:55 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.sjr 00:15:46.859 15:19:55 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.sjr 00:15:46.859 15:19:55 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.sjr 00:15:46.859 15:19:55 -- host/auth.sh@85 -- # gen_key sha512 64 00:15:46.859 15:19:55 -- host/auth.sh@53 -- # local digest len file key 00:15:46.859 15:19:55 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:46.859 15:19:55 -- host/auth.sh@54 -- # local -A digests 00:15:46.859 15:19:55 -- host/auth.sh@56 -- # digest=sha512 00:15:46.859 15:19:55 -- host/auth.sh@56 -- # len=64 00:15:46.859 15:19:55 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:46.859 15:19:55 -- host/auth.sh@57 -- # key=8b53484d9d11917c1e4380f6478a0190de3b5438d40061f77248aacc2ad76165 00:15:46.859 15:19:55 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:15:46.859 15:19:55 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.P8D 00:15:46.859 15:19:55 -- host/auth.sh@59 -- # format_dhchap_key 8b53484d9d11917c1e4380f6478a0190de3b5438d40061f77248aacc2ad76165 3 00:15:46.859 15:19:55 -- nvmf/common.sh@708 -- # format_key DHHC-1 8b53484d9d11917c1e4380f6478a0190de3b5438d40061f77248aacc2ad76165 3 00:15:46.859 15:19:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:15:46.859 15:19:55 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:15:46.859 15:19:55 -- nvmf/common.sh@693 -- # key=8b53484d9d11917c1e4380f6478a0190de3b5438d40061f77248aacc2ad76165 00:15:46.859 15:19:55 -- nvmf/common.sh@693 -- # digest=3 00:15:46.859 15:19:55 -- nvmf/common.sh@694 -- # python - 00:15:46.859 15:19:55 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.P8D 00:15:46.859 15:19:55 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.P8D 00:15:46.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.859 15:19:55 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.P8D 00:15:46.859 15:19:55 -- host/auth.sh@87 -- # waitforlisten 74856 00:15:46.859 15:19:55 -- common/autotest_common.sh@817 -- # '[' -z 74856 ']' 00:15:46.859 15:19:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.859 15:19:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:46.859 15:19:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
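Each gen_key call above follows the same recipe: read random bytes with xxd, wrap them into SPDK's DHHC-1:<digest id>:<encoded secret>: key format via format_dhchap_key, and store the result with 0600 permissions in /tmp. Sketch of one iteration using the same length as keys[0] in this run (format_dhchap_key is the helper from test/nvmf/common.sh and is not re-implemented here; the redirect into the key file is not visible in the xtrace):

  key=$(xxd -p -c0 -l 16 /dev/urandom)          # 16 random bytes -> 32 hex characters
  file=$(mktemp -t spdk.key-null.XXX)           # e.g. /tmp/spdk.key-null.6zK in this run
  format_dhchap_key "$key" 0 > "$file"          # digest id 0 = "null", matching the digests map above
  chmod 0600 "$file"                            # keep the secret private, as auth.sh does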
00:15:46.859 15:19:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:46.859 15:19:55 -- common/autotest_common.sh@10 -- # set +x 00:15:47.117 15:19:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:47.117 15:19:56 -- common/autotest_common.sh@850 -- # return 0 00:15:47.117 15:19:56 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:15:47.117 15:19:56 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6zK 00:15:47.117 15:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.117 15:19:56 -- common/autotest_common.sh@10 -- # set +x 00:15:47.117 15:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.117 15:19:56 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:15:47.117 15:19:56 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.VKa 00:15:47.117 15:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.117 15:19:56 -- common/autotest_common.sh@10 -- # set +x 00:15:47.117 15:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.117 15:19:56 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:15:47.117 15:19:56 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.OOn 00:15:47.117 15:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.117 15:19:56 -- common/autotest_common.sh@10 -- # set +x 00:15:47.117 15:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.117 15:19:56 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:15:47.118 15:19:56 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.sjr 00:15:47.118 15:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.118 15:19:56 -- common/autotest_common.sh@10 -- # set +x 00:15:47.118 15:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.118 15:19:56 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:15:47.118 15:19:56 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.P8D 00:15:47.118 15:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.118 15:19:56 -- common/autotest_common.sh@10 -- # set +x 00:15:47.118 15:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.118 15:19:56 -- host/auth.sh@92 -- # nvmet_auth_init 00:15:47.118 15:19:56 -- host/auth.sh@35 -- # get_main_ns_ip 00:15:47.118 15:19:56 -- nvmf/common.sh@717 -- # local ip 00:15:47.118 15:19:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:47.118 15:19:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:47.118 15:19:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:47.118 15:19:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:47.118 15:19:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:47.118 15:19:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:47.118 15:19:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:47.118 15:19:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:47.118 15:19:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:47.118 15:19:56 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:15:47.118 15:19:56 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:15:47.118 15:19:56 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:15:47.118 15:19:56 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:47.118 15:19:56 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:47.118 15:19:56 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:15:47.118 15:19:56 -- nvmf/common.sh@628 -- # local block nvme 00:15:47.118 15:19:56 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:15:47.118 15:19:56 -- nvmf/common.sh@631 -- # modprobe nvmet 00:15:47.118 15:19:56 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:15:47.118 15:19:56 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:47.376 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:47.634 Waiting for block devices as requested 00:15:47.634 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:47.634 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:48.201 15:19:57 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:15:48.201 15:19:57 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:48.201 15:19:57 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:15:48.201 15:19:57 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:15:48.201 15:19:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:48.201 15:19:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:48.201 15:19:57 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:15:48.201 15:19:57 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:15:48.201 15:19:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:48.201 No valid GPT data, bailing 00:15:48.460 15:19:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:48.460 15:19:57 -- scripts/common.sh@391 -- # pt= 00:15:48.460 15:19:57 -- scripts/common.sh@392 -- # return 1 00:15:48.460 15:19:57 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:15:48.460 15:19:57 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:15:48.460 15:19:57 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:48.460 15:19:57 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:15:48.460 15:19:57 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:15:48.460 15:19:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:48.460 15:19:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:48.460 15:19:57 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:15:48.460 15:19:57 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:15:48.460 15:19:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:48.460 No valid GPT data, bailing 00:15:48.460 15:19:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:48.460 15:19:57 -- scripts/common.sh@391 -- # pt= 00:15:48.460 15:19:57 -- scripts/common.sh@392 -- # return 1 00:15:48.460 15:19:57 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:15:48.460 15:19:57 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:15:48.460 15:19:57 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:48.460 15:19:57 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:15:48.460 15:19:57 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:15:48.460 15:19:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:48.460 15:19:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:48.460 15:19:57 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:15:48.460 15:19:57 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:15:48.460 15:19:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:48.460 No valid GPT data, bailing 00:15:48.460 15:19:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:48.460 15:19:57 -- scripts/common.sh@391 -- # pt= 00:15:48.460 15:19:57 -- scripts/common.sh@392 -- # return 1 00:15:48.460 15:19:57 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:15:48.460 15:19:57 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:15:48.460 15:19:57 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:48.460 15:19:57 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:15:48.460 15:19:57 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:15:48.460 15:19:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:48.460 15:19:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:48.460 15:19:57 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:15:48.460 15:19:57 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:15:48.460 15:19:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:48.460 No valid GPT data, bailing 00:15:48.460 15:19:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:48.460 15:19:57 -- scripts/common.sh@391 -- # pt= 00:15:48.460 15:19:57 -- scripts/common.sh@392 -- # return 1 00:15:48.460 15:19:57 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:15:48.460 15:19:57 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:15:48.460 15:19:57 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:48.460 15:19:57 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:48.460 15:19:57 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:15:48.719 15:19:57 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:15:48.719 15:19:57 -- nvmf/common.sh@656 -- # echo 1 00:15:48.719 15:19:57 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:15:48.719 15:19:57 -- nvmf/common.sh@658 -- # echo 1 00:15:48.719 15:19:57 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:15:48.719 15:19:57 -- nvmf/common.sh@661 -- # echo tcp 00:15:48.719 15:19:57 -- nvmf/common.sh@662 -- # echo 4420 00:15:48.719 15:19:57 -- nvmf/common.sh@663 -- # echo ipv4 00:15:48.719 15:19:57 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:15:48.719 15:19:57 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab --hostid=bfc76e61-7421-4d42-8df7-48fb051b4cab -a 10.0.0.1 -t tcp -s 4420 00:15:48.719 00:15:48.719 Discovery Log Number of Records 2, Generation counter 2 00:15:48.719 =====Discovery Log Entry 0====== 00:15:48.719 trtype: tcp 00:15:48.719 adrfam: ipv4 00:15:48.719 subtype: current discovery subsystem 00:15:48.719 treq: not specified, sq flow control disable supported 00:15:48.719 portid: 1 00:15:48.719 trsvcid: 4420 00:15:48.719 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:48.719 traddr: 10.0.0.1 00:15:48.719 eflags: none 00:15:48.719 sectype: none 00:15:48.719 =====Discovery Log Entry 1====== 00:15:48.719 trtype: tcp 00:15:48.719 adrfam: ipv4 00:15:48.719 subtype: nvme subsystem 00:15:48.719 treq: not specified, sq flow control disable supported 
00:15:48.719 portid: 1 00:15:48.719 trsvcid: 4420 00:15:48.719 subnqn: nqn.2024-02.io.spdk:cnode0 00:15:48.719 traddr: 10.0.0.1 00:15:48.719 eflags: none 00:15:48.719 sectype: none 00:15:48.719 15:19:57 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:48.719 15:19:57 -- host/auth.sh@37 -- # echo 0 00:15:48.719 15:19:57 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:15:48.719 15:19:57 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:48.719 15:19:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:48.719 15:19:57 -- host/auth.sh@44 -- # digest=sha256 00:15:48.719 15:19:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:48.719 15:19:57 -- host/auth.sh@44 -- # keyid=1 00:15:48.719 15:19:57 -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:15:48.719 15:19:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:48.719 15:19:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:15:48.719 15:19:57 -- host/auth.sh@49 -- # echo DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:15:48.719 15:19:57 -- host/auth.sh@100 -- # IFS=, 00:15:48.719 15:19:57 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:15:48.719 15:19:57 -- host/auth.sh@100 -- # IFS=, 00:15:48.719 15:19:57 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:48.719 15:19:57 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:15:48.719 15:19:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:48.719 15:19:57 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:15:48.719 15:19:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:48.719 15:19:57 -- host/auth.sh@68 -- # keyid=1 00:15:48.719 15:19:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:48.719 15:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:48.719 15:19:57 -- common/autotest_common.sh@10 -- # set +x 00:15:48.719 15:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:48.719 15:19:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:48.719 15:19:57 -- nvmf/common.sh@717 -- # local ip 00:15:48.719 15:19:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:48.719 15:19:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:48.719 15:19:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:48.719 15:19:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:48.719 15:19:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:48.719 15:19:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:48.719 15:19:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:48.719 15:19:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:48.719 15:19:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:48.719 15:19:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:15:48.719 15:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:48.719 15:19:57 -- common/autotest_common.sh@10 -- # set +x 00:15:48.978 
nvme0n1 00:15:48.978 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:48.978 15:19:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:48.978 15:19:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:48.978 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:48.978 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:48.978 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:48.978 15:19:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.978 15:19:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:48.978 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:48.978 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:48.978 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:48.978 15:19:58 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:15:48.978 15:19:58 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.978 15:19:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:48.978 15:19:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:15:48.978 15:19:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:48.978 15:19:58 -- host/auth.sh@44 -- # digest=sha256 00:15:48.978 15:19:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:48.978 15:19:58 -- host/auth.sh@44 -- # keyid=0 00:15:48.978 15:19:58 -- host/auth.sh@45 -- # key=DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:15:48.978 15:19:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:48.978 15:19:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:15:48.978 15:19:58 -- host/auth.sh@49 -- # echo DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:15:48.978 15:19:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:15:48.978 15:19:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:48.978 15:19:58 -- host/auth.sh@68 -- # digest=sha256 00:15:48.978 15:19:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:15:48.978 15:19:58 -- host/auth.sh@68 -- # keyid=0 00:15:48.978 15:19:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:48.978 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:48.978 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:48.978 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:48.978 15:19:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:48.978 15:19:58 -- nvmf/common.sh@717 -- # local ip 00:15:48.978 15:19:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:48.978 15:19:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:48.978 15:19:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:48.978 15:19:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:48.978 15:19:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:48.978 15:19:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:48.978 15:19:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:48.978 15:19:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:48.978 15:19:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:48.978 15:19:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:15:48.978 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:48.978 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:48.978 nvme0n1 
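The initiator half of each connect_authenticate pass above comes down to the same RPCs against the nvmf_tgt started earlier. rpc_cmd is a wrapper around scripts/rpc.py, so run directly it would look roughly like this, with the NQNs, address, and key file taken from this run (the matching target-side secret is written by nvmet_auth_set_key just before the connect):

  scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-null.VKa         # register the DHHC-1 key file once
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
  # a successful attach reports controller nvme0 / namespace nvme0n1, as printed above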
00:15:48.978 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:48.978 15:19:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:48.978 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:48.978 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:48.978 15:19:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:48.978 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.236 15:19:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.236 15:19:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:49.236 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.236 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:49.236 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.236 15:19:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:49.236 15:19:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:49.236 15:19:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:49.236 15:19:58 -- host/auth.sh@44 -- # digest=sha256 00:15:49.236 15:19:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:49.236 15:19:58 -- host/auth.sh@44 -- # keyid=1 00:15:49.236 15:19:58 -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:15:49.236 15:19:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:49.236 15:19:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:15:49.236 15:19:58 -- host/auth.sh@49 -- # echo DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:15:49.236 15:19:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:15:49.236 15:19:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:49.236 15:19:58 -- host/auth.sh@68 -- # digest=sha256 00:15:49.236 15:19:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:15:49.236 15:19:58 -- host/auth.sh@68 -- # keyid=1 00:15:49.236 15:19:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:49.236 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.236 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:49.236 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.236 15:19:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:49.236 15:19:58 -- nvmf/common.sh@717 -- # local ip 00:15:49.236 15:19:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:49.236 15:19:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:49.236 15:19:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.236 15:19:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.236 15:19:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:49.236 15:19:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.236 15:19:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:49.236 15:19:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:49.236 15:19:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:49.236 15:19:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:15:49.236 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.236 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:49.236 nvme0n1 00:15:49.236 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.236 15:19:58 -- host/auth.sh@73 -- # 
jq -r '.[].name' 00:15:49.236 15:19:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:49.236 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.236 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:49.236 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.236 15:19:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.236 15:19:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:49.236 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.236 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:49.236 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.236 15:19:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:49.236 15:19:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:15:49.236 15:19:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:49.236 15:19:58 -- host/auth.sh@44 -- # digest=sha256 00:15:49.236 15:19:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:49.236 15:19:58 -- host/auth.sh@44 -- # keyid=2 00:15:49.236 15:19:58 -- host/auth.sh@45 -- # key=DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:15:49.236 15:19:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:49.236 15:19:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:15:49.236 15:19:58 -- host/auth.sh@49 -- # echo DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:15:49.236 15:19:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:15:49.236 15:19:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:49.236 15:19:58 -- host/auth.sh@68 -- # digest=sha256 00:15:49.236 15:19:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:15:49.236 15:19:58 -- host/auth.sh@68 -- # keyid=2 00:15:49.236 15:19:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:49.236 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.236 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:49.236 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.236 15:19:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:49.236 15:19:58 -- nvmf/common.sh@717 -- # local ip 00:15:49.236 15:19:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:49.236 15:19:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:49.236 15:19:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.236 15:19:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.236 15:19:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:49.236 15:19:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.236 15:19:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:49.236 15:19:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:49.236 15:19:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:49.236 15:19:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:15:49.236 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.236 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:49.495 nvme0n1 00:15:49.495 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.495 15:19:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:49.495 15:19:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:49.495 15:19:58 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:15:49.495 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:49.495 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.495 15:19:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.495 15:19:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:49.495 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.495 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:49.495 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.495 15:19:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:49.495 15:19:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:15:49.495 15:19:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:49.495 15:19:58 -- host/auth.sh@44 -- # digest=sha256 00:15:49.495 15:19:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:49.495 15:19:58 -- host/auth.sh@44 -- # keyid=3 00:15:49.495 15:19:58 -- host/auth.sh@45 -- # key=DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:15:49.495 15:19:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:49.495 15:19:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:15:49.495 15:19:58 -- host/auth.sh@49 -- # echo DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:15:49.495 15:19:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:15:49.495 15:19:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:49.495 15:19:58 -- host/auth.sh@68 -- # digest=sha256 00:15:49.495 15:19:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:15:49.495 15:19:58 -- host/auth.sh@68 -- # keyid=3 00:15:49.495 15:19:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:49.495 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.495 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:49.495 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.495 15:19:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:49.495 15:19:58 -- nvmf/common.sh@717 -- # local ip 00:15:49.495 15:19:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:49.495 15:19:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:49.495 15:19:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.495 15:19:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.495 15:19:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:49.495 15:19:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.495 15:19:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:49.495 15:19:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:49.495 15:19:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:49.495 15:19:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:15:49.495 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.495 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:49.753 nvme0n1 00:15:49.753 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.753 15:19:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:49.753 15:19:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:49.753 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.753 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:49.753 15:19:58 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.753 15:19:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.753 15:19:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:49.753 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.753 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:49.753 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.753 15:19:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:49.753 15:19:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:15:49.753 15:19:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:49.753 15:19:58 -- host/auth.sh@44 -- # digest=sha256 00:15:49.753 15:19:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:49.753 15:19:58 -- host/auth.sh@44 -- # keyid=4 00:15:49.753 15:19:58 -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:15:49.753 15:19:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:49.753 15:19:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:15:49.753 15:19:58 -- host/auth.sh@49 -- # echo DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:15:49.753 15:19:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:15:49.753 15:19:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:49.753 15:19:58 -- host/auth.sh@68 -- # digest=sha256 00:15:49.753 15:19:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:15:49.753 15:19:58 -- host/auth.sh@68 -- # keyid=4 00:15:49.753 15:19:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:49.753 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.753 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:49.753 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.753 15:19:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:49.753 15:19:58 -- nvmf/common.sh@717 -- # local ip 00:15:49.753 15:19:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:49.753 15:19:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:49.753 15:19:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.753 15:19:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.753 15:19:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:49.753 15:19:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.753 15:19:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:49.753 15:19:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:49.753 15:19:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:49.753 15:19:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:49.753 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.753 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:49.753 nvme0n1 00:15:49.753 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.753 15:19:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:49.753 15:19:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:49.753 15:19:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.753 15:19:58 -- common/autotest_common.sh@10 -- # set +x 00:15:49.753 15:19:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.024 15:19:59 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.024 15:19:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:50.024 15:19:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.024 15:19:59 -- common/autotest_common.sh@10 -- # set +x 00:15:50.024 15:19:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.024 15:19:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.024 15:19:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:50.024 15:19:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:15:50.024 15:19:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:50.024 15:19:59 -- host/auth.sh@44 -- # digest=sha256 00:15:50.024 15:19:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:50.024 15:19:59 -- host/auth.sh@44 -- # keyid=0 00:15:50.024 15:19:59 -- host/auth.sh@45 -- # key=DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:15:50.024 15:19:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:50.024 15:19:59 -- host/auth.sh@48 -- # echo ffdhe3072 00:15:50.301 15:19:59 -- host/auth.sh@49 -- # echo DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:15:50.301 15:19:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:15:50.301 15:19:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:50.301 15:19:59 -- host/auth.sh@68 -- # digest=sha256 00:15:50.301 15:19:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:15:50.301 15:19:59 -- host/auth.sh@68 -- # keyid=0 00:15:50.301 15:19:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.301 15:19:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.301 15:19:59 -- common/autotest_common.sh@10 -- # set +x 00:15:50.301 15:19:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.301 15:19:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:50.301 15:19:59 -- nvmf/common.sh@717 -- # local ip 00:15:50.301 15:19:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:50.301 15:19:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:50.301 15:19:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:50.301 15:19:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:50.301 15:19:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:50.301 15:19:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:50.301 15:19:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:50.301 15:19:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:50.301 15:19:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:50.301 15:19:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:15:50.301 15:19:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.301 15:19:59 -- common/autotest_common.sh@10 -- # set +x 00:15:50.301 nvme0n1 00:15:50.301 15:19:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.301 15:19:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:50.301 15:19:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.301 15:19:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:50.301 15:19:59 -- common/autotest_common.sh@10 -- # set +x 00:15:50.301 15:19:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.301 15:19:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.301 15:19:59 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:50.301 15:19:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.301 15:19:59 -- common/autotest_common.sh@10 -- # set +x 00:15:50.559 15:19:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.559 15:19:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:50.559 15:19:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:15:50.559 15:19:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:50.559 15:19:59 -- host/auth.sh@44 -- # digest=sha256 00:15:50.559 15:19:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:50.559 15:19:59 -- host/auth.sh@44 -- # keyid=1 00:15:50.559 15:19:59 -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:15:50.559 15:19:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:50.559 15:19:59 -- host/auth.sh@48 -- # echo ffdhe3072 00:15:50.559 15:19:59 -- host/auth.sh@49 -- # echo DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:15:50.559 15:19:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:15:50.559 15:19:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:50.560 15:19:59 -- host/auth.sh@68 -- # digest=sha256 00:15:50.560 15:19:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:15:50.560 15:19:59 -- host/auth.sh@68 -- # keyid=1 00:15:50.560 15:19:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.560 15:19:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.560 15:19:59 -- common/autotest_common.sh@10 -- # set +x 00:15:50.560 15:19:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.560 15:19:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:50.560 15:19:59 -- nvmf/common.sh@717 -- # local ip 00:15:50.560 15:19:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:50.560 15:19:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:50.560 15:19:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:50.560 15:19:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:50.560 15:19:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:50.560 15:19:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:50.560 15:19:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:50.560 15:19:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:50.560 15:19:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:50.560 15:19:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:15:50.560 15:19:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.560 15:19:59 -- common/autotest_common.sh@10 -- # set +x 00:15:50.560 nvme0n1 00:15:50.560 15:19:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.560 15:19:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:50.560 15:19:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.560 15:19:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:50.560 15:19:59 -- common/autotest_common.sh@10 -- # set +x 00:15:50.560 15:19:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.560 15:19:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.560 15:19:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:50.560 15:19:59 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:15:50.560 15:19:59 -- common/autotest_common.sh@10 -- # set +x 00:15:50.560 15:19:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.560 15:19:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:50.560 15:19:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:15:50.560 15:19:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:50.560 15:19:59 -- host/auth.sh@44 -- # digest=sha256 00:15:50.560 15:19:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:50.560 15:19:59 -- host/auth.sh@44 -- # keyid=2 00:15:50.560 15:19:59 -- host/auth.sh@45 -- # key=DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:15:50.560 15:19:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:50.560 15:19:59 -- host/auth.sh@48 -- # echo ffdhe3072 00:15:50.560 15:19:59 -- host/auth.sh@49 -- # echo DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:15:50.560 15:19:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:15:50.560 15:19:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:50.560 15:19:59 -- host/auth.sh@68 -- # digest=sha256 00:15:50.560 15:19:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:15:50.560 15:19:59 -- host/auth.sh@68 -- # keyid=2 00:15:50.560 15:19:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.560 15:19:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.560 15:19:59 -- common/autotest_common.sh@10 -- # set +x 00:15:50.560 15:19:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.560 15:19:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:50.560 15:19:59 -- nvmf/common.sh@717 -- # local ip 00:15:50.560 15:19:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:50.560 15:19:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:50.560 15:19:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:50.560 15:19:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:50.560 15:19:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:50.560 15:19:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:50.560 15:19:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:50.560 15:19:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:50.560 15:19:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:50.560 15:19:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:15:50.560 15:19:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.560 15:19:59 -- common/autotest_common.sh@10 -- # set +x 00:15:50.820 nvme0n1 00:15:50.820 15:19:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.820 15:19:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:50.820 15:19:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.820 15:19:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:50.820 15:19:59 -- common/autotest_common.sh@10 -- # set +x 00:15:50.820 15:19:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.820 15:19:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.820 15:19:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:50.820 15:19:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.820 15:19:59 -- common/autotest_common.sh@10 -- # set +x 00:15:50.820 15:19:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.820 
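On the initiator side, each pass in the trace narrows SPDK to a single digest and DH group, connects with one of the pre-loaded secrets, verifies the controller came up, and tears it down again. Written out as plain RPC calls (rpc_cmd in the trace is the test suite's wrapper around scripts/rpc.py, and key0 through key4 are names given to the secrets earlier in auth.sh; both are assumptions about context not shown in this excerpt), a single pass looks roughly like this:

  # One authentication pass, as driven from the SPDK initiator in the trace.
  # Assumptions: scripts/rpc.py talks to the running nvmf target application,
  # and the DH-HMAC-CHAP secrets key0..key4 were registered earlier in auth.sh.
  digest=sha256 dhgroup=ffdhe3072 keyid=2

  # Restrict the initiator to one digest/DH-group pair for this pass.
  scripts/rpc.py bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Connect to the kernel nvmet target, authenticating with the selected key.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid"

  # The attach only succeeds if authentication passed: verify, then clean up.
  [[ "$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0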
15:19:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:50.820 15:19:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:15:50.820 15:19:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:50.820 15:19:59 -- host/auth.sh@44 -- # digest=sha256 00:15:50.820 15:19:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:50.820 15:19:59 -- host/auth.sh@44 -- # keyid=3 00:15:50.820 15:19:59 -- host/auth.sh@45 -- # key=DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:15:50.820 15:19:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:50.820 15:19:59 -- host/auth.sh@48 -- # echo ffdhe3072 00:15:50.820 15:19:59 -- host/auth.sh@49 -- # echo DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:15:50.820 15:19:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:15:50.820 15:19:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:50.820 15:19:59 -- host/auth.sh@68 -- # digest=sha256 00:15:50.820 15:19:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:15:50.820 15:19:59 -- host/auth.sh@68 -- # keyid=3 00:15:50.820 15:19:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.820 15:19:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.820 15:19:59 -- common/autotest_common.sh@10 -- # set +x 00:15:50.820 15:19:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.820 15:19:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:50.820 15:19:59 -- nvmf/common.sh@717 -- # local ip 00:15:50.820 15:19:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:50.820 15:19:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:50.820 15:19:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:50.820 15:19:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:50.820 15:19:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:50.820 15:19:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:50.820 15:19:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:50.820 15:19:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:50.820 15:19:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:50.820 15:19:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:15:50.820 15:19:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.820 15:19:59 -- common/autotest_common.sh@10 -- # set +x 00:15:51.079 nvme0n1 00:15:51.079 15:20:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.079 15:20:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:51.079 15:20:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:51.079 15:20:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.079 15:20:00 -- common/autotest_common.sh@10 -- # set +x 00:15:51.079 15:20:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.079 15:20:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.079 15:20:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:51.079 15:20:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.079 15:20:00 -- common/autotest_common.sh@10 -- # set +x 00:15:51.079 15:20:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.079 15:20:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:51.079 15:20:00 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:15:51.079 15:20:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:51.079 15:20:00 -- host/auth.sh@44 -- # digest=sha256 00:15:51.079 15:20:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:51.079 15:20:00 -- host/auth.sh@44 -- # keyid=4 00:15:51.079 15:20:00 -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:15:51.079 15:20:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:51.079 15:20:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:15:51.079 15:20:00 -- host/auth.sh@49 -- # echo DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:15:51.079 15:20:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:15:51.079 15:20:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:51.079 15:20:00 -- host/auth.sh@68 -- # digest=sha256 00:15:51.079 15:20:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:15:51.079 15:20:00 -- host/auth.sh@68 -- # keyid=4 00:15:51.079 15:20:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.079 15:20:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.079 15:20:00 -- common/autotest_common.sh@10 -- # set +x 00:15:51.079 15:20:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.079 15:20:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:51.079 15:20:00 -- nvmf/common.sh@717 -- # local ip 00:15:51.079 15:20:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:51.079 15:20:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:51.079 15:20:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:51.079 15:20:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:51.079 15:20:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:51.079 15:20:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:51.079 15:20:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:51.079 15:20:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:51.079 15:20:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:51.079 15:20:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:51.079 15:20:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.079 15:20:00 -- common/autotest_common.sh@10 -- # set +x 00:15:51.079 nvme0n1 00:15:51.079 15:20:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.361 15:20:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:51.361 15:20:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.361 15:20:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:51.361 15:20:00 -- common/autotest_common.sh@10 -- # set +x 00:15:51.361 15:20:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.361 15:20:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.361 15:20:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:51.361 15:20:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.362 15:20:00 -- common/autotest_common.sh@10 -- # set +x 00:15:51.362 15:20:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.362 15:20:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.362 15:20:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:51.362 15:20:00 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:15:51.362 15:20:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:51.362 15:20:00 -- host/auth.sh@44 -- # digest=sha256 00:15:51.362 15:20:00 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:51.362 15:20:00 -- host/auth.sh@44 -- # keyid=0 00:15:51.362 15:20:00 -- host/auth.sh@45 -- # key=DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:15:51.362 15:20:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:51.362 15:20:00 -- host/auth.sh@48 -- # echo ffdhe4096 00:15:51.928 15:20:01 -- host/auth.sh@49 -- # echo DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:15:51.928 15:20:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:15:51.928 15:20:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:51.928 15:20:01 -- host/auth.sh@68 -- # digest=sha256 00:15:51.928 15:20:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:15:51.928 15:20:01 -- host/auth.sh@68 -- # keyid=0 00:15:51.928 15:20:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:51.928 15:20:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.928 15:20:01 -- common/autotest_common.sh@10 -- # set +x 00:15:51.928 15:20:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.928 15:20:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:51.928 15:20:01 -- nvmf/common.sh@717 -- # local ip 00:15:51.928 15:20:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:51.928 15:20:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:51.928 15:20:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:51.928 15:20:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:51.928 15:20:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:51.928 15:20:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:51.928 15:20:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:51.928 15:20:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:51.928 15:20:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:51.928 15:20:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:15:51.928 15:20:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.928 15:20:01 -- common/autotest_common.sh@10 -- # set +x 00:15:52.186 nvme0n1 00:15:52.186 15:20:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.186 15:20:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.186 15:20:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:52.186 15:20:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.186 15:20:01 -- common/autotest_common.sh@10 -- # set +x 00:15:52.186 15:20:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.186 15:20:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.186 15:20:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:52.186 15:20:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.186 15:20:01 -- common/autotest_common.sh@10 -- # set +x 00:15:52.186 15:20:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.186 15:20:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:52.186 15:20:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:15:52.186 15:20:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:52.186 15:20:01 -- host/auth.sh@44 -- # 
digest=sha256 00:15:52.186 15:20:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:52.186 15:20:01 -- host/auth.sh@44 -- # keyid=1 00:15:52.186 15:20:01 -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:15:52.186 15:20:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:52.186 15:20:01 -- host/auth.sh@48 -- # echo ffdhe4096 00:15:52.186 15:20:01 -- host/auth.sh@49 -- # echo DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:15:52.186 15:20:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:15:52.186 15:20:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:52.186 15:20:01 -- host/auth.sh@68 -- # digest=sha256 00:15:52.186 15:20:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:15:52.186 15:20:01 -- host/auth.sh@68 -- # keyid=1 00:15:52.186 15:20:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:52.186 15:20:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.186 15:20:01 -- common/autotest_common.sh@10 -- # set +x 00:15:52.186 15:20:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.186 15:20:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:52.186 15:20:01 -- nvmf/common.sh@717 -- # local ip 00:15:52.186 15:20:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:52.186 15:20:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:52.186 15:20:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:52.186 15:20:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:52.186 15:20:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:52.186 15:20:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:52.186 15:20:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:52.186 15:20:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:52.186 15:20:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:52.186 15:20:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:15:52.186 15:20:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.186 15:20:01 -- common/autotest_common.sh@10 -- # set +x 00:15:52.449 nvme0n1 00:15:52.449 15:20:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.449 15:20:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.449 15:20:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.449 15:20:01 -- common/autotest_common.sh@10 -- # set +x 00:15:52.449 15:20:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:52.449 15:20:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.449 15:20:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.449 15:20:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:52.449 15:20:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.449 15:20:01 -- common/autotest_common.sh@10 -- # set +x 00:15:52.449 15:20:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.449 15:20:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:52.449 15:20:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:15:52.449 15:20:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:52.449 15:20:01 -- host/auth.sh@44 -- # digest=sha256 00:15:52.449 15:20:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:52.449 15:20:01 -- host/auth.sh@44 
-- # keyid=2 00:15:52.449 15:20:01 -- host/auth.sh@45 -- # key=DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:15:52.449 15:20:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:52.449 15:20:01 -- host/auth.sh@48 -- # echo ffdhe4096 00:15:52.449 15:20:01 -- host/auth.sh@49 -- # echo DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:15:52.449 15:20:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:15:52.449 15:20:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:52.449 15:20:01 -- host/auth.sh@68 -- # digest=sha256 00:15:52.449 15:20:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:15:52.449 15:20:01 -- host/auth.sh@68 -- # keyid=2 00:15:52.449 15:20:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:52.449 15:20:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.449 15:20:01 -- common/autotest_common.sh@10 -- # set +x 00:15:52.449 15:20:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.449 15:20:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:52.449 15:20:01 -- nvmf/common.sh@717 -- # local ip 00:15:52.449 15:20:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:52.449 15:20:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:52.449 15:20:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:52.449 15:20:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:52.449 15:20:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:52.449 15:20:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:52.449 15:20:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:52.449 15:20:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:52.449 15:20:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:52.449 15:20:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:15:52.449 15:20:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.449 15:20:01 -- common/autotest_common.sh@10 -- # set +x 00:15:52.707 nvme0n1 00:15:52.707 15:20:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.707 15:20:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.707 15:20:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.707 15:20:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:52.707 15:20:01 -- common/autotest_common.sh@10 -- # set +x 00:15:52.708 15:20:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.708 15:20:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.708 15:20:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:52.708 15:20:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.708 15:20:01 -- common/autotest_common.sh@10 -- # set +x 00:15:52.708 15:20:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.708 15:20:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:52.708 15:20:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:15:52.708 15:20:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:52.708 15:20:01 -- host/auth.sh@44 -- # digest=sha256 00:15:52.708 15:20:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:52.708 15:20:01 -- host/auth.sh@44 -- # keyid=3 00:15:52.708 15:20:01 -- host/auth.sh@45 -- # key=DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:15:52.708 15:20:01 
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:52.708 15:20:01 -- host/auth.sh@48 -- # echo ffdhe4096 00:15:52.708 15:20:01 -- host/auth.sh@49 -- # echo DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:15:52.708 15:20:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:15:52.708 15:20:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:52.708 15:20:01 -- host/auth.sh@68 -- # digest=sha256 00:15:52.708 15:20:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:15:52.708 15:20:01 -- host/auth.sh@68 -- # keyid=3 00:15:52.708 15:20:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:52.708 15:20:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.708 15:20:01 -- common/autotest_common.sh@10 -- # set +x 00:15:52.708 15:20:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.708 15:20:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:52.708 15:20:01 -- nvmf/common.sh@717 -- # local ip 00:15:52.708 15:20:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:52.708 15:20:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:52.708 15:20:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:52.708 15:20:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:52.708 15:20:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:52.708 15:20:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:52.708 15:20:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:52.708 15:20:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:52.708 15:20:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:52.708 15:20:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:15:52.708 15:20:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.708 15:20:01 -- common/autotest_common.sh@10 -- # set +x 00:15:52.966 nvme0n1 00:15:52.966 15:20:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.966 15:20:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:52.966 15:20:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:52.966 15:20:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.966 15:20:02 -- common/autotest_common.sh@10 -- # set +x 00:15:52.966 15:20:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.966 15:20:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.966 15:20:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:52.966 15:20:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.966 15:20:02 -- common/autotest_common.sh@10 -- # set +x 00:15:52.966 15:20:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.966 15:20:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:52.966 15:20:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:15:52.966 15:20:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:52.966 15:20:02 -- host/auth.sh@44 -- # digest=sha256 00:15:52.966 15:20:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:52.966 15:20:02 -- host/auth.sh@44 -- # keyid=4 00:15:52.966 15:20:02 -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:15:52.966 15:20:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:52.966 15:20:02 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:15:52.966 15:20:02 -- host/auth.sh@49 -- # echo DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:15:52.966 15:20:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:15:52.966 15:20:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:52.966 15:20:02 -- host/auth.sh@68 -- # digest=sha256 00:15:52.966 15:20:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:15:52.966 15:20:02 -- host/auth.sh@68 -- # keyid=4 00:15:52.966 15:20:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:52.966 15:20:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.966 15:20:02 -- common/autotest_common.sh@10 -- # set +x 00:15:52.966 15:20:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.966 15:20:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:52.966 15:20:02 -- nvmf/common.sh@717 -- # local ip 00:15:52.966 15:20:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:52.966 15:20:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:52.966 15:20:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:52.966 15:20:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:52.966 15:20:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:52.966 15:20:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:52.966 15:20:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:52.966 15:20:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:52.966 15:20:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:52.966 15:20:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:52.966 15:20:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.966 15:20:02 -- common/autotest_common.sh@10 -- # set +x 00:15:53.225 nvme0n1 00:15:53.225 15:20:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.225 15:20:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:53.225 15:20:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.225 15:20:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:53.225 15:20:02 -- common/autotest_common.sh@10 -- # set +x 00:15:53.225 15:20:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.225 15:20:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.225 15:20:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:53.225 15:20:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.225 15:20:02 -- common/autotest_common.sh@10 -- # set +x 00:15:53.225 15:20:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.225 15:20:02 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.225 15:20:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:53.225 15:20:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:15:53.225 15:20:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:53.225 15:20:02 -- host/auth.sh@44 -- # digest=sha256 00:15:53.225 15:20:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:53.225 15:20:02 -- host/auth.sh@44 -- # keyid=0 00:15:53.225 15:20:02 -- host/auth.sh@45 -- # key=DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:15:53.225 15:20:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:53.225 15:20:02 -- host/auth.sh@48 -- # echo ffdhe6144 00:15:55.134 15:20:04 -- 
host/auth.sh@49 -- # echo DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:15:55.134 15:20:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:15:55.134 15:20:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:55.134 15:20:04 -- host/auth.sh@68 -- # digest=sha256 00:15:55.134 15:20:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:15:55.134 15:20:04 -- host/auth.sh@68 -- # keyid=0 00:15:55.134 15:20:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:55.134 15:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.134 15:20:04 -- common/autotest_common.sh@10 -- # set +x 00:15:55.134 15:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.134 15:20:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:55.134 15:20:04 -- nvmf/common.sh@717 -- # local ip 00:15:55.134 15:20:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:55.134 15:20:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:55.134 15:20:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:55.134 15:20:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:55.134 15:20:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:55.134 15:20:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:55.134 15:20:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:55.134 15:20:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:55.134 15:20:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:55.134 15:20:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:15:55.134 15:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.134 15:20:04 -- common/autotest_common.sh@10 -- # set +x 00:15:55.392 nvme0n1 00:15:55.392 15:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.392 15:20:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:55.392 15:20:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:55.392 15:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.392 15:20:04 -- common/autotest_common.sh@10 -- # set +x 00:15:55.392 15:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.651 15:20:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.651 15:20:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:55.651 15:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.651 15:20:04 -- common/autotest_common.sh@10 -- # set +x 00:15:55.651 15:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.651 15:20:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:55.651 15:20:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:15:55.651 15:20:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:55.651 15:20:04 -- host/auth.sh@44 -- # digest=sha256 00:15:55.651 15:20:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:55.651 15:20:04 -- host/auth.sh@44 -- # keyid=1 00:15:55.651 15:20:04 -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:15:55.651 15:20:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:55.651 15:20:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:15:55.651 15:20:04 -- host/auth.sh@49 -- # echo DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:15:55.651 15:20:04 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:15:55.651 15:20:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:55.651 15:20:04 -- host/auth.sh@68 -- # digest=sha256 00:15:55.651 15:20:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:15:55.651 15:20:04 -- host/auth.sh@68 -- # keyid=1 00:15:55.651 15:20:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:55.651 15:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.651 15:20:04 -- common/autotest_common.sh@10 -- # set +x 00:15:55.651 15:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.651 15:20:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:55.651 15:20:04 -- nvmf/common.sh@717 -- # local ip 00:15:55.651 15:20:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:55.651 15:20:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:55.651 15:20:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:55.651 15:20:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:55.651 15:20:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:55.651 15:20:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:55.651 15:20:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:55.651 15:20:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:55.651 15:20:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:55.651 15:20:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:15:55.651 15:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.651 15:20:04 -- common/autotest_common.sh@10 -- # set +x 00:15:55.910 nvme0n1 00:15:55.910 15:20:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.910 15:20:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:55.910 15:20:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.910 15:20:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:55.910 15:20:05 -- common/autotest_common.sh@10 -- # set +x 00:15:55.910 15:20:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.910 15:20:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.910 15:20:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:55.910 15:20:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.910 15:20:05 -- common/autotest_common.sh@10 -- # set +x 00:15:55.910 15:20:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.910 15:20:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:55.910 15:20:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:15:55.910 15:20:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:55.910 15:20:05 -- host/auth.sh@44 -- # digest=sha256 00:15:55.910 15:20:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:55.910 15:20:05 -- host/auth.sh@44 -- # keyid=2 00:15:55.910 15:20:05 -- host/auth.sh@45 -- # key=DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:15:55.910 15:20:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:55.910 15:20:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:15:55.910 15:20:05 -- host/auth.sh@49 -- # echo DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:15:55.910 15:20:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:15:55.910 15:20:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:55.910 15:20:05 -- 
host/auth.sh@68 -- # digest=sha256 00:15:55.910 15:20:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:15:55.910 15:20:05 -- host/auth.sh@68 -- # keyid=2 00:15:55.910 15:20:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:55.910 15:20:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.910 15:20:05 -- common/autotest_common.sh@10 -- # set +x 00:15:55.910 15:20:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.910 15:20:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:55.910 15:20:05 -- nvmf/common.sh@717 -- # local ip 00:15:55.910 15:20:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:55.910 15:20:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:55.910 15:20:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:55.910 15:20:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:55.910 15:20:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:55.910 15:20:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:55.910 15:20:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:55.910 15:20:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:55.910 15:20:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:55.910 15:20:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:15:55.910 15:20:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.910 15:20:05 -- common/autotest_common.sh@10 -- # set +x 00:15:56.475 nvme0n1 00:15:56.475 15:20:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:56.475 15:20:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:56.475 15:20:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:56.475 15:20:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:56.475 15:20:05 -- common/autotest_common.sh@10 -- # set +x 00:15:56.475 15:20:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:56.475 15:20:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.475 15:20:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:56.475 15:20:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:56.475 15:20:05 -- common/autotest_common.sh@10 -- # set +x 00:15:56.475 15:20:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:56.475 15:20:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:56.475 15:20:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:15:56.475 15:20:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:56.475 15:20:05 -- host/auth.sh@44 -- # digest=sha256 00:15:56.475 15:20:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:56.475 15:20:05 -- host/auth.sh@44 -- # keyid=3 00:15:56.475 15:20:05 -- host/auth.sh@45 -- # key=DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:15:56.475 15:20:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:56.475 15:20:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:15:56.475 15:20:05 -- host/auth.sh@49 -- # echo DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:15:56.475 15:20:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:15:56.475 15:20:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:56.475 15:20:05 -- host/auth.sh@68 -- # digest=sha256 00:15:56.475 15:20:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:15:56.475 15:20:05 
-- host/auth.sh@68 -- # keyid=3 00:15:56.475 15:20:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:56.475 15:20:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:56.475 15:20:05 -- common/autotest_common.sh@10 -- # set +x 00:15:56.475 15:20:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:56.475 15:20:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:56.475 15:20:05 -- nvmf/common.sh@717 -- # local ip 00:15:56.476 15:20:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:56.476 15:20:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:56.476 15:20:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:56.476 15:20:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:56.476 15:20:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:56.476 15:20:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:56.476 15:20:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:56.476 15:20:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:56.476 15:20:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:56.476 15:20:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:15:56.476 15:20:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:56.476 15:20:05 -- common/autotest_common.sh@10 -- # set +x 00:15:56.734 nvme0n1 00:15:56.734 15:20:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:56.734 15:20:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:56.734 15:20:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:56.734 15:20:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:56.734 15:20:05 -- common/autotest_common.sh@10 -- # set +x 00:15:56.734 15:20:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:56.734 15:20:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.734 15:20:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:56.734 15:20:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:56.734 15:20:05 -- common/autotest_common.sh@10 -- # set +x 00:15:56.734 15:20:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:56.734 15:20:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:56.734 15:20:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:15:56.734 15:20:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:56.734 15:20:05 -- host/auth.sh@44 -- # digest=sha256 00:15:56.734 15:20:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:56.734 15:20:05 -- host/auth.sh@44 -- # keyid=4 00:15:56.734 15:20:05 -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:15:56.734 15:20:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:56.734 15:20:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:15:56.734 15:20:05 -- host/auth.sh@49 -- # echo DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:15:56.734 15:20:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:15:56.734 15:20:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:15:56.734 15:20:05 -- host/auth.sh@68 -- # digest=sha256 00:15:56.734 15:20:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:15:56.734 15:20:05 -- host/auth.sh@68 -- # keyid=4 00:15:56.734 15:20:05 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:56.734 15:20:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:56.734 15:20:05 -- common/autotest_common.sh@10 -- # set +x 00:15:56.734 15:20:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:56.734 15:20:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:15:56.734 15:20:05 -- nvmf/common.sh@717 -- # local ip 00:15:56.734 15:20:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:15:56.734 15:20:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:15:56.734 15:20:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:56.734 15:20:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:56.734 15:20:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:15:56.734 15:20:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:56.734 15:20:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:15:56.734 15:20:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:15:56.734 15:20:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:15:56.734 15:20:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:56.734 15:20:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:56.734 15:20:05 -- common/autotest_common.sh@10 -- # set +x 00:15:57.301 nvme0n1 00:15:57.301 15:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.301 15:20:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:15:57.301 15:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.301 15:20:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:15:57.301 15:20:06 -- common/autotest_common.sh@10 -- # set +x 00:15:57.301 15:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.301 15:20:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.301 15:20:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:57.301 15:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.301 15:20:06 -- common/autotest_common.sh@10 -- # set +x 00:15:57.301 15:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.301 15:20:06 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.301 15:20:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:15:57.301 15:20:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:15:57.301 15:20:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:15:57.301 15:20:06 -- host/auth.sh@44 -- # digest=sha256 00:15:57.301 15:20:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:57.301 15:20:06 -- host/auth.sh@44 -- # keyid=0 00:15:57.301 15:20:06 -- host/auth.sh@45 -- # key=DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:15:57.301 15:20:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:15:57.301 15:20:06 -- host/auth.sh@48 -- # echo ffdhe8192 00:16:01.488 15:20:10 -- host/auth.sh@49 -- # echo DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:01.488 15:20:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:16:01.488 15:20:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:01.488 15:20:10 -- host/auth.sh@68 -- # digest=sha256 00:16:01.488 15:20:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:16:01.488 15:20:10 -- host/auth.sh@68 -- # keyid=0 00:16:01.488 15:20:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
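The trace above repeats the same DH-HMAC-CHAP round trip for every digest/dhgroup/keyid combination. A condensed sketch of one such iteration, using only the commands visible in the trace (nvmet_auth_set_key is the test's own helper from host/auth.sh, and the named keys key0..key4 are assumed to have been registered with SPDK earlier in the script):

    digest=sha256 dhgroup=ffdhe8192 keyid=0
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # program the target-side secret for this round
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]   # attach succeeded, so auth passed
    rpc_cmd bdev_nvme_detach_controller nvme0
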
00:16:01.488 15:20:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:01.488 15:20:10 -- common/autotest_common.sh@10 -- # set +x 00:16:01.488 15:20:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:01.488 15:20:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:01.488 15:20:10 -- nvmf/common.sh@717 -- # local ip 00:16:01.488 15:20:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:01.488 15:20:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:01.488 15:20:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:01.488 15:20:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:01.488 15:20:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:01.488 15:20:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:01.488 15:20:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:01.488 15:20:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:01.488 15:20:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:01.488 15:20:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:01.488 15:20:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:01.488 15:20:10 -- common/autotest_common.sh@10 -- # set +x 00:16:01.488 nvme0n1 00:16:01.488 15:20:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:01.488 15:20:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:01.488 15:20:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:01.488 15:20:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:01.488 15:20:10 -- common/autotest_common.sh@10 -- # set +x 00:16:01.747 15:20:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:01.747 15:20:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.747 15:20:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:01.747 15:20:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:01.747 15:20:10 -- common/autotest_common.sh@10 -- # set +x 00:16:01.747 15:20:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:01.747 15:20:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:01.747 15:20:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:01.747 15:20:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:01.747 15:20:10 -- host/auth.sh@44 -- # digest=sha256 00:16:01.747 15:20:10 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:01.747 15:20:10 -- host/auth.sh@44 -- # keyid=1 00:16:01.747 15:20:10 -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:01.747 15:20:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:01.747 15:20:10 -- host/auth.sh@48 -- # echo ffdhe8192 00:16:01.747 15:20:10 -- host/auth.sh@49 -- # echo DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:01.747 15:20:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:16:01.747 15:20:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:01.747 15:20:10 -- host/auth.sh@68 -- # digest=sha256 00:16:01.747 15:20:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:16:01.747 15:20:10 -- host/auth.sh@68 -- # keyid=1 00:16:01.747 15:20:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:01.747 15:20:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:01.747 15:20:10 -- 
common/autotest_common.sh@10 -- # set +x 00:16:01.747 15:20:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:01.747 15:20:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:01.747 15:20:10 -- nvmf/common.sh@717 -- # local ip 00:16:01.747 15:20:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:01.747 15:20:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:01.747 15:20:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:01.747 15:20:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:01.747 15:20:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:01.747 15:20:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:01.747 15:20:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:01.747 15:20:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:01.747 15:20:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:01.747 15:20:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:01.747 15:20:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:01.747 15:20:10 -- common/autotest_common.sh@10 -- # set +x 00:16:02.314 nvme0n1 00:16:02.314 15:20:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:02.314 15:20:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:02.314 15:20:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:02.314 15:20:11 -- common/autotest_common.sh@10 -- # set +x 00:16:02.314 15:20:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:02.314 15:20:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:02.314 15:20:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.314 15:20:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:02.314 15:20:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:02.314 15:20:11 -- common/autotest_common.sh@10 -- # set +x 00:16:02.314 15:20:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:02.314 15:20:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:02.314 15:20:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:02.314 15:20:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:02.314 15:20:11 -- host/auth.sh@44 -- # digest=sha256 00:16:02.314 15:20:11 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:02.314 15:20:11 -- host/auth.sh@44 -- # keyid=2 00:16:02.314 15:20:11 -- host/auth.sh@45 -- # key=DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:02.314 15:20:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:02.314 15:20:11 -- host/auth.sh@48 -- # echo ffdhe8192 00:16:02.314 15:20:11 -- host/auth.sh@49 -- # echo DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:02.314 15:20:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:16:02.314 15:20:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:02.314 15:20:11 -- host/auth.sh@68 -- # digest=sha256 00:16:02.314 15:20:11 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:16:02.314 15:20:11 -- host/auth.sh@68 -- # keyid=2 00:16:02.314 15:20:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:02.314 15:20:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:02.314 15:20:11 -- common/autotest_common.sh@10 -- # set +x 00:16:02.315 15:20:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:02.315 15:20:11 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:16:02.315 15:20:11 -- nvmf/common.sh@717 -- # local ip 00:16:02.315 15:20:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:02.315 15:20:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:02.315 15:20:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:02.315 15:20:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:02.315 15:20:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:02.315 15:20:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:02.315 15:20:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:02.315 15:20:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:02.315 15:20:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:02.315 15:20:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:02.315 15:20:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:02.315 15:20:11 -- common/autotest_common.sh@10 -- # set +x 00:16:02.880 nvme0n1 00:16:02.880 15:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:02.880 15:20:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:02.880 15:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:02.880 15:20:12 -- common/autotest_common.sh@10 -- # set +x 00:16:03.139 15:20:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:03.139 15:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:03.139 15:20:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.139 15:20:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.139 15:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:03.139 15:20:12 -- common/autotest_common.sh@10 -- # set +x 00:16:03.139 15:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:03.139 15:20:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:03.139 15:20:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:03.139 15:20:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:03.139 15:20:12 -- host/auth.sh@44 -- # digest=sha256 00:16:03.139 15:20:12 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:03.139 15:20:12 -- host/auth.sh@44 -- # keyid=3 00:16:03.139 15:20:12 -- host/auth.sh@45 -- # key=DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:03.139 15:20:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:03.139 15:20:12 -- host/auth.sh@48 -- # echo ffdhe8192 00:16:03.139 15:20:12 -- host/auth.sh@49 -- # echo DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:03.139 15:20:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:16:03.139 15:20:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:03.139 15:20:12 -- host/auth.sh@68 -- # digest=sha256 00:16:03.139 15:20:12 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:16:03.139 15:20:12 -- host/auth.sh@68 -- # keyid=3 00:16:03.139 15:20:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:03.139 15:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:03.139 15:20:12 -- common/autotest_common.sh@10 -- # set +x 00:16:03.139 15:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:03.139 15:20:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:03.139 15:20:12 -- nvmf/common.sh@717 -- # local ip 00:16:03.139 15:20:12 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:16:03.139 15:20:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:03.139 15:20:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.139 15:20:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.139 15:20:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:03.139 15:20:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.139 15:20:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:03.139 15:20:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:03.139 15:20:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:03.139 15:20:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:03.139 15:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:03.139 15:20:12 -- common/autotest_common.sh@10 -- # set +x 00:16:03.708 nvme0n1 00:16:03.708 15:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:03.708 15:20:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:03.708 15:20:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:03.708 15:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:03.708 15:20:12 -- common/autotest_common.sh@10 -- # set +x 00:16:03.708 15:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:03.708 15:20:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.708 15:20:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.708 15:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:03.708 15:20:12 -- common/autotest_common.sh@10 -- # set +x 00:16:03.708 15:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:03.708 15:20:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:03.708 15:20:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:03.708 15:20:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:03.708 15:20:12 -- host/auth.sh@44 -- # digest=sha256 00:16:03.708 15:20:12 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:03.708 15:20:12 -- host/auth.sh@44 -- # keyid=4 00:16:03.708 15:20:12 -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:03.708 15:20:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:03.708 15:20:12 -- host/auth.sh@48 -- # echo ffdhe8192 00:16:03.708 15:20:12 -- host/auth.sh@49 -- # echo DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:03.708 15:20:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:16:03.708 15:20:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:03.708 15:20:12 -- host/auth.sh@68 -- # digest=sha256 00:16:03.708 15:20:12 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:16:03.708 15:20:12 -- host/auth.sh@68 -- # keyid=4 00:16:03.708 15:20:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:03.708 15:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:03.708 15:20:12 -- common/autotest_common.sh@10 -- # set +x 00:16:03.708 15:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:03.709 15:20:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:03.709 15:20:12 -- nvmf/common.sh@717 -- # local ip 00:16:03.709 15:20:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:03.709 15:20:12 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:16:03.709 15:20:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:03.709 15:20:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:03.709 15:20:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:03.709 15:20:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:03.709 15:20:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:03.709 15:20:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:03.709 15:20:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:03.709 15:20:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:03.709 15:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:03.709 15:20:12 -- common/autotest_common.sh@10 -- # set +x 00:16:04.664 nvme0n1 00:16:04.664 15:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.664 15:20:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.664 15:20:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:04.664 15:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.664 15:20:13 -- common/autotest_common.sh@10 -- # set +x 00:16:04.664 15:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.664 15:20:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.664 15:20:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.664 15:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.664 15:20:13 -- common/autotest_common.sh@10 -- # set +x 00:16:04.664 15:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.664 15:20:13 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:16:04.664 15:20:13 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.664 15:20:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:04.664 15:20:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:16:04.664 15:20:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:04.664 15:20:13 -- host/auth.sh@44 -- # digest=sha384 00:16:04.664 15:20:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:04.664 15:20:13 -- host/auth.sh@44 -- # keyid=0 00:16:04.664 15:20:13 -- host/auth.sh@45 -- # key=DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:04.664 15:20:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:04.664 15:20:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:04.664 15:20:13 -- host/auth.sh@49 -- # echo DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:04.664 15:20:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:16:04.664 15:20:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:04.664 15:20:13 -- host/auth.sh@68 -- # digest=sha384 00:16:04.664 15:20:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:04.664 15:20:13 -- host/auth.sh@68 -- # keyid=0 00:16:04.664 15:20:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:04.664 15:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.664 15:20:13 -- common/autotest_common.sh@10 -- # set +x 00:16:04.664 15:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.664 15:20:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:04.664 15:20:13 -- nvmf/common.sh@717 -- # local ip 00:16:04.664 15:20:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:04.664 15:20:13 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:16:04.664 15:20:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.664 15:20:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.664 15:20:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:04.664 15:20:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.664 15:20:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:04.664 15:20:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:04.664 15:20:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:04.664 15:20:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:04.664 15:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.665 15:20:13 -- common/autotest_common.sh@10 -- # set +x 00:16:04.665 nvme0n1 00:16:04.665 15:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.665 15:20:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.665 15:20:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:04.665 15:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.665 15:20:13 -- common/autotest_common.sh@10 -- # set +x 00:16:04.665 15:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.665 15:20:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.665 15:20:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.665 15:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.665 15:20:13 -- common/autotest_common.sh@10 -- # set +x 00:16:04.665 15:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.665 15:20:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:04.665 15:20:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:04.665 15:20:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:04.665 15:20:13 -- host/auth.sh@44 -- # digest=sha384 00:16:04.665 15:20:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:04.665 15:20:13 -- host/auth.sh@44 -- # keyid=1 00:16:04.665 15:20:13 -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:04.665 15:20:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:04.665 15:20:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:04.665 15:20:13 -- host/auth.sh@49 -- # echo DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:04.665 15:20:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:16:04.665 15:20:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:04.665 15:20:13 -- host/auth.sh@68 -- # digest=sha384 00:16:04.665 15:20:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:04.665 15:20:13 -- host/auth.sh@68 -- # keyid=1 00:16:04.665 15:20:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:04.665 15:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.665 15:20:13 -- common/autotest_common.sh@10 -- # set +x 00:16:04.665 15:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.665 15:20:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:04.665 15:20:13 -- nvmf/common.sh@717 -- # local ip 00:16:04.665 15:20:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:04.665 15:20:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:04.665 15:20:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.665 
15:20:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.665 15:20:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:04.665 15:20:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.665 15:20:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:04.665 15:20:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:04.665 15:20:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:04.665 15:20:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:04.665 15:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.665 15:20:13 -- common/autotest_common.sh@10 -- # set +x 00:16:04.924 nvme0n1 00:16:04.924 15:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.924 15:20:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.924 15:20:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:04.924 15:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.924 15:20:13 -- common/autotest_common.sh@10 -- # set +x 00:16:04.924 15:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.924 15:20:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.924 15:20:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.924 15:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.924 15:20:13 -- common/autotest_common.sh@10 -- # set +x 00:16:04.924 15:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.924 15:20:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:04.924 15:20:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:04.924 15:20:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:04.924 15:20:13 -- host/auth.sh@44 -- # digest=sha384 00:16:04.924 15:20:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:04.924 15:20:13 -- host/auth.sh@44 -- # keyid=2 00:16:04.924 15:20:13 -- host/auth.sh@45 -- # key=DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:04.924 15:20:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:04.924 15:20:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:04.924 15:20:13 -- host/auth.sh@49 -- # echo DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:04.924 15:20:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:16:04.924 15:20:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:04.924 15:20:13 -- host/auth.sh@68 -- # digest=sha384 00:16:04.924 15:20:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:04.924 15:20:13 -- host/auth.sh@68 -- # keyid=2 00:16:04.924 15:20:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:04.924 15:20:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.924 15:20:13 -- common/autotest_common.sh@10 -- # set +x 00:16:04.924 15:20:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.924 15:20:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:04.924 15:20:13 -- nvmf/common.sh@717 -- # local ip 00:16:04.924 15:20:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:04.924 15:20:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:04.924 15:20:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:04.924 15:20:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:04.924 15:20:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:04.924 15:20:14 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:04.924 15:20:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:04.924 15:20:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:04.924 15:20:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:04.924 15:20:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:04.924 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.924 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:04.924 nvme0n1 00:16:04.924 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.924 15:20:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:04.924 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.924 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:04.924 15:20:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:04.924 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.924 15:20:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.924 15:20:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:04.924 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.924 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.183 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.183 15:20:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:05.183 15:20:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:05.183 15:20:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:05.183 15:20:14 -- host/auth.sh@44 -- # digest=sha384 00:16:05.183 15:20:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:05.183 15:20:14 -- host/auth.sh@44 -- # keyid=3 00:16:05.183 15:20:14 -- host/auth.sh@45 -- # key=DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:05.184 15:20:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:05.184 15:20:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:05.184 15:20:14 -- host/auth.sh@49 -- # echo DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:05.184 15:20:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:16:05.184 15:20:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:05.184 15:20:14 -- host/auth.sh@68 -- # digest=sha384 00:16:05.184 15:20:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:05.184 15:20:14 -- host/auth.sh@68 -- # keyid=3 00:16:05.184 15:20:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:05.184 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.184 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.184 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.184 15:20:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:05.184 15:20:14 -- nvmf/common.sh@717 -- # local ip 00:16:05.184 15:20:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:05.184 15:20:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:05.184 15:20:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.184 15:20:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.184 15:20:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:05.184 15:20:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.184 15:20:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
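Each attach in the trace resolves its target address through get_main_ns_ip (nvmf/common.sh), whose checks appear above: the transport is mapped to a candidate variable name, the name is dereferenced, and the resulting address is printed. A minimal sketch of that selection, assuming the transport is carried in a variable such as TEST_TRANSPORT (the actual variable name is not visible in this excerpt):

    declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    ip=${ip_candidates[$TEST_TRANSPORT]}    # transport is "tcp" here, so ip=NVMF_INITIATOR_IP
    [[ -n ${!ip} ]] && echo "${!ip}"        # indirect expansion yields 10.0.0.1
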
00:16:05.184 15:20:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:05.184 15:20:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:05.184 15:20:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:05.184 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.184 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.184 nvme0n1 00:16:05.184 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.184 15:20:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.184 15:20:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:05.184 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.184 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.184 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.184 15:20:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.184 15:20:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:05.184 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.184 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.184 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.184 15:20:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:05.184 15:20:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:05.184 15:20:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:05.184 15:20:14 -- host/auth.sh@44 -- # digest=sha384 00:16:05.184 15:20:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:05.184 15:20:14 -- host/auth.sh@44 -- # keyid=4 00:16:05.184 15:20:14 -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:05.184 15:20:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:05.184 15:20:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:05.184 15:20:14 -- host/auth.sh@49 -- # echo DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:05.184 15:20:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:16:05.184 15:20:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:05.184 15:20:14 -- host/auth.sh@68 -- # digest=sha384 00:16:05.184 15:20:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:05.184 15:20:14 -- host/auth.sh@68 -- # keyid=4 00:16:05.184 15:20:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:05.184 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.184 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.184 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.184 15:20:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:05.184 15:20:14 -- nvmf/common.sh@717 -- # local ip 00:16:05.184 15:20:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:05.184 15:20:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:05.184 15:20:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.184 15:20:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.184 15:20:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:05.184 15:20:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.184 15:20:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:05.184 15:20:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:05.184 
15:20:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:05.184 15:20:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:05.184 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.184 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.442 nvme0n1 00:16:05.442 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.442 15:20:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.442 15:20:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:05.442 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.442 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.442 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.442 15:20:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.442 15:20:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:05.442 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.442 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.442 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.442 15:20:14 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.442 15:20:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:05.442 15:20:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:05.442 15:20:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:05.442 15:20:14 -- host/auth.sh@44 -- # digest=sha384 00:16:05.442 15:20:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:05.442 15:20:14 -- host/auth.sh@44 -- # keyid=0 00:16:05.442 15:20:14 -- host/auth.sh@45 -- # key=DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:05.442 15:20:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:05.442 15:20:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:05.442 15:20:14 -- host/auth.sh@49 -- # echo DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:05.442 15:20:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:16:05.442 15:20:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:05.442 15:20:14 -- host/auth.sh@68 -- # digest=sha384 00:16:05.442 15:20:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:05.442 15:20:14 -- host/auth.sh@68 -- # keyid=0 00:16:05.442 15:20:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:05.442 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.442 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.442 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.442 15:20:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:05.442 15:20:14 -- nvmf/common.sh@717 -- # local ip 00:16:05.442 15:20:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:05.442 15:20:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:05.442 15:20:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.442 15:20:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.442 15:20:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:05.442 15:20:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.442 15:20:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:05.442 15:20:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:05.442 15:20:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:05.442 15:20:14 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:05.442 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.442 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.442 nvme0n1 00:16:05.443 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.443 15:20:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.443 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.443 15:20:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:05.443 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.701 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.701 15:20:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.701 15:20:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:05.701 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.701 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.701 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.701 15:20:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:05.701 15:20:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:05.701 15:20:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:05.701 15:20:14 -- host/auth.sh@44 -- # digest=sha384 00:16:05.701 15:20:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:05.701 15:20:14 -- host/auth.sh@44 -- # keyid=1 00:16:05.701 15:20:14 -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:05.701 15:20:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:05.701 15:20:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:05.701 15:20:14 -- host/auth.sh@49 -- # echo DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:05.701 15:20:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:16:05.701 15:20:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:05.701 15:20:14 -- host/auth.sh@68 -- # digest=sha384 00:16:05.701 15:20:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:05.701 15:20:14 -- host/auth.sh@68 -- # keyid=1 00:16:05.701 15:20:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:05.701 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.701 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.701 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.701 15:20:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:05.701 15:20:14 -- nvmf/common.sh@717 -- # local ip 00:16:05.701 15:20:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:05.701 15:20:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:05.701 15:20:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.701 15:20:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.701 15:20:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:05.701 15:20:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.701 15:20:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:05.701 15:20:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:05.701 15:20:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:05.701 15:20:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:05.701 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.701 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.701 nvme0n1 00:16:05.701 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.701 15:20:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.701 15:20:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:05.701 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.701 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.701 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.701 15:20:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.701 15:20:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:05.701 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.701 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.701 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.701 15:20:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:05.701 15:20:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:05.701 15:20:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:05.701 15:20:14 -- host/auth.sh@44 -- # digest=sha384 00:16:05.701 15:20:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:05.701 15:20:14 -- host/auth.sh@44 -- # keyid=2 00:16:05.701 15:20:14 -- host/auth.sh@45 -- # key=DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:05.701 15:20:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:05.701 15:20:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:05.701 15:20:14 -- host/auth.sh@49 -- # echo DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:05.701 15:20:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:16:05.701 15:20:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:05.701 15:20:14 -- host/auth.sh@68 -- # digest=sha384 00:16:05.702 15:20:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:05.702 15:20:14 -- host/auth.sh@68 -- # keyid=2 00:16:05.702 15:20:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:05.702 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.702 15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.702 15:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.702 15:20:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:05.702 15:20:14 -- nvmf/common.sh@717 -- # local ip 00:16:05.702 15:20:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:05.702 15:20:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:05.702 15:20:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.702 15:20:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.702 15:20:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:05.702 15:20:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.702 15:20:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:05.702 15:20:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:05.702 15:20:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:05.960 15:20:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:05.960 15:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.960 
15:20:14 -- common/autotest_common.sh@10 -- # set +x 00:16:05.960 nvme0n1 00:16:05.960 15:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.960 15:20:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:05.960 15:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.960 15:20:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:05.960 15:20:15 -- common/autotest_common.sh@10 -- # set +x 00:16:05.960 15:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.960 15:20:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.960 15:20:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:05.960 15:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.960 15:20:15 -- common/autotest_common.sh@10 -- # set +x 00:16:05.960 15:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.960 15:20:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:05.960 15:20:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:05.960 15:20:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:05.960 15:20:15 -- host/auth.sh@44 -- # digest=sha384 00:16:05.960 15:20:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:05.960 15:20:15 -- host/auth.sh@44 -- # keyid=3 00:16:05.960 15:20:15 -- host/auth.sh@45 -- # key=DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:05.960 15:20:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:05.960 15:20:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:05.960 15:20:15 -- host/auth.sh@49 -- # echo DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:05.960 15:20:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:16:05.960 15:20:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:05.960 15:20:15 -- host/auth.sh@68 -- # digest=sha384 00:16:05.960 15:20:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:05.960 15:20:15 -- host/auth.sh@68 -- # keyid=3 00:16:05.960 15:20:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:05.960 15:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.960 15:20:15 -- common/autotest_common.sh@10 -- # set +x 00:16:05.960 15:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.960 15:20:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:05.960 15:20:15 -- nvmf/common.sh@717 -- # local ip 00:16:05.960 15:20:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:05.960 15:20:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:05.960 15:20:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.960 15:20:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.960 15:20:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:05.960 15:20:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.960 15:20:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:05.960 15:20:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:05.960 15:20:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:05.960 15:20:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:05.960 15:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.960 15:20:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.219 nvme0n1 00:16:06.219 15:20:15 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.220 15:20:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:06.220 15:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.220 15:20:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:06.220 15:20:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.220 15:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.220 15:20:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.220 15:20:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:06.220 15:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.220 15:20:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.220 15:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.220 15:20:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:06.220 15:20:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:06.220 15:20:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:06.220 15:20:15 -- host/auth.sh@44 -- # digest=sha384 00:16:06.220 15:20:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:06.220 15:20:15 -- host/auth.sh@44 -- # keyid=4 00:16:06.220 15:20:15 -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:06.220 15:20:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:06.220 15:20:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:06.220 15:20:15 -- host/auth.sh@49 -- # echo DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:06.220 15:20:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:16:06.220 15:20:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:06.220 15:20:15 -- host/auth.sh@68 -- # digest=sha384 00:16:06.220 15:20:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:06.220 15:20:15 -- host/auth.sh@68 -- # keyid=4 00:16:06.220 15:20:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:06.220 15:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.220 15:20:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.220 15:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.220 15:20:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:06.220 15:20:15 -- nvmf/common.sh@717 -- # local ip 00:16:06.220 15:20:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:06.220 15:20:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:06.220 15:20:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:06.220 15:20:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:06.220 15:20:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:06.220 15:20:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:06.220 15:20:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:06.220 15:20:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:06.220 15:20:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:06.220 15:20:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:06.220 15:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.220 15:20:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.220 nvme0n1 00:16:06.220 15:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.478 15:20:15 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:06.478 15:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.478 15:20:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:06.478 15:20:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.478 15:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.478 15:20:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.478 15:20:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:06.478 15:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.478 15:20:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.478 15:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.478 15:20:15 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.478 15:20:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:06.478 15:20:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:16:06.478 15:20:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:06.478 15:20:15 -- host/auth.sh@44 -- # digest=sha384 00:16:06.478 15:20:15 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:06.478 15:20:15 -- host/auth.sh@44 -- # keyid=0 00:16:06.478 15:20:15 -- host/auth.sh@45 -- # key=DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:06.478 15:20:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:06.478 15:20:15 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:06.478 15:20:15 -- host/auth.sh@49 -- # echo DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:06.478 15:20:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:16:06.478 15:20:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:06.478 15:20:15 -- host/auth.sh@68 -- # digest=sha384 00:16:06.478 15:20:15 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:06.478 15:20:15 -- host/auth.sh@68 -- # keyid=0 00:16:06.478 15:20:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.478 15:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.478 15:20:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.478 15:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.478 15:20:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:06.478 15:20:15 -- nvmf/common.sh@717 -- # local ip 00:16:06.478 15:20:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:06.478 15:20:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:06.478 15:20:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:06.478 15:20:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:06.478 15:20:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:06.478 15:20:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:06.478 15:20:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:06.478 15:20:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:06.478 15:20:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:06.478 15:20:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:06.478 15:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.478 15:20:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.736 nvme0n1 00:16:06.736 15:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.736 15:20:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:06.736 15:20:15 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:16:06.736 15:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.736 15:20:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.736 15:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.736 15:20:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.736 15:20:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:06.736 15:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.736 15:20:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.736 15:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.736 15:20:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:06.736 15:20:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:06.736 15:20:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:06.736 15:20:15 -- host/auth.sh@44 -- # digest=sha384 00:16:06.736 15:20:15 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:06.736 15:20:15 -- host/auth.sh@44 -- # keyid=1 00:16:06.736 15:20:15 -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:06.736 15:20:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:06.736 15:20:15 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:06.736 15:20:15 -- host/auth.sh@49 -- # echo DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:06.737 15:20:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:16:06.737 15:20:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:06.737 15:20:15 -- host/auth.sh@68 -- # digest=sha384 00:16:06.737 15:20:15 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:06.737 15:20:15 -- host/auth.sh@68 -- # keyid=1 00:16:06.737 15:20:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.737 15:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.737 15:20:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.737 15:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.737 15:20:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:06.737 15:20:15 -- nvmf/common.sh@717 -- # local ip 00:16:06.737 15:20:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:06.737 15:20:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:06.737 15:20:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:06.737 15:20:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:06.737 15:20:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:06.737 15:20:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:06.737 15:20:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:06.737 15:20:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:06.737 15:20:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:06.737 15:20:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:06.737 15:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.737 15:20:15 -- common/autotest_common.sh@10 -- # set +x 00:16:06.995 nvme0n1 00:16:06.995 15:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.995 15:20:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:06.995 15:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.995 15:20:16 -- host/auth.sh@73 -- # jq -r '.[].name' 
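Note on the trace above: the same cycle repeats for every digest/dhgroup/key combination in this section. Condensed from the rpc_cmd calls visible in the log (xtrace noise stripped), one iteration amounts to the sketch below; digest, dhgroup and keyid stand for whatever the enclosing loops supply, all flags are copied from the trace itself.

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
    # verify the authenticated controller actually came up, then tear it down
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0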
00:16:06.995 15:20:16 -- common/autotest_common.sh@10 -- # set +x 00:16:06.995 15:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.995 15:20:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.995 15:20:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:06.995 15:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.995 15:20:16 -- common/autotest_common.sh@10 -- # set +x 00:16:06.995 15:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.995 15:20:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:06.995 15:20:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:06.995 15:20:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:06.995 15:20:16 -- host/auth.sh@44 -- # digest=sha384 00:16:06.995 15:20:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:06.995 15:20:16 -- host/auth.sh@44 -- # keyid=2 00:16:06.995 15:20:16 -- host/auth.sh@45 -- # key=DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:06.995 15:20:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:06.995 15:20:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:06.995 15:20:16 -- host/auth.sh@49 -- # echo DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:06.995 15:20:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:16:06.995 15:20:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:06.995 15:20:16 -- host/auth.sh@68 -- # digest=sha384 00:16:06.995 15:20:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:06.995 15:20:16 -- host/auth.sh@68 -- # keyid=2 00:16:06.995 15:20:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.995 15:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.995 15:20:16 -- common/autotest_common.sh@10 -- # set +x 00:16:06.995 15:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.995 15:20:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:06.995 15:20:16 -- nvmf/common.sh@717 -- # local ip 00:16:06.995 15:20:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:06.995 15:20:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:06.995 15:20:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:06.995 15:20:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:06.995 15:20:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:06.995 15:20:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:06.995 15:20:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:06.995 15:20:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:06.995 15:20:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:06.995 15:20:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:06.995 15:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.995 15:20:16 -- common/autotest_common.sh@10 -- # set +x 00:16:07.254 nvme0n1 00:16:07.254 15:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.254 15:20:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:07.254 15:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.254 15:20:16 -- common/autotest_common.sh@10 -- # set +x 00:16:07.254 15:20:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:07.254 15:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.254 15:20:16 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.254 15:20:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:07.254 15:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.254 15:20:16 -- common/autotest_common.sh@10 -- # set +x 00:16:07.254 15:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.254 15:20:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:07.254 15:20:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:07.254 15:20:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:07.254 15:20:16 -- host/auth.sh@44 -- # digest=sha384 00:16:07.254 15:20:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:07.254 15:20:16 -- host/auth.sh@44 -- # keyid=3 00:16:07.254 15:20:16 -- host/auth.sh@45 -- # key=DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:07.254 15:20:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:07.254 15:20:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:07.254 15:20:16 -- host/auth.sh@49 -- # echo DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:07.254 15:20:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:16:07.254 15:20:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:07.254 15:20:16 -- host/auth.sh@68 -- # digest=sha384 00:16:07.254 15:20:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:07.254 15:20:16 -- host/auth.sh@68 -- # keyid=3 00:16:07.254 15:20:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:07.254 15:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.254 15:20:16 -- common/autotest_common.sh@10 -- # set +x 00:16:07.254 15:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.254 15:20:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:07.254 15:20:16 -- nvmf/common.sh@717 -- # local ip 00:16:07.254 15:20:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:07.254 15:20:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:07.254 15:20:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:07.254 15:20:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:07.254 15:20:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:07.254 15:20:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:07.254 15:20:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:07.254 15:20:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:07.254 15:20:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:07.254 15:20:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:07.254 15:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.254 15:20:16 -- common/autotest_common.sh@10 -- # set +x 00:16:07.512 nvme0n1 00:16:07.512 15:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.512 15:20:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:07.512 15:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.512 15:20:16 -- common/autotest_common.sh@10 -- # set +x 00:16:07.512 15:20:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:07.512 15:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.512 15:20:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.512 15:20:16 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:07.512 15:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.512 15:20:16 -- common/autotest_common.sh@10 -- # set +x 00:16:07.512 15:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.512 15:20:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:07.512 15:20:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:07.512 15:20:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:07.512 15:20:16 -- host/auth.sh@44 -- # digest=sha384 00:16:07.512 15:20:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:07.512 15:20:16 -- host/auth.sh@44 -- # keyid=4 00:16:07.512 15:20:16 -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:07.512 15:20:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:07.512 15:20:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:07.512 15:20:16 -- host/auth.sh@49 -- # echo DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:07.512 15:20:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:16:07.512 15:20:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:07.512 15:20:16 -- host/auth.sh@68 -- # digest=sha384 00:16:07.512 15:20:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:07.512 15:20:16 -- host/auth.sh@68 -- # keyid=4 00:16:07.512 15:20:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:07.512 15:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.512 15:20:16 -- common/autotest_common.sh@10 -- # set +x 00:16:07.512 15:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.512 15:20:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:07.512 15:20:16 -- nvmf/common.sh@717 -- # local ip 00:16:07.512 15:20:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:07.512 15:20:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:07.512 15:20:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:07.512 15:20:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:07.512 15:20:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:07.512 15:20:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:07.512 15:20:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:07.512 15:20:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:07.512 15:20:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:07.512 15:20:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:07.512 15:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.512 15:20:16 -- common/autotest_common.sh@10 -- # set +x 00:16:07.771 nvme0n1 00:16:07.771 15:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.771 15:20:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:07.771 15:20:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:07.771 15:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.771 15:20:16 -- common/autotest_common.sh@10 -- # set +x 00:16:07.771 15:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.771 15:20:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.771 15:20:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:07.771 15:20:16 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.771 15:20:16 -- common/autotest_common.sh@10 -- # set +x 00:16:07.771 15:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.771 15:20:16 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.771 15:20:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:07.771 15:20:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:07.771 15:20:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:07.771 15:20:16 -- host/auth.sh@44 -- # digest=sha384 00:16:07.771 15:20:16 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:07.771 15:20:16 -- host/auth.sh@44 -- # keyid=0 00:16:07.771 15:20:16 -- host/auth.sh@45 -- # key=DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:07.771 15:20:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:07.771 15:20:16 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:07.771 15:20:16 -- host/auth.sh@49 -- # echo DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:07.771 15:20:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:16:07.771 15:20:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:07.771 15:20:16 -- host/auth.sh@68 -- # digest=sha384 00:16:07.771 15:20:16 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:07.771 15:20:16 -- host/auth.sh@68 -- # keyid=0 00:16:07.771 15:20:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:07.771 15:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.771 15:20:16 -- common/autotest_common.sh@10 -- # set +x 00:16:07.771 15:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.771 15:20:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:07.771 15:20:16 -- nvmf/common.sh@717 -- # local ip 00:16:07.771 15:20:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:07.771 15:20:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:07.771 15:20:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:07.771 15:20:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:07.771 15:20:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:07.771 15:20:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:07.771 15:20:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:07.771 15:20:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:07.771 15:20:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:07.771 15:20:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:07.771 15:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.771 15:20:16 -- common/autotest_common.sh@10 -- # set +x 00:16:08.029 nvme0n1 00:16:08.029 15:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:08.029 15:20:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:08.029 15:20:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:08.029 15:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:08.029 15:20:17 -- common/autotest_common.sh@10 -- # set +x 00:16:08.029 15:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:08.288 15:20:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.288 15:20:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:08.288 15:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:08.288 15:20:17 -- 
common/autotest_common.sh@10 -- # set +x 00:16:08.288 15:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:08.288 15:20:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:08.288 15:20:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:08.288 15:20:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:08.288 15:20:17 -- host/auth.sh@44 -- # digest=sha384 00:16:08.288 15:20:17 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:08.288 15:20:17 -- host/auth.sh@44 -- # keyid=1 00:16:08.288 15:20:17 -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:08.288 15:20:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:08.288 15:20:17 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:08.288 15:20:17 -- host/auth.sh@49 -- # echo DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:08.288 15:20:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:16:08.288 15:20:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:08.288 15:20:17 -- host/auth.sh@68 -- # digest=sha384 00:16:08.288 15:20:17 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:08.288 15:20:17 -- host/auth.sh@68 -- # keyid=1 00:16:08.288 15:20:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:08.288 15:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:08.288 15:20:17 -- common/autotest_common.sh@10 -- # set +x 00:16:08.288 15:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:08.288 15:20:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:08.288 15:20:17 -- nvmf/common.sh@717 -- # local ip 00:16:08.288 15:20:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:08.288 15:20:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:08.288 15:20:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:08.288 15:20:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:08.288 15:20:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:08.288 15:20:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:08.288 15:20:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:08.288 15:20:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:08.288 15:20:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:08.288 15:20:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:08.288 15:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:08.288 15:20:17 -- common/autotest_common.sh@10 -- # set +x 00:16:08.546 nvme0n1 00:16:08.546 15:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:08.546 15:20:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:08.546 15:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:08.546 15:20:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:08.546 15:20:17 -- common/autotest_common.sh@10 -- # set +x 00:16:08.546 15:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:08.546 15:20:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.546 15:20:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:08.546 15:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:08.546 15:20:17 -- common/autotest_common.sh@10 -- # set +x 00:16:08.546 15:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
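Each attach in this trace is preceded by get_main_ns_ip (the nvmf/common.sh@717-731 lines), which maps the transport to the right address variable and prints its value. A minimal sketch of that logic follows; the xtrace only shows expanded values, so the variable holding the transport name (TEST_TRANSPORT below) is an assumption.

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1                 # "tcp" in this run
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                 # -> NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                          # value is 10.0.0.1 here
        echo "${!ip}"                                        # indirect expansion
    }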
00:16:08.546 15:20:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:08.546 15:20:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:08.546 15:20:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:08.546 15:20:17 -- host/auth.sh@44 -- # digest=sha384 00:16:08.546 15:20:17 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:08.546 15:20:17 -- host/auth.sh@44 -- # keyid=2 00:16:08.546 15:20:17 -- host/auth.sh@45 -- # key=DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:08.546 15:20:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:08.546 15:20:17 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:08.546 15:20:17 -- host/auth.sh@49 -- # echo DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:08.546 15:20:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:16:08.546 15:20:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:08.546 15:20:17 -- host/auth.sh@68 -- # digest=sha384 00:16:08.546 15:20:17 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:08.546 15:20:17 -- host/auth.sh@68 -- # keyid=2 00:16:08.546 15:20:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:08.546 15:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:08.546 15:20:17 -- common/autotest_common.sh@10 -- # set +x 00:16:08.546 15:20:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:08.546 15:20:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:08.546 15:20:17 -- nvmf/common.sh@717 -- # local ip 00:16:08.546 15:20:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:08.546 15:20:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:08.546 15:20:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:08.546 15:20:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:08.546 15:20:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:08.546 15:20:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:08.546 15:20:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:08.546 15:20:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:08.546 15:20:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:08.546 15:20:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:08.546 15:20:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:08.546 15:20:17 -- common/autotest_common.sh@10 -- # set +x 00:16:09.112 nvme0n1 00:16:09.112 15:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.113 15:20:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.113 15:20:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:09.113 15:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.113 15:20:18 -- common/autotest_common.sh@10 -- # set +x 00:16:09.113 15:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.113 15:20:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.113 15:20:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.113 15:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.113 15:20:18 -- common/autotest_common.sh@10 -- # set +x 00:16:09.113 15:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.113 15:20:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:09.113 15:20:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
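On the target side, nvmet_auth_set_key (host/auth.sh@42-49 throughout this section) takes a digest, DH group and key index and pushes the matching hash spec, group name and DHHC-1 secret. Only the echo commands are visible in the xtrace, so where their output is redirected is not shown; the destinations in the sketch below are assumptions, not lines from the script.

    nvmet_auth_set_key() {
        local digest dhgroup keyid key
        digest=$1 dhgroup=$2 keyid=$3
        key=${keys[$keyid]}              # e.g. DHHC-1:01:N2IzMzVj... for keyid=2
        # the three values echoed in the trace; their targets are hypothetical here
        echo "hmac($digest)"             # > <target host config>/dhchap_hash    (assumed)
        echo "$dhgroup"                  # > <target host config>/dhchap_dhgroup (assumed)
        echo "$key"                      # > <target host config>/dhchap_key     (assumed)
    }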
00:16:09.113 15:20:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:09.113 15:20:18 -- host/auth.sh@44 -- # digest=sha384 00:16:09.113 15:20:18 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:09.113 15:20:18 -- host/auth.sh@44 -- # keyid=3 00:16:09.113 15:20:18 -- host/auth.sh@45 -- # key=DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:09.113 15:20:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:09.113 15:20:18 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:09.113 15:20:18 -- host/auth.sh@49 -- # echo DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:09.113 15:20:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:16:09.113 15:20:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:09.113 15:20:18 -- host/auth.sh@68 -- # digest=sha384 00:16:09.113 15:20:18 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:09.113 15:20:18 -- host/auth.sh@68 -- # keyid=3 00:16:09.113 15:20:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:09.113 15:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.113 15:20:18 -- common/autotest_common.sh@10 -- # set +x 00:16:09.113 15:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.113 15:20:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:09.113 15:20:18 -- nvmf/common.sh@717 -- # local ip 00:16:09.113 15:20:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:09.113 15:20:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:09.113 15:20:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.113 15:20:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.113 15:20:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:09.113 15:20:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.113 15:20:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:09.113 15:20:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:09.113 15:20:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:09.113 15:20:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:09.113 15:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.113 15:20:18 -- common/autotest_common.sh@10 -- # set +x 00:16:09.372 nvme0n1 00:16:09.372 15:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.372 15:20:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.372 15:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.372 15:20:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:09.372 15:20:18 -- common/autotest_common.sh@10 -- # set +x 00:16:09.372 15:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.372 15:20:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.372 15:20:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.372 15:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.372 15:20:18 -- common/autotest_common.sh@10 -- # set +x 00:16:09.372 15:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.372 15:20:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:09.372 15:20:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:16:09.372 15:20:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:09.372 15:20:18 -- host/auth.sh@44 -- 
# digest=sha384 00:16:09.372 15:20:18 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:09.372 15:20:18 -- host/auth.sh@44 -- # keyid=4 00:16:09.372 15:20:18 -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:09.372 15:20:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:09.372 15:20:18 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:09.372 15:20:18 -- host/auth.sh@49 -- # echo DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:09.372 15:20:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:16:09.372 15:20:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:09.372 15:20:18 -- host/auth.sh@68 -- # digest=sha384 00:16:09.372 15:20:18 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:09.372 15:20:18 -- host/auth.sh@68 -- # keyid=4 00:16:09.372 15:20:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:09.372 15:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.372 15:20:18 -- common/autotest_common.sh@10 -- # set +x 00:16:09.372 15:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.372 15:20:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:09.372 15:20:18 -- nvmf/common.sh@717 -- # local ip 00:16:09.372 15:20:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:09.372 15:20:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:09.372 15:20:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.372 15:20:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.372 15:20:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:09.372 15:20:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.372 15:20:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:09.372 15:20:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:09.372 15:20:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:09.372 15:20:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:09.372 15:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.372 15:20:18 -- common/autotest_common.sh@10 -- # set +x 00:16:09.938 nvme0n1 00:16:09.938 15:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.938 15:20:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:09.938 15:20:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.938 15:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.938 15:20:18 -- common/autotest_common.sh@10 -- # set +x 00:16:09.938 15:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.938 15:20:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.938 15:20:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.938 15:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.938 15:20:18 -- common/autotest_common.sh@10 -- # set +x 00:16:09.938 15:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.938 15:20:19 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.938 15:20:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:09.938 15:20:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:16:09.938 15:20:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:09.938 15:20:19 -- host/auth.sh@44 -- # 
digest=sha384 00:16:09.938 15:20:19 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:09.938 15:20:19 -- host/auth.sh@44 -- # keyid=0 00:16:09.938 15:20:19 -- host/auth.sh@45 -- # key=DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:09.938 15:20:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:09.938 15:20:19 -- host/auth.sh@48 -- # echo ffdhe8192 00:16:09.938 15:20:19 -- host/auth.sh@49 -- # echo DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:09.938 15:20:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:16:09.938 15:20:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:09.938 15:20:19 -- host/auth.sh@68 -- # digest=sha384 00:16:09.938 15:20:19 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:16:09.938 15:20:19 -- host/auth.sh@68 -- # keyid=0 00:16:09.938 15:20:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:09.938 15:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.938 15:20:19 -- common/autotest_common.sh@10 -- # set +x 00:16:09.938 15:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:09.938 15:20:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:09.938 15:20:19 -- nvmf/common.sh@717 -- # local ip 00:16:09.938 15:20:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:09.938 15:20:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:09.938 15:20:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.938 15:20:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.938 15:20:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:09.938 15:20:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.938 15:20:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:09.938 15:20:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:09.938 15:20:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:09.938 15:20:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:09.938 15:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:09.938 15:20:19 -- common/autotest_common.sh@10 -- # set +x 00:16:10.504 nvme0n1 00:16:10.504 15:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:10.504 15:20:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.504 15:20:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:10.504 15:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:10.504 15:20:19 -- common/autotest_common.sh@10 -- # set +x 00:16:10.504 15:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:10.504 15:20:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.504 15:20:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.504 15:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:10.504 15:20:19 -- common/autotest_common.sh@10 -- # set +x 00:16:10.504 15:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:10.504 15:20:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:10.504 15:20:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:16:10.504 15:20:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:10.504 15:20:19 -- host/auth.sh@44 -- # digest=sha384 00:16:10.504 15:20:19 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:10.504 15:20:19 -- host/auth.sh@44 -- # keyid=1 00:16:10.504 15:20:19 -- 
host/auth.sh@45 -- # key=DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:10.504 15:20:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:10.504 15:20:19 -- host/auth.sh@48 -- # echo ffdhe8192 00:16:10.504 15:20:19 -- host/auth.sh@49 -- # echo DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:10.504 15:20:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:16:10.504 15:20:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:10.504 15:20:19 -- host/auth.sh@68 -- # digest=sha384 00:16:10.504 15:20:19 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:16:10.504 15:20:19 -- host/auth.sh@68 -- # keyid=1 00:16:10.504 15:20:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:10.504 15:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:10.504 15:20:19 -- common/autotest_common.sh@10 -- # set +x 00:16:10.504 15:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:10.504 15:20:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:10.504 15:20:19 -- nvmf/common.sh@717 -- # local ip 00:16:10.504 15:20:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:10.504 15:20:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:10.504 15:20:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.504 15:20:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.504 15:20:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:10.504 15:20:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.504 15:20:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:10.504 15:20:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:10.504 15:20:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:10.504 15:20:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:10.504 15:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:10.504 15:20:19 -- common/autotest_common.sh@10 -- # set +x 00:16:11.071 nvme0n1 00:16:11.071 15:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:11.071 15:20:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.071 15:20:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:11.071 15:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:11.071 15:20:20 -- common/autotest_common.sh@10 -- # set +x 00:16:11.331 15:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:11.331 15:20:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.331 15:20:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.331 15:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:11.331 15:20:20 -- common/autotest_common.sh@10 -- # set +x 00:16:11.331 15:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:11.331 15:20:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:11.331 15:20:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:16:11.331 15:20:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:11.331 15:20:20 -- host/auth.sh@44 -- # digest=sha384 00:16:11.331 15:20:20 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:11.331 15:20:20 -- host/auth.sh@44 -- # keyid=2 00:16:11.331 15:20:20 -- host/auth.sh@45 -- # key=DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:11.331 15:20:20 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:11.331 15:20:20 -- host/auth.sh@48 -- # echo ffdhe8192 00:16:11.331 15:20:20 -- host/auth.sh@49 -- # echo DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:11.331 15:20:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:16:11.331 15:20:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:11.331 15:20:20 -- host/auth.sh@68 -- # digest=sha384 00:16:11.331 15:20:20 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:16:11.331 15:20:20 -- host/auth.sh@68 -- # keyid=2 00:16:11.331 15:20:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:11.331 15:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:11.331 15:20:20 -- common/autotest_common.sh@10 -- # set +x 00:16:11.331 15:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:11.331 15:20:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:11.331 15:20:20 -- nvmf/common.sh@717 -- # local ip 00:16:11.331 15:20:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:11.331 15:20:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:11.331 15:20:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.331 15:20:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.331 15:20:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:11.331 15:20:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.331 15:20:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:11.331 15:20:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:11.331 15:20:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:11.331 15:20:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:11.331 15:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:11.331 15:20:20 -- common/autotest_common.sh@10 -- # set +x 00:16:11.898 nvme0n1 00:16:11.898 15:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:11.898 15:20:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.898 15:20:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:11.898 15:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:11.898 15:20:20 -- common/autotest_common.sh@10 -- # set +x 00:16:11.898 15:20:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:11.898 15:20:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.898 15:20:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.898 15:20:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:11.898 15:20:21 -- common/autotest_common.sh@10 -- # set +x 00:16:11.898 15:20:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:11.898 15:20:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:11.898 15:20:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:16:11.898 15:20:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:11.898 15:20:21 -- host/auth.sh@44 -- # digest=sha384 00:16:11.898 15:20:21 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:11.898 15:20:21 -- host/auth.sh@44 -- # keyid=3 00:16:11.898 15:20:21 -- host/auth.sh@45 -- # key=DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:11.898 15:20:21 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:11.898 15:20:21 -- host/auth.sh@48 -- # echo ffdhe8192 00:16:11.898 15:20:21 -- host/auth.sh@49 
-- # echo DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:11.898 15:20:21 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:16:11.898 15:20:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:11.898 15:20:21 -- host/auth.sh@68 -- # digest=sha384 00:16:11.898 15:20:21 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:16:11.898 15:20:21 -- host/auth.sh@68 -- # keyid=3 00:16:11.898 15:20:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:11.898 15:20:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:11.898 15:20:21 -- common/autotest_common.sh@10 -- # set +x 00:16:11.898 15:20:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:11.898 15:20:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:11.898 15:20:21 -- nvmf/common.sh@717 -- # local ip 00:16:11.898 15:20:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:11.898 15:20:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:11.898 15:20:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.898 15:20:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.898 15:20:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:11.898 15:20:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.898 15:20:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:11.898 15:20:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:11.898 15:20:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:11.898 15:20:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:11.898 15:20:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:11.898 15:20:21 -- common/autotest_common.sh@10 -- # set +x 00:16:12.507 nvme0n1 00:16:12.507 15:20:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:12.507 15:20:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.507 15:20:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:12.507 15:20:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:12.507 15:20:21 -- common/autotest_common.sh@10 -- # set +x 00:16:12.507 15:20:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:12.507 15:20:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.507 15:20:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.507 15:20:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:12.507 15:20:21 -- common/autotest_common.sh@10 -- # set +x 00:16:12.507 15:20:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:12.507 15:20:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:12.507 15:20:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:16:12.507 15:20:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:12.507 15:20:21 -- host/auth.sh@44 -- # digest=sha384 00:16:12.507 15:20:21 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:12.507 15:20:21 -- host/auth.sh@44 -- # keyid=4 00:16:12.507 15:20:21 -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:12.507 15:20:21 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:16:12.507 15:20:21 -- host/auth.sh@48 -- # echo ffdhe8192 00:16:12.507 15:20:21 -- host/auth.sh@49 -- # echo 
DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:12.507 15:20:21 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:16:12.507 15:20:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:12.507 15:20:21 -- host/auth.sh@68 -- # digest=sha384 00:16:12.507 15:20:21 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:16:12.507 15:20:21 -- host/auth.sh@68 -- # keyid=4 00:16:12.507 15:20:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:12.507 15:20:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:12.507 15:20:21 -- common/autotest_common.sh@10 -- # set +x 00:16:12.766 15:20:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:12.766 15:20:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:12.766 15:20:21 -- nvmf/common.sh@717 -- # local ip 00:16:12.766 15:20:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:12.766 15:20:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:12.766 15:20:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.766 15:20:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.766 15:20:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:12.766 15:20:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.766 15:20:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:12.766 15:20:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:12.766 15:20:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:12.766 15:20:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:12.766 15:20:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:12.766 15:20:21 -- common/autotest_common.sh@10 -- # set +x 00:16:13.333 nvme0n1 00:16:13.333 15:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.333 15:20:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.333 15:20:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:13.333 15:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.333 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:16:13.333 15:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.333 15:20:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.333 15:20:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.333 15:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.333 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:16:13.333 15:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.333 15:20:22 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:16:13.333 15:20:22 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.333 15:20:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:13.333 15:20:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:16:13.333 15:20:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:13.333 15:20:22 -- host/auth.sh@44 -- # digest=sha512 00:16:13.333 15:20:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:13.333 15:20:22 -- host/auth.sh@44 -- # keyid=0 00:16:13.333 15:20:22 -- host/auth.sh@45 -- # key=DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:13.333 15:20:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:13.333 15:20:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:13.333 
15:20:22 -- host/auth.sh@49 -- # echo DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:13.333 15:20:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:16:13.333 15:20:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:13.333 15:20:22 -- host/auth.sh@68 -- # digest=sha512 00:16:13.333 15:20:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:13.333 15:20:22 -- host/auth.sh@68 -- # keyid=0 00:16:13.333 15:20:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:13.333 15:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.333 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:16:13.333 15:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.333 15:20:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:13.333 15:20:22 -- nvmf/common.sh@717 -- # local ip 00:16:13.333 15:20:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:13.333 15:20:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:13.333 15:20:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.333 15:20:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.333 15:20:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:13.333 15:20:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.333 15:20:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:13.333 15:20:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:13.333 15:20:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:13.333 15:20:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:13.333 15:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.333 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:16:13.333 nvme0n1 00:16:13.333 15:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.333 15:20:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.333 15:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.333 15:20:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:13.333 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:16:13.333 15:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.592 15:20:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.592 15:20:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.592 15:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.592 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:16:13.592 15:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.592 15:20:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:13.592 15:20:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:16:13.592 15:20:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:13.592 15:20:22 -- host/auth.sh@44 -- # digest=sha512 00:16:13.592 15:20:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:13.592 15:20:22 -- host/auth.sh@44 -- # keyid=1 00:16:13.592 15:20:22 -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:13.592 15:20:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:13.592 15:20:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:13.592 15:20:22 -- host/auth.sh@49 -- # echo DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:13.592 15:20:22 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:16:13.592 15:20:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:13.592 15:20:22 -- host/auth.sh@68 -- # digest=sha512 00:16:13.592 15:20:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:13.592 15:20:22 -- host/auth.sh@68 -- # keyid=1 00:16:13.592 15:20:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:13.592 15:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.592 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:16:13.592 15:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.592 15:20:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:13.592 15:20:22 -- nvmf/common.sh@717 -- # local ip 00:16:13.592 15:20:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:13.592 15:20:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:13.592 15:20:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.592 15:20:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.592 15:20:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:13.592 15:20:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.592 15:20:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:13.592 15:20:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:13.592 15:20:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:13.592 15:20:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:13.592 15:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.592 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:16:13.592 nvme0n1 00:16:13.592 15:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.592 15:20:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.592 15:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.592 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:16:13.592 15:20:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:13.592 15:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.592 15:20:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.592 15:20:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.592 15:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.592 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:16:13.592 15:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.592 15:20:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:13.592 15:20:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:16:13.592 15:20:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:13.592 15:20:22 -- host/auth.sh@44 -- # digest=sha512 00:16:13.592 15:20:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:13.592 15:20:22 -- host/auth.sh@44 -- # keyid=2 00:16:13.592 15:20:22 -- host/auth.sh@45 -- # key=DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:13.592 15:20:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:13.592 15:20:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:13.592 15:20:22 -- host/auth.sh@49 -- # echo DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:13.592 15:20:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:16:13.592 15:20:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:13.592 15:20:22 -- 
host/auth.sh@68 -- # digest=sha512 00:16:13.592 15:20:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:13.593 15:20:22 -- host/auth.sh@68 -- # keyid=2 00:16:13.593 15:20:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:13.593 15:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.593 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:16:13.593 15:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.593 15:20:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:13.593 15:20:22 -- nvmf/common.sh@717 -- # local ip 00:16:13.593 15:20:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:13.593 15:20:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:13.593 15:20:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.593 15:20:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.593 15:20:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:13.593 15:20:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.593 15:20:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:13.593 15:20:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:13.593 15:20:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:13.593 15:20:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:13.593 15:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.593 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:16:13.850 nvme0n1 00:16:13.850 15:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.850 15:20:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.850 15:20:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:13.850 15:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.850 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:16:13.850 15:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.850 15:20:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.850 15:20:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.850 15:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.850 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:16:13.850 15:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.850 15:20:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:13.850 15:20:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:16:13.850 15:20:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:13.850 15:20:22 -- host/auth.sh@44 -- # digest=sha512 00:16:13.850 15:20:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:13.850 15:20:22 -- host/auth.sh@44 -- # keyid=3 00:16:13.850 15:20:22 -- host/auth.sh@45 -- # key=DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:13.850 15:20:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:13.850 15:20:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:13.850 15:20:22 -- host/auth.sh@49 -- # echo DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:13.850 15:20:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:16:13.850 15:20:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:13.850 15:20:22 -- host/auth.sh@68 -- # digest=sha512 00:16:13.850 15:20:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:13.850 15:20:22 
-- host/auth.sh@68 -- # keyid=3 00:16:13.850 15:20:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:13.851 15:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.851 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:16:13.851 15:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.851 15:20:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:13.851 15:20:22 -- nvmf/common.sh@717 -- # local ip 00:16:13.851 15:20:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:13.851 15:20:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:13.851 15:20:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.851 15:20:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.851 15:20:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:13.851 15:20:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.851 15:20:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:13.851 15:20:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:13.851 15:20:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:13.851 15:20:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:13.851 15:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.851 15:20:22 -- common/autotest_common.sh@10 -- # set +x 00:16:13.851 nvme0n1 00:16:13.851 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.851 15:20:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.851 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.851 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:13.851 15:20:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:14.109 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.109 15:20:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.109 15:20:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.109 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.109 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.109 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.109 15:20:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:14.109 15:20:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:16:14.109 15:20:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:14.109 15:20:23 -- host/auth.sh@44 -- # digest=sha512 00:16:14.109 15:20:23 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:14.109 15:20:23 -- host/auth.sh@44 -- # keyid=4 00:16:14.109 15:20:23 -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:14.109 15:20:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:14.109 15:20:23 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:14.109 15:20:23 -- host/auth.sh@49 -- # echo DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:14.109 15:20:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:16:14.109 15:20:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:14.109 15:20:23 -- host/auth.sh@68 -- # digest=sha512 00:16:14.109 15:20:23 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:14.109 15:20:23 -- host/auth.sh@68 -- # keyid=4 00:16:14.109 15:20:23 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:14.109 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.109 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.109 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.109 15:20:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:14.109 15:20:23 -- nvmf/common.sh@717 -- # local ip 00:16:14.109 15:20:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:14.109 15:20:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:14.109 15:20:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.109 15:20:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.109 15:20:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:14.109 15:20:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.109 15:20:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:14.109 15:20:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:14.109 15:20:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:14.109 15:20:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:14.109 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.109 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.109 nvme0n1 00:16:14.109 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.109 15:20:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.109 15:20:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:14.109 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.109 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.109 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.109 15:20:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.109 15:20:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.109 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.109 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.109 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.109 15:20:23 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.109 15:20:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:14.109 15:20:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:16:14.109 15:20:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:14.109 15:20:23 -- host/auth.sh@44 -- # digest=sha512 00:16:14.109 15:20:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:14.109 15:20:23 -- host/auth.sh@44 -- # keyid=0 00:16:14.109 15:20:23 -- host/auth.sh@45 -- # key=DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:14.109 15:20:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:14.109 15:20:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:14.109 15:20:23 -- host/auth.sh@49 -- # echo DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:14.109 15:20:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:16:14.109 15:20:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:14.109 15:20:23 -- host/auth.sh@68 -- # digest=sha512 00:16:14.109 15:20:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:14.109 15:20:23 -- host/auth.sh@68 -- # keyid=0 00:16:14.109 15:20:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
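Each round traced above follows the same sequence: nvmet_auth_set_key programs the target side with the DHHC-1 secret for the digest/DH-group/key ID under test, bdev_nvme_set_options restricts the initiator to that digest and DH group, get_main_ns_ip resolves the connect address (NVMF_INITIATOR_IP, i.e. 10.0.0.1, since the transport is tcp), and bdev_nvme_attach_controller connects with the matching --dhchap-key; the round passes if the controller shows up as nvme0 and can be detached. Below is a minimal sketch of the round that starts here (sha512 / ffdhe3072 / keyid 0), written as direct scripts/rpc.py calls rather than the script's rpc_cmd wrapper; the rpc.py path and the pre-registered key name key0 are assumptions carried over from the surrounding trace, not taken from host/auth.sh itself.

```bash
# one DH-HMAC-CHAP round as traced above (sketch; nvmet_auth_set_key is a helper from host/auth.sh)
digest=sha512
dhgroup=ffdhe3072
keyid=0

# target side: install the DHHC-1 secret for this digest/dhgroup/keyid
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

# initiator side: allow only the digest and DH group under test
scripts/rpc.py bdev_nvme_set_options \
	--dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# connect with the matching secret; key0 is assumed to have been registered earlier in the run
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
	-a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key "key$keyid"

# authentication succeeded if the controller is visible as nvme0; tear it down for the next round
[[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
scripts/rpc.py bdev_nvme_detach_controller nvme0
```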
00:16:14.109 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.109 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.109 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.109 15:20:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:14.109 15:20:23 -- nvmf/common.sh@717 -- # local ip 00:16:14.109 15:20:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:14.109 15:20:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:14.109 15:20:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.109 15:20:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.109 15:20:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:14.109 15:20:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.109 15:20:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:14.109 15:20:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:14.109 15:20:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:14.368 15:20:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:14.368 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.368 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.368 nvme0n1 00:16:14.368 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.368 15:20:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.368 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.368 15:20:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:14.368 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.368 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.368 15:20:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.368 15:20:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.368 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.368 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.368 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.368 15:20:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:14.368 15:20:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:16:14.368 15:20:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:14.368 15:20:23 -- host/auth.sh@44 -- # digest=sha512 00:16:14.368 15:20:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:14.368 15:20:23 -- host/auth.sh@44 -- # keyid=1 00:16:14.368 15:20:23 -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:14.368 15:20:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:14.368 15:20:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:14.368 15:20:23 -- host/auth.sh@49 -- # echo DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:14.368 15:20:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:16:14.368 15:20:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:14.368 15:20:23 -- host/auth.sh@68 -- # digest=sha512 00:16:14.368 15:20:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:14.368 15:20:23 -- host/auth.sh@68 -- # keyid=1 00:16:14.368 15:20:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:14.368 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.368 15:20:23 -- 
common/autotest_common.sh@10 -- # set +x 00:16:14.368 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.368 15:20:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:14.368 15:20:23 -- nvmf/common.sh@717 -- # local ip 00:16:14.368 15:20:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:14.368 15:20:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:14.368 15:20:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.368 15:20:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.368 15:20:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:14.368 15:20:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.368 15:20:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:14.368 15:20:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:14.368 15:20:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:14.368 15:20:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:14.368 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.368 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.628 nvme0n1 00:16:14.628 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.628 15:20:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.628 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.628 15:20:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:14.628 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.628 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.628 15:20:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.628 15:20:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.628 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.628 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.628 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.628 15:20:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:14.628 15:20:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:16:14.628 15:20:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:14.628 15:20:23 -- host/auth.sh@44 -- # digest=sha512 00:16:14.628 15:20:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:14.628 15:20:23 -- host/auth.sh@44 -- # keyid=2 00:16:14.628 15:20:23 -- host/auth.sh@45 -- # key=DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:14.628 15:20:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:14.628 15:20:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:14.628 15:20:23 -- host/auth.sh@49 -- # echo DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:14.628 15:20:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:16:14.628 15:20:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:14.628 15:20:23 -- host/auth.sh@68 -- # digest=sha512 00:16:14.628 15:20:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:14.628 15:20:23 -- host/auth.sh@68 -- # keyid=2 00:16:14.628 15:20:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:14.628 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.628 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.628 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.628 15:20:23 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:16:14.628 15:20:23 -- nvmf/common.sh@717 -- # local ip 00:16:14.628 15:20:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:14.628 15:20:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:14.628 15:20:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.628 15:20:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.628 15:20:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:14.628 15:20:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.628 15:20:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:14.628 15:20:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:14.628 15:20:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:14.628 15:20:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:14.628 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.628 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.886 nvme0n1 00:16:14.886 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.886 15:20:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.886 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.886 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.886 15:20:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:14.886 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.886 15:20:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.886 15:20:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.886 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.886 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.886 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.886 15:20:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:14.886 15:20:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:16:14.886 15:20:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:14.886 15:20:23 -- host/auth.sh@44 -- # digest=sha512 00:16:14.886 15:20:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:14.886 15:20:23 -- host/auth.sh@44 -- # keyid=3 00:16:14.886 15:20:23 -- host/auth.sh@45 -- # key=DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:14.886 15:20:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:14.886 15:20:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:14.887 15:20:23 -- host/auth.sh@49 -- # echo DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:14.887 15:20:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:16:14.887 15:20:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:14.887 15:20:23 -- host/auth.sh@68 -- # digest=sha512 00:16:14.887 15:20:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:14.887 15:20:23 -- host/auth.sh@68 -- # keyid=3 00:16:14.887 15:20:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:14.887 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.887 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.887 15:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.887 15:20:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:14.887 15:20:23 -- nvmf/common.sh@717 -- # local ip 00:16:14.887 15:20:23 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:16:14.887 15:20:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:14.887 15:20:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.887 15:20:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.887 15:20:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:14.887 15:20:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.887 15:20:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:14.887 15:20:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:14.887 15:20:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:14.887 15:20:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:14.887 15:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.887 15:20:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.887 nvme0n1 00:16:14.887 15:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.887 15:20:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.887 15:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.887 15:20:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:14.887 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:14.887 15:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.146 15:20:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.146 15:20:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.146 15:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.146 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:15.146 15:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.146 15:20:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:15.146 15:20:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:16:15.146 15:20:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:15.146 15:20:24 -- host/auth.sh@44 -- # digest=sha512 00:16:15.146 15:20:24 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:15.146 15:20:24 -- host/auth.sh@44 -- # keyid=4 00:16:15.146 15:20:24 -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:15.146 15:20:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:15.146 15:20:24 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:15.146 15:20:24 -- host/auth.sh@49 -- # echo DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:15.146 15:20:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:16:15.146 15:20:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:15.146 15:20:24 -- host/auth.sh@68 -- # digest=sha512 00:16:15.146 15:20:24 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:15.146 15:20:24 -- host/auth.sh@68 -- # keyid=4 00:16:15.146 15:20:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:15.146 15:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.146 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:15.146 15:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.146 15:20:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:15.146 15:20:24 -- nvmf/common.sh@717 -- # local ip 00:16:15.146 15:20:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:15.146 15:20:24 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:16:15.146 15:20:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.146 15:20:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.146 15:20:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:15.146 15:20:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.146 15:20:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:15.146 15:20:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:15.146 15:20:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:15.146 15:20:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:15.146 15:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.146 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:15.146 nvme0n1 00:16:15.146 15:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.146 15:20:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.146 15:20:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:15.146 15:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.146 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:15.146 15:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.146 15:20:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.146 15:20:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.146 15:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.146 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:15.146 15:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.146 15:20:24 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.146 15:20:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:15.146 15:20:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:16:15.146 15:20:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:15.146 15:20:24 -- host/auth.sh@44 -- # digest=sha512 00:16:15.146 15:20:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:15.146 15:20:24 -- host/auth.sh@44 -- # keyid=0 00:16:15.146 15:20:24 -- host/auth.sh@45 -- # key=DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:15.146 15:20:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:15.146 15:20:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:15.146 15:20:24 -- host/auth.sh@49 -- # echo DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:15.146 15:20:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:16:15.146 15:20:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:15.146 15:20:24 -- host/auth.sh@68 -- # digest=sha512 00:16:15.146 15:20:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:15.146 15:20:24 -- host/auth.sh@68 -- # keyid=0 00:16:15.146 15:20:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:15.146 15:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.146 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:15.146 15:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.146 15:20:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:15.146 15:20:24 -- nvmf/common.sh@717 -- # local ip 00:16:15.146 15:20:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:15.146 15:20:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:15.146 15:20:24 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.146 15:20:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.146 15:20:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:15.146 15:20:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.146 15:20:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:15.146 15:20:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:15.146 15:20:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:15.146 15:20:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:15.146 15:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.146 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:15.404 nvme0n1 00:16:15.404 15:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.404 15:20:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.404 15:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.404 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:15.404 15:20:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:15.404 15:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.404 15:20:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.404 15:20:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.404 15:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.404 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:15.404 15:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.404 15:20:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:15.404 15:20:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:16:15.404 15:20:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:15.404 15:20:24 -- host/auth.sh@44 -- # digest=sha512 00:16:15.404 15:20:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:15.404 15:20:24 -- host/auth.sh@44 -- # keyid=1 00:16:15.404 15:20:24 -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:15.404 15:20:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:15.404 15:20:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:15.404 15:20:24 -- host/auth.sh@49 -- # echo DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:15.404 15:20:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:16:15.404 15:20:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:15.404 15:20:24 -- host/auth.sh@68 -- # digest=sha512 00:16:15.404 15:20:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:15.404 15:20:24 -- host/auth.sh@68 -- # keyid=1 00:16:15.404 15:20:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:15.404 15:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.404 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:15.404 15:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.404 15:20:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:15.404 15:20:24 -- nvmf/common.sh@717 -- # local ip 00:16:15.404 15:20:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:15.404 15:20:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:15.404 15:20:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.404 15:20:24 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.404 15:20:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:15.404 15:20:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.404 15:20:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:15.404 15:20:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:15.404 15:20:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:15.663 15:20:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:15.663 15:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.663 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:15.663 nvme0n1 00:16:15.663 15:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.663 15:20:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.663 15:20:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:15.663 15:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.663 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:15.663 15:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.663 15:20:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.663 15:20:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.663 15:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.663 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:15.663 15:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.663 15:20:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:15.663 15:20:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:16:15.663 15:20:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:15.663 15:20:24 -- host/auth.sh@44 -- # digest=sha512 00:16:15.663 15:20:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:15.663 15:20:24 -- host/auth.sh@44 -- # keyid=2 00:16:15.663 15:20:24 -- host/auth.sh@45 -- # key=DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:15.663 15:20:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:15.663 15:20:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:15.663 15:20:24 -- host/auth.sh@49 -- # echo DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:15.663 15:20:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:16:15.663 15:20:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:15.663 15:20:24 -- host/auth.sh@68 -- # digest=sha512 00:16:15.663 15:20:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:15.663 15:20:24 -- host/auth.sh@68 -- # keyid=2 00:16:15.663 15:20:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:15.663 15:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.663 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:15.922 15:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.922 15:20:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:15.922 15:20:24 -- nvmf/common.sh@717 -- # local ip 00:16:15.922 15:20:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:15.922 15:20:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:15.922 15:20:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.922 15:20:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.922 15:20:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:15.922 15:20:24 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:16:15.922 15:20:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:15.922 15:20:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:15.922 15:20:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:15.922 15:20:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:15.922 15:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.922 15:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:15.922 nvme0n1 00:16:15.922 15:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.922 15:20:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.922 15:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.922 15:20:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:15.922 15:20:25 -- common/autotest_common.sh@10 -- # set +x 00:16:15.922 15:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.922 15:20:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.922 15:20:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.922 15:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.922 15:20:25 -- common/autotest_common.sh@10 -- # set +x 00:16:16.180 15:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.180 15:20:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:16.180 15:20:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:16:16.180 15:20:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:16.180 15:20:25 -- host/auth.sh@44 -- # digest=sha512 00:16:16.180 15:20:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:16.180 15:20:25 -- host/auth.sh@44 -- # keyid=3 00:16:16.180 15:20:25 -- host/auth.sh@45 -- # key=DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:16.180 15:20:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:16.180 15:20:25 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:16.180 15:20:25 -- host/auth.sh@49 -- # echo DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:16.180 15:20:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:16:16.180 15:20:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:16.180 15:20:25 -- host/auth.sh@68 -- # digest=sha512 00:16:16.180 15:20:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:16.180 15:20:25 -- host/auth.sh@68 -- # keyid=3 00:16:16.180 15:20:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:16.180 15:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.180 15:20:25 -- common/autotest_common.sh@10 -- # set +x 00:16:16.180 15:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.180 15:20:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:16.180 15:20:25 -- nvmf/common.sh@717 -- # local ip 00:16:16.180 15:20:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:16.180 15:20:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:16.180 15:20:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.180 15:20:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.180 15:20:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:16.180 15:20:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.180 15:20:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:16.180 15:20:25 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:16.180 15:20:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:16.180 15:20:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:16.180 15:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.180 15:20:25 -- common/autotest_common.sh@10 -- # set +x 00:16:16.180 nvme0n1 00:16:16.180 15:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.181 15:20:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.181 15:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.181 15:20:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:16.181 15:20:25 -- common/autotest_common.sh@10 -- # set +x 00:16:16.181 15:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.181 15:20:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.181 15:20:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.181 15:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.181 15:20:25 -- common/autotest_common.sh@10 -- # set +x 00:16:16.439 15:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.439 15:20:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:16.439 15:20:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:16:16.439 15:20:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:16.439 15:20:25 -- host/auth.sh@44 -- # digest=sha512 00:16:16.439 15:20:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:16.439 15:20:25 -- host/auth.sh@44 -- # keyid=4 00:16:16.439 15:20:25 -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:16.439 15:20:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:16.439 15:20:25 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:16.439 15:20:25 -- host/auth.sh@49 -- # echo DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:16.439 15:20:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:16:16.439 15:20:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:16.439 15:20:25 -- host/auth.sh@68 -- # digest=sha512 00:16:16.439 15:20:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:16.439 15:20:25 -- host/auth.sh@68 -- # keyid=4 00:16:16.439 15:20:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:16.439 15:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.439 15:20:25 -- common/autotest_common.sh@10 -- # set +x 00:16:16.439 15:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.439 15:20:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:16.439 15:20:25 -- nvmf/common.sh@717 -- # local ip 00:16:16.439 15:20:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:16.439 15:20:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:16.439 15:20:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.439 15:20:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.439 15:20:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:16.439 15:20:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.439 15:20:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:16.439 15:20:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:16.439 15:20:25 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:16.439 15:20:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:16.439 15:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.439 15:20:25 -- common/autotest_common.sh@10 -- # set +x 00:16:16.439 nvme0n1 00:16:16.439 15:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.439 15:20:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.439 15:20:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:16.439 15:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.439 15:20:25 -- common/autotest_common.sh@10 -- # set +x 00:16:16.439 15:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.697 15:20:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.697 15:20:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.697 15:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.697 15:20:25 -- common/autotest_common.sh@10 -- # set +x 00:16:16.697 15:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.698 15:20:25 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.698 15:20:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:16.698 15:20:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:16:16.698 15:20:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:16.698 15:20:25 -- host/auth.sh@44 -- # digest=sha512 00:16:16.698 15:20:25 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:16.698 15:20:25 -- host/auth.sh@44 -- # keyid=0 00:16:16.698 15:20:25 -- host/auth.sh@45 -- # key=DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:16.698 15:20:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:16.698 15:20:25 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:16.698 15:20:25 -- host/auth.sh@49 -- # echo DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:16.698 15:20:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:16:16.698 15:20:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:16.698 15:20:25 -- host/auth.sh@68 -- # digest=sha512 00:16:16.698 15:20:25 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:16.698 15:20:25 -- host/auth.sh@68 -- # keyid=0 00:16:16.698 15:20:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:16.698 15:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.698 15:20:25 -- common/autotest_common.sh@10 -- # set +x 00:16:16.698 15:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.698 15:20:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:16.698 15:20:25 -- nvmf/common.sh@717 -- # local ip 00:16:16.698 15:20:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:16.698 15:20:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:16.698 15:20:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.698 15:20:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.698 15:20:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:16.698 15:20:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.698 15:20:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:16.698 15:20:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:16.698 15:20:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:16.698 15:20:25 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:16.698 15:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.698 15:20:25 -- common/autotest_common.sh@10 -- # set +x 00:16:16.956 nvme0n1 00:16:16.956 15:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.956 15:20:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.956 15:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.956 15:20:26 -- common/autotest_common.sh@10 -- # set +x 00:16:16.956 15:20:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:16.956 15:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.956 15:20:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.956 15:20:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.956 15:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.956 15:20:26 -- common/autotest_common.sh@10 -- # set +x 00:16:16.956 15:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.956 15:20:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:16.956 15:20:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:16:16.956 15:20:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:16.956 15:20:26 -- host/auth.sh@44 -- # digest=sha512 00:16:16.956 15:20:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:16.956 15:20:26 -- host/auth.sh@44 -- # keyid=1 00:16:16.956 15:20:26 -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:16.956 15:20:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:16.956 15:20:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:16.956 15:20:26 -- host/auth.sh@49 -- # echo DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:16.956 15:20:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:16:16.956 15:20:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:16.956 15:20:26 -- host/auth.sh@68 -- # digest=sha512 00:16:16.956 15:20:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:16.956 15:20:26 -- host/auth.sh@68 -- # keyid=1 00:16:16.956 15:20:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:16.956 15:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.956 15:20:26 -- common/autotest_common.sh@10 -- # set +x 00:16:16.956 15:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.956 15:20:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:16.956 15:20:26 -- nvmf/common.sh@717 -- # local ip 00:16:16.956 15:20:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:16.956 15:20:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:16.956 15:20:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.956 15:20:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.956 15:20:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:16.956 15:20:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.956 15:20:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:16.956 15:20:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:16.956 15:20:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:16.956 15:20:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:16.956 15:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.956 15:20:26 -- common/autotest_common.sh@10 -- # set +x 00:16:17.522 nvme0n1 00:16:17.522 15:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.522 15:20:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.522 15:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.522 15:20:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:17.522 15:20:26 -- common/autotest_common.sh@10 -- # set +x 00:16:17.522 15:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.523 15:20:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.523 15:20:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.523 15:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.523 15:20:26 -- common/autotest_common.sh@10 -- # set +x 00:16:17.523 15:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.523 15:20:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:17.523 15:20:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:16:17.523 15:20:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:17.523 15:20:26 -- host/auth.sh@44 -- # digest=sha512 00:16:17.523 15:20:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:17.523 15:20:26 -- host/auth.sh@44 -- # keyid=2 00:16:17.523 15:20:26 -- host/auth.sh@45 -- # key=DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:17.523 15:20:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:17.523 15:20:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:17.523 15:20:26 -- host/auth.sh@49 -- # echo DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:17.523 15:20:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:16:17.523 15:20:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:17.523 15:20:26 -- host/auth.sh@68 -- # digest=sha512 00:16:17.523 15:20:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:17.523 15:20:26 -- host/auth.sh@68 -- # keyid=2 00:16:17.523 15:20:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:17.523 15:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.523 15:20:26 -- common/autotest_common.sh@10 -- # set +x 00:16:17.523 15:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.523 15:20:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:17.523 15:20:26 -- nvmf/common.sh@717 -- # local ip 00:16:17.523 15:20:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:17.523 15:20:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:17.523 15:20:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.523 15:20:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.523 15:20:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:17.523 15:20:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.523 15:20:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:17.523 15:20:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:17.523 15:20:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:17.523 15:20:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:17.523 15:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.523 15:20:26 -- 
common/autotest_common.sh@10 -- # set +x 00:16:17.782 nvme0n1 00:16:17.782 15:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.782 15:20:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.782 15:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.782 15:20:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:17.782 15:20:26 -- common/autotest_common.sh@10 -- # set +x 00:16:17.782 15:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.782 15:20:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.782 15:20:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.782 15:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.782 15:20:26 -- common/autotest_common.sh@10 -- # set +x 00:16:17.782 15:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.782 15:20:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:17.782 15:20:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:16:17.782 15:20:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:17.782 15:20:26 -- host/auth.sh@44 -- # digest=sha512 00:16:17.782 15:20:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:17.782 15:20:26 -- host/auth.sh@44 -- # keyid=3 00:16:17.782 15:20:26 -- host/auth.sh@45 -- # key=DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:17.782 15:20:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:17.782 15:20:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:17.782 15:20:26 -- host/auth.sh@49 -- # echo DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:17.782 15:20:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:16:17.782 15:20:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:17.782 15:20:26 -- host/auth.sh@68 -- # digest=sha512 00:16:17.782 15:20:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:17.782 15:20:26 -- host/auth.sh@68 -- # keyid=3 00:16:17.782 15:20:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:17.782 15:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.782 15:20:26 -- common/autotest_common.sh@10 -- # set +x 00:16:17.782 15:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.782 15:20:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:17.782 15:20:26 -- nvmf/common.sh@717 -- # local ip 00:16:17.782 15:20:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:17.782 15:20:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:17.782 15:20:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.782 15:20:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.782 15:20:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:17.782 15:20:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.782 15:20:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:17.782 15:20:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:17.782 15:20:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:17.782 15:20:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:17.782 15:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.782 15:20:27 -- common/autotest_common.sh@10 -- # set +x 00:16:18.349 nvme0n1 00:16:18.349 15:20:27 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:16:18.349 15:20:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.349 15:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.349 15:20:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:18.349 15:20:27 -- common/autotest_common.sh@10 -- # set +x 00:16:18.349 15:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.349 15:20:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.349 15:20:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.349 15:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.349 15:20:27 -- common/autotest_common.sh@10 -- # set +x 00:16:18.349 15:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.349 15:20:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:18.349 15:20:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:16:18.349 15:20:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:18.349 15:20:27 -- host/auth.sh@44 -- # digest=sha512 00:16:18.349 15:20:27 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:18.349 15:20:27 -- host/auth.sh@44 -- # keyid=4 00:16:18.349 15:20:27 -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:18.349 15:20:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:18.349 15:20:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:18.349 15:20:27 -- host/auth.sh@49 -- # echo DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:18.349 15:20:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:16:18.349 15:20:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:18.349 15:20:27 -- host/auth.sh@68 -- # digest=sha512 00:16:18.349 15:20:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:18.349 15:20:27 -- host/auth.sh@68 -- # keyid=4 00:16:18.349 15:20:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:18.349 15:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.349 15:20:27 -- common/autotest_common.sh@10 -- # set +x 00:16:18.349 15:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.349 15:20:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:18.349 15:20:27 -- nvmf/common.sh@717 -- # local ip 00:16:18.349 15:20:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:18.349 15:20:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:18.349 15:20:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.349 15:20:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.349 15:20:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:18.349 15:20:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.349 15:20:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:18.349 15:20:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:18.349 15:20:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:18.349 15:20:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:18.349 15:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.349 15:20:27 -- common/autotest_common.sh@10 -- # set +x 00:16:18.607 nvme0n1 00:16:18.607 15:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.607 15:20:27 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:16:18.608 15:20:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:18.608 15:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.608 15:20:27 -- common/autotest_common.sh@10 -- # set +x 00:16:18.608 15:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.608 15:20:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.608 15:20:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.608 15:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.608 15:20:27 -- common/autotest_common.sh@10 -- # set +x 00:16:18.866 15:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.866 15:20:27 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.866 15:20:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:18.866 15:20:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:16:18.866 15:20:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:18.866 15:20:27 -- host/auth.sh@44 -- # digest=sha512 00:16:18.866 15:20:27 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:18.866 15:20:27 -- host/auth.sh@44 -- # keyid=0 00:16:18.866 15:20:27 -- host/auth.sh@45 -- # key=DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:18.866 15:20:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:18.866 15:20:27 -- host/auth.sh@48 -- # echo ffdhe8192 00:16:18.866 15:20:27 -- host/auth.sh@49 -- # echo DHHC-1:00:NTBjMjNjNmZlOGI4NGVjNzFlNzczNzUzZGMxMTk5NTPiLaGw: 00:16:18.866 15:20:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:16:18.866 15:20:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:18.866 15:20:27 -- host/auth.sh@68 -- # digest=sha512 00:16:18.866 15:20:27 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:16:18.866 15:20:27 -- host/auth.sh@68 -- # keyid=0 00:16:18.866 15:20:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:18.866 15:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.866 15:20:27 -- common/autotest_common.sh@10 -- # set +x 00:16:18.866 15:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.866 15:20:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:18.866 15:20:27 -- nvmf/common.sh@717 -- # local ip 00:16:18.866 15:20:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:18.866 15:20:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:18.866 15:20:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.866 15:20:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.866 15:20:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:18.866 15:20:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.866 15:20:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:18.866 15:20:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:18.866 15:20:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:18.866 15:20:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:18.866 15:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.866 15:20:27 -- common/autotest_common.sh@10 -- # set +x 00:16:19.433 nvme0n1 00:16:19.433 15:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.433 15:20:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.433 15:20:28 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:16:19.433 15:20:28 -- common/autotest_common.sh@10 -- # set +x 00:16:19.433 15:20:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:19.433 15:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.433 15:20:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.433 15:20:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.433 15:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.433 15:20:28 -- common/autotest_common.sh@10 -- # set +x 00:16:19.433 15:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.433 15:20:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:19.433 15:20:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:16:19.433 15:20:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:19.433 15:20:28 -- host/auth.sh@44 -- # digest=sha512 00:16:19.433 15:20:28 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:19.433 15:20:28 -- host/auth.sh@44 -- # keyid=1 00:16:19.433 15:20:28 -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:19.433 15:20:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:19.433 15:20:28 -- host/auth.sh@48 -- # echo ffdhe8192 00:16:19.433 15:20:28 -- host/auth.sh@49 -- # echo DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:19.433 15:20:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:16:19.433 15:20:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:19.433 15:20:28 -- host/auth.sh@68 -- # digest=sha512 00:16:19.433 15:20:28 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:16:19.433 15:20:28 -- host/auth.sh@68 -- # keyid=1 00:16:19.433 15:20:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:19.433 15:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.433 15:20:28 -- common/autotest_common.sh@10 -- # set +x 00:16:19.433 15:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.434 15:20:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:19.434 15:20:28 -- nvmf/common.sh@717 -- # local ip 00:16:19.434 15:20:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:19.434 15:20:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:19.434 15:20:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.434 15:20:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.434 15:20:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:19.434 15:20:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.434 15:20:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:19.434 15:20:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:19.434 15:20:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:19.434 15:20:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:19.434 15:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.434 15:20:28 -- common/autotest_common.sh@10 -- # set +x 00:16:20.000 nvme0n1 00:16:20.000 15:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.000 15:20:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.000 15:20:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:20.000 15:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.000 15:20:29 -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.000 15:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.000 15:20:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.000 15:20:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.000 15:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.000 15:20:29 -- common/autotest_common.sh@10 -- # set +x 00:16:20.000 15:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.000 15:20:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:20.000 15:20:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:16:20.000 15:20:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:20.000 15:20:29 -- host/auth.sh@44 -- # digest=sha512 00:16:20.000 15:20:29 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:20.000 15:20:29 -- host/auth.sh@44 -- # keyid=2 00:16:20.000 15:20:29 -- host/auth.sh@45 -- # key=DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:20.000 15:20:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:20.000 15:20:29 -- host/auth.sh@48 -- # echo ffdhe8192 00:16:20.000 15:20:29 -- host/auth.sh@49 -- # echo DHHC-1:01:N2IzMzVjYTkzNTcxMDEyMjJmZjg3MjNlMDAxOTgyMzJY5Tvm: 00:16:20.000 15:20:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:16:20.000 15:20:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:20.000 15:20:29 -- host/auth.sh@68 -- # digest=sha512 00:16:20.000 15:20:29 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:16:20.000 15:20:29 -- host/auth.sh@68 -- # keyid=2 00:16:20.000 15:20:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:20.000 15:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.000 15:20:29 -- common/autotest_common.sh@10 -- # set +x 00:16:20.258 15:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.258 15:20:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:20.258 15:20:29 -- nvmf/common.sh@717 -- # local ip 00:16:20.258 15:20:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:20.258 15:20:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:20.258 15:20:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.258 15:20:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.258 15:20:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:20.258 15:20:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.258 15:20:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:20.258 15:20:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:20.258 15:20:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:20.258 15:20:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:20.258 15:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.258 15:20:29 -- common/autotest_common.sh@10 -- # set +x 00:16:20.828 nvme0n1 00:16:20.828 15:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.828 15:20:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.828 15:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.828 15:20:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:20.828 15:20:29 -- common/autotest_common.sh@10 -- # set +x 00:16:20.828 15:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.828 15:20:29 -- host/auth.sh@73 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:16:20.828 15:20:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.828 15:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.828 15:20:29 -- common/autotest_common.sh@10 -- # set +x 00:16:20.828 15:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.828 15:20:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:20.828 15:20:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:16:20.828 15:20:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:20.828 15:20:29 -- host/auth.sh@44 -- # digest=sha512 00:16:20.828 15:20:29 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:20.828 15:20:29 -- host/auth.sh@44 -- # keyid=3 00:16:20.828 15:20:29 -- host/auth.sh@45 -- # key=DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:20.828 15:20:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:20.828 15:20:29 -- host/auth.sh@48 -- # echo ffdhe8192 00:16:20.828 15:20:29 -- host/auth.sh@49 -- # echo DHHC-1:02:YWUxM2NkZWZmOWNiY2NjMzZiMDk1YjEzMWVmMTYyNmU0ZThlZjJlYmFhODYzNTRjukuF/Q==: 00:16:20.828 15:20:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:16:20.828 15:20:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:20.828 15:20:29 -- host/auth.sh@68 -- # digest=sha512 00:16:20.828 15:20:29 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:16:20.828 15:20:29 -- host/auth.sh@68 -- # keyid=3 00:16:20.828 15:20:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:20.828 15:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.828 15:20:29 -- common/autotest_common.sh@10 -- # set +x 00:16:20.828 15:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.828 15:20:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:20.828 15:20:29 -- nvmf/common.sh@717 -- # local ip 00:16:20.828 15:20:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:20.828 15:20:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:20.828 15:20:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.828 15:20:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.828 15:20:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:20.828 15:20:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.828 15:20:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:20.828 15:20:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:20.828 15:20:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:20.828 15:20:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:20.828 15:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.828 15:20:29 -- common/autotest_common.sh@10 -- # set +x 00:16:21.402 nvme0n1 00:16:21.402 15:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.402 15:20:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.402 15:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.402 15:20:30 -- common/autotest_common.sh@10 -- # set +x 00:16:21.402 15:20:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:21.402 15:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.402 15:20:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.402 15:20:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:21.402 
15:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.402 15:20:30 -- common/autotest_common.sh@10 -- # set +x 00:16:21.402 15:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.402 15:20:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:21.402 15:20:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:16:21.402 15:20:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:21.402 15:20:30 -- host/auth.sh@44 -- # digest=sha512 00:16:21.402 15:20:30 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:21.402 15:20:30 -- host/auth.sh@44 -- # keyid=4 00:16:21.402 15:20:30 -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:21.402 15:20:30 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:16:21.402 15:20:30 -- host/auth.sh@48 -- # echo ffdhe8192 00:16:21.402 15:20:30 -- host/auth.sh@49 -- # echo DHHC-1:03:OGI1MzQ4NGQ5ZDExOTE3YzFlNDM4MGY2NDc4YTAxOTBkZTNiNTQzOGQ0MDA2MWY3NzI0OGFhY2MyYWQ3NjE2NfBEb1M=: 00:16:21.402 15:20:30 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:16:21.402 15:20:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:21.402 15:20:30 -- host/auth.sh@68 -- # digest=sha512 00:16:21.402 15:20:30 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:16:21.402 15:20:30 -- host/auth.sh@68 -- # keyid=4 00:16:21.402 15:20:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:21.402 15:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.402 15:20:30 -- common/autotest_common.sh@10 -- # set +x 00:16:21.402 15:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.402 15:20:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:21.402 15:20:30 -- nvmf/common.sh@717 -- # local ip 00:16:21.402 15:20:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:21.402 15:20:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:21.402 15:20:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.402 15:20:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.402 15:20:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:21.402 15:20:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.402 15:20:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:21.402 15:20:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:21.402 15:20:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:21.402 15:20:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:21.402 15:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.402 15:20:30 -- common/autotest_common.sh@10 -- # set +x 00:16:22.338 nvme0n1 00:16:22.338 15:20:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.338 15:20:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:22.338 15:20:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.338 15:20:31 -- common/autotest_common.sh@10 -- # set +x 00:16:22.338 15:20:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:22.338 15:20:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.338 15:20:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.338 15:20:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:22.338 15:20:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.338 
15:20:31 -- common/autotest_common.sh@10 -- # set +x 00:16:22.338 15:20:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.338 15:20:31 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:22.338 15:20:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:22.338 15:20:31 -- host/auth.sh@44 -- # digest=sha256 00:16:22.338 15:20:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:22.338 15:20:31 -- host/auth.sh@44 -- # keyid=1 00:16:22.338 15:20:31 -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:22.338 15:20:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:22.338 15:20:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:22.338 15:20:31 -- host/auth.sh@49 -- # echo DHHC-1:00:NjVkYjRmNzcwNDIyYzAxYzA0MmEwNGFlMjBiOGQ1NzBjN2RiOTM4YjU5Y2EyMjM3dOOZgg==: 00:16:22.338 15:20:31 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:22.338 15:20:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.338 15:20:31 -- common/autotest_common.sh@10 -- # set +x 00:16:22.338 15:20:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.338 15:20:31 -- host/auth.sh@119 -- # get_main_ns_ip 00:16:22.338 15:20:31 -- nvmf/common.sh@717 -- # local ip 00:16:22.338 15:20:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:22.338 15:20:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:22.338 15:20:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:22.338 15:20:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:22.338 15:20:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:22.338 15:20:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:22.338 15:20:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:22.338 15:20:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:22.338 15:20:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:22.338 15:20:31 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:22.338 15:20:31 -- common/autotest_common.sh@638 -- # local es=0 00:16:22.338 15:20:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:22.338 15:20:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:22.338 15:20:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:22.338 15:20:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:22.338 15:20:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:22.338 15:20:31 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:22.338 15:20:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.338 15:20:31 -- common/autotest_common.sh@10 -- # set +x 00:16:22.338 request: 00:16:22.338 { 00:16:22.338 "name": "nvme0", 00:16:22.338 "trtype": "tcp", 00:16:22.338 "traddr": "10.0.0.1", 00:16:22.338 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:22.338 "adrfam": "ipv4", 00:16:22.338 "trsvcid": "4420", 00:16:22.338 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:22.338 "method": "bdev_nvme_attach_controller", 00:16:22.338 "req_id": 1 00:16:22.338 } 00:16:22.338 Got JSON-RPC error 
response 00:16:22.338 response: 00:16:22.338 { 00:16:22.338 "code": -32602, 00:16:22.338 "message": "Invalid parameters" 00:16:22.338 } 00:16:22.338 15:20:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:22.338 15:20:31 -- common/autotest_common.sh@641 -- # es=1 00:16:22.338 15:20:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:22.338 15:20:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:22.338 15:20:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:22.338 15:20:31 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:16:22.338 15:20:31 -- host/auth.sh@121 -- # jq length 00:16:22.338 15:20:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.338 15:20:31 -- common/autotest_common.sh@10 -- # set +x 00:16:22.338 15:20:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.338 15:20:31 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:16:22.338 15:20:31 -- host/auth.sh@124 -- # get_main_ns_ip 00:16:22.339 15:20:31 -- nvmf/common.sh@717 -- # local ip 00:16:22.339 15:20:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:22.339 15:20:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:22.339 15:20:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:22.339 15:20:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:22.339 15:20:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:22.339 15:20:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:22.339 15:20:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:22.339 15:20:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:22.339 15:20:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:22.339 15:20:31 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:22.339 15:20:31 -- common/autotest_common.sh@638 -- # local es=0 00:16:22.339 15:20:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:22.339 15:20:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:22.339 15:20:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:22.339 15:20:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:22.339 15:20:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:22.339 15:20:31 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:22.339 15:20:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.339 15:20:31 -- common/autotest_common.sh@10 -- # set +x 00:16:22.339 request: 00:16:22.339 { 00:16:22.339 "name": "nvme0", 00:16:22.339 "trtype": "tcp", 00:16:22.339 "traddr": "10.0.0.1", 00:16:22.339 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:22.339 "adrfam": "ipv4", 00:16:22.339 "trsvcid": "4420", 00:16:22.339 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:22.339 "dhchap_key": "key2", 00:16:22.339 "method": "bdev_nvme_attach_controller", 00:16:22.339 "req_id": 1 00:16:22.339 } 00:16:22.339 Got JSON-RPC error response 00:16:22.339 response: 00:16:22.339 { 00:16:22.339 "code": -32602, 00:16:22.339 "message": "Invalid parameters" 00:16:22.339 } 00:16:22.339 15:20:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 
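The loop above repeats the same connect_authenticate pattern for every digest/DH-group/key combination, and the two failing rpc_cmd calls are deliberate negative checks: attaching with no key, or with the wrong key slot, must be rejected with JSON-RPC error -32602. Reduced to one iteration, the flow is roughly the sketch below. The configfs attribute names on the target side are an assumption (the xtrace shows the echoed values but not their redirection targets), and the DHHC-1:NN: prefix on each secret encodes how the base64 payload was transformed (00 = no transform, 01/02/03 = SHA-256/384/512).

    # target (kernel nvmet): secret, hash and DH group expected from host0 (paths assumed)
    host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"
    echo ffdhe8192      > "$host_cfg/dhchap_dhgroup"
    echo 'DHHC-1:03:...' > "$host_cfg/dhchap_key"
    # initiator (SPDK bdev_nvme): restrict digest/group, then connect with the matching key
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
    rpc_cmd bdev_nvme_detach_controller nvme0
    # negative check: the same attach without --dhchap-key (or with the wrong key) must fail
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0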
00:16:22.339 15:20:31 -- common/autotest_common.sh@641 -- # es=1 00:16:22.339 15:20:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:22.339 15:20:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:22.339 15:20:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:22.339 15:20:31 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:16:22.339 15:20:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.339 15:20:31 -- common/autotest_common.sh@10 -- # set +x 00:16:22.339 15:20:31 -- host/auth.sh@127 -- # jq length 00:16:22.339 15:20:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.339 15:20:31 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:16:22.339 15:20:31 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:16:22.339 15:20:31 -- host/auth.sh@130 -- # cleanup 00:16:22.339 15:20:31 -- host/auth.sh@24 -- # nvmftestfini 00:16:22.339 15:20:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:22.339 15:20:31 -- nvmf/common.sh@117 -- # sync 00:16:22.339 15:20:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:22.339 15:20:31 -- nvmf/common.sh@120 -- # set +e 00:16:22.339 15:20:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:22.339 15:20:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:22.339 rmmod nvme_tcp 00:16:22.339 rmmod nvme_fabrics 00:16:22.339 15:20:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:22.597 15:20:31 -- nvmf/common.sh@124 -- # set -e 00:16:22.597 15:20:31 -- nvmf/common.sh@125 -- # return 0 00:16:22.597 15:20:31 -- nvmf/common.sh@478 -- # '[' -n 74856 ']' 00:16:22.597 15:20:31 -- nvmf/common.sh@479 -- # killprocess 74856 00:16:22.597 15:20:31 -- common/autotest_common.sh@936 -- # '[' -z 74856 ']' 00:16:22.597 15:20:31 -- common/autotest_common.sh@940 -- # kill -0 74856 00:16:22.597 15:20:31 -- common/autotest_common.sh@941 -- # uname 00:16:22.597 15:20:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:22.597 15:20:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74856 00:16:22.597 killing process with pid 74856 00:16:22.597 15:20:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:22.597 15:20:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:22.597 15:20:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74856' 00:16:22.597 15:20:31 -- common/autotest_common.sh@955 -- # kill 74856 00:16:22.597 15:20:31 -- common/autotest_common.sh@960 -- # wait 74856 00:16:22.856 15:20:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:22.856 15:20:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:22.856 15:20:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:22.856 15:20:31 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.856 15:20:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:22.856 15:20:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.856 15:20:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.856 15:20:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.856 15:20:31 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:22.856 15:20:31 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:22.856 15:20:31 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:22.856 15:20:31 -- host/auth.sh@27 -- # clean_kernel_target 00:16:22.856 15:20:31 -- 
nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:16:22.856 15:20:31 -- nvmf/common.sh@675 -- # echo 0 00:16:22.856 15:20:31 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:22.856 15:20:31 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:22.856 15:20:31 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:22.856 15:20:31 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:22.856 15:20:31 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:16:22.856 15:20:31 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:16:22.856 15:20:31 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:23.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:23.678 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:23.678 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:23.678 15:20:32 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.6zK /tmp/spdk.key-null.VKa /tmp/spdk.key-sha256.OOn /tmp/spdk.key-sha384.sjr /tmp/spdk.key-sha512.P8D /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:16:23.678 15:20:32 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:23.971 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:23.971 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:23.971 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:23.971 00:16:23.971 real 0m39.088s 00:16:23.971 user 0m35.612s 00:16:23.971 sys 0m3.659s 00:16:23.972 15:20:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:23.972 ************************************ 00:16:23.972 END TEST nvmf_auth 00:16:23.972 ************************************ 00:16:23.972 15:20:33 -- common/autotest_common.sh@10 -- # set +x 00:16:24.230 15:20:33 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:16:24.230 15:20:33 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:24.230 15:20:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:24.230 15:20:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.230 15:20:33 -- common/autotest_common.sh@10 -- # set +x 00:16:24.230 ************************************ 00:16:24.230 START TEST nvmf_digest 00:16:24.230 ************************************ 00:16:24.230 15:20:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:24.230 * Looking for test storage... 
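The cleanup above runs in two halves: nvmftestfini unloads the host-side nvme-tcp/nvme-fabrics modules and kills the target process (pid 74856), and clean_kernel_target removes the kernel nvmet configuration that auth.sh created before setup.sh rebinds the NVMe devices and the /tmp/spdk.key-* files are deleted. Collapsed into plain commands, the configfs teardown is roughly the sketch below; the redirection target of the bare 'echo 0' is an assumption (it should disable the namespace before removal).

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    rm    "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > "$subsys/namespaces/1/enable"        # assumed target of the 'echo 0' above
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet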
00:16:24.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:24.230 15:20:33 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:24.230 15:20:33 -- nvmf/common.sh@7 -- # uname -s 00:16:24.230 15:20:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.230 15:20:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.230 15:20:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.230 15:20:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.230 15:20:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.230 15:20:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.230 15:20:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.230 15:20:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.230 15:20:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.230 15:20:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.230 15:20:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:16:24.230 15:20:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:16:24.230 15:20:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.230 15:20:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.230 15:20:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:24.230 15:20:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.230 15:20:33 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:24.230 15:20:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.230 15:20:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.230 15:20:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.230 15:20:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.230 15:20:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.230 15:20:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.230 15:20:33 -- paths/export.sh@5 -- # export PATH 00:16:24.230 15:20:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.230 15:20:33 -- nvmf/common.sh@47 -- # : 0 00:16:24.230 15:20:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.230 15:20:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.230 15:20:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.230 15:20:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.230 15:20:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.230 15:20:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.230 15:20:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.230 15:20:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.230 15:20:33 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:24.230 15:20:33 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:24.230 15:20:33 -- host/digest.sh@16 -- # runtime=2 00:16:24.230 15:20:33 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:16:24.230 15:20:33 -- host/digest.sh@138 -- # nvmftestinit 00:16:24.230 15:20:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:24.230 15:20:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.230 15:20:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:24.230 15:20:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:24.230 15:20:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:24.230 15:20:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.230 15:20:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.230 15:20:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.230 15:20:33 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:24.230 15:20:33 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:24.230 15:20:33 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:24.230 15:20:33 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:24.230 15:20:33 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:24.230 15:20:33 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:24.230 15:20:33 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.230 15:20:33 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.230 15:20:33 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:24.230 15:20:33 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:24.230 15:20:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:16:24.230 15:20:33 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:24.230 15:20:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:24.230 15:20:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.230 15:20:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:24.230 15:20:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:24.230 15:20:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:24.230 15:20:33 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:24.230 15:20:33 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:24.230 15:20:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:24.489 Cannot find device "nvmf_tgt_br" 00:16:24.489 15:20:33 -- nvmf/common.sh@155 -- # true 00:16:24.489 15:20:33 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:24.489 Cannot find device "nvmf_tgt_br2" 00:16:24.489 15:20:33 -- nvmf/common.sh@156 -- # true 00:16:24.489 15:20:33 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:24.489 15:20:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:24.489 Cannot find device "nvmf_tgt_br" 00:16:24.489 15:20:33 -- nvmf/common.sh@158 -- # true 00:16:24.489 15:20:33 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:24.489 Cannot find device "nvmf_tgt_br2" 00:16:24.489 15:20:33 -- nvmf/common.sh@159 -- # true 00:16:24.489 15:20:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:24.489 15:20:33 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:24.489 15:20:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:24.489 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:24.489 15:20:33 -- nvmf/common.sh@162 -- # true 00:16:24.489 15:20:33 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:24.489 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:24.489 15:20:33 -- nvmf/common.sh@163 -- # true 00:16:24.489 15:20:33 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:24.489 15:20:33 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:24.489 15:20:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:24.489 15:20:33 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:24.489 15:20:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:24.489 15:20:33 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:24.489 15:20:33 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:24.489 15:20:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:24.489 15:20:33 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:24.489 15:20:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:24.489 15:20:33 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:24.489 15:20:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:24.489 15:20:33 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:24.489 15:20:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:24.489 15:20:33 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:24.489 15:20:33 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:24.489 15:20:33 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:24.489 15:20:33 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:24.489 15:20:33 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:24.489 15:20:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:24.747 15:20:33 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:24.747 15:20:33 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:24.747 15:20:33 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:24.747 15:20:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:24.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:16:24.747 00:16:24.747 --- 10.0.0.2 ping statistics --- 00:16:24.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.747 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:24.747 15:20:33 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:24.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:24.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:16:24.747 00:16:24.747 --- 10.0.0.3 ping statistics --- 00:16:24.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.747 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:24.747 15:20:33 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:24.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:24.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:24.747 00:16:24.747 --- 10.0.0.1 ping statistics --- 00:16:24.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.747 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:24.747 15:20:33 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.747 15:20:33 -- nvmf/common.sh@422 -- # return 0 00:16:24.747 15:20:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:24.747 15:20:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.747 15:20:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:24.747 15:20:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:24.747 15:20:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.747 15:20:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:24.747 15:20:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:24.747 15:20:33 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:24.747 15:20:33 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:16:24.747 15:20:33 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:16:24.747 15:20:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:24.747 15:20:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.747 15:20:33 -- common/autotest_common.sh@10 -- # set +x 00:16:24.747 ************************************ 00:16:24.747 START TEST nvmf_digest_clean 00:16:24.747 ************************************ 00:16:24.747 15:20:33 -- common/autotest_common.sh@1111 -- # run_digest 00:16:24.747 15:20:33 -- host/digest.sh@120 -- # local dsa_initiator 00:16:24.747 15:20:33 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:16:24.747 15:20:33 -- host/digest.sh@121 -- # dsa_initiator=false 
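Because NET_TYPE=virt, nvmf_veth_init builds the fabric out of veth pairs instead of real NICs: the initiator interface (nvmf_init_if, 10.0.0.1) stays in the root namespace, the two target interfaces (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, everything is joined through the nvmf_br bridge, port 4420 is opened in iptables, and the three pings above confirm reachability in both directions. Condensed from the trace (link-up steps omitted), the topology is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT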
00:16:24.747 15:20:33 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:16:24.747 15:20:33 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:16:24.747 15:20:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:24.747 15:20:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:24.747 15:20:33 -- common/autotest_common.sh@10 -- # set +x 00:16:24.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.747 15:20:33 -- nvmf/common.sh@470 -- # nvmfpid=76465 00:16:24.747 15:20:33 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:24.747 15:20:33 -- nvmf/common.sh@471 -- # waitforlisten 76465 00:16:24.747 15:20:33 -- common/autotest_common.sh@817 -- # '[' -z 76465 ']' 00:16:24.747 15:20:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.747 15:20:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:24.748 15:20:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.748 15:20:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:24.748 15:20:33 -- common/autotest_common.sh@10 -- # set +x 00:16:24.748 [2024-04-24 15:20:33.934357] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:16:24.748 [2024-04-24 15:20:33.934684] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.006 [2024-04-24 15:20:34.077296] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.006 [2024-04-24 15:20:34.202533] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.006 [2024-04-24 15:20:34.202795] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.006 [2024-04-24 15:20:34.203004] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.006 [2024-04-24 15:20:34.203269] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.006 [2024-04-24 15:20:34.203451] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
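nvmfappstart launches the target inside that namespace and blocks until its RPC socket answers; stripped of the helper functions, what the log corresponds to is roughly the sketch below (waitforlisten polls /var/tmp/spdk.sock, its internals are not shown in this trace).

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!                 # 76465 in this run
    waitforlisten "$nvmfpid"
    # once it answers, common_target_config creates the null0 bdev, the TCP transport and
    # a listener on 10.0.0.2:4420 (the NOTICE lines that follow)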
00:16:25.006 [2024-04-24 15:20:34.203610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.941 15:20:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:25.942 15:20:34 -- common/autotest_common.sh@850 -- # return 0 00:16:25.942 15:20:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:25.942 15:20:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:25.942 15:20:34 -- common/autotest_common.sh@10 -- # set +x 00:16:25.942 15:20:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.942 15:20:34 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:16:25.942 15:20:34 -- host/digest.sh@126 -- # common_target_config 00:16:25.942 15:20:34 -- host/digest.sh@43 -- # rpc_cmd 00:16:25.942 15:20:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.942 15:20:34 -- common/autotest_common.sh@10 -- # set +x 00:16:25.942 null0 00:16:25.942 [2024-04-24 15:20:35.085748] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.942 [2024-04-24 15:20:35.109890] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.942 15:20:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.942 15:20:35 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:16:25.942 15:20:35 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:25.942 15:20:35 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:25.942 15:20:35 -- host/digest.sh@80 -- # rw=randread 00:16:25.942 15:20:35 -- host/digest.sh@80 -- # bs=4096 00:16:25.942 15:20:35 -- host/digest.sh@80 -- # qd=128 00:16:25.942 15:20:35 -- host/digest.sh@80 -- # scan_dsa=false 00:16:25.942 15:20:35 -- host/digest.sh@83 -- # bperfpid=76497 00:16:25.942 15:20:35 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:25.942 15:20:35 -- host/digest.sh@84 -- # waitforlisten 76497 /var/tmp/bperf.sock 00:16:25.942 15:20:35 -- common/autotest_common.sh@817 -- # '[' -z 76497 ']' 00:16:25.942 15:20:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:25.942 15:20:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:25.942 15:20:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:25.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:25.942 15:20:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:25.942 15:20:35 -- common/autotest_common.sh@10 -- # set +x 00:16:25.942 [2024-04-24 15:20:35.172879] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:16:25.942 [2024-04-24 15:20:35.173146] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76497 ] 00:16:26.200 [2024-04-24 15:20:35.314894] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.200 [2024-04-24 15:20:35.426465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.135 15:20:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:27.135 15:20:36 -- common/autotest_common.sh@850 -- # return 0 00:16:27.135 15:20:36 -- host/digest.sh@86 -- # false 00:16:27.135 15:20:36 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:27.135 15:20:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:27.394 15:20:36 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:27.394 15:20:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:27.653 nvme0n1 00:16:27.653 15:20:36 -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:27.653 15:20:36 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:27.911 Running I/O for 2 seconds... 00:16:29.811 00:16:29.811 Latency(us) 00:16:29.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.811 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:29.811 nvme0n1 : 2.01 15194.30 59.35 0.00 0.00 8418.75 2204.39 23354.65 00:16:29.811 =================================================================================================================== 00:16:29.811 Total : 15194.30 59.35 0.00 0.00 8418.75 2204.39 23354.65 00:16:29.811 0 00:16:29.811 15:20:38 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:29.811 15:20:38 -- host/digest.sh@93 -- # get_accel_stats 00:16:29.811 15:20:38 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:29.811 15:20:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:29.811 15:20:38 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:29.811 | select(.opcode=="crc32c") 00:16:29.811 | "\(.module_name) \(.executed)"' 00:16:30.070 15:20:39 -- host/digest.sh@94 -- # false 00:16:30.070 15:20:39 -- host/digest.sh@94 -- # exp_module=software 00:16:30.070 15:20:39 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:30.070 15:20:39 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:30.070 15:20:39 -- host/digest.sh@98 -- # killprocess 76497 00:16:30.070 15:20:39 -- common/autotest_common.sh@936 -- # '[' -z 76497 ']' 00:16:30.070 15:20:39 -- common/autotest_common.sh@940 -- # kill -0 76497 00:16:30.070 15:20:39 -- common/autotest_common.sh@941 -- # uname 00:16:30.070 15:20:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:30.070 15:20:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76497 00:16:30.070 killing process with pid 76497 00:16:30.070 Received shutdown signal, test time was about 2.000000 seconds 00:16:30.070 00:16:30.070 Latency(us) 00:16:30.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:16:30.070 =================================================================================================================== 00:16:30.070 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:30.070 15:20:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:30.070 15:20:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:30.070 15:20:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76497' 00:16:30.070 15:20:39 -- common/autotest_common.sh@955 -- # kill 76497 00:16:30.070 15:20:39 -- common/autotest_common.sh@960 -- # wait 76497 00:16:30.328 15:20:39 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:16:30.328 15:20:39 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:30.328 15:20:39 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:30.328 15:20:39 -- host/digest.sh@80 -- # rw=randread 00:16:30.329 15:20:39 -- host/digest.sh@80 -- # bs=131072 00:16:30.329 15:20:39 -- host/digest.sh@80 -- # qd=16 00:16:30.329 15:20:39 -- host/digest.sh@80 -- # scan_dsa=false 00:16:30.329 15:20:39 -- host/digest.sh@83 -- # bperfpid=76567 00:16:30.329 15:20:39 -- host/digest.sh@84 -- # waitforlisten 76567 /var/tmp/bperf.sock 00:16:30.329 15:20:39 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:30.329 15:20:39 -- common/autotest_common.sh@817 -- # '[' -z 76567 ']' 00:16:30.329 15:20:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:30.329 15:20:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:30.329 15:20:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:30.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:30.329 15:20:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:30.329 15:20:39 -- common/autotest_common.sh@10 -- # set +x 00:16:30.645 [2024-04-24 15:20:39.581299] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:16:30.645 [2024-04-24 15:20:39.581606] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefixI/O size of 131072 is greater than zero copy threshold (65536). 00:16:30.645 Zero copy mechanism will not be used. 
00:16:30.645 =spdk_pid76567 ] 00:16:30.645 [2024-04-24 15:20:39.723602] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.645 [2024-04-24 15:20:39.850087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.580 15:20:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:31.580 15:20:40 -- common/autotest_common.sh@850 -- # return 0 00:16:31.580 15:20:40 -- host/digest.sh@86 -- # false 00:16:31.580 15:20:40 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:31.580 15:20:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:31.838 15:20:40 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:31.838 15:20:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:32.097 nvme0n1 00:16:32.097 15:20:41 -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:32.097 15:20:41 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:32.097 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:32.097 Zero copy mechanism will not be used. 00:16:32.097 Running I/O for 2 seconds... 00:16:34.629 00:16:34.629 Latency(us) 00:16:34.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.629 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:34.629 nvme0n1 : 2.00 7526.64 940.83 0.00 0.00 2122.82 1794.79 3261.91 00:16:34.629 =================================================================================================================== 00:16:34.629 Total : 7526.64 940.83 0.00 0.00 2122.82 1794.79 3261.91 00:16:34.629 0 00:16:34.629 15:20:43 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:34.629 15:20:43 -- host/digest.sh@93 -- # get_accel_stats 00:16:34.629 15:20:43 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:34.629 15:20:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:34.629 15:20:43 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:34.629 | select(.opcode=="crc32c") 00:16:34.629 | "\(.module_name) \(.executed)"' 00:16:34.629 15:20:43 -- host/digest.sh@94 -- # false 00:16:34.629 15:20:43 -- host/digest.sh@94 -- # exp_module=software 00:16:34.629 15:20:43 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:34.629 15:20:43 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:34.629 15:20:43 -- host/digest.sh@98 -- # killprocess 76567 00:16:34.629 15:20:43 -- common/autotest_common.sh@936 -- # '[' -z 76567 ']' 00:16:34.629 15:20:43 -- common/autotest_common.sh@940 -- # kill -0 76567 00:16:34.629 15:20:43 -- common/autotest_common.sh@941 -- # uname 00:16:34.629 15:20:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:34.629 15:20:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76567 00:16:34.629 killing process with pid 76567 00:16:34.629 Received shutdown signal, test time was about 2.000000 seconds 00:16:34.629 00:16:34.629 Latency(us) 00:16:34.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.629 
=================================================================================================================== 00:16:34.629 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:34.629 15:20:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:34.629 15:20:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:34.629 15:20:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76567' 00:16:34.629 15:20:43 -- common/autotest_common.sh@955 -- # kill 76567 00:16:34.629 15:20:43 -- common/autotest_common.sh@960 -- # wait 76567 00:16:34.888 15:20:43 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:16:34.888 15:20:43 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:34.888 15:20:43 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:34.888 15:20:43 -- host/digest.sh@80 -- # rw=randwrite 00:16:34.888 15:20:43 -- host/digest.sh@80 -- # bs=4096 00:16:34.888 15:20:43 -- host/digest.sh@80 -- # qd=128 00:16:34.888 15:20:43 -- host/digest.sh@80 -- # scan_dsa=false 00:16:34.888 15:20:43 -- host/digest.sh@83 -- # bperfpid=76627 00:16:34.888 15:20:43 -- host/digest.sh@84 -- # waitforlisten 76627 /var/tmp/bperf.sock 00:16:34.888 15:20:43 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:34.888 15:20:43 -- common/autotest_common.sh@817 -- # '[' -z 76627 ']' 00:16:34.888 15:20:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:34.888 15:20:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:34.888 15:20:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:34.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:34.888 15:20:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:34.888 15:20:43 -- common/autotest_common.sh@10 -- # set +x 00:16:34.888 [2024-04-24 15:20:43.935340] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:16:34.888 [2024-04-24 15:20:43.935690] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76627 ] 00:16:34.888 [2024-04-24 15:20:44.075232] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.147 [2024-04-24 15:20:44.190118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.713 15:20:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:35.713 15:20:44 -- common/autotest_common.sh@850 -- # return 0 00:16:35.713 15:20:44 -- host/digest.sh@86 -- # false 00:16:35.713 15:20:44 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:35.713 15:20:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:36.280 15:20:45 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:36.280 15:20:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:36.537 nvme0n1 00:16:36.537 15:20:45 -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:36.537 15:20:45 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:36.537 Running I/O for 2 seconds... 00:16:39.069 00:16:39.069 Latency(us) 00:16:39.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.069 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.069 nvme0n1 : 2.00 15098.52 58.98 0.00 0.00 8470.95 7447.27 24784.52 00:16:39.069 =================================================================================================================== 00:16:39.069 Total : 15098.52 58.98 0.00 0.00 8470.95 7447.27 24784.52 00:16:39.069 0 00:16:39.069 15:20:47 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:39.069 15:20:47 -- host/digest.sh@93 -- # get_accel_stats 00:16:39.069 15:20:47 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:39.069 15:20:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:39.069 15:20:47 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:39.069 | select(.opcode=="crc32c") 00:16:39.069 | "\(.module_name) \(.executed)"' 00:16:39.069 15:20:48 -- host/digest.sh@94 -- # false 00:16:39.069 15:20:48 -- host/digest.sh@94 -- # exp_module=software 00:16:39.069 15:20:48 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:39.069 15:20:48 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:39.069 15:20:48 -- host/digest.sh@98 -- # killprocess 76627 00:16:39.069 15:20:48 -- common/autotest_common.sh@936 -- # '[' -z 76627 ']' 00:16:39.069 15:20:48 -- common/autotest_common.sh@940 -- # kill -0 76627 00:16:39.069 15:20:48 -- common/autotest_common.sh@941 -- # uname 00:16:39.069 15:20:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:39.069 15:20:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76627 00:16:39.069 killing process with pid 76627 00:16:39.069 Received shutdown signal, test time was about 2.000000 seconds 00:16:39.069 00:16:39.069 Latency(us) 00:16:39.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:16:39.069 =================================================================================================================== 00:16:39.069 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:39.069 15:20:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:39.069 15:20:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:39.069 15:20:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76627' 00:16:39.069 15:20:48 -- common/autotest_common.sh@955 -- # kill 76627 00:16:39.069 15:20:48 -- common/autotest_common.sh@960 -- # wait 76627 00:16:39.069 15:20:48 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:16:39.069 15:20:48 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:39.069 15:20:48 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:39.069 15:20:48 -- host/digest.sh@80 -- # rw=randwrite 00:16:39.069 15:20:48 -- host/digest.sh@80 -- # bs=131072 00:16:39.069 15:20:48 -- host/digest.sh@80 -- # qd=16 00:16:39.069 15:20:48 -- host/digest.sh@80 -- # scan_dsa=false 00:16:39.069 15:20:48 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:39.069 15:20:48 -- host/digest.sh@83 -- # bperfpid=76687 00:16:39.069 15:20:48 -- host/digest.sh@84 -- # waitforlisten 76687 /var/tmp/bperf.sock 00:16:39.069 15:20:48 -- common/autotest_common.sh@817 -- # '[' -z 76687 ']' 00:16:39.069 15:20:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:39.069 15:20:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:39.069 15:20:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:39.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:39.069 15:20:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:39.069 15:20:48 -- common/autotest_common.sh@10 -- # set +x 00:16:39.329 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:39.329 Zero copy mechanism will not be used. 00:16:39.329 [2024-04-24 15:20:48.350554] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
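Each run_bperf invocation above (randread 131072/16, randwrite 4096/128, and now randwrite 131072/16) drives the same launch sequence, only with different workload arguments. Reduced to the commands visible in the trace, and with this third run's parameters filled in, the sequence is roughly:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Start bdevperf idle on core 1 (-m 2); -z keeps it alive and --wait-for-rpc defers
  # framework init so options can be set over /var/tmp/bperf.sock first.
  "$SPDK"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  # Finish init, then attach the NVMe/TCP target with --ddgst so data PDUs carry a
  # CRC32C data digest that the initiator has to compute and verify.
  "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Run the 2-second workload defined on the bdevperf command line.
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests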
00:16:39.329 [2024-04-24 15:20:48.350654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76687 ] 00:16:39.329 [2024-04-24 15:20:48.490153] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.587 [2024-04-24 15:20:48.594154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.152 15:20:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:40.152 15:20:49 -- common/autotest_common.sh@850 -- # return 0 00:16:40.152 15:20:49 -- host/digest.sh@86 -- # false 00:16:40.152 15:20:49 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:40.152 15:20:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:40.718 15:20:49 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:40.718 15:20:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:40.976 nvme0n1 00:16:40.976 15:20:50 -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:40.976 15:20:50 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:40.976 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:40.976 Zero copy mechanism will not be used. 00:16:40.976 Running I/O for 2 seconds... 00:16:43.504 00:16:43.504 Latency(us) 00:16:43.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.504 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:43.504 nvme0n1 : 2.00 6316.87 789.61 0.00 0.00 2527.20 1563.93 4051.32 00:16:43.504 =================================================================================================================== 00:16:43.504 Total : 6316.87 789.61 0.00 0.00 2527.20 1563.93 4051.32 00:16:43.504 0 00:16:43.504 15:20:52 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:43.504 15:20:52 -- host/digest.sh@93 -- # get_accel_stats 00:16:43.504 15:20:52 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:43.504 15:20:52 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:43.504 | select(.opcode=="crc32c") 00:16:43.504 | "\(.module_name) \(.executed)"' 00:16:43.504 15:20:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:43.504 15:20:52 -- host/digest.sh@94 -- # false 00:16:43.504 15:20:52 -- host/digest.sh@94 -- # exp_module=software 00:16:43.504 15:20:52 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:43.504 15:20:52 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:43.504 15:20:52 -- host/digest.sh@98 -- # killprocess 76687 00:16:43.504 15:20:52 -- common/autotest_common.sh@936 -- # '[' -z 76687 ']' 00:16:43.504 15:20:52 -- common/autotest_common.sh@940 -- # kill -0 76687 00:16:43.504 15:20:52 -- common/autotest_common.sh@941 -- # uname 00:16:43.504 15:20:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:43.504 15:20:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76687 00:16:43.504 killing process with pid 76687 00:16:43.504 Received shutdown signal, test time was 
about 2.000000 seconds 00:16:43.504 00:16:43.504 Latency(us) 00:16:43.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.504 =================================================================================================================== 00:16:43.504 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:43.504 15:20:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:43.504 15:20:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:43.504 15:20:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76687' 00:16:43.504 15:20:52 -- common/autotest_common.sh@955 -- # kill 76687 00:16:43.504 15:20:52 -- common/autotest_common.sh@960 -- # wait 76687 00:16:43.504 15:20:52 -- host/digest.sh@132 -- # killprocess 76465 00:16:43.504 15:20:52 -- common/autotest_common.sh@936 -- # '[' -z 76465 ']' 00:16:43.504 15:20:52 -- common/autotest_common.sh@940 -- # kill -0 76465 00:16:43.504 15:20:52 -- common/autotest_common.sh@941 -- # uname 00:16:43.504 15:20:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:43.504 15:20:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76465 00:16:43.504 killing process with pid 76465 00:16:43.504 15:20:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:43.504 15:20:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:43.504 15:20:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76465' 00:16:43.504 15:20:52 -- common/autotest_common.sh@955 -- # kill 76465 00:16:43.504 15:20:52 -- common/autotest_common.sh@960 -- # wait 76465 00:16:43.798 00:16:43.798 real 0m19.093s 00:16:43.798 user 0m37.118s 00:16:43.798 sys 0m4.700s 00:16:43.798 15:20:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:43.798 15:20:52 -- common/autotest_common.sh@10 -- # set +x 00:16:43.798 ************************************ 00:16:43.798 END TEST nvmf_digest_clean 00:16:43.798 ************************************ 00:16:43.798 15:20:52 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:16:43.798 15:20:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:43.798 15:20:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:43.798 15:20:52 -- common/autotest_common.sh@10 -- # set +x 00:16:44.077 ************************************ 00:16:44.077 START TEST nvmf_digest_error 00:16:44.077 ************************************ 00:16:44.077 15:20:53 -- common/autotest_common.sh@1111 -- # run_digest_error 00:16:44.077 15:20:53 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:16:44.077 15:20:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:44.077 15:20:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:44.077 15:20:53 -- common/autotest_common.sh@10 -- # set +x 00:16:44.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
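The teardown between runs (pids 76567, 76627, 76687 and finally the nvmf target 76465) is the same killprocess helper each time: confirm the pid still belongs to an SPDK reactor, terminate it, and reap it so the next bdevperf instance can reuse /var/tmp/bperf.sock. A rough, simplified paraphrase of that flow (not the helper's exact body):

  pid=76687
  kill -0 "$pid"                              # is the process still alive?
  name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_1 for a bdevperf app
  # The helper special-cases comm=sudo; for a plain reactor it just sends the default
  # SIGTERM and waits, which only works because bdevperf was started by this shell.
  [[ "$name" != sudo ]] && kill "$pid"
  wait "$pid"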
00:16:44.077 15:20:53 -- nvmf/common.sh@470 -- # nvmfpid=76780 00:16:44.077 15:20:53 -- nvmf/common.sh@471 -- # waitforlisten 76780 00:16:44.077 15:20:53 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:44.077 15:20:53 -- common/autotest_common.sh@817 -- # '[' -z 76780 ']' 00:16:44.077 15:20:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.077 15:20:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:44.077 15:20:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.077 15:20:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:44.077 15:20:53 -- common/autotest_common.sh@10 -- # set +x 00:16:44.077 [2024-04-24 15:20:53.140792] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:16:44.077 [2024-04-24 15:20:53.141201] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.077 [2024-04-24 15:20:53.282610] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.335 [2024-04-24 15:20:53.397726] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.335 [2024-04-24 15:20:53.398002] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.335 [2024-04-24 15:20:53.398141] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.335 [2024-04-24 15:20:53.398195] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.335 [2024-04-24 15:20:53.398292] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
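Because the target for this phase is launched with -e 0xFFFF, every tracepoint group is enabled, and the notices above spell out the two ways to capture that data. Both commands below are quoted from those notices; only the copy destination is an arbitrary choice:

  # Snapshot the nvmf trace of app instance 0 while the target is still running:
  spdk_trace -s nvmf -i 0
  # ...or keep the shared-memory ring for offline analysis after the run:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0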
00:16:44.335 [2024-04-24 15:20:53.398366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.900 15:20:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:44.900 15:20:54 -- common/autotest_common.sh@850 -- # return 0 00:16:44.900 15:20:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:44.900 15:20:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:44.900 15:20:54 -- common/autotest_common.sh@10 -- # set +x 00:16:44.900 15:20:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.900 15:20:54 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:44.900 15:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:44.900 15:20:54 -- common/autotest_common.sh@10 -- # set +x 00:16:44.900 [2024-04-24 15:20:54.074883] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:44.900 15:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:44.900 15:20:54 -- host/digest.sh@105 -- # common_target_config 00:16:44.900 15:20:54 -- host/digest.sh@43 -- # rpc_cmd 00:16:44.900 15:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:44.900 15:20:54 -- common/autotest_common.sh@10 -- # set +x 00:16:45.159 null0 00:16:45.159 [2024-04-24 15:20:54.185034] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.159 [2024-04-24 15:20:54.209147] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.159 15:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.159 15:20:54 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:16:45.159 15:20:54 -- host/digest.sh@54 -- # local rw bs qd 00:16:45.159 15:20:54 -- host/digest.sh@56 -- # rw=randread 00:16:45.159 15:20:54 -- host/digest.sh@56 -- # bs=4096 00:16:45.159 15:20:54 -- host/digest.sh@56 -- # qd=128 00:16:45.159 15:20:54 -- host/digest.sh@58 -- # bperfpid=76812 00:16:45.159 15:20:54 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:45.159 15:20:54 -- host/digest.sh@60 -- # waitforlisten 76812 /var/tmp/bperf.sock 00:16:45.159 15:20:54 -- common/autotest_common.sh@817 -- # '[' -z 76812 ']' 00:16:45.159 15:20:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:45.159 15:20:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:45.159 15:20:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:45.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:45.159 15:20:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:45.159 15:20:54 -- common/autotest_common.sh@10 -- # set +x 00:16:45.159 [2024-04-24 15:20:54.261154] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
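The one target-side change that separates nvmf_digest_error from the clean test is visible right above: while nvmf_tgt is still paused by --wait-for-rpc, crc32c is re-routed to the error-injection accel module before the usual null-bdev/TCP-listener config (folded into common_target_config) is applied. Against the target's default /var/tmp/spdk.sock, which is implied by rpc_cmd rather than spelled out in the trace, that amounts to:

  # Route every crc32c operation on the target to the "error" accel module so that
  # digest corruption can later be armed via accel_error_inject_error.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error

The clean test above never issues this call, which is why its digests were computed by the ordinary software module instead.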
00:16:45.159 [2024-04-24 15:20:54.261499] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76812 ] 00:16:45.159 [2024-04-24 15:20:54.392579] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.417 [2024-04-24 15:20:54.537366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.350 15:20:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:46.350 15:20:55 -- common/autotest_common.sh@850 -- # return 0 00:16:46.350 15:20:55 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:46.350 15:20:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:46.350 15:20:55 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:46.350 15:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.350 15:20:55 -- common/autotest_common.sh@10 -- # set +x 00:16:46.350 15:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.350 15:20:55 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:46.350 15:20:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:46.608 nvme0n1 00:16:46.865 15:20:55 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:46.865 15:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.865 15:20:55 -- common/autotest_common.sh@10 -- # set +x 00:16:46.865 15:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.865 15:20:55 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:46.865 15:20:55 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:46.865 Running I/O for 2 seconds... 
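Putting the two sockets together, the first error case (randread, 4096-byte I/O, queue depth 128) is driven by the handful of RPC calls traced above. A condensed sketch, assuming rpc_cmd resolves to the target's default /var/tmp/spdk.sock while bperf_rpc talks to /var/tmp/bperf.sock (the -i 256 count is carried over verbatim from the trace):

  SPDK=/home/vagrant/spdk_repo/spdk
  bperf() { "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }   # bdevperf side
  tgt()   { "$SPDK"/scripts/rpc.py "$@"; }                          # nvmf target side

  bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  tgt   accel_error_inject_error -o crc32c -t disable      # attach with digests intact
  bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  tgt   accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt crc32c results
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With the corrupt rule armed, the target starts emitting bad data digests, the initiator's own CRC32C check fails, and the reads complete with transient transport errors; that is exactly what the run of "data digest error on tqpair" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" entries below records.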
00:16:46.865 [2024-04-24 15:20:56.052282] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:46.865 [2024-04-24 15:20:56.052356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.865 [2024-04-24 15:20:56.052373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.865 [2024-04-24 15:20:56.069488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:46.866 [2024-04-24 15:20:56.069546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.866 [2024-04-24 15:20:56.069562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.866 [2024-04-24 15:20:56.086596] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:46.866 [2024-04-24 15:20:56.086667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.866 [2024-04-24 15:20:56.086683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:46.866 [2024-04-24 15:20:56.103718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:46.866 [2024-04-24 15:20:56.103771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.866 [2024-04-24 15:20:56.103787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.126 [2024-04-24 15:20:56.120836] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.126 [2024-04-24 15:20:56.120884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.126 [2024-04-24 15:20:56.120899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.126 [2024-04-24 15:20:56.138016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.126 [2024-04-24 15:20:56.138064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.126 [2024-04-24 15:20:56.138079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.126 [2024-04-24 15:20:56.155156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.126 [2024-04-24 15:20:56.155207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.126 [2024-04-24 15:20:56.155222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.126 [2024-04-24 15:20:56.172276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.126 [2024-04-24 15:20:56.172322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.126 [2024-04-24 15:20:56.172337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.126 [2024-04-24 15:20:56.189424] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.126 [2024-04-24 15:20:56.189543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.126 [2024-04-24 15:20:56.189560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.126 [2024-04-24 15:20:56.206697] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.126 [2024-04-24 15:20:56.206748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.126 [2024-04-24 15:20:56.206763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.126 [2024-04-24 15:20:56.223795] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.126 [2024-04-24 15:20:56.223853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.126 [2024-04-24 15:20:56.223869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.126 [2024-04-24 15:20:56.241056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.126 [2024-04-24 15:20:56.241112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.126 [2024-04-24 15:20:56.241127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.126 [2024-04-24 15:20:56.258121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.126 [2024-04-24 15:20:56.258171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.126 [2024-04-24 15:20:56.258186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.126 [2024-04-24 15:20:56.275230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.126 [2024-04-24 15:20:56.275280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.126 [2024-04-24 15:20:56.275295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.126 [2024-04-24 15:20:56.292333] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.126 [2024-04-24 15:20:56.292390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.126 [2024-04-24 15:20:56.292405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.126 [2024-04-24 15:20:56.309521] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.126 [2024-04-24 15:20:56.309577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.126 [2024-04-24 15:20:56.309592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.126 [2024-04-24 15:20:56.326622] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.126 [2024-04-24 15:20:56.326671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.126 [2024-04-24 15:20:56.326686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.126 [2024-04-24 15:20:56.343672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.126 [2024-04-24 15:20:56.343722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.126 [2024-04-24 15:20:56.343736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.126 [2024-04-24 15:20:56.360738] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.126 [2024-04-24 15:20:56.360791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.126 [2024-04-24 15:20:56.360805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.391 [2024-04-24 15:20:56.378011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.391 [2024-04-24 15:20:56.378073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.391 [2024-04-24 15:20:56.378089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.391 [2024-04-24 15:20:56.395184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.391 [2024-04-24 15:20:56.395239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.391 [2024-04-24 15:20:56.395254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.391 [2024-04-24 15:20:56.412320] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.391 [2024-04-24 15:20:56.412372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.391 [2024-04-24 15:20:56.412387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.391 [2024-04-24 15:20:56.429384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.391 [2024-04-24 15:20:56.429443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.391 [2024-04-24 15:20:56.429459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.391 [2024-04-24 15:20:56.446467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.391 [2024-04-24 15:20:56.446520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.391 [2024-04-24 15:20:56.446535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.391 [2024-04-24 15:20:56.463553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.391 [2024-04-24 15:20:56.463603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.391 [2024-04-24 15:20:56.463617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.391 [2024-04-24 15:20:56.480682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.391 [2024-04-24 15:20:56.480742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.391 [2024-04-24 15:20:56.480758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.391 [2024-04-24 15:20:56.497784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.391 [2024-04-24 15:20:56.497833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.391 [2024-04-24 15:20:56.497847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.391 [2024-04-24 15:20:56.514835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.391 [2024-04-24 15:20:56.514886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.391 
[2024-04-24 15:20:56.514901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.391 [2024-04-24 15:20:56.532117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.391 [2024-04-24 15:20:56.532189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.391 [2024-04-24 15:20:56.532205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.391 [2024-04-24 15:20:56.549941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.391 [2024-04-24 15:20:56.549994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.391 [2024-04-24 15:20:56.550009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.391 [2024-04-24 15:20:56.567874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.391 [2024-04-24 15:20:56.567936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.391 [2024-04-24 15:20:56.567953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.391 [2024-04-24 15:20:56.585744] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.391 [2024-04-24 15:20:56.585818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.391 [2024-04-24 15:20:56.585833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.391 [2024-04-24 15:20:56.603484] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.392 [2024-04-24 15:20:56.603556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.392 [2024-04-24 15:20:56.603573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.392 [2024-04-24 15:20:56.621516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.392 [2024-04-24 15:20:56.621591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.392 [2024-04-24 15:20:56.621606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.650 [2024-04-24 15:20:56.639269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.650 [2024-04-24 15:20:56.639343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11965 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.650 [2024-04-24 15:20:56.639359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.650 [2024-04-24 15:20:56.657100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.650 [2024-04-24 15:20:56.657174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.650 [2024-04-24 15:20:56.657190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.650 [2024-04-24 15:20:56.674959] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.650 [2024-04-24 15:20:56.675030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.650 [2024-04-24 15:20:56.675045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.650 [2024-04-24 15:20:56.692579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.650 [2024-04-24 15:20:56.692649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.650 [2024-04-24 15:20:56.692665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.650 [2024-04-24 15:20:56.710175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.650 [2024-04-24 15:20:56.710248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.650 [2024-04-24 15:20:56.710263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.650 [2024-04-24 15:20:56.728070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.650 [2024-04-24 15:20:56.728145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.650 [2024-04-24 15:20:56.728161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.650 [2024-04-24 15:20:56.746064] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.650 [2024-04-24 15:20:56.746137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.650 [2024-04-24 15:20:56.746153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.650 [2024-04-24 15:20:56.763856] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.650 [2024-04-24 15:20:56.763928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:83 nsid:1 lba:11893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.650 [2024-04-24 15:20:56.763944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.650 [2024-04-24 15:20:56.781687] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.650 [2024-04-24 15:20:56.781760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.650 [2024-04-24 15:20:56.781776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.650 [2024-04-24 15:20:56.799448] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.650 [2024-04-24 15:20:56.799520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.650 [2024-04-24 15:20:56.799535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.650 [2024-04-24 15:20:56.816935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.650 [2024-04-24 15:20:56.817003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.650 [2024-04-24 15:20:56.817018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.650 [2024-04-24 15:20:56.834126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.650 [2024-04-24 15:20:56.834191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.650 [2024-04-24 15:20:56.834206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.650 [2024-04-24 15:20:56.851611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.650 [2024-04-24 15:20:56.851683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.650 [2024-04-24 15:20:56.851699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.650 [2024-04-24 15:20:56.869445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.650 [2024-04-24 15:20:56.869513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.650 [2024-04-24 15:20:56.869527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.650 [2024-04-24 15:20:56.886633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.650 [2024-04-24 15:20:56.886693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.650 [2024-04-24 15:20:56.886709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.909 [2024-04-24 15:20:56.904101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.909 [2024-04-24 15:20:56.904173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.909 [2024-04-24 15:20:56.904188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.909 [2024-04-24 15:20:56.921964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.909 [2024-04-24 15:20:56.922031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.909 [2024-04-24 15:20:56.922046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.909 [2024-04-24 15:20:56.939267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.909 [2024-04-24 15:20:56.939325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.909 [2024-04-24 15:20:56.939340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.909 [2024-04-24 15:20:56.956313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.909 [2024-04-24 15:20:56.956363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.909 [2024-04-24 15:20:56.956377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.909 [2024-04-24 15:20:56.973394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.909 [2024-04-24 15:20:56.973449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.909 [2024-04-24 15:20:56.973466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.909 [2024-04-24 15:20:56.990411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.909 [2024-04-24 15:20:56.990462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.909 [2024-04-24 15:20:56.990477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.909 [2024-04-24 15:20:57.007411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 
00:16:47.909 [2024-04-24 15:20:57.007466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.909 [2024-04-24 15:20:57.007480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.909 [2024-04-24 15:20:57.024437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.909 [2024-04-24 15:20:57.024482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.909 [2024-04-24 15:20:57.024496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.909 [2024-04-24 15:20:57.041459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.909 [2024-04-24 15:20:57.041508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.909 [2024-04-24 15:20:57.041524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.909 [2024-04-24 15:20:57.058514] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.909 [2024-04-24 15:20:57.058566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.909 [2024-04-24 15:20:57.058580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.909 [2024-04-24 15:20:57.075582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.909 [2024-04-24 15:20:57.075635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.909 [2024-04-24 15:20:57.075649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.909 [2024-04-24 15:20:57.092616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.909 [2024-04-24 15:20:57.092667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.909 [2024-04-24 15:20:57.092683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.909 [2024-04-24 15:20:57.109760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.909 [2024-04-24 15:20:57.109818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.909 [2024-04-24 15:20:57.109832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.909 [2024-04-24 15:20:57.126847] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.909 [2024-04-24 15:20:57.126925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.910 [2024-04-24 15:20:57.126942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.910 [2024-04-24 15:20:57.151453] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:47.910 [2024-04-24 15:20:57.151513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.910 [2024-04-24 15:20:57.151529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.168 [2024-04-24 15:20:57.168540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.168 [2024-04-24 15:20:57.168591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.168 [2024-04-24 15:20:57.168605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.168 [2024-04-24 15:20:57.185652] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.168 [2024-04-24 15:20:57.185705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.168 [2024-04-24 15:20:57.185720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.168 [2024-04-24 15:20:57.202693] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.168 [2024-04-24 15:20:57.202746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.168 [2024-04-24 15:20:57.202760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.168 [2024-04-24 15:20:57.219737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.168 [2024-04-24 15:20:57.219794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.168 [2024-04-24 15:20:57.219809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.168 [2024-04-24 15:20:57.236852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.168 [2024-04-24 15:20:57.236908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.168 [2024-04-24 15:20:57.236922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:16:48.168 [2024-04-24 15:20:57.254170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.168 [2024-04-24 15:20:57.254240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.168 [2024-04-24 15:20:57.254255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.168 [2024-04-24 15:20:57.271635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.168 [2024-04-24 15:20:57.271695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.168 [2024-04-24 15:20:57.271710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.168 [2024-04-24 15:20:57.288973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.168 [2024-04-24 15:20:57.289026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.168 [2024-04-24 15:20:57.289042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.168 [2024-04-24 15:20:57.306303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.168 [2024-04-24 15:20:57.306365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.168 [2024-04-24 15:20:57.306380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.168 [2024-04-24 15:20:57.323993] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.168 [2024-04-24 15:20:57.324057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.168 [2024-04-24 15:20:57.324072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.168 [2024-04-24 15:20:57.341396] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.168 [2024-04-24 15:20:57.341467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.168 [2024-04-24 15:20:57.341483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.168 [2024-04-24 15:20:57.359090] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.168 [2024-04-24 15:20:57.359153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.168 [2024-04-24 15:20:57.359168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.168 [2024-04-24 15:20:57.376677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.168 [2024-04-24 15:20:57.376757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.168 [2024-04-24 15:20:57.376774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.168 [2024-04-24 15:20:57.394564] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.168 [2024-04-24 15:20:57.394626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.168 [2024-04-24 15:20:57.394642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.168 [2024-04-24 15:20:57.412341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.168 [2024-04-24 15:20:57.412423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.168 [2024-04-24 15:20:57.412455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.426 [2024-04-24 15:20:57.430237] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.426 [2024-04-24 15:20:57.430308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.426 [2024-04-24 15:20:57.430325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.426 [2024-04-24 15:20:57.448028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.426 [2024-04-24 15:20:57.448100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.426 [2024-04-24 15:20:57.448115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.426 [2024-04-24 15:20:57.465510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.426 [2024-04-24 15:20:57.465578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.426 [2024-04-24 15:20:57.465594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.426 [2024-04-24 15:20:57.482981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.426 [2024-04-24 15:20:57.483050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.426 [2024-04-24 15:20:57.483065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.426 [2024-04-24 15:20:57.500718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.426 [2024-04-24 15:20:57.500805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.426 [2024-04-24 15:20:57.500820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.426 [2024-04-24 15:20:57.518463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.426 [2024-04-24 15:20:57.518535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.426 [2024-04-24 15:20:57.518550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.426 [2024-04-24 15:20:57.536286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.426 [2024-04-24 15:20:57.536360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.427 [2024-04-24 15:20:57.536376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.427 [2024-04-24 15:20:57.554061] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.427 [2024-04-24 15:20:57.554131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.427 [2024-04-24 15:20:57.554148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.427 [2024-04-24 15:20:57.572032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.427 [2024-04-24 15:20:57.572087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.427 [2024-04-24 15:20:57.572103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.427 [2024-04-24 15:20:57.589933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.427 [2024-04-24 15:20:57.589998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.427 [2024-04-24 15:20:57.590014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.427 [2024-04-24 15:20:57.607125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.427 [2024-04-24 15:20:57.607179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.427 
[2024-04-24 15:20:57.607194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.427 [2024-04-24 15:20:57.624253] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.427 [2024-04-24 15:20:57.624307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.427 [2024-04-24 15:20:57.624322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.427 [2024-04-24 15:20:57.641391] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.427 [2024-04-24 15:20:57.641462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.427 [2024-04-24 15:20:57.641478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.427 [2024-04-24 15:20:57.658543] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.427 [2024-04-24 15:20:57.658602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.427 [2024-04-24 15:20:57.658616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.685 [2024-04-24 15:20:57.675652] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.685 [2024-04-24 15:20:57.675716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.685 [2024-04-24 15:20:57.675731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.685 [2024-04-24 15:20:57.692708] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.685 [2024-04-24 15:20:57.692768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.685 [2024-04-24 15:20:57.692783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.685 [2024-04-24 15:20:57.709834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.685 [2024-04-24 15:20:57.709886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.685 [2024-04-24 15:20:57.709901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.685 [2024-04-24 15:20:57.726915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.685 [2024-04-24 15:20:57.726961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3324 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.685 [2024-04-24 15:20:57.726975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.685 [2024-04-24 15:20:57.743929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.685 [2024-04-24 15:20:57.743972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.685 [2024-04-24 15:20:57.743986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.685 [2024-04-24 15:20:57.761049] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.685 [2024-04-24 15:20:57.761095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.685 [2024-04-24 15:20:57.761110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.685 [2024-04-24 15:20:57.778063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.685 [2024-04-24 15:20:57.778108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.685 [2024-04-24 15:20:57.778122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.685 [2024-04-24 15:20:57.795075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.685 [2024-04-24 15:20:57.795128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.685 [2024-04-24 15:20:57.795143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.685 [2024-04-24 15:20:57.812179] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.685 [2024-04-24 15:20:57.812238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.685 [2024-04-24 15:20:57.812252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.685 [2024-04-24 15:20:57.829385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.685 [2024-04-24 15:20:57.829456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.685 [2024-04-24 15:20:57.829472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.685 [2024-04-24 15:20:57.846483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.685 [2024-04-24 15:20:57.846535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:46 nsid:1 lba:24035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.685 [2024-04-24 15:20:57.846549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.685 [2024-04-24 15:20:57.863662] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.685 [2024-04-24 15:20:57.863723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.686 [2024-04-24 15:20:57.863738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.686 [2024-04-24 15:20:57.880859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.686 [2024-04-24 15:20:57.880920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.686 [2024-04-24 15:20:57.880935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.686 [2024-04-24 15:20:57.898009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.686 [2024-04-24 15:20:57.898065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.686 [2024-04-24 15:20:57.898080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.686 [2024-04-24 15:20:57.915078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.686 [2024-04-24 15:20:57.915131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.686 [2024-04-24 15:20:57.915146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.944 [2024-04-24 15:20:57.932135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.944 [2024-04-24 15:20:57.932188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.944 [2024-04-24 15:20:57.932203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.944 [2024-04-24 15:20:57.949207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.944 [2024-04-24 15:20:57.949258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.944 [2024-04-24 15:20:57.949273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.944 [2024-04-24 15:20:57.966377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.944 [2024-04-24 15:20:57.966466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.944 [2024-04-24 15:20:57.966486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.944 [2024-04-24 15:20:57.983535] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.944 [2024-04-24 15:20:57.983604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.944 [2024-04-24 15:20:57.983619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.944 [2024-04-24 15:20:58.000685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.944 [2024-04-24 15:20:58.000751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.944 [2024-04-24 15:20:58.000766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.944 [2024-04-24 15:20:58.017800] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.944 [2024-04-24 15:20:58.017851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.944 [2024-04-24 15:20:58.017865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.944 [2024-04-24 15:20:58.034583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf37460) 00:16:48.944 [2024-04-24 15:20:58.034640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:48.944 [2024-04-24 15:20:58.034655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:48.944 00:16:48.944 Latency(us) 00:16:48.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.944 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:48.944 nvme0n1 : 2.01 14612.07 57.08 0.00 0.00 8754.63 8102.63 33125.47 00:16:48.944 =================================================================================================================== 00:16:48.944 Total : 14612.07 57.08 0.00 0.00 8754.63 8102.63 33125.47 00:16:48.944 0 00:16:48.944 15:20:58 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:48.944 15:20:58 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:48.944 15:20:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:48.944 15:20:58 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:48.944 | .driver_specific 00:16:48.944 | .nvme_error 00:16:48.944 | .status_code 00:16:48.944 | .command_transient_transport_error' 00:16:49.203 15:20:58 -- host/digest.sh@71 -- # (( 115 > 0 )) 00:16:49.203 15:20:58 -- host/digest.sh@73 -- # killprocess 76812 00:16:49.203 15:20:58 -- 
common/autotest_common.sh@936 -- # '[' -z 76812 ']' 00:16:49.203 15:20:58 -- common/autotest_common.sh@940 -- # kill -0 76812 00:16:49.203 15:20:58 -- common/autotest_common.sh@941 -- # uname 00:16:49.203 15:20:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:49.203 15:20:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76812 00:16:49.203 killing process with pid 76812 00:16:49.203 Received shutdown signal, test time was about 2.000000 seconds 00:16:49.203 00:16:49.203 Latency(us) 00:16:49.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.203 =================================================================================================================== 00:16:49.203 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:49.203 15:20:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:49.203 15:20:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:49.203 15:20:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76812' 00:16:49.203 15:20:58 -- common/autotest_common.sh@955 -- # kill 76812 00:16:49.203 15:20:58 -- common/autotest_common.sh@960 -- # wait 76812 00:16:49.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:49.461 15:20:58 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:16:49.461 15:20:58 -- host/digest.sh@54 -- # local rw bs qd 00:16:49.461 15:20:58 -- host/digest.sh@56 -- # rw=randread 00:16:49.461 15:20:58 -- host/digest.sh@56 -- # bs=131072 00:16:49.461 15:20:58 -- host/digest.sh@56 -- # qd=16 00:16:49.461 15:20:58 -- host/digest.sh@58 -- # bperfpid=76877 00:16:49.461 15:20:58 -- host/digest.sh@60 -- # waitforlisten 76877 /var/tmp/bperf.sock 00:16:49.461 15:20:58 -- common/autotest_common.sh@817 -- # '[' -z 76877 ']' 00:16:49.461 15:20:58 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:16:49.461 15:20:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:49.461 15:20:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:49.461 15:20:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:49.461 15:20:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:49.461 15:20:58 -- common/autotest_common.sh@10 -- # set +x 00:16:49.461 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:49.461 Zero copy mechanism will not be used. 00:16:49.461 [2024-04-24 15:20:58.660101] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:16:49.461 [2024-04-24 15:20:58.660246] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76877 ] 00:16:49.718 [2024-04-24 15:20:58.806992] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.718 [2024-04-24 15:20:58.923236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.654 15:20:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:50.654 15:20:59 -- common/autotest_common.sh@850 -- # return 0 00:16:50.654 15:20:59 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:50.654 15:20:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:50.654 15:20:59 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:50.654 15:20:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.654 15:20:59 -- common/autotest_common.sh@10 -- # set +x 00:16:50.654 15:20:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.654 15:20:59 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:50.654 15:20:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:50.912 nvme0n1 00:16:50.912 15:21:00 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:50.912 15:21:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.912 15:21:00 -- common/autotest_common.sh@10 -- # set +x 00:16:51.172 15:21:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.172 15:21:00 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:51.172 15:21:00 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:51.172 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:51.172 Zero copy mechanism will not be used. 00:16:51.172 Running I/O for 2 seconds... 
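For reference, the transient-error check performed by host/digest.sh after each bperf run (the "(( 115 > 0 ))" assertion traced above) is driven by the same bdev_get_iostat RPC and jq filter shown earlier in this log; a minimal sketch of that query against the bperf socket, reusing the paths and bdev name (nvme0n1) exactly as traced above, is:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

A nonzero count from this query is what the digest test treats as success for the crc32c corrupt-injection case, since each injected data digest error is completed back to the host as a COMMAND TRANSIENT TRANSPORT ERROR.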
00:16:51.172 [2024-04-24 15:21:00.267710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.267776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.267794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.271996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.272041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.272057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.276313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.276357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.276373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.280531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.280573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.280588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.284929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.284971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.284986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.289274] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.289316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.289331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.293644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.293685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.293700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.297948] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.297990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.298005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.302328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.302370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.302384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.306715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.306757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.306771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.311106] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.311148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.311163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.315337] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.315380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.315395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.319717] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.319759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.319774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.324069] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.324111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.324125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.328364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.328406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.328421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.332649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.332690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.332705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.336991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.337031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.337046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.341329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.341371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.341386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.345733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.345774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.345789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.350083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.172 [2024-04-24 15:21:00.350125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.172 [2024-04-24 15:21:00.350140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.172 [2024-04-24 15:21:00.354507] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.173 [2024-04-24 15:21:00.354542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:51.173 [2024-04-24 15:21:00.354555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.173 [2024-04-24 15:21:00.358735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.173 [2024-04-24 15:21:00.358771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.173 [2024-04-24 15:21:00.358785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.173 [2024-04-24 15:21:00.363110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.173 [2024-04-24 15:21:00.363147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.173 [2024-04-24 15:21:00.363161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.173 [2024-04-24 15:21:00.367503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.173 [2024-04-24 15:21:00.367539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.173 [2024-04-24 15:21:00.367553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.173 [2024-04-24 15:21:00.371865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.173 [2024-04-24 15:21:00.371903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.173 [2024-04-24 15:21:00.371916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.173 [2024-04-24 15:21:00.376251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.173 [2024-04-24 15:21:00.376289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.173 [2024-04-24 15:21:00.376303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.173 [2024-04-24 15:21:00.380553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.173 [2024-04-24 15:21:00.380589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.173 [2024-04-24 15:21:00.380602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.173 [2024-04-24 15:21:00.384899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.173 [2024-04-24 15:21:00.384934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.173 [2024-04-24 15:21:00.384947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.173 [2024-04-24 15:21:00.389229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.173 [2024-04-24 15:21:00.389266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.173 [2024-04-24 15:21:00.389279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.173 [2024-04-24 15:21:00.393596] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.173 [2024-04-24 15:21:00.393633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.173 [2024-04-24 15:21:00.393646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.173 [2024-04-24 15:21:00.397961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.173 [2024-04-24 15:21:00.397999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.173 [2024-04-24 15:21:00.398013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.173 [2024-04-24 15:21:00.402349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.173 [2024-04-24 15:21:00.402387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.173 [2024-04-24 15:21:00.402400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.173 [2024-04-24 15:21:00.406730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.173 [2024-04-24 15:21:00.406766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.173 [2024-04-24 15:21:00.406780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.173 [2024-04-24 15:21:00.411009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.173 [2024-04-24 15:21:00.411046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.173 [2024-04-24 15:21:00.411059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.173 [2024-04-24 15:21:00.415341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.173 [2024-04-24 15:21:00.415378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.173 [2024-04-24 15:21:00.415391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.433 [2024-04-24 15:21:00.419656] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.433 [2024-04-24 15:21:00.419693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.433 [2024-04-24 15:21:00.419706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.433 [2024-04-24 15:21:00.424034] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.433 [2024-04-24 15:21:00.424070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.433 [2024-04-24 15:21:00.424084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.433 [2024-04-24 15:21:00.428383] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.433 [2024-04-24 15:21:00.428420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.433 [2024-04-24 15:21:00.428449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.433 [2024-04-24 15:21:00.432745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.433 [2024-04-24 15:21:00.432780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.433 [2024-04-24 15:21:00.432793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.433 [2024-04-24 15:21:00.437081] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.433 [2024-04-24 15:21:00.437117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.433 [2024-04-24 15:21:00.437131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.433 [2024-04-24 15:21:00.441489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.433 [2024-04-24 15:21:00.441524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.433 [2024-04-24 15:21:00.441538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.433 [2024-04-24 15:21:00.445824] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 
00:16:51.433 [2024-04-24 15:21:00.445862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.433 [2024-04-24 15:21:00.445876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.433 [2024-04-24 15:21:00.450157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.433 [2024-04-24 15:21:00.450194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.433 [2024-04-24 15:21:00.450208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.433 [2024-04-24 15:21:00.454622] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.433 [2024-04-24 15:21:00.454681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.433 [2024-04-24 15:21:00.454697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.433 [2024-04-24 15:21:00.459054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.433 [2024-04-24 15:21:00.459092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.433 [2024-04-24 15:21:00.459105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.433 [2024-04-24 15:21:00.463388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.433 [2024-04-24 15:21:00.463438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.433 [2024-04-24 15:21:00.463453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.433 [2024-04-24 15:21:00.467790] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.433 [2024-04-24 15:21:00.467830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.433 [2024-04-24 15:21:00.467843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.433 [2024-04-24 15:21:00.472120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.433 [2024-04-24 15:21:00.472162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.433 [2024-04-24 15:21:00.472176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.433 [2024-04-24 15:21:00.476397] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530)
00:16:51.433 [2024-04-24 15:21:00.476452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:51.433 [2024-04-24 15:21:00.476467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:16:51.433 [2024-04-24 15:21:00.480962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530)
00:16:51.433 [2024-04-24 15:21:00.481012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:51.433 [2024-04-24 15:21:00.481028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line sequence — nvme_tcp.c:1447 data digest error on tqpair=(0x213e530), nvme_qpair.c:243 READ sqid:1 cid:15 nsid:1 at a new lba, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) — repeats for many further reads through 15:21:01.127600; repeated entries omitted ...]
00:16:51.958 [2024-04-24 15:21:01.131824] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530)
00:16:51.958 [2024-04-24 15:21:01.131862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-04-24 15:21:01.131875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.958 [2024-04-24 15:21:01.136167] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.958 [2024-04-24 15:21:01.136204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-04-24 15:21:01.136218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.958 [2024-04-24 15:21:01.140522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.958 [2024-04-24 15:21:01.140558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-04-24 15:21:01.140571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.958 [2024-04-24 15:21:01.144867] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.958 [2024-04-24 15:21:01.144902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-04-24 15:21:01.144915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.958 [2024-04-24 15:21:01.149267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.958 [2024-04-24 15:21:01.149305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-04-24 15:21:01.149319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.958 [2024-04-24 15:21:01.153593] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.958 [2024-04-24 15:21:01.153629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-04-24 15:21:01.153644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.958 [2024-04-24 15:21:01.157818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.958 [2024-04-24 15:21:01.157854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-04-24 15:21:01.157867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.958 [2024-04-24 15:21:01.162100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.958 [2024-04-24 15:21:01.162137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-04-24 15:21:01.162151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.958 [2024-04-24 15:21:01.166382] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.958 [2024-04-24 15:21:01.166419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-04-24 15:21:01.166445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.958 [2024-04-24 15:21:01.170652] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.958 [2024-04-24 15:21:01.170687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-04-24 15:21:01.170701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.958 [2024-04-24 15:21:01.174960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.958 [2024-04-24 15:21:01.174997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-04-24 15:21:01.175011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.958 [2024-04-24 15:21:01.179326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.958 [2024-04-24 15:21:01.179362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-04-24 15:21:01.179376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.958 [2024-04-24 15:21:01.183632] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.958 [2024-04-24 15:21:01.183669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-04-24 15:21:01.183682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.958 [2024-04-24 15:21:01.187974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.958 [2024-04-24 15:21:01.188011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-04-24 15:21:01.188024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.958 [2024-04-24 15:21:01.192208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 
00:16:51.958 [2024-04-24 15:21:01.192252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-04-24 15:21:01.192266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.958 [2024-04-24 15:21:01.196465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:51.958 [2024-04-24 15:21:01.196503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-04-24 15:21:01.196517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.218 [2024-04-24 15:21:01.200891] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.218 [2024-04-24 15:21:01.200927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.218 [2024-04-24 15:21:01.200941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.218 [2024-04-24 15:21:01.205301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.218 [2024-04-24 15:21:01.205340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.218 [2024-04-24 15:21:01.205354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.218 [2024-04-24 15:21:01.209571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.218 [2024-04-24 15:21:01.209608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.218 [2024-04-24 15:21:01.209621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.218 [2024-04-24 15:21:01.213881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.218 [2024-04-24 15:21:01.213919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.218 [2024-04-24 15:21:01.213933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.218 [2024-04-24 15:21:01.218166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.218 [2024-04-24 15:21:01.218202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.218 [2024-04-24 15:21:01.218216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.222523] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.222558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.222572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.226681] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.226718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.226732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.230934] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.230971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.230984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.235350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.235387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.235401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.239675] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.239712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.239725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.244034] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.244072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.244085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.248449] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.248484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.248497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.252854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.252889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.252903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.257134] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.257170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.257183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.261558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.261594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.261607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.265839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.265875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.265889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.270155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.270192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.270205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.274530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.274566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.274581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.278878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.278914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.278927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.283223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.283260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.283274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.287630] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.287670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.287685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.292028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.292066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.292079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.296346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.296385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.296400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.300671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.300709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.300723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.305005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.305043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.305056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.309397] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.309449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.309464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.313669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.313709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.313723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.317884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.317921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.317935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.322170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.322207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.322220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.326497] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.326533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.326547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.330930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.330968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.330981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.219 [2024-04-24 15:21:01.335295] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.219 [2024-04-24 15:21:01.335335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.219 [2024-04-24 15:21:01.335348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.339717] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.339754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:52.220 [2024-04-24 15:21:01.339768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.344043] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.344081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.344094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.348390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.348443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.348458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.352716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.352762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.352776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.357000] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.357036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.357049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.361349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.361385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.361399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.365625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.365660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.365674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.369994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.370032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.370045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.374393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.374444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.374459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.378690] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.378726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.378739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.383007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.383044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.383057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.387385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.387424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.387454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.391709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.391746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.391759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.395987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.396023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.396036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.400379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.400417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.400447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.404709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.404752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.404766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.408936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.408971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.408984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.413244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.413280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.413293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.417646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.417682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.417696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.421923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.421959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.421972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.426204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.426241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.426254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.430483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 
00:16:52.220 [2024-04-24 15:21:01.430518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.430532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.434783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.434820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.434833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.439077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.439114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.439128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.443319] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.443354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.443367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.447739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.447774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.447788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.220 [2024-04-24 15:21:01.452131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.220 [2024-04-24 15:21:01.452167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.220 [2024-04-24 15:21:01.452180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.221 [2024-04-24 15:21:01.456495] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.221 [2024-04-24 15:21:01.456531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.221 [2024-04-24 15:21:01.456545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.221 [2024-04-24 15:21:01.460820] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.221 [2024-04-24 15:21:01.460855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.221 [2024-04-24 15:21:01.460869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.481 [2024-04-24 15:21:01.465170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.481 [2024-04-24 15:21:01.465208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.481 [2024-04-24 15:21:01.465221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.481 [2024-04-24 15:21:01.469578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.481 [2024-04-24 15:21:01.469615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.481 [2024-04-24 15:21:01.469629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.481 [2024-04-24 15:21:01.473866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.481 [2024-04-24 15:21:01.473903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.481 [2024-04-24 15:21:01.473916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.481 [2024-04-24 15:21:01.478219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.481 [2024-04-24 15:21:01.478256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.481 [2024-04-24 15:21:01.478270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.481 [2024-04-24 15:21:01.482633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.481 [2024-04-24 15:21:01.482669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.481 [2024-04-24 15:21:01.482682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.481 [2024-04-24 15:21:01.486913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.481 [2024-04-24 15:21:01.486953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.481 [2024-04-24 15:21:01.486966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:16:52.481 [2024-04-24 15:21:01.491211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.481 [2024-04-24 15:21:01.491249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.481 [2024-04-24 15:21:01.491262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.481 [2024-04-24 15:21:01.495549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.481 [2024-04-24 15:21:01.495585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.481 [2024-04-24 15:21:01.495598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.481 [2024-04-24 15:21:01.499888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.481 [2024-04-24 15:21:01.499926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.481 [2024-04-24 15:21:01.499941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.481 [2024-04-24 15:21:01.504240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.481 [2024-04-24 15:21:01.504281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.481 [2024-04-24 15:21:01.504295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.481 [2024-04-24 15:21:01.508591] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.481 [2024-04-24 15:21:01.508630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.481 [2024-04-24 15:21:01.508645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.481 [2024-04-24 15:21:01.513004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.481 [2024-04-24 15:21:01.513043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.481 [2024-04-24 15:21:01.513057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.481 [2024-04-24 15:21:01.517293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.481 [2024-04-24 15:21:01.517332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.481 [2024-04-24 15:21:01.517346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.521622] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.521660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.521674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.525976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.526015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.526028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.530296] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.530335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.530349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.534598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.534635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.534648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.538949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.538990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.539004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.543317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.543355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.543369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.547646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.547683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.547697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.551969] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.552006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.552020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.556351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.556390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.556403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.560778] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.560818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.560833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.565132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.565168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.565182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.569540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.569576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.569590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.573849] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.573887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.573900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.578154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.578191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:52.482 [2024-04-24 15:21:01.578204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.582517] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.582554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.582568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.586890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.586927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.586940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.591246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.591284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.591297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.595677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.595713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.595727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.600056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.600095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.600109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.604408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.604458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.604472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.608665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.608700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.608714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.612975] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.613011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.613024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.617254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.617291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.617304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.621454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.621490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.621503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.625840] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.625876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.625889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.630152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.630188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.630201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.482 [2024-04-24 15:21:01.634506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.482 [2024-04-24 15:21:01.634542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.482 [2024-04-24 15:21:01.634555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.638755] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.638792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.638805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.643079] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.643115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.643129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.647448] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.647482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.647495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.651769] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.651804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.651817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.656129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.656165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.656179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.660540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.660575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.660588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.664864] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.664899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.664912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.669182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 
00:16:52.483 [2024-04-24 15:21:01.669217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.669230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.673580] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.673615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.673628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.677917] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.677953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.677967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.682153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.682189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.682203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.686555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.686588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.686601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.690746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.690781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.690795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.695151] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.695187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.695200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.699497] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.699532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.699546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.703883] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.703919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.703933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.708170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.708206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.708218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.712559] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.712594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.712607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.716878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.716912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.716926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.483 [2024-04-24 15:21:01.721247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.483 [2024-04-24 15:21:01.721283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.483 [2024-04-24 15:21:01.721296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.725596] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.725632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.725646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.729907] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.729944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.729957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.734169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.734207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.734220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.738419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.738467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.738480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.742659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.742694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.742708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.746854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.746894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.746908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.751239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.751278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.751291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.755717] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.755754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.755769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.760025] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.760065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.760079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.764472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.764508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.764522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.768679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.768714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.768727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.773067] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.773105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.773119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.777450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.777486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.777500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.781755] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.781792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.781806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.786084] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.786122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.786135] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.790415] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.790466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.790479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.794747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.794785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.794798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.799075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.799118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.799132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.803534] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.803574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.803588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.807833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.807870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.807883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.812184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.812222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.812236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.816422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.816472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.816486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.820741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.820778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.820792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.825045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.825082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.744 [2024-04-24 15:21:01.825095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.744 [2024-04-24 15:21:01.829381] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.744 [2024-04-24 15:21:01.829419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.829448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.833696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.833734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.833748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.837904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.837940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.837953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.842185] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.842223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.842236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.846533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.846571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.846585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.850817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.850855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.850869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.855205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.855242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.855256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.859580] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.859617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.859631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.863971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.864011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.864026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.868227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.868269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.868283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.872631] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.872667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.872681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.876982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.877018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.877032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.881355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.881392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.881405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.885743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.885781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.885795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.890030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.890066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.890080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.894463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.894499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.894513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.898792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.898828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.898841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.903032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.903070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.903083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.907370] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 
00:16:52.745 [2024-04-24 15:21:01.907406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.907420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.911765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.911800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.911814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.916038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.916073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.916087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.920335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.920371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.920384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.924680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.924717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.924737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.929022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.929058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.929073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.933399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.933451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.933466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.937806] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.937841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.937854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.942106] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.942142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.745 [2024-04-24 15:21:01.942155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.745 [2024-04-24 15:21:01.946477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.745 [2024-04-24 15:21:01.946514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.746 [2024-04-24 15:21:01.946528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.746 [2024-04-24 15:21:01.950759] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.746 [2024-04-24 15:21:01.950796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.746 [2024-04-24 15:21:01.950810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.746 [2024-04-24 15:21:01.954985] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.746 [2024-04-24 15:21:01.955023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.746 [2024-04-24 15:21:01.955037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.746 [2024-04-24 15:21:01.959300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.746 [2024-04-24 15:21:01.959337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.746 [2024-04-24 15:21:01.959352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.746 [2024-04-24 15:21:01.963655] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.746 [2024-04-24 15:21:01.963692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.746 [2024-04-24 15:21:01.963706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:16:52.746 [2024-04-24 15:21:01.967970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.746 [2024-04-24 15:21:01.968009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.746 [2024-04-24 15:21:01.968023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:52.746 [2024-04-24 15:21:01.972459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.746 [2024-04-24 15:21:01.972498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.746 [2024-04-24 15:21:01.972512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:52.746 [2024-04-24 15:21:01.976801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.746 [2024-04-24 15:21:01.976840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.746 [2024-04-24 15:21:01.976854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:52.746 [2024-04-24 15:21:01.981121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.746 [2024-04-24 15:21:01.981158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.746 [2024-04-24 15:21:01.981171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:52.746 [2024-04-24 15:21:01.985498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:52.746 [2024-04-24 15:21:01.985536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.746 [2024-04-24 15:21:01.985550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:01.989888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:01.989926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:01.989940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:01.994201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:01.994238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:01.994251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:01.998577] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:01.998615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:01.998628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.002962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.003001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.003014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.007326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.007362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.007376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.011652] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.011689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.011703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.015986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.016024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.016037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.020351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.020388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.020402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.024649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.024684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.024697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.028947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.028983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.028997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.033326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.033366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.033379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.037558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.037595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.037608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.041867] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.041906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.041919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.046147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.046186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.046199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.050549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.050584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.050598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.055018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.055057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:53.007 [2024-04-24 15:21:02.055070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.059383] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.059422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.059450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.063771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.063808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.063821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.068030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.068067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.068081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.072358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.072396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.072409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.076678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.076716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.076729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.080991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.081028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.081041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.085384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.085421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.007 [2024-04-24 15:21:02.085448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.007 [2024-04-24 15:21:02.089577] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.007 [2024-04-24 15:21:02.089613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.089626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.093861] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.093899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.093913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.098176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.098213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.098227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.102542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.102578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.102592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.106824] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.106862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.106875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.111273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.111310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.111323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.115736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.115775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.115788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.120018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.120054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.120067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.124318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.124356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.124369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.128693] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.128739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.128754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.133060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.133096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.133109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.137472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.137509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.137523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.141774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.141810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.141824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.146043] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 
00:16:53.008 [2024-04-24 15:21:02.146080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.146094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.150293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.150330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.150343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.154665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.154703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.154717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.159080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.159123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.159137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.163525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.163562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.163576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.167694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.167729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.167742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.172074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.172111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.172125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.176451] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.176489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.176502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.180801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.180837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.180851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.185220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.185256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.185270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.189585] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.189623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.189637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.193788] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.193825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.193839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.008 [2024-04-24 15:21:02.198197] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.008 [2024-04-24 15:21:02.198235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.008 [2024-04-24 15:21:02.198248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.009 [2024-04-24 15:21:02.202622] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.009 [2024-04-24 15:21:02.202658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-04-24 15:21:02.202672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:16:53.009 [2024-04-24 15:21:02.207046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.009 [2024-04-24 15:21:02.207082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-04-24 15:21:02.207095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.009 [2024-04-24 15:21:02.211509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.009 [2024-04-24 15:21:02.211545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-04-24 15:21:02.211559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.009 [2024-04-24 15:21:02.215795] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.009 [2024-04-24 15:21:02.215832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-04-24 15:21:02.215845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.009 [2024-04-24 15:21:02.220160] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.009 [2024-04-24 15:21:02.220200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-04-24 15:21:02.220213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.009 [2024-04-24 15:21:02.224477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.009 [2024-04-24 15:21:02.224514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-04-24 15:21:02.224527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.009 [2024-04-24 15:21:02.228907] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.009 [2024-04-24 15:21:02.228944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-04-24 15:21:02.228957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.009 [2024-04-24 15:21:02.233189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.009 [2024-04-24 15:21:02.233225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-04-24 15:21:02.233239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.009 [2024-04-24 15:21:02.237512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.009 [2024-04-24 15:21:02.237548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-04-24 15:21:02.237561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.009 [2024-04-24 15:21:02.241918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.009 [2024-04-24 15:21:02.241956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-04-24 15:21:02.241969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.009 [2024-04-24 15:21:02.246299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.009 [2024-04-24 15:21:02.246336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.009 [2024-04-24 15:21:02.246350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:53.268 [2024-04-24 15:21:02.250657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.268 [2024-04-24 15:21:02.250694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.268 [2024-04-24 15:21:02.250708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:53.268 [2024-04-24 15:21:02.254992] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.268 [2024-04-24 15:21:02.255029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.268 [2024-04-24 15:21:02.255042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:53.268 [2024-04-24 15:21:02.259244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x213e530) 00:16:53.268 [2024-04-24 15:21:02.259282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.268 [2024-04-24 15:21:02.259295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.268 00:16:53.268 Latency(us) 00:16:53.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.268 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:53.268 nvme0n1 : 2.00 7017.21 877.15 0.00 0.00 2276.55 2010.76 7417.48 00:16:53.268 
=================================================================================================================== 00:16:53.268 Total : 7017.21 877.15 0.00 0.00 2276.55 2010.76 7417.48 00:16:53.268 0 00:16:53.268 15:21:02 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:53.268 15:21:02 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:53.268 15:21:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:53.268 15:21:02 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:53.268 | .driver_specific 00:16:53.268 | .nvme_error 00:16:53.268 | .status_code 00:16:53.268 | .command_transient_transport_error' 00:16:53.527 15:21:02 -- host/digest.sh@71 -- # (( 453 > 0 )) 00:16:53.527 15:21:02 -- host/digest.sh@73 -- # killprocess 76877 00:16:53.527 15:21:02 -- common/autotest_common.sh@936 -- # '[' -z 76877 ']' 00:16:53.527 15:21:02 -- common/autotest_common.sh@940 -- # kill -0 76877 00:16:53.527 15:21:02 -- common/autotest_common.sh@941 -- # uname 00:16:53.527 15:21:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:53.527 15:21:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76877 00:16:53.527 15:21:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:53.527 15:21:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:53.527 killing process with pid 76877 00:16:53.527 15:21:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76877' 00:16:53.527 Received shutdown signal, test time was about 2.000000 seconds 00:16:53.527 00:16:53.527 Latency(us) 00:16:53.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.528 =================================================================================================================== 00:16:53.528 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:53.528 15:21:02 -- common/autotest_common.sh@955 -- # kill 76877 00:16:53.528 15:21:02 -- common/autotest_common.sh@960 -- # wait 76877 00:16:53.785 15:21:02 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:16:53.785 15:21:02 -- host/digest.sh@54 -- # local rw bs qd 00:16:53.785 15:21:02 -- host/digest.sh@56 -- # rw=randwrite 00:16:53.785 15:21:02 -- host/digest.sh@56 -- # bs=4096 00:16:53.785 15:21:02 -- host/digest.sh@56 -- # qd=128 00:16:53.785 15:21:02 -- host/digest.sh@58 -- # bperfpid=76933 00:16:53.785 15:21:02 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:16:53.785 15:21:02 -- host/digest.sh@60 -- # waitforlisten 76933 /var/tmp/bperf.sock 00:16:53.785 15:21:02 -- common/autotest_common.sh@817 -- # '[' -z 76933 ']' 00:16:53.785 15:21:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:53.785 15:21:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:53.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:53.786 15:21:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:53.786 15:21:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:53.786 15:21:02 -- common/autotest_common.sh@10 -- # set +x 00:16:53.786 [2024-04-24 15:21:02.896276] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:16:53.786 [2024-04-24 15:21:02.896384] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76933 ] 00:16:54.054 [2024-04-24 15:21:03.034031] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.054 [2024-04-24 15:21:03.151562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.986 15:21:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:54.986 15:21:03 -- common/autotest_common.sh@850 -- # return 0 00:16:54.986 15:21:03 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:54.986 15:21:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:54.986 15:21:04 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:54.986 15:21:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.986 15:21:04 -- common/autotest_common.sh@10 -- # set +x 00:16:54.986 15:21:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.986 15:21:04 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:54.986 15:21:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:55.244 nvme0n1 00:16:55.244 15:21:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:55.244 15:21:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:55.244 15:21:04 -- common/autotest_common.sh@10 -- # set +x 00:16:55.244 15:21:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:55.244 15:21:04 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:55.244 15:21:04 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:55.502 Running I/O for 2 seconds... 
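The trace above captures the sequence host/digest.sh repeats for each error-injection run: launch bdevperf in wait mode on a private RPC socket, configure the NVMe bdev layer to collect error statistics and retry I/O indefinitely, attach the TCP controller with data digest enabled, arm the CRC32C error injection, drive I/O for two seconds, and then read the transient-transport-error counter out of the bdev I/O statistics. Below is a minimal sketch of that flow, reusing only the binary paths, socket, flags, and target address visible in this trace (other setups will differ); the accel_error_inject_error call is shown without the bperf socket because the harness issues it through rpc_cmd, which appears to address the target application's default RPC socket rather than bperf.sock.

  # Start bdevperf in wait mode (-z) on its own RPC socket (flags taken from the trace).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # (the harness waits for /var/tmp/bperf.sock to come up before issuing the calls below)
  # Collect NVMe error statistics and retry I/O indefinitely (flags from the trace).
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the TCP controller with data digest (--ddgst) enabled.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Arm CRC32C error injection (type corrupt, -i 256, as in the trace); issued via rpc_cmd
  # in the harness, so no bperf socket is passed here.
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  # Run the configured workload, then read back the transient transport error count.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  $rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The run is treated as good when that counter is non-zero, which is the (( 453 > 0 )) check visible after the randread pass earlier in this log; the flood of data digest / COMMAND TRANSIENT TRANSPORT ERROR messages that follows is the expected effect of the injected corruption, not a test failure.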
00:16:55.502 [2024-04-24 15:21:04.573376] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190fef90 00:16:55.502 [2024-04-24 15:21:04.576020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.502 [2024-04-24 15:21:04.576075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:55.502 [2024-04-24 15:21:04.590000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190feb58 00:16:55.502 [2024-04-24 15:21:04.592601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.502 [2024-04-24 15:21:04.592639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:55.502 [2024-04-24 15:21:04.606383] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190fe2e8 00:16:55.502 [2024-04-24 15:21:04.608965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.502 [2024-04-24 15:21:04.609005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:55.502 [2024-04-24 15:21:04.622717] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190fda78 00:16:55.502 [2024-04-24 15:21:04.625236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.502 [2024-04-24 15:21:04.625275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:55.502 [2024-04-24 15:21:04.638900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190fd208 00:16:55.502 [2024-04-24 15:21:04.641406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.502 [2024-04-24 15:21:04.641455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:55.502 [2024-04-24 15:21:04.655338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190fc998 00:16:55.502 [2024-04-24 15:21:04.657877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.502 [2024-04-24 15:21:04.657916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:55.502 [2024-04-24 15:21:04.672255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190fc128 00:16:55.502 [2024-04-24 15:21:04.674745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.502 [2024-04-24 15:21:04.674783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:16:55.502 [2024-04-24 15:21:04.688510] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190fb8b8 00:16:55.502 [2024-04-24 15:21:04.690964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.502 [2024-04-24 15:21:04.691003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:55.502 [2024-04-24 15:21:04.704681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190fb048 00:16:55.502 [2024-04-24 15:21:04.707091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.502 [2024-04-24 15:21:04.707127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:55.502 [2024-04-24 15:21:04.721093] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190fa7d8 00:16:55.502 [2024-04-24 15:21:04.723553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.502 [2024-04-24 15:21:04.723592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:55.502 [2024-04-24 15:21:04.737593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f9f68 00:16:55.502 [2024-04-24 15:21:04.739984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.502 [2024-04-24 15:21:04.740023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:55.760 [2024-04-24 15:21:04.754044] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f96f8 00:16:55.760 [2024-04-24 15:21:04.756466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.760 [2024-04-24 15:21:04.756513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:55.760 [2024-04-24 15:21:04.770816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f8e88 00:16:55.760 [2024-04-24 15:21:04.773238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.760 [2024-04-24 15:21:04.773276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:55.760 [2024-04-24 15:21:04.787180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f8618 00:16:55.760 [2024-04-24 15:21:04.789520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.760 [2024-04-24 15:21:04.789559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:16:55.760 [2024-04-24 15:21:04.803393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f7da8 00:16:55.760 [2024-04-24 15:21:04.805707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.760 [2024-04-24 15:21:04.805745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:55.760 [2024-04-24 15:21:04.819579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f7538 00:16:55.760 [2024-04-24 15:21:04.821863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.760 [2024-04-24 15:21:04.821900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:55.760 [2024-04-24 15:21:04.835793] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f6cc8 00:16:55.760 [2024-04-24 15:21:04.838072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.760 [2024-04-24 15:21:04.838111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:55.760 [2024-04-24 15:21:04.852159] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f6458 00:16:55.760 [2024-04-24 15:21:04.854445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.760 [2024-04-24 15:21:04.854482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:55.760 [2024-04-24 15:21:04.868733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f5be8 00:16:55.760 [2024-04-24 15:21:04.870979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.760 [2024-04-24 15:21:04.871020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:55.760 [2024-04-24 15:21:04.885249] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f5378 00:16:55.760 [2024-04-24 15:21:04.887504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.760 [2024-04-24 15:21:04.887544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:55.760 [2024-04-24 15:21:04.901980] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f4b08 00:16:55.761 [2024-04-24 15:21:04.904234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.761 [2024-04-24 15:21:04.904273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:55.761 [2024-04-24 15:21:04.918670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f4298 00:16:55.761 [2024-04-24 15:21:04.920858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.761 [2024-04-24 15:21:04.920896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:55.761 [2024-04-24 15:21:04.935158] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f3a28 00:16:55.761 [2024-04-24 15:21:04.937332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.761 [2024-04-24 15:21:04.937373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:55.761 [2024-04-24 15:21:04.951484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f31b8 00:16:55.761 [2024-04-24 15:21:04.953613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.761 [2024-04-24 15:21:04.953651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:55.761 [2024-04-24 15:21:04.967653] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f2948 00:16:55.761 [2024-04-24 15:21:04.969754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.761 [2024-04-24 15:21:04.969794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:55.761 [2024-04-24 15:21:04.983942] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f20d8 00:16:55.761 [2024-04-24 15:21:04.986068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.761 [2024-04-24 15:21:04.986108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:55.761 [2024-04-24 15:21:05.000309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f1868 00:16:55.761 [2024-04-24 15:21:05.002412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.761 [2024-04-24 15:21:05.002460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:56.019 [2024-04-24 15:21:05.016576] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f0ff8 00:16:56.019 [2024-04-24 15:21:05.018636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.019 [2024-04-24 15:21:05.018674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:56.019 [2024-04-24 15:21:05.033024] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f0788 00:16:56.019 [2024-04-24 15:21:05.035053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.019 [2024-04-24 15:21:05.035091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:56.019 [2024-04-24 15:21:05.049315] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190eff18 00:16:56.019 [2024-04-24 15:21:05.051327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.019 [2024-04-24 15:21:05.051365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:56.019 [2024-04-24 15:21:05.065672] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190ef6a8 00:16:56.019 [2024-04-24 15:21:05.067648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.019 [2024-04-24 15:21:05.067689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:56.019 [2024-04-24 15:21:05.082042] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190eee38 00:16:56.019 [2024-04-24 15:21:05.084020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.019 [2024-04-24 15:21:05.084060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:56.019 [2024-04-24 15:21:05.098500] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190ee5c8 00:16:56.019 [2024-04-24 15:21:05.100453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.019 [2024-04-24 15:21:05.100491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:56.019 [2024-04-24 15:21:05.114931] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190edd58 00:16:56.019 [2024-04-24 15:21:05.116885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.019 [2024-04-24 15:21:05.116924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:56.019 [2024-04-24 15:21:05.131411] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190ed4e8 00:16:56.019 [2024-04-24 15:21:05.133330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.019 [2024-04-24 15:21:05.133369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:56.019 [2024-04-24 15:21:05.147846] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190ecc78 00:16:56.019 [2024-04-24 15:21:05.149755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.019 [2024-04-24 15:21:05.149796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:56.019 [2024-04-24 15:21:05.164299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190ec408 00:16:56.019 [2024-04-24 15:21:05.166209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.019 [2024-04-24 15:21:05.166247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:56.019 [2024-04-24 15:21:05.180818] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190ebb98 00:16:56.019 [2024-04-24 15:21:05.182672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.019 [2024-04-24 15:21:05.182712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:56.019 [2024-04-24 15:21:05.197322] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190eb328 00:16:56.019 [2024-04-24 15:21:05.199167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.019 [2024-04-24 15:21:05.199209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:56.019 [2024-04-24 15:21:05.213816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190eaab8 00:16:56.019 [2024-04-24 15:21:05.215640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.019 [2024-04-24 15:21:05.215681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:56.019 [2024-04-24 15:21:05.230731] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190ea248 00:16:56.019 [2024-04-24 15:21:05.232602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.019 [2024-04-24 15:21:05.232649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:56.019 [2024-04-24 15:21:05.247551] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e99d8 00:16:56.019 [2024-04-24 15:21:05.249329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.019 [2024-04-24 15:21:05.249371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:56.321 [2024-04-24 15:21:05.264028] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e9168 00:16:56.321 [2024-04-24 15:21:05.265806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.321 [2024-04-24 15:21:05.265848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:56.321 [2024-04-24 15:21:05.280317] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e88f8 00:16:56.321 [2024-04-24 15:21:05.282044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.321 [2024-04-24 15:21:05.282084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:56.321 [2024-04-24 15:21:05.296591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e8088 00:16:56.321 [2024-04-24 15:21:05.298300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.321 [2024-04-24 15:21:05.298341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:56.321 [2024-04-24 15:21:05.313394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e7818 00:16:56.321 [2024-04-24 15:21:05.315101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.321 [2024-04-24 15:21:05.315147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:56.322 [2024-04-24 15:21:05.329904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e6fa8 00:16:56.322 [2024-04-24 15:21:05.331602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.322 [2024-04-24 15:21:05.331647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:56.322 [2024-04-24 15:21:05.346267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e6738 00:16:56.322 [2024-04-24 15:21:05.347899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.322 [2024-04-24 15:21:05.347942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:56.322 [2024-04-24 15:21:05.362550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e5ec8 00:16:56.322 [2024-04-24 15:21:05.364174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.322 [2024-04-24 15:21:05.364214] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:56.322 [2024-04-24 15:21:05.378925] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e5658 00:16:56.322 [2024-04-24 15:21:05.380544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.322 [2024-04-24 15:21:05.380584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:56.322 [2024-04-24 15:21:05.395378] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e4de8 00:16:56.322 [2024-04-24 15:21:05.396998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.322 [2024-04-24 15:21:05.397039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:56.322 [2024-04-24 15:21:05.411697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e4578 00:16:56.322 [2024-04-24 15:21:05.413268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.322 [2024-04-24 15:21:05.413307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:56.322 [2024-04-24 15:21:05.428404] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e3d08 00:16:56.322 [2024-04-24 15:21:05.429993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.322 [2024-04-24 15:21:05.430031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:56.322 [2024-04-24 15:21:05.444928] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e3498 00:16:56.322 [2024-04-24 15:21:05.446469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.322 [2024-04-24 15:21:05.446532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:56.322 [2024-04-24 15:21:05.461346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e2c28 00:16:56.322 [2024-04-24 15:21:05.462857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.322 [2024-04-24 15:21:05.462896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:56.322 [2024-04-24 15:21:05.477593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e23b8 00:16:56.322 [2024-04-24 15:21:05.479062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.322 [2024-04-24 15:21:05.479100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:56.322 [2024-04-24 15:21:05.494004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e1b48 00:16:56.322 [2024-04-24 15:21:05.495497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.322 [2024-04-24 15:21:05.495536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:56.322 [2024-04-24 15:21:05.510873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e12d8 00:16:56.322 [2024-04-24 15:21:05.512408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.322 [2024-04-24 15:21:05.512459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:56.322 [2024-04-24 15:21:05.527750] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e0a68 00:16:56.322 [2024-04-24 15:21:05.529265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.322 [2024-04-24 15:21:05.529307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:56.322 [2024-04-24 15:21:05.544241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e01f8 00:16:56.322 [2024-04-24 15:21:05.545668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.322 [2024-04-24 15:21:05.545709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:56.322 [2024-04-24 15:21:05.560698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190df988 00:16:56.322 [2024-04-24 15:21:05.562101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.322 [2024-04-24 15:21:05.562140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:56.580 [2024-04-24 15:21:05.577322] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190df118 00:16:56.580 [2024-04-24 15:21:05.578684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.580 [2024-04-24 15:21:05.578723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:56.580 [2024-04-24 15:21:05.593821] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190de8a8 00:16:56.580 [2024-04-24 15:21:05.595156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.580 [2024-04-24 
15:21:05.595196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:56.580 [2024-04-24 15:21:05.610225] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190de038 00:16:56.580 [2024-04-24 15:21:05.611544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.580 [2024-04-24 15:21:05.611582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:56.580 [2024-04-24 15:21:05.633440] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190de038 00:16:56.580 [2024-04-24 15:21:05.636014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.580 [2024-04-24 15:21:05.636054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.580 [2024-04-24 15:21:05.649913] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190de8a8 00:16:56.580 [2024-04-24 15:21:05.652492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.580 [2024-04-24 15:21:05.652532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:56.580 [2024-04-24 15:21:05.666160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190df118 00:16:56.580 [2024-04-24 15:21:05.668706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.580 [2024-04-24 15:21:05.668751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:56.580 [2024-04-24 15:21:05.682625] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190df988 00:16:56.580 [2024-04-24 15:21:05.685207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.580 [2024-04-24 15:21:05.685260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:56.580 [2024-04-24 15:21:05.699107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e01f8 00:16:56.580 [2024-04-24 15:21:05.701647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.580 [2024-04-24 15:21:05.701687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:56.580 [2024-04-24 15:21:05.715563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e0a68 00:16:56.580 [2024-04-24 15:21:05.718107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:56.580 [2024-04-24 15:21:05.718147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:56.580 [2024-04-24 15:21:05.732070] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e12d8 00:16:56.580 [2024-04-24 15:21:05.734554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.580 [2024-04-24 15:21:05.734591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:56.580 [2024-04-24 15:21:05.748869] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e1b48 00:16:56.580 [2024-04-24 15:21:05.751359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.580 [2024-04-24 15:21:05.751400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:56.580 [2024-04-24 15:21:05.765504] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e23b8 00:16:56.580 [2024-04-24 15:21:05.767924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.580 [2024-04-24 15:21:05.767962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:56.580 [2024-04-24 15:21:05.781777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e2c28 00:16:56.580 [2024-04-24 15:21:05.784164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.580 [2024-04-24 15:21:05.784202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:56.580 [2024-04-24 15:21:05.798016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e3498 00:16:56.580 [2024-04-24 15:21:05.800401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.581 [2024-04-24 15:21:05.800452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:56.581 [2024-04-24 15:21:05.814505] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e3d08 00:16:56.581 [2024-04-24 15:21:05.816908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.581 [2024-04-24 15:21:05.816946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:56.839 [2024-04-24 15:21:05.831025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e4578 00:16:56.839 [2024-04-24 15:21:05.833390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15738 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:16:56.839 [2024-04-24 15:21:05.833437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:56.839 [2024-04-24 15:21:05.847300] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e4de8 00:16:56.839 [2024-04-24 15:21:05.849631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.839 [2024-04-24 15:21:05.849671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:56.839 [2024-04-24 15:21:05.863658] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e5658 00:16:56.839 [2024-04-24 15:21:05.865966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.839 [2024-04-24 15:21:05.866004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:56.839 [2024-04-24 15:21:05.880050] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e5ec8 00:16:56.839 [2024-04-24 15:21:05.882394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.839 [2024-04-24 15:21:05.882444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:56.839 [2024-04-24 15:21:05.896425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e6738 00:16:56.839 [2024-04-24 15:21:05.898716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.839 [2024-04-24 15:21:05.898756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:56.839 [2024-04-24 15:21:05.912699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e6fa8 00:16:56.839 [2024-04-24 15:21:05.914958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.839 [2024-04-24 15:21:05.914998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:56.839 [2024-04-24 15:21:05.928969] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e7818 00:16:56.839 [2024-04-24 15:21:05.931199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.839 [2024-04-24 15:21:05.931236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:56.839 [2024-04-24 15:21:05.945313] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e8088 00:16:56.839 [2024-04-24 15:21:05.947509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19038 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.839 [2024-04-24 15:21:05.947546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:56.839 [2024-04-24 15:21:05.961640] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e88f8 00:16:56.839 [2024-04-24 15:21:05.963873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.839 [2024-04-24 15:21:05.963912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:56.839 [2024-04-24 15:21:05.977961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e9168 00:16:56.839 [2024-04-24 15:21:05.980114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.839 [2024-04-24 15:21:05.980154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:56.839 [2024-04-24 15:21:05.994272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190e99d8 00:16:56.839 [2024-04-24 15:21:05.996417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.839 [2024-04-24 15:21:05.996462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:56.839 [2024-04-24 15:21:06.010474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190ea248 00:16:56.839 [2024-04-24 15:21:06.012599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.839 [2024-04-24 15:21:06.012640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:56.839 [2024-04-24 15:21:06.026798] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190eaab8 00:16:56.839 [2024-04-24 15:21:06.028903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.839 [2024-04-24 15:21:06.028941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:56.839 [2024-04-24 15:21:06.043081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190eb328 00:16:56.839 [2024-04-24 15:21:06.045178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.839 [2024-04-24 15:21:06.045217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:56.839 [2024-04-24 15:21:06.059439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190ebb98 00:16:56.839 [2024-04-24 15:21:06.061512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 
lba:1192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.839 [2024-04-24 15:21:06.061552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:56.839 [2024-04-24 15:21:06.075769] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190ec408 00:16:56.839 [2024-04-24 15:21:06.077800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:56.839 [2024-04-24 15:21:06.077838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:57.097 [2024-04-24 15:21:06.092311] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190ecc78 00:16:57.097 [2024-04-24 15:21:06.094371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.097 [2024-04-24 15:21:06.094409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:57.097 [2024-04-24 15:21:06.108871] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190ed4e8 00:16:57.098 [2024-04-24 15:21:06.110900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.098 [2024-04-24 15:21:06.110941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:57.098 [2024-04-24 15:21:06.125214] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190edd58 00:16:57.098 [2024-04-24 15:21:06.127196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.098 [2024-04-24 15:21:06.127233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:57.098 [2024-04-24 15:21:06.141576] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190ee5c8 00:16:57.098 [2024-04-24 15:21:06.143539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.098 [2024-04-24 15:21:06.143581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:57.098 [2024-04-24 15:21:06.157800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190eee38 00:16:57.098 [2024-04-24 15:21:06.159724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.098 [2024-04-24 15:21:06.159763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:57.098 [2024-04-24 15:21:06.174058] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190ef6a8 00:16:57.098 [2024-04-24 15:21:06.175960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:5130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.098 [2024-04-24 15:21:06.176003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:57.098 [2024-04-24 15:21:06.190407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190eff18 00:16:57.098 [2024-04-24 15:21:06.192343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.098 [2024-04-24 15:21:06.192385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:57.098 [2024-04-24 15:21:06.206670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f0788 00:16:57.098 [2024-04-24 15:21:06.208543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.098 [2024-04-24 15:21:06.208586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:57.098 [2024-04-24 15:21:06.223184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f0ff8 00:16:57.098 [2024-04-24 15:21:06.225117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.098 [2024-04-24 15:21:06.225161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:57.098 [2024-04-24 15:21:06.239797] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f1868 00:16:57.098 [2024-04-24 15:21:06.241661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.098 [2024-04-24 15:21:06.241706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:57.098 [2024-04-24 15:21:06.256102] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f20d8 00:16:57.098 [2024-04-24 15:21:06.257941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.098 [2024-04-24 15:21:06.257985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:57.098 [2024-04-24 15:21:06.272517] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f2948 00:16:57.098 [2024-04-24 15:21:06.274339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.098 [2024-04-24 15:21:06.274384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:57.098 [2024-04-24 15:21:06.288939] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f31b8 00:16:57.098 [2024-04-24 15:21:06.290712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:37 nsid:1 lba:22888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.098 [2024-04-24 15:21:06.290753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:57.098 [2024-04-24 15:21:06.305115] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f3a28 00:16:57.098 [2024-04-24 15:21:06.306861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.098 [2024-04-24 15:21:06.306901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:57.098 [2024-04-24 15:21:06.321526] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f4298 00:16:57.098 [2024-04-24 15:21:06.323262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.098 [2024-04-24 15:21:06.323308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:57.098 [2024-04-24 15:21:06.337872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f4b08 00:16:57.098 [2024-04-24 15:21:06.339600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.098 [2024-04-24 15:21:06.339643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:57.357 [2024-04-24 15:21:06.354188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f5378 00:16:57.357 [2024-04-24 15:21:06.355880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.357 [2024-04-24 15:21:06.355921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:57.357 [2024-04-24 15:21:06.370722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f5be8 00:16:57.357 [2024-04-24 15:21:06.372444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.357 [2024-04-24 15:21:06.372488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:57.357 [2024-04-24 15:21:06.387229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f6458 00:16:57.357 [2024-04-24 15:21:06.388948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.357 [2024-04-24 15:21:06.388993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:57.357 [2024-04-24 15:21:06.404043] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f6cc8 00:16:57.357 [2024-04-24 15:21:06.405729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:65 nsid:1 lba:20063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.357 [2024-04-24 15:21:06.405774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:57.357 [2024-04-24 15:21:06.420450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f7538 00:16:57.357 [2024-04-24 15:21:06.422064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.357 [2024-04-24 15:21:06.422108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:57.357 [2024-04-24 15:21:06.436640] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f7da8 00:16:57.357 [2024-04-24 15:21:06.438232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.357 [2024-04-24 15:21:06.438275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:57.357 [2024-04-24 15:21:06.452989] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f8618 00:16:57.357 [2024-04-24 15:21:06.454606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.357 [2024-04-24 15:21:06.454646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:57.357 [2024-04-24 15:21:06.469538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f8e88 00:16:57.357 [2024-04-24 15:21:06.471119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.357 [2024-04-24 15:21:06.471164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:57.357 [2024-04-24 15:21:06.486074] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f96f8 00:16:57.357 [2024-04-24 15:21:06.487619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.357 [2024-04-24 15:21:06.487661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:57.357 [2024-04-24 15:21:06.502611] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190f9f68 00:16:57.357 [2024-04-24 15:21:06.504133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.357 [2024-04-24 15:21:06.504178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:57.357 [2024-04-24 15:21:06.519055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190fa7d8 00:16:57.357 [2024-04-24 15:21:06.520577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.357 [2024-04-24 15:21:06.520622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:57.357 [2024-04-24 15:21:06.535773] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190fb048 00:16:57.357 [2024-04-24 15:21:06.537280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.357 [2024-04-24 15:21:06.537323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:57.357 [2024-04-24 15:21:06.552285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x895030) with pdu=0x2000190fb8b8 00:16:57.357 [2024-04-24 15:21:06.553807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.357 [2024-04-24 15:21:06.553849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:57.357 00:16:57.357 Latency(us) 00:16:57.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.357 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.357 nvme0n1 : 2.01 15371.41 60.04 0.00 0.00 8318.78 7328.12 31695.59 00:16:57.357 =================================================================================================================== 00:16:57.357 Total : 15371.41 60.04 0.00 0.00 8318.78 7328.12 31695.59 00:16:57.357 0 00:16:57.357 15:21:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:57.357 15:21:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:57.357 15:21:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:57.357 15:21:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:57.357 | .driver_specific 00:16:57.357 | .nvme_error 00:16:57.357 | .status_code 00:16:57.357 | .command_transient_transport_error' 00:16:57.924 15:21:06 -- host/digest.sh@71 -- # (( 121 > 0 )) 00:16:57.924 15:21:06 -- host/digest.sh@73 -- # killprocess 76933 00:16:57.924 15:21:06 -- common/autotest_common.sh@936 -- # '[' -z 76933 ']' 00:16:57.924 15:21:06 -- common/autotest_common.sh@940 -- # kill -0 76933 00:16:57.924 15:21:06 -- common/autotest_common.sh@941 -- # uname 00:16:57.924 15:21:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:57.924 15:21:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76933 00:16:57.924 killing process with pid 76933 00:16:57.924 Received shutdown signal, test time was about 2.000000 seconds 00:16:57.924 00:16:57.924 Latency(us) 00:16:57.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.924 =================================================================================================================== 00:16:57.924 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:57.924 15:21:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:57.924 15:21:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:57.924 15:21:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76933' 
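The pass check traced just above reduces to a single number: get_transient_errcount asks the bdevperf instance for its bdev statistics over the bperf RPC socket and pulls the command_transient_transport_error counter out with jq, and this case passes because 121 such completions were recorded for nvme0n1. A minimal standalone sketch of that extraction, assuming the rpc.py and socket paths used in this run and that the NVMe error statistics were enabled when the controller was attached:

# Sketch only: count the transient transport errors recorded for nvme0n1,
# mirroring get_transient_errcount in host/digest.sh.
errcount=$(
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
)
# Injected data digest errors complete as COMMAND TRANSIENT TRANSPORT ERROR (00/22),
# so a passing run sees a non-zero count here.
(( errcount > 0 )) && echo "transient transport errors on nvme0n1: $errcount"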
00:16:57.924 15:21:06 -- common/autotest_common.sh@955 -- # kill 76933 00:16:57.924 15:21:06 -- common/autotest_common.sh@960 -- # wait 76933 00:16:58.182 15:21:07 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:16:58.182 15:21:07 -- host/digest.sh@54 -- # local rw bs qd 00:16:58.182 15:21:07 -- host/digest.sh@56 -- # rw=randwrite 00:16:58.182 15:21:07 -- host/digest.sh@56 -- # bs=131072 00:16:58.182 15:21:07 -- host/digest.sh@56 -- # qd=16 00:16:58.182 15:21:07 -- host/digest.sh@58 -- # bperfpid=76993 00:16:58.182 15:21:07 -- host/digest.sh@60 -- # waitforlisten 76993 /var/tmp/bperf.sock 00:16:58.182 15:21:07 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:16:58.182 15:21:07 -- common/autotest_common.sh@817 -- # '[' -z 76993 ']' 00:16:58.182 15:21:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:58.182 15:21:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:58.182 15:21:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:58.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:58.182 15:21:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:58.182 15:21:07 -- common/autotest_common.sh@10 -- # set +x 00:16:58.182 [2024-04-24 15:21:07.262528] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:16:58.182 [2024-04-24 15:21:07.263189] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76993 ] 00:16:58.182 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:58.182 Zero copy mechanism will not be used. 
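With the previous bdevperf (pid 76933) killed and reaped, run_bperf_err moves on to the 131072-byte random-write case at queue depth 16. The harness starts a fresh bdevperf idle on a private RPC socket (-z) and waits for it to listen before configuring anything. A rough equivalent of that launch, as a hedged sketch with a simple polling loop standing in for the harness's waitforlisten helper:

# Sketch only: start bdevperf the way run_bperf_err does for this case;
# -z keeps it idle until perform_tests arrives over /var/tmp/bperf.sock.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# Simplified stand-in for waitforlisten: poll until the RPC socket exists.
while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done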
00:16:58.182 [2024-04-24 15:21:07.401679] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.440 [2024-04-24 15:21:07.521972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.007 15:21:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:59.007 15:21:08 -- common/autotest_common.sh@850 -- # return 0 00:16:59.007 15:21:08 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:59.007 15:21:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:59.575 15:21:08 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:59.575 15:21:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.575 15:21:08 -- common/autotest_common.sh@10 -- # set +x 00:16:59.575 15:21:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.575 15:21:08 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:59.575 15:21:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:59.834 nvme0n1 00:16:59.834 15:21:08 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:59.834 15:21:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.834 15:21:08 -- common/autotest_common.sh@10 -- # set +x 00:16:59.834 15:21:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.834 15:21:08 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:59.834 15:21:08 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:59.834 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:59.834 Zero copy mechanism will not be used. 00:16:59.834 Running I/O for 2 seconds... 
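The configuration traced above is what produces the blocks of paired tcp.c/nvme_qpair.c messages that follow: NVMe error statistics are enabled and the bdev retry count set to -1, the controller is attached over TCP with data digest enabled (--ddgst), crc32c error injection is switched from disable to corrupt, and perform_tests starts the workload, so writes complete with data digest errors that are logged as COMMAND TRANSIENT TRANSPORT ERROR (00/22) and counted in the iostat. Restated as plain commands, as a hedged sketch (paths as used in this run; the injection calls go through the harness's rpc_cmd, which appears to target the test target's default RPC socket rather than the bperf socket):

# Sketch only: the RPC sequence behind this error-injection case.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$rpc accel_error_inject_error -o crc32c -t disable          # start clean, no injection yet
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$rpc accel_error_inject_error -o crc32c -t corrupt -i 32     # arguments as traced above
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests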
00:16:59.834 [2024-04-24 15:21:09.041188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:16:59.834 [2024-04-24 15:21:09.041526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.834 [2024-04-24 15:21:09.041560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.834 [2024-04-24 15:21:09.046491] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:16:59.834 [2024-04-24 15:21:09.046785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.834 [2024-04-24 15:21:09.046818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.834 [2024-04-24 15:21:09.051802] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:16:59.834 [2024-04-24 15:21:09.052096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.834 [2024-04-24 15:21:09.052128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.834 [2024-04-24 15:21:09.057149] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:16:59.834 [2024-04-24 15:21:09.057466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.834 [2024-04-24 15:21:09.057499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.834 [2024-04-24 15:21:09.062457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:16:59.834 [2024-04-24 15:21:09.062753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.834 [2024-04-24 15:21:09.062786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:59.834 [2024-04-24 15:21:09.067802] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:16:59.834 [2024-04-24 15:21:09.068103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.834 [2024-04-24 15:21:09.068131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:59.834 [2024-04-24 15:21:09.073075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:16:59.834 [2024-04-24 15:21:09.073375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.834 [2024-04-24 15:21:09.073407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:59.834 [2024-04-24 15:21:09.078706] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:16:59.834 [2024-04-24 15:21:09.079019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.079052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.084317] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.084631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.084664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.089757] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.090056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.090089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.095067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.095369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.095402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.100382] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.100705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.100749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.105742] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.106037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.106070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.111033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.111329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.111361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.116310] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.116627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.116661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.121621] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.121919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.121951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.126937] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.127235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.127268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.132211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.132521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.132553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.137486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.137786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.137816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.142792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.143089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.143120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.148070] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.148367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.148399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.153519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.153822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.153853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.159127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.159446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.159478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.164551] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.164861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.164894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.169865] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.170161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.170192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.175178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.175489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.175521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.180494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.180803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.180834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.185778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.186075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 
[2024-04-24 15:21:09.186106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.191096] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.191390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.191421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.196365] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.196671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.196701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.201779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.202077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.202108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.207147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.207460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.094 [2024-04-24 15:21:09.207490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.094 [2024-04-24 15:21:09.212509] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.094 [2024-04-24 15:21:09.212817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.212848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.218048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.218327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.218360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.223040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.223112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.223137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.228303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.228371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.228396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.233527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.233606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.233631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.238759] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.238832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.238856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.243944] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.244015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.244039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.249307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.249373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.249397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.254685] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.254753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.254778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.259865] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.259933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.259958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.265109] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.265178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.265204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.270391] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.270472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.270499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.275601] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.275671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.275697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.280795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.280871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.280896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.286008] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.286079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.286104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.291218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.291286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.291311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.296471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.296539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.296566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.301671] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.301745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.301770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.306886] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.306959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.306985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.312049] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.312122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.312148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.317284] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.317350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.317376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.322547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.322615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.322640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.327748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.327815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.327839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.095 [2024-04-24 15:21:09.332984] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.095 [2024-04-24 15:21:09.333050] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.095 [2024-04-24 15:21:09.333076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.338598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.338674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.338699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.344003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.344074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.344099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.349259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.349327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.349352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.354611] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.354682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.354707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.360079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.360147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.360172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.365359] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.365441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.365466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.370625] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.370692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.370717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.375784] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.375855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.375880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.380996] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.381076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.381103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.386228] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.386300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.386326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.391477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.391549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.391575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.396725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.396811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.396836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.401912] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.401983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.402011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.407160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 
15:21:09.407240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.407268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.412572] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.412645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.412671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.417970] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.418050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.418076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.423189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.423262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.423288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.428417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.428507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.428531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.433745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.433832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.433861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.438978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.439052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.439079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.444194] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 
00:17:00.355 [2024-04-24 15:21:09.444265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.444290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.449447] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.449514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.355 [2024-04-24 15:21:09.449541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.355 [2024-04-24 15:21:09.454626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.355 [2024-04-24 15:21:09.454698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.454723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.459843] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.459914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.459940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.465135] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.465208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.465234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.470635] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.470708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.470735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.476011] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.476089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.476115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.481327] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) 
with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.481403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.481442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.486561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.486628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.486653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.491789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.491861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.491886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.496989] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.497060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.497085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.502146] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.502215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.502240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.507556] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.507626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.507652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.513079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.513158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.513184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.518296] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.518370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.518395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.523537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.523605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.523631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.528766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.528834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.528860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.533974] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.534041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.534066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.539193] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.539258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.539283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.544368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.544454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.544479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.549571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.549638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.549663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.554736] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.554804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.554828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.559927] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.559993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.560017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.565135] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.565202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.565227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.570363] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.570443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.570468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.575546] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.575613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.575638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.580732] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.580808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.580832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.585971] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.586035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.586060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.591156] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.591226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.356 [2024-04-24 15:21:09.591251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.356 [2024-04-24 15:21:09.596524] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.356 [2024-04-24 15:21:09.596592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.357 [2024-04-24 15:21:09.596617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.616 [2024-04-24 15:21:09.602101] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.616 [2024-04-24 15:21:09.602174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.602199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.607423] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.607507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.607534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.612721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.612806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.612832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.618166] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.618243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.618268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.623405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.623483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.623509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.617 
[2024-04-24 15:21:09.628620] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.628687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.628711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.633865] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.633935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.633959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.639061] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.639128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.639152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.644306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.644375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.644400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.649574] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.649649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.649675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.654793] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.654866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.654892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.660024] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.660104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.660130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:17:00.617 [2024-04-24 15:21:09.665278] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.665346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.665371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.670636] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.670720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.670745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.676005] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.676081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.676107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.681254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.681321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.681347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.686453] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.686521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.686546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.691667] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.691733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.691757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.696903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.696972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.696996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.702109] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.702175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.702200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.707310] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.707376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.707401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.712585] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.712652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.712678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.717814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.717881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.717905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.723003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.723073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.723097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.728296] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.728368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.728393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.733779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.733851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.733876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.617 [2024-04-24 15:21:09.738966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.617 [2024-04-24 15:21:09.739032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.617 [2024-04-24 15:21:09.739057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.744200] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.744266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.744291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.749417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.749499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.749523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.754606] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.754676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.754700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.759758] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.759829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.759852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.765087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.765162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.765187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.770746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.770815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.770840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.776024] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.776095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.776120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.781247] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.781315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.781339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.786478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.786544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.786568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.791627] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.791692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.791716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.796722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.796798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.796822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.802018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.802086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.802111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.807170] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.807233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.807257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.812363] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.812447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.812471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.817693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.817760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.817783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.822940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.823012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.823036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.828133] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.828199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.828223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.833462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.833530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.833554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.838857] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.838927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.838951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.844123] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.844194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 
15:21:09.844218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.849310] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.849382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.849407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.854553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.854621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.854645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.618 [2024-04-24 15:21:09.860024] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.618 [2024-04-24 15:21:09.860101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.618 [2024-04-24 15:21:09.860126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.878 [2024-04-24 15:21:09.865395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.878 [2024-04-24 15:21:09.865483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.878 [2024-04-24 15:21:09.865508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.878 [2024-04-24 15:21:09.870783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.878 [2024-04-24 15:21:09.870855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.878 [2024-04-24 15:21:09.870879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.878 [2024-04-24 15:21:09.876181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.878 [2024-04-24 15:21:09.876257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.878 [2024-04-24 15:21:09.876281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.878 [2024-04-24 15:21:09.881449] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.878 [2024-04-24 15:21:09.881516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:00.878 [2024-04-24 15:21:09.881542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.878 [2024-04-24 15:21:09.886642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.878 [2024-04-24 15:21:09.886711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.878 [2024-04-24 15:21:09.886736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.878 [2024-04-24 15:21:09.891864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.878 [2024-04-24 15:21:09.891934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.878 [2024-04-24 15:21:09.891958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.878 [2024-04-24 15:21:09.897117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.878 [2024-04-24 15:21:09.897184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.878 [2024-04-24 15:21:09.897209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.878 [2024-04-24 15:21:09.902320] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.878 [2024-04-24 15:21:09.902388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.878 [2024-04-24 15:21:09.902414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.878 [2024-04-24 15:21:09.907727] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.878 [2024-04-24 15:21:09.907800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.878 [2024-04-24 15:21:09.907824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.878 [2024-04-24 15:21:09.913202] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.878 [2024-04-24 15:21:09.913276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.878 [2024-04-24 15:21:09.913301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.878 [2024-04-24 15:21:09.918420] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.878 [2024-04-24 15:21:09.918524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:00.878 [2024-04-24 15:21:09.918549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.878 [2024-04-24 15:21:09.923700] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.878 [2024-04-24 15:21:09.923770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.878 [2024-04-24 15:21:09.923795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.878 [2024-04-24 15:21:09.929147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.878 [2024-04-24 15:21:09.929217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.878 [2024-04-24 15:21:09.929243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.878 [2024-04-24 15:21:09.934446] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.878 [2024-04-24 15:21:09.934520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.878 [2024-04-24 15:21:09.934546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.878 [2024-04-24 15:21:09.939636] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.878 [2024-04-24 15:21:09.939703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.878 [2024-04-24 15:21:09.939727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:09.944889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:09.944960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:09.944985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:09.950074] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:09.950142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:09.950166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:09.955296] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:09.955364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:09.955389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:09.960519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:09.960588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:09.960613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:09.965774] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:09.965844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:09.965869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:09.970970] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:09.971039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:09.971065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:09.976211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:09.976281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:09.976306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:09.981417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:09.981498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:09.981524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:09.986734] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:09.986805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:09.986830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:09.993000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:09.993092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:09.993120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:10.000110] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:10.000213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:10.000241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:10.006004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:10.006088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:10.006114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:10.011202] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:10.011274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:10.011300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:10.016472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:10.016551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:10.016576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:10.021918] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:10.022000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:10.022025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:10.029157] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:10.029254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:10.029281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:10.035359] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:10.035457] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:10.035484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:10.040698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:10.040781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:10.040808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:10.046040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:10.046120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:10.046145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:10.051331] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:10.051414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:10.051454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:10.056534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:10.056619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:10.056645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:10.061951] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:10.062035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:10.062061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:10.067257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:10.067341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:10.067367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:10.072482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:10.072566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:10.072596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:10.077759] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:10.077830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:10.077854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.879 [2024-04-24 15:21:10.082975] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.879 [2024-04-24 15:21:10.083055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.879 [2024-04-24 15:21:10.083081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.880 [2024-04-24 15:21:10.088159] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.880 [2024-04-24 15:21:10.088240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.880 [2024-04-24 15:21:10.088266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.880 [2024-04-24 15:21:10.093472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.880 [2024-04-24 15:21:10.093561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.880 [2024-04-24 15:21:10.093588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.880 [2024-04-24 15:21:10.098703] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.880 [2024-04-24 15:21:10.098772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.880 [2024-04-24 15:21:10.098797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.880 [2024-04-24 15:21:10.103908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.880 [2024-04-24 15:21:10.103989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.880 [2024-04-24 15:21:10.104015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:00.880 [2024-04-24 15:21:10.109141] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.880 [2024-04-24 
15:21:10.109217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.880 [2024-04-24 15:21:10.109242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:00.880 [2024-04-24 15:21:10.114368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.880 [2024-04-24 15:21:10.114454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.880 [2024-04-24 15:21:10.114480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:00.880 [2024-04-24 15:21:10.119607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:00.880 [2024-04-24 15:21:10.119688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.880 [2024-04-24 15:21:10.119711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.125015] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.125096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.125120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.130519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.130592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.130617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.135984] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.136062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.136089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.141242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.141318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.141345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.146495] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 
00:17:01.139 [2024-04-24 15:21:10.146586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.146614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.151727] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.151796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.151822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.156988] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.157058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.157083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.162221] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.162307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.162334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.167641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.167741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.167766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.172946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.173019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.173044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.178213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.178293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.178317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.183568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with 
pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.183639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.183665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.189030] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.189099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.189126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.194314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.194400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.194438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.199585] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.199673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.199709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.204789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.204857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.204882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.210008] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.210086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.210111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.215223] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.215298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.139 [2024-04-24 15:21:10.215322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.139 [2024-04-24 15:21:10.220482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.139 [2024-04-24 15:21:10.220553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.220579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.225680] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.225746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.225772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.230939] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.231005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.231030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.236128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.236206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.236233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.242121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.242209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.242235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.248355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.248444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.248470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.253622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.253702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.253726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.258913] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.258983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.259007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.264120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.264187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.264214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.269359] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.269440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.269466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.274681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.274766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.274791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.280067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.280135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.280161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.285290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.285357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.285382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.290521] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.290589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.290614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.295746] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.295816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.295842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.300963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.301036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.301061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.306290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.306361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.306386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.311732] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.311817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.311842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.317084] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.317154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.317179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.322375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.322460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.322486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.327583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.327652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.327676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.140 
[2024-04-24 15:21:10.332814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.332888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.332912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.338073] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.338154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.338180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.343239] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.343306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.343331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.348467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.140 [2024-04-24 15:21:10.348532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.140 [2024-04-24 15:21:10.348557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.140 [2024-04-24 15:21:10.353629] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.141 [2024-04-24 15:21:10.353711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.141 [2024-04-24 15:21:10.353735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.141 [2024-04-24 15:21:10.358858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.141 [2024-04-24 15:21:10.358924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.141 [2024-04-24 15:21:10.358948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.141 [2024-04-24 15:21:10.364155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.141 [2024-04-24 15:21:10.364223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.141 [2024-04-24 15:21:10.364248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:17:01.141 [2024-04-24 15:21:10.370055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.141 [2024-04-24 15:21:10.370127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.141 [2024-04-24 15:21:10.370154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.141 [2024-04-24 15:21:10.375337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.141 [2024-04-24 15:21:10.375405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.141 [2024-04-24 15:21:10.375444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.141 [2024-04-24 15:21:10.380693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.141 [2024-04-24 15:21:10.380772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.141 [2024-04-24 15:21:10.380797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.400 [2024-04-24 15:21:10.386200] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.400 [2024-04-24 15:21:10.386272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.400 [2024-04-24 15:21:10.386297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.400 [2024-04-24 15:21:10.391777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.400 [2024-04-24 15:21:10.391852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.400 [2024-04-24 15:21:10.391885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.400 [2024-04-24 15:21:10.397039] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.400 [2024-04-24 15:21:10.397116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.400 [2024-04-24 15:21:10.397141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.402209] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.402277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.402301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.407456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.407527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.407551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.412680] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.412766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.412791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.417937] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.418004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.418028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.423305] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.423375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.423399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.428712] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.428793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.428817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.433944] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.434023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.434047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.439234] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.439302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.439326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.444690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.444780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.444806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.450047] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.450130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.450155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.455294] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.455360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.455385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.460574] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.460641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.460666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.465809] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.465876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.465900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.471040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.471106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.471131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.476283] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.476350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.476377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.481524] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.481594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.481620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.486753] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.486821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.486846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.491947] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.492016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.492042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.497185] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.497251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.497276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.502424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.502506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.502530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.507652] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.507736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.507762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.512921] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.512993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.513019] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.518110] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.518185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.518212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.523336] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.523406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.523445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.528534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.528603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.528627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.533751] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.533823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.533849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.539023] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.539092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.539118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.544323] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.544391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.544418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.549598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.549678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.549703] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.554837] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.554916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.554943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.560116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.560184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.560209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.565513] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.565590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.565616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.570953] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.571023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.571048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.576153] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.576222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.576248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.581387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.581468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.581493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.586622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.586689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 
15:21:10.586714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.591801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.591869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.591894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.597062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.597133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.597159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.602274] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.602340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.602366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.607507] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.607574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.607600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.612775] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.612844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.612867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.401 [2024-04-24 15:21:10.617958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.401 [2024-04-24 15:21:10.618031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.401 [2024-04-24 15:21:10.618055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.402 [2024-04-24 15:21:10.623183] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.402 [2024-04-24 15:21:10.623250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:01.402 [2024-04-24 15:21:10.623275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.402 [2024-04-24 15:21:10.628397] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.402 [2024-04-24 15:21:10.628478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.402 [2024-04-24 15:21:10.628503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.402 [2024-04-24 15:21:10.633674] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.402 [2024-04-24 15:21:10.633741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.402 [2024-04-24 15:21:10.633765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.402 [2024-04-24 15:21:10.638838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.402 [2024-04-24 15:21:10.638925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.402 [2024-04-24 15:21:10.638949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.663 [2024-04-24 15:21:10.644713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.663 [2024-04-24 15:21:10.644799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.663 [2024-04-24 15:21:10.644825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.663 [2024-04-24 15:21:10.650311] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.663 [2024-04-24 15:21:10.650388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.663 [2024-04-24 15:21:10.650414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.663 [2024-04-24 15:21:10.655606] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.663 [2024-04-24 15:21:10.655674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.663 [2024-04-24 15:21:10.655700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.663 [2024-04-24 15:21:10.660838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.663 [2024-04-24 15:21:10.660919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:01.663 [2024-04-24 15:21:10.660944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.663 [2024-04-24 15:21:10.666102] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.663 [2024-04-24 15:21:10.666170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.663 [2024-04-24 15:21:10.666195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.663 [2024-04-24 15:21:10.671303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.663 [2024-04-24 15:21:10.671371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.663 [2024-04-24 15:21:10.671396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.676564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.676648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.676676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.681993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.682062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.682089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.687364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.687444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.687470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.692624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.692695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.692720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.697933] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.698003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.698029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.703395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.703488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.703514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.708692] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.708788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.708813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.713954] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.714030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.714053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.719196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.719274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.719301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.724447] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.724514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.724539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.729717] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.729795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.729819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.734980] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.735055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.735080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.740189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.740255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.740280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.745458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.745522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.745547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.750647] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.750721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.750747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.755905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.755980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.756005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.761165] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.761236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.761261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.766399] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.766481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.766506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.771683] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.771751] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.771776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.776936] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.777013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.777038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.782140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.782208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.782233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.787397] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.787479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.787505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.792608] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.792674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.792698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.797842] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.797918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.797943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.803289] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.803357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.803381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.808637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.808709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.808734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.813901] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.664 [2024-04-24 15:21:10.813983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.664 [2024-04-24 15:21:10.814008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.664 [2024-04-24 15:21:10.819151] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.665 [2024-04-24 15:21:10.819220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.665 [2024-04-24 15:21:10.819245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.665 [2024-04-24 15:21:10.824582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.665 [2024-04-24 15:21:10.824654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.665 [2024-04-24 15:21:10.824679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.665 [2024-04-24 15:21:10.829897] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.665 [2024-04-24 15:21:10.829977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.665 [2024-04-24 15:21:10.830002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.665 [2024-04-24 15:21:10.835141] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.665 [2024-04-24 15:21:10.835210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.665 [2024-04-24 15:21:10.835235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.665 [2024-04-24 15:21:10.840361] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.665 [2024-04-24 15:21:10.840442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.665 [2024-04-24 15:21:10.840467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.665 [2024-04-24 15:21:10.845630] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.665 [2024-04-24 
15:21:10.845700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.665 [2024-04-24 15:21:10.845725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.665 [2024-04-24 15:21:10.850860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.665 [2024-04-24 15:21:10.850931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.665 [2024-04-24 15:21:10.850959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.665 [2024-04-24 15:21:10.856029] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.665 [2024-04-24 15:21:10.856106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.665 [2024-04-24 15:21:10.856143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.665 [2024-04-24 15:21:10.861229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.665 [2024-04-24 15:21:10.861306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.665 [2024-04-24 15:21:10.861331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.665 [2024-04-24 15:21:10.866442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.665 [2024-04-24 15:21:10.866519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.665 [2024-04-24 15:21:10.866543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.665 [2024-04-24 15:21:10.871696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.665 [2024-04-24 15:21:10.871769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.665 [2024-04-24 15:21:10.871794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.665 [2024-04-24 15:21:10.876921] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.665 [2024-04-24 15:21:10.876991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.665 [2024-04-24 15:21:10.877016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.665 [2024-04-24 15:21:10.882230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 
00:17:01.665 [2024-04-24 15:21:10.882310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.665 [2024-04-24 15:21:10.882334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.665 [2024-04-24 15:21:10.887401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.665 [2024-04-24 15:21:10.887501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.665 [2024-04-24 15:21:10.887529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.665 [2024-04-24 15:21:10.892624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.665 [2024-04-24 15:21:10.892700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.665 [2024-04-24 15:21:10.892726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.665 [2024-04-24 15:21:10.897864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.665 [2024-04-24 15:21:10.897930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.665 [2024-04-24 15:21:10.897956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.665 [2024-04-24 15:21:10.903271] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.665 [2024-04-24 15:21:10.903362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.665 [2024-04-24 15:21:10.903398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.926 [2024-04-24 15:21:10.909014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.926 [2024-04-24 15:21:10.909094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.926 [2024-04-24 15:21:10.909131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:10.914463] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:10.914534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:10.914559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:10.919637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with 
pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:10.919719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:10.919745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:10.924871] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:10.924940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:10.924965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:10.930052] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:10.930128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:10.930152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:10.935243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:10.935323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:10.935347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:10.940450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:10.940522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:10.940546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:10.945618] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:10.945702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:10.945727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:10.950862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:10.950927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:10.950952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:10.956083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:10.956157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:10.956182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:10.961598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:10.961685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:10.961709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:10.966976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:10.967066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:10.967091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:10.972227] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:10.972318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:10.972344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:10.977468] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:10.977547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:10.977572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:10.982715] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:10.982780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:10.982804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:10.987959] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:10.988029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:10.988058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:10.993457] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:10.993539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:10.993566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:10.998726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:10.998803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:10.998828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:11.003967] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:11.004045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:11.004070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:11.009276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:11.009350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:11.009376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:11.014693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:11.014772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:11.014797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:11.019908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:11.019986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:11.020010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:01.927 [2024-04-24 15:21:11.025109] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x893b60) with pdu=0x2000190fef90 00:17:01.927 [2024-04-24 15:21:11.025188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.927 [2024-04-24 15:21:11.025213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:01.927 00:17:01.927 Latency(us) 00:17:01.927 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.927 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:01.927 nvme0n1 : 2.00 5817.83 727.23 0.00 0.00 2744.20 2070.34 12213.53 00:17:01.927 =================================================================================================================== 00:17:01.927 Total : 5817.83 727.23 0.00 0.00 2744.20 2070.34 12213.53 00:17:01.927 0 00:17:01.927 15:21:11 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:01.927 15:21:11 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:01.927 15:21:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:01.927 15:21:11 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:01.927 | .driver_specific 00:17:01.927 | .nvme_error 00:17:01.927 | .status_code 00:17:01.927 | .command_transient_transport_error' 00:17:02.187 15:21:11 -- host/digest.sh@71 -- # (( 375 > 0 )) 00:17:02.187 15:21:11 -- host/digest.sh@73 -- # killprocess 76993 00:17:02.187 15:21:11 -- common/autotest_common.sh@936 -- # '[' -z 76993 ']' 00:17:02.187 15:21:11 -- common/autotest_common.sh@940 -- # kill -0 76993 00:17:02.187 15:21:11 -- common/autotest_common.sh@941 -- # uname 00:17:02.187 15:21:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:02.187 15:21:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76993 00:17:02.187 15:21:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:02.187 killing process with pid 76993 00:17:02.187 15:21:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:02.187 15:21:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76993' 00:17:02.187 Received shutdown signal, test time was about 2.000000 seconds 00:17:02.187 00:17:02.187 Latency(us) 00:17:02.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.187 =================================================================================================================== 00:17:02.187 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:02.187 15:21:11 -- common/autotest_common.sh@955 -- # kill 76993 00:17:02.187 15:21:11 -- common/autotest_common.sh@960 -- # wait 76993 00:17:02.446 15:21:11 -- host/digest.sh@116 -- # killprocess 76780 00:17:02.446 15:21:11 -- common/autotest_common.sh@936 -- # '[' -z 76780 ']' 00:17:02.446 15:21:11 -- common/autotest_common.sh@940 -- # kill -0 76780 00:17:02.446 15:21:11 -- common/autotest_common.sh@941 -- # uname 00:17:02.446 15:21:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:02.446 15:21:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76780 00:17:02.446 15:21:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:02.446 15:21:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:02.446 15:21:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76780' 00:17:02.446 killing process with pid 76780 00:17:02.446 15:21:11 -- common/autotest_common.sh@955 -- # kill 76780 00:17:02.446 15:21:11 -- common/autotest_common.sh@960 -- # wait 76780 00:17:02.704 00:17:02.704 real 0m18.780s 00:17:02.704 user 0m36.489s 00:17:02.704 sys 0m4.785s 00:17:02.704 15:21:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:02.704 15:21:11 -- common/autotest_common.sh@10 -- # set +x 00:17:02.704 ************************************ 00:17:02.704 END TEST nvmf_digest_error 
00:17:02.704 ************************************ 00:17:02.704 15:21:11 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:02.704 15:21:11 -- host/digest.sh@150 -- # nvmftestfini 00:17:02.704 15:21:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:02.704 15:21:11 -- nvmf/common.sh@117 -- # sync 00:17:02.704 15:21:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:02.704 15:21:11 -- nvmf/common.sh@120 -- # set +e 00:17:02.704 15:21:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:02.704 15:21:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:02.704 rmmod nvme_tcp 00:17:02.963 rmmod nvme_fabrics 00:17:02.963 rmmod nvme_keyring 00:17:02.963 15:21:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:02.963 15:21:11 -- nvmf/common.sh@124 -- # set -e 00:17:02.963 15:21:11 -- nvmf/common.sh@125 -- # return 0 00:17:02.963 15:21:11 -- nvmf/common.sh@478 -- # '[' -n 76780 ']' 00:17:02.963 15:21:11 -- nvmf/common.sh@479 -- # killprocess 76780 00:17:02.963 15:21:11 -- common/autotest_common.sh@936 -- # '[' -z 76780 ']' 00:17:02.963 15:21:11 -- common/autotest_common.sh@940 -- # kill -0 76780 00:17:02.963 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (76780) - No such process 00:17:02.963 Process with pid 76780 is not found 00:17:02.963 15:21:11 -- common/autotest_common.sh@963 -- # echo 'Process with pid 76780 is not found' 00:17:02.963 15:21:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:02.963 15:21:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:02.963 15:21:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:02.963 15:21:11 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:02.963 15:21:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:02.963 15:21:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.963 15:21:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.963 15:21:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.963 15:21:12 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:02.963 00:17:02.963 real 0m38.699s 00:17:02.963 user 1m13.797s 00:17:02.963 sys 0m9.880s 00:17:02.963 15:21:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:02.963 15:21:12 -- common/autotest_common.sh@10 -- # set +x 00:17:02.963 ************************************ 00:17:02.963 END TEST nvmf_digest 00:17:02.963 ************************************ 00:17:02.963 15:21:12 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:17:02.963 15:21:12 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:17:02.963 15:21:12 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:02.963 15:21:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:02.963 15:21:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:02.963 15:21:12 -- common/autotest_common.sh@10 -- # set +x 00:17:02.963 ************************************ 00:17:02.963 START TEST nvmf_multipath 00:17:02.963 ************************************ 00:17:02.963 15:21:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:02.963 * Looking for test storage... 
00:17:03.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:03.222 15:21:12 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:03.222 15:21:12 -- nvmf/common.sh@7 -- # uname -s 00:17:03.222 15:21:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.222 15:21:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.222 15:21:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.222 15:21:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.222 15:21:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.222 15:21:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.222 15:21:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.222 15:21:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.222 15:21:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.222 15:21:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.222 15:21:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:17:03.222 15:21:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:17:03.222 15:21:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.222 15:21:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.222 15:21:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:03.222 15:21:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.222 15:21:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:03.222 15:21:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.222 15:21:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.222 15:21:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.222 15:21:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.222 15:21:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.222 15:21:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.222 15:21:12 -- paths/export.sh@5 -- # export PATH 00:17:03.222 15:21:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.222 15:21:12 -- nvmf/common.sh@47 -- # : 0 00:17:03.222 15:21:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:03.222 15:21:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:03.222 15:21:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.222 15:21:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.222 15:21:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.222 15:21:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:03.222 15:21:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:03.222 15:21:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:03.222 15:21:12 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:03.222 15:21:12 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:03.222 15:21:12 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.222 15:21:12 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:03.222 15:21:12 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:03.222 15:21:12 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:03.222 15:21:12 -- host/multipath.sh@30 -- # nvmftestinit 00:17:03.222 15:21:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:03.222 15:21:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.223 15:21:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:03.223 15:21:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:03.223 15:21:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:03.223 15:21:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.223 15:21:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.223 15:21:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.223 15:21:12 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:03.223 15:21:12 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:03.223 15:21:12 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:03.223 15:21:12 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:03.223 15:21:12 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:03.223 15:21:12 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:03.223 15:21:12 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.223 15:21:12 -- nvmf/common.sh@142 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.223 15:21:12 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:03.223 15:21:12 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:03.223 15:21:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:03.223 15:21:12 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:03.223 15:21:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:03.223 15:21:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.223 15:21:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:03.223 15:21:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:03.223 15:21:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:03.223 15:21:12 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:03.223 15:21:12 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:03.223 15:21:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:03.223 Cannot find device "nvmf_tgt_br" 00:17:03.223 15:21:12 -- nvmf/common.sh@155 -- # true 00:17:03.223 15:21:12 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:03.223 Cannot find device "nvmf_tgt_br2" 00:17:03.223 15:21:12 -- nvmf/common.sh@156 -- # true 00:17:03.223 15:21:12 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:03.223 15:21:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:03.223 Cannot find device "nvmf_tgt_br" 00:17:03.223 15:21:12 -- nvmf/common.sh@158 -- # true 00:17:03.223 15:21:12 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:03.223 Cannot find device "nvmf_tgt_br2" 00:17:03.223 15:21:12 -- nvmf/common.sh@159 -- # true 00:17:03.223 15:21:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:03.223 15:21:12 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:03.223 15:21:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:03.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.223 15:21:12 -- nvmf/common.sh@162 -- # true 00:17:03.223 15:21:12 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:03.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.223 15:21:12 -- nvmf/common.sh@163 -- # true 00:17:03.223 15:21:12 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:03.223 15:21:12 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:03.223 15:21:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:03.223 15:21:12 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:03.223 15:21:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:03.223 15:21:12 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:03.223 15:21:12 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:03.223 15:21:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:03.223 15:21:12 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:03.223 15:21:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:03.223 15:21:12 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:03.223 15:21:12 -- nvmf/common.sh@185 -- # ip 
link set nvmf_tgt_br up 00:17:03.223 15:21:12 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:03.223 15:21:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:03.481 15:21:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:03.481 15:21:12 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:03.481 15:21:12 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:03.481 15:21:12 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:03.481 15:21:12 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:03.481 15:21:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:03.481 15:21:12 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:03.481 15:21:12 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:03.481 15:21:12 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:03.481 15:21:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:03.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:17:03.482 00:17:03.482 --- 10.0.0.2 ping statistics --- 00:17:03.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.482 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:03.482 15:21:12 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:03.482 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:03.482 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:17:03.482 00:17:03.482 --- 10.0.0.3 ping statistics --- 00:17:03.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.482 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:03.482 15:21:12 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:03.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:03.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:17:03.482 00:17:03.482 --- 10.0.0.1 ping statistics --- 00:17:03.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.482 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:03.482 15:21:12 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.482 15:21:12 -- nvmf/common.sh@422 -- # return 0 00:17:03.482 15:21:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:03.482 15:21:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.482 15:21:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:03.482 15:21:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:03.482 15:21:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.482 15:21:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:03.482 15:21:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:03.482 15:21:12 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:03.482 15:21:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:03.482 15:21:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:03.482 15:21:12 -- common/autotest_common.sh@10 -- # set +x 00:17:03.482 15:21:12 -- nvmf/common.sh@470 -- # nvmfpid=77258 00:17:03.482 15:21:12 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:03.482 15:21:12 -- nvmf/common.sh@471 -- # waitforlisten 77258 00:17:03.482 15:21:12 -- common/autotest_common.sh@817 -- # '[' -z 77258 ']' 00:17:03.482 15:21:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.482 15:21:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:03.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.482 15:21:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.482 15:21:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:03.482 15:21:12 -- common/autotest_common.sh@10 -- # set +x 00:17:03.482 [2024-04-24 15:21:12.630950] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:17:03.482 [2024-04-24 15:21:12.631051] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.740 [2024-04-24 15:21:12.765175] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:03.740 [2024-04-24 15:21:12.882124] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.740 [2024-04-24 15:21:12.882175] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.740 [2024-04-24 15:21:12.882186] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.740 [2024-04-24 15:21:12.882195] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.740 [2024-04-24 15:21:12.882203] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
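Everything from here on runs over the veth/bridge topology that nvmf_veth_init just built. Condensed into a standalone sketch (interface names, addresses, and commands are taken from the trace above; the stale-device cleanup at the top and error handling are omitted, and the trailing pid handling is illustrative):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3          # initiator reaches both target addresses
  # The target itself then runs inside the namespace:
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!    # this run got pid 77258; wait for /var/tmp/spdk.sock before issuing RPCs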
00:17:03.740 [2024-04-24 15:21:12.882359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.740 [2024-04-24 15:21:12.882367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.677 15:21:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:04.677 15:21:13 -- common/autotest_common.sh@850 -- # return 0 00:17:04.677 15:21:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:04.677 15:21:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:04.677 15:21:13 -- common/autotest_common.sh@10 -- # set +x 00:17:04.677 15:21:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.677 15:21:13 -- host/multipath.sh@33 -- # nvmfapp_pid=77258 00:17:04.677 15:21:13 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:04.677 [2024-04-24 15:21:13.900126] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.934 15:21:13 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:04.934 Malloc0 00:17:05.240 15:21:14 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:05.240 15:21:14 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:05.498 15:21:14 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.756 [2024-04-24 15:21:14.900445] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.756 15:21:14 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:06.014 [2024-04-24 15:21:15.156616] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:06.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:06.014 15:21:15 -- host/multipath.sh@44 -- # bdevperf_pid=77318 00:17:06.014 15:21:15 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:06.014 15:21:15 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:06.014 15:21:15 -- host/multipath.sh@47 -- # waitforlisten 77318 /var/tmp/bdevperf.sock 00:17:06.014 15:21:15 -- common/autotest_common.sh@817 -- # '[' -z 77318 ']' 00:17:06.014 15:21:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:06.014 15:21:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:06.014 15:21:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:06.014 15:21:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:06.014 15:21:15 -- common/autotest_common.sh@10 -- # set +x 00:17:06.949 15:21:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:06.949 15:21:16 -- common/autotest_common.sh@850 -- # return 0 00:17:06.949 15:21:16 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:07.513 15:21:16 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:07.771 Nvme0n1 00:17:07.771 15:21:16 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:08.028 Nvme0n1 00:17:08.028 15:21:17 -- host/multipath.sh@78 -- # sleep 1 00:17:08.028 15:21:17 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:09.403 15:21:18 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:09.403 15:21:18 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:09.403 15:21:18 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:09.663 15:21:18 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:09.663 15:21:18 -- host/multipath.sh@65 -- # dtrace_pid=77363 00:17:09.663 15:21:18 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77258 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:09.663 15:21:18 -- host/multipath.sh@66 -- # sleep 6 00:17:16.239 15:21:24 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:16.239 15:21:24 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:16.239 15:21:24 -- host/multipath.sh@67 -- # active_port=4421 00:17:16.239 15:21:24 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:16.239 Attaching 4 probes... 
00:17:16.239 @path[10.0.0.2, 4421]: 16925 00:17:16.239 @path[10.0.0.2, 4421]: 17481 00:17:16.239 @path[10.0.0.2, 4421]: 17148 00:17:16.239 @path[10.0.0.2, 4421]: 17008 00:17:16.240 @path[10.0.0.2, 4421]: 17176 00:17:16.240 15:21:24 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:16.240 15:21:24 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:16.240 15:21:24 -- host/multipath.sh@69 -- # sed -n 1p 00:17:16.240 15:21:24 -- host/multipath.sh@69 -- # port=4421 00:17:16.240 15:21:24 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:16.240 15:21:24 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:16.240 15:21:24 -- host/multipath.sh@72 -- # kill 77363 00:17:16.240 15:21:24 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:16.240 15:21:24 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:16.240 15:21:24 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:16.240 15:21:25 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:16.497 15:21:25 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:16.497 15:21:25 -- host/multipath.sh@65 -- # dtrace_pid=77476 00:17:16.497 15:21:25 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77258 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:16.497 15:21:25 -- host/multipath.sh@66 -- # sleep 6 00:17:23.055 15:21:31 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:23.056 15:21:31 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:23.056 15:21:31 -- host/multipath.sh@67 -- # active_port=4420 00:17:23.056 15:21:31 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:23.056 Attaching 4 probes... 
00:17:23.056 @path[10.0.0.2, 4420]: 17125 00:17:23.056 @path[10.0.0.2, 4420]: 17388 00:17:23.056 @path[10.0.0.2, 4420]: 17289 00:17:23.056 @path[10.0.0.2, 4420]: 17432 00:17:23.056 @path[10.0.0.2, 4420]: 17581 00:17:23.056 15:21:31 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:23.056 15:21:31 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:23.056 15:21:31 -- host/multipath.sh@69 -- # sed -n 1p 00:17:23.056 15:21:31 -- host/multipath.sh@69 -- # port=4420 00:17:23.056 15:21:31 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:23.056 15:21:31 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:23.056 15:21:31 -- host/multipath.sh@72 -- # kill 77476 00:17:23.056 15:21:31 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:23.056 15:21:31 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:23.056 15:21:31 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:23.056 15:21:32 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:23.314 15:21:32 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:23.314 15:21:32 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77258 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:23.314 15:21:32 -- host/multipath.sh@65 -- # dtrace_pid=77594 00:17:23.314 15:21:32 -- host/multipath.sh@66 -- # sleep 6 00:17:29.913 15:21:38 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:29.913 15:21:38 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:29.913 15:21:38 -- host/multipath.sh@67 -- # active_port=4421 00:17:29.913 15:21:38 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:29.913 Attaching 4 probes... 
00:17:29.913 @path[10.0.0.2, 4421]: 13379 00:17:29.913 @path[10.0.0.2, 4421]: 17202 00:17:29.913 @path[10.0.0.2, 4421]: 17141 00:17:29.913 @path[10.0.0.2, 4421]: 17107 00:17:29.913 @path[10.0.0.2, 4421]: 17010 00:17:29.913 15:21:38 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:29.913 15:21:38 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:29.913 15:21:38 -- host/multipath.sh@69 -- # sed -n 1p 00:17:29.913 15:21:38 -- host/multipath.sh@69 -- # port=4421 00:17:29.913 15:21:38 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:29.913 15:21:38 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:29.913 15:21:38 -- host/multipath.sh@72 -- # kill 77594 00:17:29.913 15:21:38 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:29.913 15:21:38 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:29.913 15:21:38 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:29.913 15:21:38 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:30.172 15:21:39 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:30.172 15:21:39 -- host/multipath.sh@65 -- # dtrace_pid=77706 00:17:30.172 15:21:39 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77258 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:30.172 15:21:39 -- host/multipath.sh@66 -- # sleep 6 00:17:36.795 15:21:45 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:36.795 15:21:45 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:36.795 15:21:45 -- host/multipath.sh@67 -- # active_port= 00:17:36.795 15:21:45 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:36.795 Attaching 4 probes... 
00:17:36.795 00:17:36.795 00:17:36.795 00:17:36.795 00:17:36.795 00:17:36.795 15:21:45 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:36.795 15:21:45 -- host/multipath.sh@69 -- # sed -n 1p 00:17:36.795 15:21:45 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:36.795 15:21:45 -- host/multipath.sh@69 -- # port= 00:17:36.795 15:21:45 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:36.795 15:21:45 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:36.795 15:21:45 -- host/multipath.sh@72 -- # kill 77706 00:17:36.795 15:21:45 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:36.795 15:21:45 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:36.795 15:21:45 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:36.795 15:21:45 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:36.795 15:21:45 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:36.795 15:21:45 -- host/multipath.sh@65 -- # dtrace_pid=77824 00:17:36.795 15:21:45 -- host/multipath.sh@66 -- # sleep 6 00:17:36.795 15:21:45 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77258 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:43.353 15:21:52 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:43.353 15:21:52 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:43.353 15:21:52 -- host/multipath.sh@67 -- # active_port=4421 00:17:43.353 15:21:52 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:43.353 Attaching 4 probes... 
00:17:43.353 @path[10.0.0.2, 4421]: 16697 00:17:43.353 @path[10.0.0.2, 4421]: 16902 00:17:43.353 @path[10.0.0.2, 4421]: 16888 00:17:43.353 @path[10.0.0.2, 4421]: 16813 00:17:43.353 @path[10.0.0.2, 4421]: 16238 00:17:43.353 15:21:52 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:43.353 15:21:52 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:43.353 15:21:52 -- host/multipath.sh@69 -- # sed -n 1p 00:17:43.353 15:21:52 -- host/multipath.sh@69 -- # port=4421 00:17:43.353 15:21:52 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:43.353 15:21:52 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:43.353 15:21:52 -- host/multipath.sh@72 -- # kill 77824 00:17:43.353 15:21:52 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:43.353 15:21:52 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:43.353 [2024-04-24 15:21:52.536803] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.536882] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.536894] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.536903] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.536912] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.536921] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.536931] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.536940] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.536948] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.536957] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.536965] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.536973] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.536982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.536990] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.536998] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.537007] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.537015] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.537024] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.537032] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.537040] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.537048] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.537056] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.537064] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 [2024-04-24 15:21:52.537072] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c60 is same with the state(5) to be set 00:17:43.353 15:21:52 -- host/multipath.sh@101 -- # sleep 1 00:17:44.727 15:21:53 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:44.727 15:21:53 -- host/multipath.sh@65 -- # dtrace_pid=77942 00:17:44.727 15:21:53 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77258 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:44.727 15:21:53 -- host/multipath.sh@66 -- # sleep 6 00:17:51.390 15:21:59 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:51.390 15:21:59 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:51.390 15:21:59 -- host/multipath.sh@67 -- # active_port=4420 00:17:51.390 15:21:59 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:51.390 Attaching 4 probes... 
00:17:51.390 @path[10.0.0.2, 4420]: 16666 00:17:51.390 @path[10.0.0.2, 4420]: 16948 00:17:51.390 @path[10.0.0.2, 4420]: 17058 00:17:51.390 @path[10.0.0.2, 4420]: 17062 00:17:51.390 @path[10.0.0.2, 4420]: 17050 00:17:51.390 15:21:59 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:51.390 15:21:59 -- host/multipath.sh@69 -- # sed -n 1p 00:17:51.390 15:21:59 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:51.390 15:21:59 -- host/multipath.sh@69 -- # port=4420 00:17:51.390 15:21:59 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:51.390 15:21:59 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:51.390 15:21:59 -- host/multipath.sh@72 -- # kill 77942 00:17:51.390 15:21:59 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:51.390 15:21:59 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:51.390 [2024-04-24 15:22:00.021650] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:51.390 15:22:00 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:51.390 15:22:00 -- host/multipath.sh@111 -- # sleep 6 00:17:57.996 15:22:06 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:17:57.996 15:22:06 -- host/multipath.sh@65 -- # dtrace_pid=78122 00:17:57.996 15:22:06 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 77258 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:57.996 15:22:06 -- host/multipath.sh@66 -- # sleep 6 00:18:03.310 15:22:12 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:03.310 15:22:12 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:03.310 15:22:12 -- host/multipath.sh@67 -- # active_port=4421 00:18:03.310 15:22:12 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:03.310 Attaching 4 probes... 
00:18:03.310 @path[10.0.0.2, 4421]: 16786 00:18:03.310 @path[10.0.0.2, 4421]: 17018 00:18:03.310 @path[10.0.0.2, 4421]: 17020 00:18:03.310 @path[10.0.0.2, 4421]: 17008 00:18:03.310 @path[10.0.0.2, 4421]: 16981 00:18:03.310 15:22:12 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:03.310 15:22:12 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:03.310 15:22:12 -- host/multipath.sh@69 -- # sed -n 1p 00:18:03.310 15:22:12 -- host/multipath.sh@69 -- # port=4421 00:18:03.310 15:22:12 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:03.310 15:22:12 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:03.310 15:22:12 -- host/multipath.sh@72 -- # kill 78122 00:18:03.310 15:22:12 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:03.310 15:22:12 -- host/multipath.sh@114 -- # killprocess 77318 00:18:03.310 15:22:12 -- common/autotest_common.sh@936 -- # '[' -z 77318 ']' 00:18:03.310 15:22:12 -- common/autotest_common.sh@940 -- # kill -0 77318 00:18:03.310 15:22:12 -- common/autotest_common.sh@941 -- # uname 00:18:03.310 15:22:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:03.310 15:22:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77318 00:18:03.569 killing process with pid 77318 00:18:03.569 15:22:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:03.569 15:22:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:03.569 15:22:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77318' 00:18:03.569 15:22:12 -- common/autotest_common.sh@955 -- # kill 77318 00:18:03.569 15:22:12 -- common/autotest_common.sh@960 -- # wait 77318 00:18:03.569 Connection closed with partial response: 00:18:03.570 00:18:03.570 00:18:03.835 15:22:12 -- host/multipath.sh@116 -- # wait 77318 00:18:03.835 15:22:12 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:03.835 [2024-04-24 15:21:15.214869] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:18:03.835 [2024-04-24 15:21:15.214993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77318 ] 00:18:03.835 [2024-04-24 15:21:15.350037] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.835 [2024-04-24 15:21:15.479612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.835 Running I/O for 90 seconds... 
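Each set_ANA_state / confirm_io_on_port cycle traced earlier follows the same pattern: flip the ANA state of the two listeners over the RPC socket, attach the nvmf_path.bt bpftrace probe to the target, let bdevperf run for a few seconds, then check that the port reported as optimized by nvmf_subsystem_get_listeners is the one the traced I/O actually hit. A condensed sketch (the RPC arguments and the jq/awk/cut/sed parsing are copied from the trace; the trace-file path and the pid variables are illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Flip the ANA states of the two listeners (ports 4420 and 4421 on 10.0.0.2).
  $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n optimized

  # Attach the path-tracing bpftrace script to the running target pid.
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$nvmfapp_pid" \
      /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt &> trace.txt &
  dtrace_pid=$!
  sleep 6

  # Port the target reports as optimized...
  active_port=$($rpc nvmf_subsystem_get_listeners "$nqn" |
      jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
  # ...and port the traced I/O actually went to ("@path[10.0.0.2, 4421]: <iops>" lines).
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)

  kill "$dtrace_pid"
  rm -f trace.txt
  [[ $port == "$active_port" ]]    # the test asserts these match

The two bdev_nvme_attach_controller calls earlier (the second one against port 4421 with -x multipath) are what give bdevperf a second path to move I/O to when a listener is set inaccessible, which is consistent with the all-inaccessible iteration above reporting no @path samples at all.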
00:18:03.835 [2024-04-24 15:21:25.506530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.835 [2024-04-24 15:21:25.506619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:03.835 [2024-04-24 15:21:25.506682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.835 [2024-04-24 15:21:25.506703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:03.835 [2024-04-24 15:21:25.506726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.835 [2024-04-24 15:21:25.506741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:03.835 [2024-04-24 15:21:25.506763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.835 [2024-04-24 15:21:25.506777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:03.835 [2024-04-24 15:21:25.506799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.835 [2024-04-24 15:21:25.506813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:03.835 [2024-04-24 15:21:25.506835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.835 [2024-04-24 15:21:25.506849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.835 [2024-04-24 15:21:25.506870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.835 [2024-04-24 15:21:25.506885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.835 [2024-04-24 15:21:25.506906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.835 [2024-04-24 15:21:25.506920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:03.835 [2024-04-24 15:21:25.506941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.835 [2024-04-24 15:21:25.506956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:03.835 [2024-04-24 15:21:25.506977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.835 [2024-04-24 15:21:25.506992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:03.835 [2024-04-24 15:21:25.507013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.835 [2024-04-24 15:21:25.507053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:03.835 [2024-04-24 15:21:25.507077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.835 [2024-04-24 15:21:25.507092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:03.835 [2024-04-24 15:21:25.507113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.835 [2024-04-24 15:21:25.507127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.836 [2024-04-24 15:21:25.507162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.836 [2024-04-24 15:21:25.507201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.836 [2024-04-24 15:21:25.507237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.836 [2024-04-24 15:21:25.507272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.836 [2024-04-24 15:21:25.507308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.836 [2024-04-24 15:21:25.507344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.836 [2024-04-24 15:21:25.507380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.836 [2024-04-24 15:21:25.507415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.836 [2024-04-24 15:21:25.507467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.836 [2024-04-24 15:21:25.507502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.836 [2024-04-24 15:21:25.507550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.836 [2024-04-24 15:21:25.507610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.836 [2024-04-24 15:21:25.507646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.836 [2024-04-24 15:21:25.507682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.836 [2024-04-24 15:21:25.507718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.836 [2024-04-24 15:21:25.507754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:03.836 [2024-04-24 15:21:25.507790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.836 [2024-04-24 15:21:25.507827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.836 [2024-04-24 15:21:25.507863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.836 [2024-04-24 15:21:25.507900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.836 [2024-04-24 15:21:25.507935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.507958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.836 [2024-04-24 15:21:25.507973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.508003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.836 [2024-04-24 15:21:25.508019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.508040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.836 [2024-04-24 15:21:25.508055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.508076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.836 [2024-04-24 15:21:25.508090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.508111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.836 [2024-04-24 15:21:25.508126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.836 [2024-04-24 15:21:25.508147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 
[Repeated SPDK nvme_qpair *NOTICE* output from nvme_qpair.c:243 (nvme_io_qpair_print_command) and nvme_qpair.c:474 (spdk_nvme_print_completion): interleaved READ and WRITE commands on sqid:1 nsid:1 (READ lba 23400-23776 with WRITE lba 24032-24352 around 2024-04-24 15:21:25, and READ lba 90752-91064 with WRITE lba 91072-91688 around 15:21:32), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 with cdw0:0 p:0 m:0 dnr:0; console timestamps 00:18:03.836 through 00:18:03.842.]
(03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:32.118946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:32.118961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:32.118999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:32.119014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:32.119043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:32.119057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:32.119094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:32.119109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:32.119137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:32.119152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:32.119179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:32.119194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:32.119222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:32.119237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:32.119268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:32.119284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:32.119312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:32.119327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:32.119355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:32.119378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.164935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.165019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.165102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.165141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.165177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.165214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.165250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.165286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.165323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.842 [2024-04-24 15:21:39.165359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:03.842 [2024-04-24 15:21:39.165395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.842 [2024-04-24 15:21:39.165447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.842 [2024-04-24 15:21:39.165513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.842 [2024-04-24 15:21:39.165553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.842 [2024-04-24 15:21:39.165589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.842 [2024-04-24 15:21:39.165625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:111480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.842 [2024-04-24 15:21:39.165661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.165912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.165955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.165978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.165992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.166015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:105 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.166029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.166052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:112032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.166066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.166088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.166106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.166129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:112048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.166144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.166166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:112056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.166181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.166216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.166232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:03.842 [2024-04-24 15:21:39.166254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.842 [2024-04-24 15:21:39.166269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:112080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.166306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:112088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.166344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.166381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166404] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.166418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.166474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.166512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.166549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.166586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.166624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.166662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.166709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.166747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.166785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 
sqhd:004a p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.166822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.166859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.166895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.166933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.166970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.166992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.167007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.167044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.167081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:111608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.167119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.167168] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.167207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:112144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.167245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.167283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.167320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.167358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.167395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.167444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.167484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:112200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.167521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 
15:21:39.167558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:112216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.167595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.167640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.167678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.167716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.843 [2024-04-24 15:21:39.167753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.167790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:03.843 [2024-04-24 15:21:39.167812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.843 [2024-04-24 15:21:39.167827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.167850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:111632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.844 [2024-04-24 15:21:39.167871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.167894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.844 [2024-04-24 15:21:39.167909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.167931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111648 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.844 [2024-04-24 15:21:39.167946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.167968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.844 [2024-04-24 15:21:39.167983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.844 [2024-04-24 15:21:39.168021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.844 [2024-04-24 15:21:39.168059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.844 [2024-04-24 15:21:39.168102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.844 [2024-04-24 15:21:39.168141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.844 [2024-04-24 15:21:39.168179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.844 [2024-04-24 15:21:39.168216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.844 [2024-04-24 15:21:39.168253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:111720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.844 [2024-04-24 15:21:39.168290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168313] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.844 [2024-04-24 15:21:39.168327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.844 [2024-04-24 15:21:39.168365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:112256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.844 [2024-04-24 15:21:39.168407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.844 [2024-04-24 15:21:39.168459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.844 [2024-04-24 15:21:39.168498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.844 [2024-04-24 15:21:39.168536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.844 [2024-04-24 15:21:39.168573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.844 [2024-04-24 15:21:39.168620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.844 [2024-04-24 15:21:39.168658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.844 [2024-04-24 15:21:39.168695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 
m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.844 [2024-04-24 15:21:39.168732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.844 [2024-04-24 15:21:39.168769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.844 [2024-04-24 15:21:39.168821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.844 [2024-04-24 15:21:39.168858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:112352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.844 [2024-04-24 15:21:39.168896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.844 [2024-04-24 15:21:39.168933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.844 [2024-04-24 15:21:39.168976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.168999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:112376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.844 [2024-04-24 15:21:39.169013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.169035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.844 [2024-04-24 15:21:39.169050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.169080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.844 [2024-04-24 15:21:39.169096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.169118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.844 [2024-04-24 15:21:39.169134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:03.844 [2024-04-24 15:21:39.169158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.169173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:111776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.169210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.169259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:111792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.169296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.169333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.169370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:111816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.169408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.169460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.169497] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.169535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.169579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.169617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:111864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.169654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.845 [2024-04-24 15:21:39.169696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.845 [2024-04-24 15:21:39.169734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.845 [2024-04-24 15:21:39.169770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.845 [2024-04-24 15:21:39.169807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.845 [2024-04-24 15:21:39.169843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112424 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:03.845 [2024-04-24 15:21:39.169886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.845 [2024-04-24 15:21:39.169924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.845 [2024-04-24 15:21:39.169961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.169984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.169998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.170021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.170043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.170066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.170081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.170103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.170118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.170141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:111904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.170156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.170178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.170193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.170215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:111920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.170230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:39.170252] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:83 nsid:1 lba:111928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:39.170267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:52.537176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:52.537254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:52.537300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:52.537332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:52.537362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:52.537394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:52.537423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:52.537475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:52.537507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:52.537533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:52.537561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:52.537586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:52.537646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:52.537674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:52.537703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:52.537730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:52.537757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.845 [2024-04-24 15:21:52.537782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:52.537811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 
nsid:1 lba:11280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.845 [2024-04-24 15:21:52.537836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:52.537862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.845 [2024-04-24 15:21:52.537887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:52.537914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.845 [2024-04-24 15:21:52.537938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.845 [2024-04-24 15:21:52.537966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.845 [2024-04-24 15:21:52.538004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.538073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.538124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.538174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.538228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.538281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.538350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11360 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:03.846 [2024-04-24 15:21:52.538405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.538474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.538530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.846 [2024-04-24 15:21:52.538582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.846 [2024-04-24 15:21:52.538633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.846 [2024-04-24 15:21:52.538684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.846 [2024-04-24 15:21:52.538736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.846 [2024-04-24 15:21:52.538786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.846 [2024-04-24 15:21:52.538837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.846 [2024-04-24 15:21:52.538888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.846 [2024-04-24 
15:21:52.538938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.538965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.538990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.539061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.539114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.539166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.539216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.539268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.539319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.539370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.539421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.539495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.539547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.539599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.539650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.539715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.539776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.846 [2024-04-24 15:21:52.539826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.846 [2024-04-24 15:21:52.539878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.846 [2024-04-24 15:21:52.539928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.539954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.846 [2024-04-24 15:21:52.539980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.540007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.846 [2024-04-24 15:21:52.540032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.540058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.846 [2024-04-24 15:21:52.540082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.540108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.846 [2024-04-24 15:21:52.540132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.846 [2024-04-24 15:21:52.540158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.540182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.540209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.540241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.540267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.540291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.540317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.540342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.540368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.540405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.540452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.540480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.540508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.540534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.540563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.540588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.540615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.540639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.540665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.540690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.540717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.540750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.540778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.540802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.540846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.540872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.540899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.540923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.540949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.540975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.541001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.541026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.541053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.541077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.541118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.541145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:03.847 [2024-04-24 15:21:52.541173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.847 [2024-04-24 15:21:52.541197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.541223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.847 [2024-04-24 15:21:52.541259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.541286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.847 [2024-04-24 15:21:52.541311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.541339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.847 [2024-04-24 15:21:52.541364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.541390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.847 [2024-04-24 15:21:52.541416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.541462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.847 [2024-04-24 15:21:52.541488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.541514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.847 [2024-04-24 15:21:52.541543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.541570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.847 [2024-04-24 15:21:52.541595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.541621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.847 [2024-04-24 15:21:52.541646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.541673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.847 [2024-04-24 15:21:52.541697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.541724] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.847 [2024-04-24 15:21:52.541750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.541776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.847 [2024-04-24 15:21:52.541814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.541842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.847 [2024-04-24 15:21:52.541868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.541902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.847 [2024-04-24 15:21:52.541927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.541953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.541977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.542005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.542030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.542056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.542081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.542109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.542135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.542162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.542196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.542224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.542249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.542276] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.542301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.542327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.542352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.542379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.542403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.847 [2024-04-24 15:21:52.542449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.847 [2024-04-24 15:21:52.542477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.542518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.848 [2024-04-24 15:21:52.542545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.542571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.848 [2024-04-24 15:21:52.542596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.542621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.848 [2024-04-24 15:21:52.542646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.542672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.848 [2024-04-24 15:21:52.542696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.542724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.848 [2024-04-24 15:21:52.542748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.542776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.848 [2024-04-24 15:21:52.542801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.542827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 
lba:11624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.848 [2024-04-24 15:21:52.542852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.542881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.848 [2024-04-24 15:21:52.542906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.542932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.848 [2024-04-24 15:21:52.542956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.542982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.848 [2024-04-24 15:21:52.543008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.848 [2024-04-24 15:21:52.543063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.848 [2024-04-24 15:21:52.543114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.848 [2024-04-24 15:21:52.543166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.848 [2024-04-24 15:21:52.543231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.848 [2024-04-24 15:21:52.543282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.848 [2024-04-24 15:21:52.543331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:03.848 [2024-04-24 15:21:52.543382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.848 [2024-04-24 15:21:52.543450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.848 [2024-04-24 15:21:52.543505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.848 [2024-04-24 15:21:52.543558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.848 [2024-04-24 15:21:52.543608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.848 [2024-04-24 15:21:52.543659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.848 [2024-04-24 15:21:52.543710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.848 [2024-04-24 15:21:52.543760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.848 [2024-04-24 15:21:52.543808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.848 [2024-04-24 15:21:52.543872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.848 [2024-04-24 15:21:52.543928] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.543956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.848 [2024-04-24 15:21:52.543980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.544006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.848 [2024-04-24 15:21:52.544028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.544055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.848 [2024-04-24 15:21:52.544080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.544112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.848 [2024-04-24 15:21:52.544137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.544163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.848 [2024-04-24 15:21:52.544187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.848 [2024-04-24 15:21:52.544213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.849 [2024-04-24 15:21:52.544242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.849 [2024-04-24 15:21:52.544325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:03.849 [2024-04-24 15:21:52.544351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:03.849 [2024-04-24 15:21:52.544371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11272 len:8 PRP1 0x0 PRP2 0x0 00:18:03.849 [2024-04-24 15:21:52.544395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.849 [2024-04-24 15:21:52.544498] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe1d6d0 was disconnected and freed. reset controller. 
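The wall of *NOTICE* lines above is one event reported once per queued command: when the active path is torn down, the submission queue behind qpair 0xe1d6d0 is deleted, so every outstanding READ and WRITE is manually completed with ABORTED - SQ DELETION before the controller reset begins. When sifting through a capture of this output, a short shell one-liner is usually enough to summarize it; this is only an illustration, and capture.log below is a placeholder for wherever the console output was saved, not a file produced by this run:

  # Count the aborted queued commands by opcode (illustrative helper, not part of multipath.sh)
  grep -oE 'print_command: \*NOTICE\*: (READ|WRITE)' capture.log \
    | awk '{print $NF}' | sort | uniq -c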
00:18:03.849 [2024-04-24 15:21:52.544650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.849 [2024-04-24 15:21:52.544695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.849 [2024-04-24 15:21:52.544722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.849 [2024-04-24 15:21:52.544747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.849 [2024-04-24 15:21:52.544772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.849 [2024-04-24 15:21:52.544797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.849 [2024-04-24 15:21:52.544858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.849 [2024-04-24 15:21:52.544884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.849 [2024-04-24 15:21:52.544907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe23a20 is same with the state(5) to be set 00:18:03.849 [2024-04-24 15:21:52.546446] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:03.849 [2024-04-24 15:21:52.546505] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe23a20 (9): Bad file descriptor 00:18:03.849 [2024-04-24 15:21:52.547019] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:03.849 [2024-04-24 15:21:52.547133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:03.849 [2024-04-24 15:21:52.547216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:03.849 [2024-04-24 15:21:52.547253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe23a20 with addr=10.0.0.2, port=4421 00:18:03.849 [2024-04-24 15:21:52.547282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe23a20 is same with the state(5) to be set 00:18:03.849 [2024-04-24 15:21:52.547495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe23a20 (9): Bad file descriptor 00:18:03.849 [2024-04-24 15:21:52.547587] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:03.849 [2024-04-24 15:21:52.547622] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:03.849 [2024-04-24 15:21:52.547648] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:03.849 [2024-04-24 15:21:52.547700] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:03.849 [2024-04-24 15:21:52.547729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:03.849 [2024-04-24 15:22:02.603315] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
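The tail of the log shows the failover itself: the first reconnect attempt to 10.0.0.2 port 4421 at 15:21:52 is refused (errno 111, ECONNREFUSED), the controller briefly sits in the failed state, and a later retry succeeds, logging "Resetting controller successful" at 15:22:02. A minimal sketch for watching that transition from outside the test, assuming the I/O generator (bdevperf) was started with an RPC socket; the socket path and the controller name Nvme0 are assumptions taken from the surrounding output, not values read from multipath.sh:

  # Poll the host-side NVMe controller state once per second during the failover (illustration only)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  while sleep 1; do
      # Nvme0 is inferred from the Nvme0n1 bdev in the summary below; adjust to the real name
      "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n Nvme0 || true
  done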
00:18:03.849 Received shutdown signal, test time was about 55.214848 seconds 00:18:03.849 00:18:03.849 Latency(us) 00:18:03.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.849 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:03.849 Verification LBA range: start 0x0 length 0x4000 00:18:03.849 Nvme0n1 : 55.21 7260.55 28.36 0.00 0.00 17597.21 696.32 7046430.72 00:18:03.849 =================================================================================================================== 00:18:03.849 Total : 7260.55 28.36 0.00 0.00 17597.21 696.32 7046430.72 00:18:03.849 15:22:12 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.107 15:22:13 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:04.107 15:22:13 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:04.107 15:22:13 -- host/multipath.sh@125 -- # nvmftestfini 00:18:04.107 15:22:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:04.107 15:22:13 -- nvmf/common.sh@117 -- # sync 00:18:04.107 15:22:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:04.107 15:22:13 -- nvmf/common.sh@120 -- # set +e 00:18:04.107 15:22:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:04.107 15:22:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:04.107 rmmod nvme_tcp 00:18:04.107 rmmod nvme_fabrics 00:18:04.107 rmmod nvme_keyring 00:18:04.107 15:22:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:04.107 15:22:13 -- nvmf/common.sh@124 -- # set -e 00:18:04.107 15:22:13 -- nvmf/common.sh@125 -- # return 0 00:18:04.107 15:22:13 -- nvmf/common.sh@478 -- # '[' -n 77258 ']' 00:18:04.107 15:22:13 -- nvmf/common.sh@479 -- # killprocess 77258 00:18:04.107 15:22:13 -- common/autotest_common.sh@936 -- # '[' -z 77258 ']' 00:18:04.107 15:22:13 -- common/autotest_common.sh@940 -- # kill -0 77258 00:18:04.107 15:22:13 -- common/autotest_common.sh@941 -- # uname 00:18:04.107 15:22:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.107 15:22:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77258 00:18:04.107 killing process with pid 77258 00:18:04.107 15:22:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:04.107 15:22:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:04.107 15:22:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77258' 00:18:04.107 15:22:13 -- common/autotest_common.sh@955 -- # kill 77258 00:18:04.107 15:22:13 -- common/autotest_common.sh@960 -- # wait 77258 00:18:04.364 15:22:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:04.364 15:22:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:04.364 15:22:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:04.364 15:22:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:04.364 15:22:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:04.364 15:22:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.364 15:22:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.364 15:22:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.364 15:22:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:04.364 00:18:04.364 real 1m1.415s 00:18:04.364 user 2m50.843s 00:18:04.364 sys 0m18.035s 00:18:04.364 15:22:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:04.364 15:22:13 
-- common/autotest_common.sh@10 -- # set +x 00:18:04.364 ************************************ 00:18:04.364 END TEST nvmf_multipath 00:18:04.364 ************************************ 00:18:04.364 15:22:13 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:04.364 15:22:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:04.364 15:22:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:04.364 15:22:13 -- common/autotest_common.sh@10 -- # set +x 00:18:04.621 ************************************ 00:18:04.621 START TEST nvmf_timeout 00:18:04.621 ************************************ 00:18:04.621 15:22:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:04.621 * Looking for test storage... 00:18:04.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:04.621 15:22:13 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:04.621 15:22:13 -- nvmf/common.sh@7 -- # uname -s 00:18:04.621 15:22:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.621 15:22:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.621 15:22:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.621 15:22:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.621 15:22:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.622 15:22:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.622 15:22:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.622 15:22:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.622 15:22:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.622 15:22:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.622 15:22:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:18:04.622 15:22:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:18:04.622 15:22:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.622 15:22:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.622 15:22:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:04.622 15:22:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.622 15:22:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:04.622 15:22:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.622 15:22:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.622 15:22:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.622 15:22:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.622 15:22:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.622 15:22:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.622 15:22:13 -- paths/export.sh@5 -- # export PATH 00:18:04.622 15:22:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.622 15:22:13 -- nvmf/common.sh@47 -- # : 0 00:18:04.622 15:22:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:04.622 15:22:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:04.622 15:22:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.622 15:22:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.622 15:22:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.622 15:22:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:04.622 15:22:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:04.622 15:22:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:04.622 15:22:13 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:04.622 15:22:13 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:04.622 15:22:13 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:04.622 15:22:13 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:04.622 15:22:13 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.622 15:22:13 -- host/timeout.sh@19 -- # nvmftestinit 00:18:04.622 15:22:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:04.622 15:22:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.622 15:22:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:04.622 15:22:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:04.622 15:22:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:04.622 15:22:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.622 15:22:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.622 15:22:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.622 15:22:13 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 
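timeout.sh drives its I/O through bdevperf and controls it over the bdevperf_rpc_sock defined above (/var/tmp/bdevperf.sock). The sketch below shows the general shape of that pattern as used by these host tests; the binary path, flags, queue depth, and job length are placeholders, since the exact values timeout.sh uses are not visible in this excerpt:

  spdk=/home/vagrant/spdk_repo/spdk
  rpc="$spdk/scripts/rpc.py"
  sock=/var/tmp/bdevperf.sock
  # Start bdevperf idle (-z) with its own RPC socket, then attach the target over TCP
  "$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 60 &
  # ... wait for "$sock" to appear ...
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # Kick off the configured job and wait for the result
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests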
00:18:04.622 15:22:13 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:04.622 15:22:13 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:04.622 15:22:13 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:04.622 15:22:13 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:04.622 15:22:13 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:04.622 15:22:13 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:04.622 15:22:13 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:04.622 15:22:13 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:04.622 15:22:13 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:04.622 15:22:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:04.622 15:22:13 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:04.622 15:22:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:04.622 15:22:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:04.622 15:22:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:04.622 15:22:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:04.622 15:22:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:04.622 15:22:13 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:04.622 15:22:13 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:04.622 15:22:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:04.622 Cannot find device "nvmf_tgt_br" 00:18:04.622 15:22:13 -- nvmf/common.sh@155 -- # true 00:18:04.622 15:22:13 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:04.622 Cannot find device "nvmf_tgt_br2" 00:18:04.622 15:22:13 -- nvmf/common.sh@156 -- # true 00:18:04.622 15:22:13 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:04.622 15:22:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:04.622 Cannot find device "nvmf_tgt_br" 00:18:04.622 15:22:13 -- nvmf/common.sh@158 -- # true 00:18:04.622 15:22:13 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:04.622 Cannot find device "nvmf_tgt_br2" 00:18:04.622 15:22:13 -- nvmf/common.sh@159 -- # true 00:18:04.622 15:22:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:04.622 15:22:13 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:04.943 15:22:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:04.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:04.943 15:22:13 -- nvmf/common.sh@162 -- # true 00:18:04.943 15:22:13 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:04.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:04.943 15:22:13 -- nvmf/common.sh@163 -- # true 00:18:04.943 15:22:13 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:04.943 15:22:13 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:04.943 15:22:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:04.943 15:22:13 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:04.943 15:22:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:04.943 15:22:13 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:04.943 15:22:13 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:18:04.943 15:22:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:04.943 15:22:13 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:04.943 15:22:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:04.943 15:22:13 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:04.943 15:22:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:04.943 15:22:13 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:04.943 15:22:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:04.943 15:22:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:04.943 15:22:13 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:04.943 15:22:13 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:04.943 15:22:14 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:04.943 15:22:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:04.943 15:22:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:04.943 15:22:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:04.943 15:22:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:04.943 15:22:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:04.943 15:22:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:04.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:04.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:18:04.943 00:18:04.943 --- 10.0.0.2 ping statistics --- 00:18:04.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.943 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:18:04.943 15:22:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:04.943 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:04.943 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:18:04.943 00:18:04.943 --- 10.0.0.3 ping statistics --- 00:18:04.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.943 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:18:04.943 15:22:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:04.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:04.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:04.943 00:18:04.943 --- 10.0.0.1 ping statistics --- 00:18:04.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.943 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:04.943 15:22:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:04.943 15:22:14 -- nvmf/common.sh@422 -- # return 0 00:18:04.943 15:22:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:04.943 15:22:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:04.943 15:22:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:04.943 15:22:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:04.943 15:22:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:04.943 15:22:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:04.943 15:22:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:04.943 15:22:14 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:04.943 15:22:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:04.943 15:22:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:04.943 15:22:14 -- common/autotest_common.sh@10 -- # set +x 00:18:04.943 15:22:14 -- nvmf/common.sh@470 -- # nvmfpid=78442 00:18:04.943 15:22:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:04.943 15:22:14 -- nvmf/common.sh@471 -- # waitforlisten 78442 00:18:04.943 15:22:14 -- common/autotest_common.sh@817 -- # '[' -z 78442 ']' 00:18:04.943 15:22:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.943 15:22:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:04.943 15:22:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.943 15:22:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:04.943 15:22:14 -- common/autotest_common.sh@10 -- # set +x 00:18:04.943 [2024-04-24 15:22:14.144779] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:18:04.943 [2024-04-24 15:22:14.144910] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.201 [2024-04-24 15:22:14.284529] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:05.201 [2024-04-24 15:22:14.399377] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.201 [2024-04-24 15:22:14.399477] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.201 [2024-04-24 15:22:14.399490] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.201 [2024-04-24 15:22:14.399500] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.201 [2024-04-24 15:22:14.399507] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
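At this point the target has been launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3) with all tracepoint groups enabled on shared-memory id 0, and its startup notices above spell out how those tracepoints can be inspected. A minimal sketch of both options, assuming the spdk_trace tool from the same build is on PATH (the copy destination here is arbitrary):

  # live snapshot of the nvmf tracepoint group for shm id 0
  spdk_trace -s nvmf -i 0
  # or keep the raw trace buffer for offline analysis after the run
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0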
00:18:05.201 [2024-04-24 15:22:14.399619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.201 [2024-04-24 15:22:14.399623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.133 15:22:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:06.133 15:22:15 -- common/autotest_common.sh@850 -- # return 0 00:18:06.133 15:22:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:06.133 15:22:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:06.133 15:22:15 -- common/autotest_common.sh@10 -- # set +x 00:18:06.133 15:22:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.133 15:22:15 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:06.133 15:22:15 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:06.390 [2024-04-24 15:22:15.404238] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.390 15:22:15 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:06.649 Malloc0 00:18:06.649 15:22:15 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:06.907 15:22:15 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:06.907 15:22:16 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:07.167 [2024-04-24 15:22:16.350315] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.167 15:22:16 -- host/timeout.sh@32 -- # bdevperf_pid=78492 00:18:07.167 15:22:16 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:07.167 15:22:16 -- host/timeout.sh@34 -- # waitforlisten 78492 /var/tmp/bdevperf.sock 00:18:07.167 15:22:16 -- common/autotest_common.sh@817 -- # '[' -z 78492 ']' 00:18:07.167 15:22:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:07.167 15:22:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:07.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:07.167 15:22:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:07.167 15:22:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:07.167 15:22:16 -- common/autotest_common.sh@10 -- # set +x 00:18:07.426 [2024-04-24 15:22:16.418802] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
00:18:07.426 [2024-04-24 15:22:16.418895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78492 ] 00:18:07.426 [2024-04-24 15:22:16.556896] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.685 [2024-04-24 15:22:16.684557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.252 15:22:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:08.252 15:22:17 -- common/autotest_common.sh@850 -- # return 0 00:18:08.252 15:22:17 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:08.511 15:22:17 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:08.769 NVMe0n1 00:18:08.769 15:22:17 -- host/timeout.sh@51 -- # rpc_pid=78510 00:18:08.769 15:22:17 -- host/timeout.sh@53 -- # sleep 1 00:18:08.769 15:22:17 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:08.769 Running I/O for 10 seconds... 00:18:09.702 15:22:18 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:10.271 [2024-04-24 15:22:19.213721] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a76e60 is same with the state(5) to be set 00:18:10.271 [2024-04-24 15:22:19.213788] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a76e60 is same with the state(5) to be set 00:18:10.271 [2024-04-24 15:22:19.213801] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a76e60 is same with the state(5) to be set 00:18:10.271 [2024-04-24 15:22:19.213811] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a76e60 is same with the state(5) to be set 00:18:10.271 [2024-04-24 15:22:19.213820] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a76e60 is same with the state(5) to be set 00:18:10.271 [2024-04-24 15:22:19.213829] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a76e60 is same with the state(5) to be set 00:18:10.271 [2024-04-24 15:22:19.213838] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a76e60 is same with the state(5) to be set 00:18:10.271 [2024-04-24 15:22:19.213908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.271 [2024-04-24 15:22:19.213940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.271 [2024-04-24 15:22:19.213963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.271 [2024-04-24 15:22:19.213974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.271 [2024-04-24 15:22:19.213986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:70224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.271 [2024-04-24 15:22:19.213995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.271 [2024-04-24 15:22:19.214007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214451] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.272 [2024-04-24 15:22:19.214556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214668] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.272 [2024-04-24 15:22:19.214825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.272 [2024-04-24 15:22:19.214835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.214846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.273 [2024-04-24 15:22:19.214856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.214867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.273 [2024-04-24 15:22:19.214877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.214888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.273 [2024-04-24 15:22:19.214897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.214918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.214927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.214939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.214949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.214960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.214969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.214981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.214990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:10.273 [2024-04-24 15:22:19.215113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.273 [2024-04-24 15:22:19.215244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.273 [2024-04-24 15:22:19.215266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.273 [2024-04-24 15:22:19.215287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.273 [2024-04-24 15:22:19.215308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.273 [2024-04-24 15:22:19.215330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215341] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.273 [2024-04-24 15:22:19.215355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.273 [2024-04-24 15:22:19.215377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.273 [2024-04-24 15:22:19.215398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215571] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.273 [2024-04-24 15:22:19.215591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.273 [2024-04-24 15:22:19.215601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.274 [2024-04-24 15:22:19.215630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.274 [2024-04-24 15:22:19.215650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.274 [2024-04-24 15:22:19.215672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.274 [2024-04-24 15:22:19.215701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.274 [2024-04-24 15:22:19.215722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.215744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.215765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.215786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70488 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.215807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.215827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.215848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.215869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.215891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.215920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.215941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.215963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.215984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.215995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.216005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:10.274 [2024-04-24 15:22:19.216026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.216054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.216075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.216096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.216117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.216138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.216158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.216179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.216200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.216221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.274 [2024-04-24 15:22:19.216242] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.274 [2024-04-24 15:22:19.216268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.274 [2024-04-24 15:22:19.216289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.274 [2024-04-24 15:22:19.216309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.274 [2024-04-24 15:22:19.216331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.274 [2024-04-24 15:22:19.216352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.274 [2024-04-24 15:22:19.216372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.274 [2024-04-24 15:22:19.216398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.274 [2024-04-24 15:22:19.216419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.274 [2024-04-24 15:22:19.216440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.275 [2024-04-24 15:22:19.216451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.275 [2024-04-24 15:22:19.216462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.275 [2024-04-24 15:22:19.216472] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.275 [2024-04-24 15:22:19.216483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.275 [2024-04-24 15:22:19.216493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.275 [2024-04-24 15:22:19.216504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.275 [2024-04-24 15:22:19.216514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.275 [2024-04-24 15:22:19.216525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.275 [2024-04-24 15:22:19.216534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.275 [2024-04-24 15:22:19.216546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.275 [2024-04-24 15:22:19.216555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.275 [2024-04-24 15:22:19.216566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.275 [2024-04-24 15:22:19.216576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.275 [2024-04-24 15:22:19.216587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.275 [2024-04-24 15:22:19.216596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.275 [2024-04-24 15:22:19.216608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.275 [2024-04-24 15:22:19.216622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.275 [2024-04-24 15:22:19.216634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.275 [2024-04-24 15:22:19.216644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.275 [2024-04-24 15:22:19.216655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.275 [2024-04-24 15:22:19.216665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.275 [2024-04-24 15:22:19.216676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.275 [2024-04-24 15:22:19.216685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.275 [2024-04-24 15:22:19.216696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.275 [2024-04-24 15:22:19.216706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.275 [2024-04-24 15:22:19.216717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.275 [2024-04-24 15:22:19.216727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.275 [2024-04-24 15:22:19.216742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:70768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.275 [2024-04-24 15:22:19.216752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.275 [2024-04-24 15:22:19.216763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f558a0 is same with the state(5) to be set 00:18:10.275 [2024-04-24 15:22:19.216775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:10.275 [2024-04-24 15:22:19.216784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:10.275 [2024-04-24 15:22:19.216792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70776 len:8 PRP1 0x0 PRP2 0x0 00:18:10.275 [2024-04-24 15:22:19.216801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.275 [2024-04-24 15:22:19.216856] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f558a0 was disconnected and freed. reset controller. 
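What produced the burst of notices above: at 15:22:19 host/timeout.sh removes the 10.0.0.2:4420 listener from nqn.2016-06.io.spdk:cnode1 while bdevperf is driving the verify workload at queue depth 128. The target drops the TCP connection, so every command still queued on the I/O qpair is completed manually with ABORTED - SQ DELETION (one print_command/print_completion pair per outstanding read or write), the qpair 0x1f558a0 is disconnected and freed, and bdev_nvme begins resetting the controller. The behaviour under test is configured entirely over RPC; a condensed sketch of the calls involved, copied from earlier in this run (rpc.py path and sockets as used here):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # bdevperf side: options as set by host/timeout.sh before attaching
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  # attach with a 5 s controller-loss timeout and 2 s between reconnects
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # target side (default /var/tmp/spdk.sock): pull the listener out from
  # under the running workload to force the timeout path
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420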
00:18:10.275 [2024-04-24 15:22:19.217132] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:10.275 [2024-04-24 15:22:19.217220] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eeddc0 (9): Bad file descriptor 00:18:10.275 [2024-04-24 15:22:19.217337] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.275 [2024-04-24 15:22:19.217403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.275 [2024-04-24 15:22:19.217461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.275 [2024-04-24 15:22:19.217480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eeddc0 with addr=10.0.0.2, port=4420 00:18:10.275 [2024-04-24 15:22:19.217491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eeddc0 is same with the state(5) to be set 00:18:10.275 [2024-04-24 15:22:19.217511] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eeddc0 (9): Bad file descriptor 00:18:10.275 [2024-04-24 15:22:19.217527] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:10.275 [2024-04-24 15:22:19.217537] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:10.275 [2024-04-24 15:22:19.217548] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:10.275 [2024-04-24 15:22:19.217568] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:10.275 [2024-04-24 15:22:19.217579] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:10.275 15:22:19 -- host/timeout.sh@56 -- # sleep 2 00:18:12.176 [2024-04-24 15:22:21.217738] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.176 [2024-04-24 15:22:21.217853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.176 [2024-04-24 15:22:21.217899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.176 [2024-04-24 15:22:21.217915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eeddc0 with addr=10.0.0.2, port=4420 00:18:12.176 [2024-04-24 15:22:21.217929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eeddc0 is same with the state(5) to be set 00:18:12.176 [2024-04-24 15:22:21.217956] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eeddc0 (9): Bad file descriptor 00:18:12.176 [2024-04-24 15:22:21.217990] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:12.176 [2024-04-24 15:22:21.218002] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:12.176 [2024-04-24 15:22:21.218013] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:12.176 [2024-04-24 15:22:21.218041] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:12.176 [2024-04-24 15:22:21.218053] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:12.176 15:22:21 -- host/timeout.sh@57 -- # get_controller 00:18:12.176 15:22:21 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:12.176 15:22:21 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:12.433 15:22:21 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:12.433 15:22:21 -- host/timeout.sh@58 -- # get_bdev 00:18:12.433 15:22:21 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:12.433 15:22:21 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:12.691 15:22:21 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:12.691 15:22:21 -- host/timeout.sh@61 -- # sleep 5 00:18:14.065 [2024-04-24 15:22:23.218240] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.065 [2024-04-24 15:22:23.218353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.065 [2024-04-24 15:22:23.218399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.065 [2024-04-24 15:22:23.218417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eeddc0 with addr=10.0.0.2, port=4420 00:18:14.065 [2024-04-24 15:22:23.218444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eeddc0 is same with the state(5) to be set 00:18:14.065 [2024-04-24 15:22:23.218476] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eeddc0 (9): Bad file descriptor 00:18:14.065 [2024-04-24 15:22:23.218497] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:14.065 [2024-04-24 15:22:23.218507] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:14.065 [2024-04-24 15:22:23.218519] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:14.065 [2024-04-24 15:22:23.218560] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:14.065 [2024-04-24 15:22:23.218573] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:16.594 [2024-04-24 15:22:25.218627] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:17.159 00:18:17.159 Latency(us) 00:18:17.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.159 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:17.159 Verification LBA range: start 0x0 length 0x4000 00:18:17.159 NVMe0n1 : 8.21 1069.47 4.18 15.60 0.00 117800.74 3813.00 7015926.69 00:18:17.159 =================================================================================================================== 00:18:17.159 Total : 1069.47 4.18 15.60 0.00 117800.74 3813.00 7015926.69 00:18:17.159 0 00:18:17.726 15:22:26 -- host/timeout.sh@62 -- # get_controller 00:18:17.726 15:22:26 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:17.726 15:22:26 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:17.985 15:22:27 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:18:17.985 15:22:27 -- host/timeout.sh@63 -- # get_bdev 00:18:17.985 15:22:27 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:17.985 15:22:27 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:18.262 15:22:27 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:18:18.262 15:22:27 -- host/timeout.sh@65 -- # wait 78510 00:18:18.262 15:22:27 -- host/timeout.sh@67 -- # killprocess 78492 00:18:18.262 15:22:27 -- common/autotest_common.sh@936 -- # '[' -z 78492 ']' 00:18:18.262 15:22:27 -- common/autotest_common.sh@940 -- # kill -0 78492 00:18:18.262 15:22:27 -- common/autotest_common.sh@941 -- # uname 00:18:18.263 15:22:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.263 15:22:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78492 00:18:18.263 15:22:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:18.263 15:22:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:18.263 15:22:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78492' 00:18:18.263 killing process with pid 78492 00:18:18.263 Received shutdown signal, test time was about 9.306641 seconds 00:18:18.263 00:18:18.263 Latency(us) 00:18:18.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.263 =================================================================================================================== 00:18:18.263 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:18.263 15:22:27 -- common/autotest_common.sh@955 -- # kill 78492 00:18:18.263 15:22:27 -- common/autotest_common.sh@960 -- # wait 78492 00:18:18.537 15:22:27 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.794 [2024-04-24 15:22:27.822916] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.794 15:22:27 -- host/timeout.sh@74 -- # bdevperf_pid=78633 00:18:18.794 15:22:27 -- host/timeout.sh@76 -- # waitforlisten 78633 /var/tmp/bdevperf.sock 00:18:18.794 15:22:27 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:18.794 15:22:27 -- common/autotest_common.sh@817 -- # '[' -z 78633 ']' 00:18:18.794 15:22:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:18.794 15:22:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:18.794 15:22:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.794 15:22:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:18.794 15:22:27 -- common/autotest_common.sh@10 -- # set +x 00:18:18.794 [2024-04-24 15:22:27.896790] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:18:18.794 [2024-04-24 15:22:27.896907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78633 ] 00:18:18.794 [2024-04-24 15:22:28.035421] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.053 [2024-04-24 15:22:28.167857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.620 15:22:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:19.620 15:22:28 -- common/autotest_common.sh@850 -- # return 0 00:18:19.620 15:22:28 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:20.187 15:22:29 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:18:20.187 NVMe0n1 00:18:20.445 15:22:29 -- host/timeout.sh@84 -- # rpc_pid=78662 00:18:20.445 15:22:29 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:20.445 15:22:29 -- host/timeout.sh@86 -- # sleep 1 00:18:20.445 Running I/O for 10 seconds... 
00:18:21.377 15:22:30 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.638 [2024-04-24 15:22:30.722748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.638 [2024-04-24 15:22:30.722815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.722842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.722854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.722867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.722876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.722888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.722897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.722909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.722918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.722929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.722939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.722949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.722959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.722970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.722979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.722990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.722999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 
15:22:30.723019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.638 [2024-04-24 15:22:30.723466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.638 [2024-04-24 15:22:30.723475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:21.639 [2024-04-24 15:22:30.723647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723857] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.723985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.723996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.724005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.724016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.724025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.724036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.724046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.724057] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.724065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.724076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.724085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.724097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.724106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.724117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.724126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.724137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.724146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.724157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.724167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.724177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.724187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.724198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.724207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.724217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.724227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.724237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.724246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.724257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63304 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.724266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.639 [2024-04-24 15:22:30.724277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.639 [2024-04-24 15:22:30.724286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 
[2024-04-24 15:22:30.724483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.724984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.724993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.725004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.725013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.725023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.725032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.640 [2024-04-24 15:22:30.725042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.640 [2024-04-24 15:22:30.725051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.641 [2024-04-24 15:22:30.725071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.641 [2024-04-24 15:22:30.725091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.641 [2024-04-24 15:22:30.725116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.641 [2024-04-24 15:22:30.725145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.641 [2024-04-24 15:22:30.725165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.641 [2024-04-24 15:22:30.725185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.641 [2024-04-24 15:22:30.725205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.641 [2024-04-24 15:22:30.725224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.641 [2024-04-24 15:22:30.725244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.641 [2024-04-24 15:22:30.725264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.641 [2024-04-24 15:22:30.725284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.641 [2024-04-24 15:22:30.725305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.641 [2024-04-24 15:22:30.725326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 
15:22:30.725337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.641 [2024-04-24 15:22:30.725346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.641 [2024-04-24 15:22:30.725366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.641 [2024-04-24 15:22:30.725386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.641 [2024-04-24 15:22:30.725406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.641 [2024-04-24 15:22:30.725426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.641 [2024-04-24 15:22:30.725462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x617af0 is same with the state(5) to be set 00:18:21.641 [2024-04-24 15:22:30.725487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.641 [2024-04-24 15:22:30.725494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.641 [2024-04-24 15:22:30.725502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63640 len:8 PRP1 0x0 PRP2 0x0 00:18:21.641 [2024-04-24 15:22:30.725511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.641 [2024-04-24 15:22:30.725569] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x617af0 was disconnected and freed. reset controller. 
00:18:21.641 [2024-04-24 15:22:30.725817] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:21.641 [2024-04-24 15:22:30.725908] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5afdc0 (9): Bad file descriptor 00:18:21.641 [2024-04-24 15:22:30.726013] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:21.641 [2024-04-24 15:22:30.726076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:21.641 [2024-04-24 15:22:30.726117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:21.641 [2024-04-24 15:22:30.726132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5afdc0 with addr=10.0.0.2, port=4420 00:18:21.641 [2024-04-24 15:22:30.726142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5afdc0 is same with the state(5) to be set 00:18:21.641 [2024-04-24 15:22:30.726160] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5afdc0 (9): Bad file descriptor 00:18:21.641 [2024-04-24 15:22:30.726176] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:21.641 [2024-04-24 15:22:30.726185] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:21.641 [2024-04-24 15:22:30.726195] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:21.641 [2024-04-24 15:22:30.726214] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:21.641 [2024-04-24 15:22:30.726226] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:21.641 15:22:30 -- host/timeout.sh@90 -- # sleep 1 00:18:22.574 [2024-04-24 15:22:31.726393] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:22.574 [2024-04-24 15:22:31.726529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:22.574 [2024-04-24 15:22:31.726575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:22.574 [2024-04-24 15:22:31.726592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5afdc0 with addr=10.0.0.2, port=4420 00:18:22.574 [2024-04-24 15:22:31.726605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5afdc0 is same with the state(5) to be set 00:18:22.574 [2024-04-24 15:22:31.726631] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5afdc0 (9): Bad file descriptor 00:18:22.574 [2024-04-24 15:22:31.726650] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:22.574 [2024-04-24 15:22:31.726660] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:22.575 [2024-04-24 15:22:31.726671] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:22.575 [2024-04-24 15:22:31.726699] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:22.575 [2024-04-24 15:22:31.726710] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:22.575 15:22:31 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.832 [2024-04-24 15:22:32.039657] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.832 15:22:32 -- host/timeout.sh@92 -- # wait 78662 00:18:23.766 [2024-04-24 15:22:32.741513] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:30.323 00:18:30.323 Latency(us) 00:18:30.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.323 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:30.323 Verification LBA range: start 0x0 length 0x4000 00:18:30.323 NVMe0n1 : 10.01 6045.70 23.62 0.00 0.00 21129.15 1228.80 3019898.88 00:18:30.323 =================================================================================================================== 00:18:30.323 Total : 6045.70 23.62 0.00 0.00 21129.15 1228.80 3019898.88 00:18:30.323 0 00:18:30.580 15:22:39 -- host/timeout.sh@97 -- # rpc_pid=78767 00:18:30.580 15:22:39 -- host/timeout.sh@98 -- # sleep 1 00:18:30.580 15:22:39 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:30.580 Running I/O for 10 seconds... 00:18:31.514 15:22:40 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:31.775 [2024-04-24 15:22:40.846750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.846807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.846830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.846841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.846853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.846863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.846874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.846883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.846895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.846914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.846925] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.846934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.846945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.846954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.846965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.846974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.846985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.846994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62224 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 
[2024-04-24 15:22:40.847342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.775 [2024-04-24 15:22:40.847392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.775 [2024-04-24 15:22:40.847401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.847988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.847997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.848008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.848017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.848028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.848037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.848048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.848057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.848068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.848077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.848089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.776 [2024-04-24 15:22:40.848098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.848109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.776 [2024-04-24 15:22:40.848118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.848130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.776 [2024-04-24 15:22:40.848139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.848151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.776 [2024-04-24 15:22:40.848160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 
15:22:40.848171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:61632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.776 [2024-04-24 15:22:40.848180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.848191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.776 [2024-04-24 15:22:40.848200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.776 [2024-04-24 15:22:40.848211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.777 [2024-04-24 15:22:40.848435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.777 [2024-04-24 15:22:40.848458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:69 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.777 [2024-04-24 15:22:40.848620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61848 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.848986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.848996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.849007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.777 [2024-04-24 15:22:40.849016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.849026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:31.777 [2024-04-24 15:22:40.849040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.777 [2024-04-24 15:22:40.849051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849245] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.778 [2024-04-24 15:22:40.849463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618400 is same with the state(5) to be set 00:18:31.778 [2024-04-24 15:22:40.849485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.778 [2024-04-24 15:22:40.849493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.778 [2024-04-24 15:22:40.849501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62096 len:8 PRP1 0x0 PRP2 0x0 00:18:31.778 [2024-04-24 15:22:40.849511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.778 [2024-04-24 15:22:40.849562] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x618400 was disconnected and freed. reset controller. 00:18:31.778 [2024-04-24 15:22:40.849794] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:31.778 [2024-04-24 15:22:40.849880] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5afdc0 (9): Bad file descriptor 00:18:31.778 [2024-04-24 15:22:40.849982] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:31.778 [2024-04-24 15:22:40.850031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:31.778 [2024-04-24 15:22:40.850071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:31.778 [2024-04-24 15:22:40.850086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5afdc0 with addr=10.0.0.2, port=4420 00:18:31.778 [2024-04-24 15:22:40.850097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5afdc0 is same with the state(5) to be set 00:18:31.778 [2024-04-24 15:22:40.850114] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5afdc0 (9): Bad file descriptor 00:18:31.778 [2024-04-24 15:22:40.850129] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:31.778 [2024-04-24 15:22:40.850139] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:31.778 [2024-04-24 15:22:40.850149] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:31.778 [2024-04-24 15:22:40.850168] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
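errno = 111 in the connect() failures above is ECONNREFUSED on Linux: the test has just removed the target's TCP listener, so nothing is accepting on 10.0.0.2:4420 while bdev_nvme keeps retrying the reconnect. Two quick shell checks of that reading (illustrative only, not part of the captured run; they assume they are run from the same test VM):

  # Confirm the errno-to-name mapping
  python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))'

  # While the listener is removed, a plain TCP connect to the port is refused as well
  timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null || echo '10.0.0.2:4420 is not accepting connections'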
00:18:31.778 [2024-04-24 15:22:40.850184] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:31.778 15:22:40 -- host/timeout.sh@101 -- # sleep 3 00:18:32.712 [2024-04-24 15:22:41.850328] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:32.712 [2024-04-24 15:22:41.850478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:32.712 [2024-04-24 15:22:41.850522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:32.712 [2024-04-24 15:22:41.850539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5afdc0 with addr=10.0.0.2, port=4420 00:18:32.712 [2024-04-24 15:22:41.850552] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5afdc0 is same with the state(5) to be set 00:18:32.712 [2024-04-24 15:22:41.850579] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5afdc0 (9): Bad file descriptor 00:18:32.712 [2024-04-24 15:22:41.850597] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:32.712 [2024-04-24 15:22:41.850607] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:32.712 [2024-04-24 15:22:41.850617] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:32.712 [2024-04-24 15:22:41.850644] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:32.712 [2024-04-24 15:22:41.850656] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:33.710 [2024-04-24 15:22:42.850825] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:33.710 [2024-04-24 15:22:42.850934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:33.710 [2024-04-24 15:22:42.850976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:33.710 [2024-04-24 15:22:42.850992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5afdc0 with addr=10.0.0.2, port=4420 00:18:33.710 [2024-04-24 15:22:42.851005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5afdc0 is same with the state(5) to be set 00:18:33.710 [2024-04-24 15:22:42.851033] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5afdc0 (9): Bad file descriptor 00:18:33.710 [2024-04-24 15:22:42.851052] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:33.710 [2024-04-24 15:22:42.851061] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:33.710 [2024-04-24 15:22:42.851072] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:33.710 [2024-04-24 15:22:42.851114] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:33.710 [2024-04-24 15:22:42.851130] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.658 [2024-04-24 15:22:43.854722] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.658 [2024-04-24 15:22:43.854842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.658 [2024-04-24 15:22:43.854884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.658 [2024-04-24 15:22:43.854900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5afdc0 with addr=10.0.0.2, port=4420 00:18:34.658 [2024-04-24 15:22:43.854914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5afdc0 is same with the state(5) to be set 00:18:34.658 [2024-04-24 15:22:43.855163] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5afdc0 (9): Bad file descriptor 00:18:34.658 [2024-04-24 15:22:43.855411] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.658 [2024-04-24 15:22:43.855442] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:34.658 [2024-04-24 15:22:43.855455] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:34.658 [2024-04-24 15:22:43.859327] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:34.658 [2024-04-24 15:22:43.859369] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.658 15:22:43 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.224 [2024-04-24 15:22:44.164722] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.224 15:22:44 -- host/timeout.sh@103 -- # wait 78767 00:18:35.791 [2024-04-24 15:22:44.896248] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
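That completes the second remove/re-add cycle driven by host/timeout.sh: the listener on 10.0.0.2:4420 is dropped at @99, reconnect attempts fail with ECONNREFUSED for a few seconds, and the controller reset succeeds once @102 restores the listener. A minimal sketch of the same toggle against a running target, using the rpc.py path, NQN, address, and port shown in this log (the surrounding target and bdevperf setup is assumed to be in place):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # Drop the TCP listener; host connections start failing with ECONNREFUSED
  $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

  # Stay down briefly (the script uses 3 s) so the host keeps retrying
  sleep 3

  # Restore the listener so the host's next reconnect attempt succeeds
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420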
00:18:41.050
00:18:41.050 Latency(us)
00:18:41.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:41.050 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:41.050 Verification LBA range: start 0x0 length 0x4000
00:18:41.050 NVMe0n1 : 10.01 5154.83 20.14 3655.62 0.00 14497.77 685.15 3019898.88
00:18:41.050 ===================================================================================================================
00:18:41.050 Total : 5154.83 20.14 3655.62 0.00 14497.77 0.00 3019898.88
00:18:41.050 0
00:18:41.050 15:22:49 -- host/timeout.sh@105 -- # killprocess 78633
00:18:41.050 15:22:49 -- common/autotest_common.sh@936 -- # '[' -z 78633 ']'
00:18:41.050 15:22:49 -- common/autotest_common.sh@940 -- # kill -0 78633
00:18:41.050 15:22:49 -- common/autotest_common.sh@941 -- # uname
00:18:41.050 15:22:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:41.050 15:22:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78633
00:18:41.050 killing process with pid 78633
Received shutdown signal, test time was about 10.000000 seconds
00:18:41.050
00:18:41.050 Latency(us)
00:18:41.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:41.050 ===================================================================================================================
00:18:41.050 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:41.050 15:22:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:18:41.050 15:22:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:18:41.050 15:22:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78633'
00:18:41.050 15:22:49 -- common/autotest_common.sh@955 -- # kill 78633
00:18:41.050 15:22:49 -- common/autotest_common.sh@960 -- # wait 78633
00:18:41.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:41.051 15:22:50 -- host/timeout.sh@110 -- # bdevperf_pid=78881
00:18:41.051 15:22:50 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:18:41.051 15:22:50 -- host/timeout.sh@112 -- # waitforlisten 78881 /var/tmp/bdevperf.sock
00:18:41.051 15:22:50 -- common/autotest_common.sh@817 -- # '[' -z 78881 ']'
00:18:41.051 15:22:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:41.051 15:22:50 -- common/autotest_common.sh@822 -- # local max_retries=100
00:18:41.051 15:22:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:41.051 15:22:50 -- common/autotest_common.sh@826 -- # xtrace_disable
00:18:41.051 15:22:50 -- common/autotest_common.sh@10 -- # set +x
00:18:41.051 [2024-04-24 15:22:50.073363] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization...
00:18:41.051 [2024-04-24 15:22:50.073505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78881 ] 00:18:41.051 [2024-04-24 15:22:50.215531] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.309 [2024-04-24 15:22:50.340550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.875 15:22:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:41.875 15:22:51 -- common/autotest_common.sh@850 -- # return 0 00:18:41.875 15:22:51 -- host/timeout.sh@116 -- # dtrace_pid=78897 00:18:41.875 15:22:51 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 78881 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:41.875 15:22:51 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:42.441 15:22:51 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:42.441 NVMe0n1 00:18:42.699 15:22:51 -- host/timeout.sh@124 -- # rpc_pid=78933 00:18:42.699 15:22:51 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:42.699 15:22:51 -- host/timeout.sh@125 -- # sleep 1 00:18:42.699 Running I/O for 10 seconds... 00:18:43.687 15:22:52 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:43.953 [2024-04-24 15:22:52.955314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955368] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955380] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955389] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955398] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955408] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955416] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955425] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955448] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955457] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955465] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955474] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955483] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955492] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955501] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955509] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955518] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955527] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955536] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955544] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955554] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955562] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955571] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955579] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955588] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955596] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955610] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955618] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955627] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955637] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955646] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955655] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955663] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955672] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955682] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955691] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955699] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955708] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955716] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955725] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955733] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955742] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955750] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955759] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955767] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955776] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955784] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955792] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955801] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955809] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955817] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955826] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955834] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955842] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the 
state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955850] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955859] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955867] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955877] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955886] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955895] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955904] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955913] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955922] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955930] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955939] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955948] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955957] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955966] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955974] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955991] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.955999] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.956007] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.956015] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.956023] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.956031] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.956039] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.956048] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.956056] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.956064] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.953 [2024-04-24 15:22:52.956072] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956080] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956088] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956097] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956105] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956113] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956128] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956138] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956147] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956156] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956165] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956174] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956182] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956190] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956199] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956207] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 
15:22:52.956223] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956240] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956248] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956256] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956264] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956272] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956280] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956288] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956305] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956313] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956321] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956329] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956338] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956363] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956371] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956379] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956387] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956396] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same 
with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956405] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956414] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956422] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956450] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956458] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956466] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acac10 is same with the state(5) to be set 00:18:43.954 [2024-04-24 15:22:52.956553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.956615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.956639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.956661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.956682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.956703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.956724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.956745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.956765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.956786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.956807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.956828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.956849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.956872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.956893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:53168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.956914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.956935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.956985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.954 [2024-04-24 15:22:52.956996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.954 [2024-04-24 15:22:52.957007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:127968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:118224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:56288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:55832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:68056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 
[2024-04-24 15:22:52.957866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.955 [2024-04-24 15:22:52.957875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.955 [2024-04-24 15:22:52.957886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.957895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.957906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.957915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.957927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.957936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.957947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.957956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.957967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.957976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.957988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.957997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958073] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:39 nsid:1 lba:11152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.956 [2024-04-24 15:22:52.958630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.956 [2024-04-24 15:22:52.958640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.958650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.958660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.958671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.958680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.958691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.958700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.958716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114688 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.958725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.958736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.958745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.958757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.958766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.958777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.958786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.958797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.958806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.958818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.958827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.958838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.958851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.958862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.958871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.958882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.958891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.958902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.958911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.958922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:43.957 [2024-04-24 15:22:52.958932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.958943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.958952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.958963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.958972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.958984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.958993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.959004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.959013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.959024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.959033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.959049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.959058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.959069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.959078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.959089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.959099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.959110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.959119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.959130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.959139] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.959150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.959160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.959171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.959183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.959194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.959203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.959214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.959223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.959234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.959244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.957 [2024-04-24 15:22:52.959255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.957 [2024-04-24 15:22:52.959264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.958 [2024-04-24 15:22:52.959275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.958 [2024-04-24 15:22:52.959284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.958 [2024-04-24 15:22:52.959296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.958 [2024-04-24 15:22:52.959305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.958 [2024-04-24 15:22:52.959317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.958 [2024-04-24 15:22:52.959326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.958 [2024-04-24 15:22:52.959336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x527fa0 is same with the state(5) to be set 00:18:43.958 [2024-04-24 15:22:52.959349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:18:43.958 [2024-04-24 15:22:52.959357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:43.958 [2024-04-24 15:22:52.959365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45728 len:8 PRP1 0x0 PRP2 0x0 00:18:43.958 [2024-04-24 15:22:52.959379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.958 [2024-04-24 15:22:52.959451] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x527fa0 was disconnected and freed. reset controller. 00:18:43.958 [2024-04-24 15:22:52.959731] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:43.958 [2024-04-24 15:22:52.959816] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e5030 (9): Bad file descriptor 00:18:43.958 [2024-04-24 15:22:52.959928] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:43.958 [2024-04-24 15:22:52.959996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:43.958 [2024-04-24 15:22:52.960039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:43.958 [2024-04-24 15:22:52.960054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4e5030 with addr=10.0.0.2, port=4420 00:18:43.958 [2024-04-24 15:22:52.960065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e5030 is same with the state(5) to be set 00:18:43.958 [2024-04-24 15:22:52.960084] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e5030 (9): Bad file descriptor 00:18:43.958 [2024-04-24 15:22:52.960100] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:43.958 [2024-04-24 15:22:52.960110] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:43.958 [2024-04-24 15:22:52.960120] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:43.958 [2024-04-24 15:22:52.960143] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:43.958 [2024-04-24 15:22:52.960154] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:43.958 15:22:52 -- host/timeout.sh@128 -- # wait 78933 00:18:45.890 [2024-04-24 15:22:54.960347] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:45.890 [2024-04-24 15:22:54.960461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:45.890 [2024-04-24 15:22:54.960508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:45.890 [2024-04-24 15:22:54.960524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4e5030 with addr=10.0.0.2, port=4420 00:18:45.890 [2024-04-24 15:22:54.960538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e5030 is same with the state(5) to be set 00:18:45.890 [2024-04-24 15:22:54.960566] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e5030 (9): Bad file descriptor 00:18:45.890 [2024-04-24 15:22:54.960599] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:45.890 [2024-04-24 15:22:54.960611] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:45.890 [2024-04-24 15:22:54.960621] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:45.890 [2024-04-24 15:22:54.960648] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:45.890 [2024-04-24 15:22:54.960660] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:47.813 [2024-04-24 15:22:56.960846] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:47.813 [2024-04-24 15:22:56.960963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:47.813 [2024-04-24 15:22:56.961011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:47.813 [2024-04-24 15:22:56.961028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4e5030 with addr=10.0.0.2, port=4420 00:18:47.813 [2024-04-24 15:22:56.961041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e5030 is same with the state(5) to be set 00:18:47.813 [2024-04-24 15:22:56.961069] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e5030 (9): Bad file descriptor 00:18:47.813 [2024-04-24 15:22:56.961095] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:47.813 [2024-04-24 15:22:56.961105] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:47.813 [2024-04-24 15:22:56.961116] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:47.813 [2024-04-24 15:22:56.961144] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:47.813 [2024-04-24 15:22:56.961155] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:50.337 [2024-04-24 15:22:58.961238] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
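The repeated cycle above (connect() failed, errno = 111, i.e. connection refused, then "controller reinitialization failed" and another "resetting controller" notice) shows the bdev_nvme layer retrying the TCP connection to 10.0.0.2:4420 at roughly two-second intervals once the target stops accepting connections; the trace summary below counts those "reconnect delay" records to decide the outcome of host/timeout.sh. As a rough illustrative sketch only, not commands taken from this run: the retry cadence of such a controller is normally chosen when it is attached through rpc.py. The long option names below exist in recent SPDK releases, but the values are assumptions picked to mirror the ~2 s spacing seen above.

  # illustrative values, not the ones used by host/timeout.sh
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 8 --fast-io-fail-timeout-sec 4

With settings like these the bdev layer is expected to wait reconnect-delay-sec between attempts and stop retrying once ctrlr-loss-timeout-sec has elapsed, which is consistent with the two-second reconnect attempts logged above.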
00:18:50.903 00:18:50.903 Latency(us) 00:18:50.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.903 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:50.903 NVMe0n1 : 8.17 2067.43 8.08 15.67 0.00 61342.90 8519.68 7046430.72 00:18:50.903 =================================================================================================================== 00:18:50.903 Total : 2067.43 8.08 15.67 0.00 61342.90 8519.68 7046430.72 00:18:50.903 0 00:18:50.903 15:22:59 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:50.903 Attaching 5 probes... 00:18:50.903 1305.186801: reset bdev controller NVMe0 00:18:50.903 1305.325574: reconnect bdev controller NVMe0 00:18:50.903 3305.655540: reconnect delay bdev controller NVMe0 00:18:50.903 3305.677466: reconnect bdev controller NVMe0 00:18:50.903 5306.163575: reconnect delay bdev controller NVMe0 00:18:50.903 5306.187614: reconnect bdev controller NVMe0 00:18:50.903 7306.661872: reconnect delay bdev controller NVMe0 00:18:50.903 7306.685836: reconnect bdev controller NVMe0 00:18:50.903 15:22:59 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:50.903 15:22:59 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:50.903 15:22:59 -- host/timeout.sh@136 -- # kill 78897 00:18:50.903 15:22:59 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:50.903 15:22:59 -- host/timeout.sh@139 -- # killprocess 78881 00:18:50.903 15:22:59 -- common/autotest_common.sh@936 -- # '[' -z 78881 ']' 00:18:50.903 15:22:59 -- common/autotest_common.sh@940 -- # kill -0 78881 00:18:50.903 15:22:59 -- common/autotest_common.sh@941 -- # uname 00:18:50.903 15:22:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:50.903 15:22:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78881 00:18:50.903 killing process with pid 78881 00:18:50.903 Received shutdown signal, test time was about 8.232889 seconds 00:18:50.903 00:18:50.903 Latency(us) 00:18:50.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.903 =================================================================================================================== 00:18:50.903 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:50.903 15:23:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:50.903 15:23:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:50.903 15:23:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78881' 00:18:50.903 15:23:00 -- common/autotest_common.sh@955 -- # kill 78881 00:18:50.903 15:23:00 -- common/autotest_common.sh@960 -- # wait 78881 00:18:51.164 15:23:00 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:51.423 15:23:00 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:51.423 15:23:00 -- host/timeout.sh@145 -- # nvmftestfini 00:18:51.423 15:23:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:51.423 15:23:00 -- nvmf/common.sh@117 -- # sync 00:18:51.423 15:23:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:51.423 15:23:00 -- nvmf/common.sh@120 -- # set +e 00:18:51.423 15:23:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:51.423 15:23:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:51.423 rmmod nvme_tcp 00:18:51.423 rmmod nvme_fabrics 00:18:51.423 rmmod nvme_keyring 00:18:51.423 15:23:00 -- nvmf/common.sh@123 -- 
# modprobe -v -r nvme-fabrics 00:18:51.423 15:23:00 -- nvmf/common.sh@124 -- # set -e 00:18:51.423 15:23:00 -- nvmf/common.sh@125 -- # return 0 00:18:51.423 15:23:00 -- nvmf/common.sh@478 -- # '[' -n 78442 ']' 00:18:51.423 15:23:00 -- nvmf/common.sh@479 -- # killprocess 78442 00:18:51.423 15:23:00 -- common/autotest_common.sh@936 -- # '[' -z 78442 ']' 00:18:51.423 15:23:00 -- common/autotest_common.sh@940 -- # kill -0 78442 00:18:51.423 15:23:00 -- common/autotest_common.sh@941 -- # uname 00:18:51.423 15:23:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:51.423 15:23:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78442 00:18:51.682 killing process with pid 78442 00:18:51.682 15:23:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:51.682 15:23:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:51.682 15:23:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78442' 00:18:51.682 15:23:00 -- common/autotest_common.sh@955 -- # kill 78442 00:18:51.682 15:23:00 -- common/autotest_common.sh@960 -- # wait 78442 00:18:51.940 15:23:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:51.940 15:23:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:51.940 15:23:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:51.940 15:23:00 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.940 15:23:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:51.940 15:23:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.940 15:23:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.940 15:23:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.940 15:23:00 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:51.940 00:18:51.940 real 0m47.335s 00:18:51.940 user 2m19.308s 00:18:51.940 sys 0m5.713s 00:18:51.940 15:23:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:51.940 15:23:00 -- common/autotest_common.sh@10 -- # set +x 00:18:51.940 ************************************ 00:18:51.940 END TEST nvmf_timeout 00:18:51.940 ************************************ 00:18:51.940 15:23:01 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]] 00:18:51.940 15:23:01 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:18:51.940 15:23:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:51.940 15:23:01 -- common/autotest_common.sh@10 -- # set +x 00:18:51.940 15:23:01 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:18:51.940 ************************************ 00:18:51.940 END TEST nvmf_tcp 00:18:51.940 ************************************ 00:18:51.940 00:18:51.940 real 8m55.185s 00:18:51.940 user 21m6.198s 00:18:51.940 sys 2m24.997s 00:18:51.940 15:23:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:51.940 15:23:01 -- common/autotest_common.sh@10 -- # set +x 00:18:51.940 15:23:01 -- spdk/autotest.sh@286 -- # [[ 1 -eq 0 ]] 00:18:51.940 15:23:01 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:51.940 15:23:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:51.940 15:23:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:51.940 15:23:01 -- common/autotest_common.sh@10 -- # set +x 00:18:51.940 ************************************ 00:18:51.940 START TEST nvmf_dif 00:18:51.940 ************************************ 00:18:51.940 15:23:01 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:52.199 * Looking for test storage... 00:18:52.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:52.199 15:23:01 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:52.199 15:23:01 -- nvmf/common.sh@7 -- # uname -s 00:18:52.199 15:23:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.199 15:23:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.199 15:23:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.199 15:23:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.199 15:23:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.199 15:23:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.199 15:23:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.199 15:23:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.199 15:23:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.199 15:23:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.199 15:23:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:18:52.199 15:23:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:18:52.199 15:23:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.199 15:23:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.199 15:23:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:52.199 15:23:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.199 15:23:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:52.199 15:23:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.199 15:23:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.199 15:23:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.199 15:23:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.199 15:23:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.199 15:23:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.199 15:23:01 -- paths/export.sh@5 -- # export PATH 00:18:52.199 15:23:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.199 15:23:01 -- nvmf/common.sh@47 -- # : 0 00:18:52.199 15:23:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:52.199 15:23:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:52.199 15:23:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.199 15:23:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.199 15:23:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.199 15:23:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:52.199 15:23:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:52.199 15:23:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:52.199 15:23:01 -- target/dif.sh@15 -- # NULL_META=16 00:18:52.199 15:23:01 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:52.199 15:23:01 -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:52.199 15:23:01 -- target/dif.sh@15 -- # NULL_DIF=1 00:18:52.199 15:23:01 -- target/dif.sh@135 -- # nvmftestinit 00:18:52.199 15:23:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:52.199 15:23:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.199 15:23:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:52.199 15:23:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:52.199 15:23:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:52.199 15:23:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.199 15:23:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:52.199 15:23:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.199 15:23:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:52.199 15:23:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:52.199 15:23:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:52.199 15:23:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:52.199 15:23:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:52.199 15:23:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:52.199 15:23:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.199 15:23:01 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.199 15:23:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:52.199 15:23:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:52.199 15:23:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:52.199 15:23:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:52.199 15:23:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:52.199 15:23:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.199 15:23:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:52.200 15:23:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:52.200 15:23:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:52.200 15:23:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:52.200 15:23:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:52.200 15:23:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:52.200 Cannot find device "nvmf_tgt_br" 
00:18:52.200 15:23:01 -- nvmf/common.sh@155 -- # true 00:18:52.200 15:23:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:52.200 Cannot find device "nvmf_tgt_br2" 00:18:52.200 15:23:01 -- nvmf/common.sh@156 -- # true 00:18:52.200 15:23:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:52.200 15:23:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:52.200 Cannot find device "nvmf_tgt_br" 00:18:52.200 15:23:01 -- nvmf/common.sh@158 -- # true 00:18:52.200 15:23:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:52.200 Cannot find device "nvmf_tgt_br2" 00:18:52.200 15:23:01 -- nvmf/common.sh@159 -- # true 00:18:52.200 15:23:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:52.200 15:23:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:52.200 15:23:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:52.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.200 15:23:01 -- nvmf/common.sh@162 -- # true 00:18:52.200 15:23:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:52.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.200 15:23:01 -- nvmf/common.sh@163 -- # true 00:18:52.200 15:23:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:52.200 15:23:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:52.200 15:23:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:52.458 15:23:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:52.458 15:23:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:52.458 15:23:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:52.458 15:23:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:52.458 15:23:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:52.458 15:23:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:52.458 15:23:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:52.458 15:23:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:52.458 15:23:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:52.458 15:23:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:52.458 15:23:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:52.458 15:23:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:52.458 15:23:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:52.458 15:23:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:52.458 15:23:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:52.458 15:23:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:52.458 15:23:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:52.459 15:23:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:52.459 15:23:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:52.459 15:23:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:52.459 15:23:01 -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:52.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:18:52.459 00:18:52.459 --- 10.0.0.2 ping statistics --- 00:18:52.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.459 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:52.459 15:23:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:52.459 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:52.459 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:18:52.459 00:18:52.459 --- 10.0.0.3 ping statistics --- 00:18:52.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.459 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:52.459 15:23:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:52.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:52.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:52.459 00:18:52.459 --- 10.0.0.1 ping statistics --- 00:18:52.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.459 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:52.459 15:23:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.459 15:23:01 -- nvmf/common.sh@422 -- # return 0 00:18:52.459 15:23:01 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:18:52.459 15:23:01 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:52.717 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:52.717 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:52.717 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:52.976 15:23:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.976 15:23:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:52.976 15:23:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:52.976 15:23:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.976 15:23:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:52.976 15:23:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:52.976 15:23:01 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:52.976 15:23:01 -- target/dif.sh@137 -- # nvmfappstart 00:18:52.976 15:23:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:52.976 15:23:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:52.976 15:23:01 -- common/autotest_common.sh@10 -- # set +x 00:18:52.976 15:23:01 -- nvmf/common.sh@470 -- # nvmfpid=79381 00:18:52.976 15:23:01 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:52.976 15:23:01 -- nvmf/common.sh@471 -- # waitforlisten 79381 00:18:52.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.976 15:23:01 -- common/autotest_common.sh@817 -- # '[' -z 79381 ']' 00:18:52.976 15:23:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.976 15:23:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:52.976 15:23:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
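The nvmf_veth_init sequence earlier in this block gives the dif tests their own virtual topology: a network namespace (nvmf_tgt_ns_spdk) holding the target-side addresses 10.0.0.2 and 10.0.0.3, an initiator-side interface with 10.0.0.1 on the host, and a bridge (nvmf_br) tying the veth peers together, verified by the three pings above. The target that waitforlisten is now polling was launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF), so every listener it opens on 10.0.0.2/10.0.0.3 is only reachable across that bridge. Condensed from the commands already shown in the log, as a sketch only (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is created the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace can reach the initiator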
00:18:52.976 15:23:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:52.976 15:23:01 -- common/autotest_common.sh@10 -- # set +x 00:18:52.976 [2024-04-24 15:23:02.034981] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:18:52.976 [2024-04-24 15:23:02.035312] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.976 [2024-04-24 15:23:02.170737] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.234 [2024-04-24 15:23:02.298652] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.234 [2024-04-24 15:23:02.298717] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.234 [2024-04-24 15:23:02.298733] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.234 [2024-04-24 15:23:02.298744] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.234 [2024-04-24 15:23:02.298753] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.234 [2024-04-24 15:23:02.298796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.801 15:23:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:53.801 15:23:03 -- common/autotest_common.sh@850 -- # return 0 00:18:53.801 15:23:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:53.801 15:23:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:53.801 15:23:03 -- common/autotest_common.sh@10 -- # set +x 00:18:54.060 15:23:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.060 15:23:03 -- target/dif.sh@139 -- # create_transport 00:18:54.060 15:23:03 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:18:54.060 15:23:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.060 15:23:03 -- common/autotest_common.sh@10 -- # set +x 00:18:54.060 [2024-04-24 15:23:03.081234] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.060 15:23:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.060 15:23:03 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:18:54.060 15:23:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:54.060 15:23:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:54.060 15:23:03 -- common/autotest_common.sh@10 -- # set +x 00:18:54.060 ************************************ 00:18:54.060 START TEST fio_dif_1_default 00:18:54.060 ************************************ 00:18:54.060 15:23:03 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:18:54.060 15:23:03 -- target/dif.sh@86 -- # create_subsystems 0 00:18:54.060 15:23:03 -- target/dif.sh@28 -- # local sub 00:18:54.060 15:23:03 -- target/dif.sh@30 -- # for sub in "$@" 00:18:54.060 15:23:03 -- target/dif.sh@31 -- # create_subsystem 0 00:18:54.060 15:23:03 -- target/dif.sh@18 -- # local sub_id=0 00:18:54.060 15:23:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:54.060 15:23:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.060 15:23:03 -- common/autotest_common.sh@10 -- # set +x 00:18:54.060 bdev_null0 00:18:54.060 15:23:03 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.060 15:23:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:54.060 15:23:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.060 15:23:03 -- common/autotest_common.sh@10 -- # set +x 00:18:54.060 15:23:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.060 15:23:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:54.060 15:23:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.060 15:23:03 -- common/autotest_common.sh@10 -- # set +x 00:18:54.060 15:23:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.060 15:23:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:54.060 15:23:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.060 15:23:03 -- common/autotest_common.sh@10 -- # set +x 00:18:54.060 [2024-04-24 15:23:03.189366] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.060 15:23:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.060 15:23:03 -- target/dif.sh@87 -- # fio /dev/fd/62 00:18:54.060 15:23:03 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:18:54.060 15:23:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:54.060 15:23:03 -- nvmf/common.sh@521 -- # config=() 00:18:54.060 15:23:03 -- nvmf/common.sh@521 -- # local subsystem config 00:18:54.060 15:23:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:54.060 15:23:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:54.060 { 00:18:54.060 "params": { 00:18:54.060 "name": "Nvme$subsystem", 00:18:54.060 "trtype": "$TEST_TRANSPORT", 00:18:54.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:54.060 "adrfam": "ipv4", 00:18:54.060 "trsvcid": "$NVMF_PORT", 00:18:54.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:54.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:54.060 "hdgst": ${hdgst:-false}, 00:18:54.060 "ddgst": ${ddgst:-false} 00:18:54.060 }, 00:18:54.060 "method": "bdev_nvme_attach_controller" 00:18:54.060 } 00:18:54.060 EOF 00:18:54.060 )") 00:18:54.060 15:23:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:54.060 15:23:03 -- target/dif.sh@82 -- # gen_fio_conf 00:18:54.060 15:23:03 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:54.060 15:23:03 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:18:54.060 15:23:03 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:54.060 15:23:03 -- target/dif.sh@54 -- # local file 00:18:54.060 15:23:03 -- target/dif.sh@56 -- # cat 00:18:54.060 15:23:03 -- common/autotest_common.sh@1325 -- # local sanitizers 00:18:54.060 15:23:03 -- nvmf/common.sh@543 -- # cat 00:18:54.060 15:23:03 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:54.060 15:23:03 -- common/autotest_common.sh@1327 -- # shift 00:18:54.060 15:23:03 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:18:54.060 15:23:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:54.060 15:23:03 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:54.060 15:23:03 -- 
nvmf/common.sh@545 -- # jq . 00:18:54.060 15:23:03 -- common/autotest_common.sh@1331 -- # grep libasan 00:18:54.060 15:23:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:54.060 15:23:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:54.060 15:23:03 -- target/dif.sh@72 -- # (( file <= files )) 00:18:54.060 15:23:03 -- nvmf/common.sh@546 -- # IFS=, 00:18:54.060 15:23:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:54.060 "params": { 00:18:54.060 "name": "Nvme0", 00:18:54.060 "trtype": "tcp", 00:18:54.060 "traddr": "10.0.0.2", 00:18:54.060 "adrfam": "ipv4", 00:18:54.060 "trsvcid": "4420", 00:18:54.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:54.060 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:54.060 "hdgst": false, 00:18:54.060 "ddgst": false 00:18:54.060 }, 00:18:54.060 "method": "bdev_nvme_attach_controller" 00:18:54.060 }' 00:18:54.060 15:23:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:18:54.060 15:23:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:54.060 15:23:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:54.060 15:23:03 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:54.060 15:23:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:54.060 15:23:03 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:18:54.060 15:23:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:18:54.060 15:23:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:54.060 15:23:03 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:54.060 15:23:03 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:54.319 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:54.319 fio-3.35 00:18:54.319 Starting 1 thread 00:19:06.593 00:19:06.593 filename0: (groupid=0, jobs=1): err= 0: pid=79450: Wed Apr 24 15:23:13 2024 00:19:06.593 read: IOPS=8380, BW=32.7MiB/s (34.3MB/s)(327MiB/10001msec) 00:19:06.593 slat (nsec): min=6238, max=74550, avg=8563.27, stdev=3084.41 00:19:06.593 clat (usec): min=364, max=4241, avg=451.77, stdev=43.20 00:19:06.593 lat (usec): min=371, max=4275, avg=460.34, stdev=43.65 00:19:06.593 clat percentiles (usec): 00:19:06.593 | 1.00th=[ 396], 5.00th=[ 412], 10.00th=[ 416], 20.00th=[ 429], 00:19:06.593 | 30.00th=[ 433], 40.00th=[ 441], 50.00th=[ 445], 60.00th=[ 453], 00:19:06.593 | 70.00th=[ 461], 80.00th=[ 474], 90.00th=[ 494], 95.00th=[ 515], 00:19:06.593 | 99.00th=[ 570], 99.50th=[ 586], 99.90th=[ 635], 99.95th=[ 685], 00:19:06.593 | 99.99th=[ 1057] 00:19:06.593 bw ( KiB/s): min=31360, max=34816, per=100.00%, avg=33610.11, stdev=1197.05, samples=19 00:19:06.593 iops : min= 7840, max= 8704, avg=8402.53, stdev=299.26, samples=19 00:19:06.593 lat (usec) : 500=91.49%, 750=8.47%, 1000=0.01% 00:19:06.593 lat (msec) : 2=0.01%, 10=0.01% 00:19:06.593 cpu : usr=83.88%, sys=14.20%, ctx=17, majf=0, minf=0 00:19:06.593 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.593 issued rwts: total=83812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.593 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:06.593 00:19:06.593 Run status group 0 (all jobs): 
00:19:06.593 READ: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=327MiB (343MB), run=10001-10001msec 00:19:06.593 15:23:14 -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:06.593 15:23:14 -- target/dif.sh@43 -- # local sub 00:19:06.593 15:23:14 -- target/dif.sh@45 -- # for sub in "$@" 00:19:06.593 15:23:14 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:06.593 15:23:14 -- target/dif.sh@36 -- # local sub_id=0 00:19:06.593 15:23:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:06.593 15:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.593 15:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:06.593 15:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.593 15:23:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:06.593 15:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.593 15:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:06.593 ************************************ 00:19:06.593 END TEST fio_dif_1_default 00:19:06.593 ************************************ 00:19:06.593 15:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.593 00:19:06.593 real 0m11.010s 00:19:06.593 user 0m9.039s 00:19:06.593 sys 0m1.684s 00:19:06.593 15:23:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:06.593 15:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:06.593 15:23:14 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:06.593 15:23:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:06.593 15:23:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:06.593 15:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:06.593 ************************************ 00:19:06.593 START TEST fio_dif_1_multi_subsystems 00:19:06.593 ************************************ 00:19:06.593 15:23:14 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:19:06.593 15:23:14 -- target/dif.sh@92 -- # local files=1 00:19:06.593 15:23:14 -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:06.593 15:23:14 -- target/dif.sh@28 -- # local sub 00:19:06.593 15:23:14 -- target/dif.sh@30 -- # for sub in "$@" 00:19:06.593 15:23:14 -- target/dif.sh@31 -- # create_subsystem 0 00:19:06.593 15:23:14 -- target/dif.sh@18 -- # local sub_id=0 00:19:06.593 15:23:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:06.593 15:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.593 15:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:06.593 bdev_null0 00:19:06.593 15:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.593 15:23:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:06.593 15:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.593 15:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:06.593 15:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.593 15:23:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:06.593 15:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.593 15:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:06.593 15:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.593 15:23:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 
-s 4420 00:19:06.593 15:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.593 15:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:06.593 [2024-04-24 15:23:14.328093] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.593 15:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.593 15:23:14 -- target/dif.sh@30 -- # for sub in "$@" 00:19:06.593 15:23:14 -- target/dif.sh@31 -- # create_subsystem 1 00:19:06.593 15:23:14 -- target/dif.sh@18 -- # local sub_id=1 00:19:06.593 15:23:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:06.593 15:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.593 15:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:06.593 bdev_null1 00:19:06.593 15:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.593 15:23:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:06.593 15:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.593 15:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:06.593 15:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.593 15:23:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:06.593 15:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.593 15:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:06.593 15:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.593 15:23:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:06.593 15:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.593 15:23:14 -- common/autotest_common.sh@10 -- # set +x 00:19:06.593 15:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.593 15:23:14 -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:06.593 15:23:14 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:06.593 15:23:14 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:06.593 15:23:14 -- nvmf/common.sh@521 -- # config=() 00:19:06.593 15:23:14 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:06.593 15:23:14 -- nvmf/common.sh@521 -- # local subsystem config 00:19:06.593 15:23:14 -- target/dif.sh@82 -- # gen_fio_conf 00:19:06.593 15:23:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:06.593 15:23:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:06.593 { 00:19:06.594 "params": { 00:19:06.594 "name": "Nvme$subsystem", 00:19:06.594 "trtype": "$TEST_TRANSPORT", 00:19:06.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:06.594 "adrfam": "ipv4", 00:19:06.594 "trsvcid": "$NVMF_PORT", 00:19:06.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:06.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:06.594 "hdgst": ${hdgst:-false}, 00:19:06.594 "ddgst": ${ddgst:-false} 00:19:06.594 }, 00:19:06.594 "method": "bdev_nvme_attach_controller" 00:19:06.594 } 00:19:06.594 EOF 00:19:06.594 )") 00:19:06.594 15:23:14 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:06.594 15:23:14 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:19:06.594 15:23:14 -- target/dif.sh@54 -- # local file 00:19:06.594 15:23:14 -- common/autotest_common.sh@1325 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:19:06.594 15:23:14 -- target/dif.sh@56 -- # cat 00:19:06.594 15:23:14 -- common/autotest_common.sh@1325 -- # local sanitizers 00:19:06.594 15:23:14 -- nvmf/common.sh@543 -- # cat 00:19:06.594 15:23:14 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:06.594 15:23:14 -- common/autotest_common.sh@1327 -- # shift 00:19:06.594 15:23:14 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:19:06.594 15:23:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:06.594 15:23:14 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:06.594 15:23:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:06.594 15:23:14 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:06.594 15:23:14 -- target/dif.sh@72 -- # (( file <= files )) 00:19:06.594 15:23:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:06.594 { 00:19:06.594 "params": { 00:19:06.594 "name": "Nvme$subsystem", 00:19:06.594 "trtype": "$TEST_TRANSPORT", 00:19:06.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:06.594 "adrfam": "ipv4", 00:19:06.594 "trsvcid": "$NVMF_PORT", 00:19:06.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:06.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:06.594 "hdgst": ${hdgst:-false}, 00:19:06.594 "ddgst": ${ddgst:-false} 00:19:06.594 }, 00:19:06.594 "method": "bdev_nvme_attach_controller" 00:19:06.594 } 00:19:06.594 EOF 00:19:06.594 )") 00:19:06.594 15:23:14 -- common/autotest_common.sh@1331 -- # grep libasan 00:19:06.594 15:23:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:06.594 15:23:14 -- target/dif.sh@73 -- # cat 00:19:06.594 15:23:14 -- nvmf/common.sh@543 -- # cat 00:19:06.594 15:23:14 -- target/dif.sh@72 -- # (( file++ )) 00:19:06.594 15:23:14 -- target/dif.sh@72 -- # (( file <= files )) 00:19:06.594 15:23:14 -- nvmf/common.sh@545 -- # jq . 
00:19:06.594 15:23:14 -- nvmf/common.sh@546 -- # IFS=, 00:19:06.594 15:23:14 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:06.594 "params": { 00:19:06.594 "name": "Nvme0", 00:19:06.594 "trtype": "tcp", 00:19:06.594 "traddr": "10.0.0.2", 00:19:06.594 "adrfam": "ipv4", 00:19:06.594 "trsvcid": "4420", 00:19:06.594 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:06.594 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:06.594 "hdgst": false, 00:19:06.594 "ddgst": false 00:19:06.594 }, 00:19:06.594 "method": "bdev_nvme_attach_controller" 00:19:06.594 },{ 00:19:06.594 "params": { 00:19:06.594 "name": "Nvme1", 00:19:06.594 "trtype": "tcp", 00:19:06.594 "traddr": "10.0.0.2", 00:19:06.594 "adrfam": "ipv4", 00:19:06.594 "trsvcid": "4420", 00:19:06.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:06.594 "hdgst": false, 00:19:06.594 "ddgst": false 00:19:06.594 }, 00:19:06.594 "method": "bdev_nvme_attach_controller" 00:19:06.594 }' 00:19:06.594 15:23:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:06.594 15:23:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:06.594 15:23:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:06.594 15:23:14 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:06.594 15:23:14 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:19:06.594 15:23:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:06.594 15:23:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:06.594 15:23:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:06.594 15:23:14 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:06.594 15:23:14 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:06.594 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:06.594 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:06.594 fio-3.35 00:19:06.594 Starting 2 threads 00:19:16.563 00:19:16.563 filename0: (groupid=0, jobs=1): err= 0: pid=79614: Wed Apr 24 15:23:25 2024 00:19:16.563 read: IOPS=4590, BW=17.9MiB/s (18.8MB/s)(179MiB/10001msec) 00:19:16.563 slat (usec): min=6, max=106, avg=13.32, stdev= 3.87 00:19:16.563 clat (usec): min=520, max=2176, avg=834.55, stdev=69.49 00:19:16.563 lat (usec): min=528, max=2190, avg=847.87, stdev=69.94 00:19:16.563 clat percentiles (usec): 00:19:16.563 | 1.00th=[ 750], 5.00th=[ 766], 10.00th=[ 775], 20.00th=[ 791], 00:19:16.563 | 30.00th=[ 799], 40.00th=[ 807], 50.00th=[ 824], 60.00th=[ 832], 00:19:16.563 | 70.00th=[ 848], 80.00th=[ 873], 90.00th=[ 898], 95.00th=[ 930], 00:19:16.563 | 99.00th=[ 1074], 99.50th=[ 1287], 99.90th=[ 1418], 99.95th=[ 1647], 00:19:16.563 | 99.99th=[ 1860] 00:19:16.563 bw ( KiB/s): min=16224, max=19264, per=49.97%, avg=18349.47, stdev=896.65, samples=19 00:19:16.563 iops : min= 4056, max= 4816, avg=4587.37, stdev=224.16, samples=19 00:19:16.563 lat (usec) : 750=0.85%, 1000=97.40% 00:19:16.563 lat (msec) : 2=1.74%, 4=0.01% 00:19:16.563 cpu : usr=89.37%, sys=9.28%, ctx=15, majf=0, minf=9 00:19:16.563 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.563 issued rwts: total=45908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.563 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:16.563 filename1: (groupid=0, jobs=1): err= 0: pid=79615: Wed Apr 24 15:23:25 2024 00:19:16.563 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(179MiB/10001msec) 00:19:16.563 slat (usec): min=6, max=108, avg=13.13, stdev= 3.86 00:19:16.563 clat (usec): min=646, max=2183, avg=835.73, stdev=76.55 00:19:16.563 lat (usec): min=653, max=2194, avg=848.86, stdev=77.30 00:19:16.563 clat percentiles (usec): 00:19:16.563 | 1.00th=[ 709], 5.00th=[ 734], 10.00th=[ 750], 20.00th=[ 783], 00:19:16.563 | 30.00th=[ 799], 40.00th=[ 816], 50.00th=[ 832], 60.00th=[ 840], 00:19:16.563 | 70.00th=[ 857], 80.00th=[ 881], 90.00th=[ 906], 95.00th=[ 938], 00:19:16.563 | 99.00th=[ 1074], 99.50th=[ 1287], 99.90th=[ 1418], 99.95th=[ 1647], 00:19:16.563 | 99.99th=[ 1844] 00:19:16.563 bw ( KiB/s): min=16224, max=19264, per=49.96%, avg=18347.79, stdev=898.42, samples=19 00:19:16.563 iops : min= 4056, max= 4816, avg=4586.95, stdev=224.61, samples=19 00:19:16.563 lat (usec) : 750=9.23%, 1000=88.86% 00:19:16.563 lat (msec) : 2=1.91%, 4=0.01% 00:19:16.563 cpu : usr=89.10%, sys=9.44%, ctx=71, majf=0, minf=9 00:19:16.563 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.563 issued rwts: total=45904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.563 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:16.563 00:19:16.563 Run status group 0 (all jobs): 00:19:16.563 READ: bw=35.9MiB/s (37.6MB/s), 17.9MiB/s-17.9MiB/s (18.8MB/s-18.8MB/s), io=359MiB (376MB), run=10001-10001msec 00:19:16.563 15:23:25 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:16.563 15:23:25 -- target/dif.sh@43 -- # local sub 00:19:16.563 15:23:25 -- target/dif.sh@45 -- # for sub in "$@" 00:19:16.563 15:23:25 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:16.563 15:23:25 -- target/dif.sh@36 -- # local sub_id=0 00:19:16.563 15:23:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:16.563 15:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.563 15:23:25 -- common/autotest_common.sh@10 -- # set +x 00:19:16.563 15:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.563 15:23:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:16.563 15:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.563 15:23:25 -- common/autotest_common.sh@10 -- # set +x 00:19:16.563 15:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.563 15:23:25 -- target/dif.sh@45 -- # for sub in "$@" 00:19:16.563 15:23:25 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:16.563 15:23:25 -- target/dif.sh@36 -- # local sub_id=1 00:19:16.563 15:23:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:16.563 15:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.563 15:23:25 -- common/autotest_common.sh@10 -- # set +x 00:19:16.563 15:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.563 15:23:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:16.563 15:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.563 15:23:25 -- common/autotest_common.sh@10 -- # set +x 00:19:16.563 
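The destroy_subsystems step traced just above reduces to two RPCs per subsystem: drop the NVMe-oF subsystem, then delete the null bdev that backed its namespace. A minimal sketch of the same cleanup through SPDK's scripts/rpc.py (the rpc.py path is an assumption based on the repo location in this trace; the test itself issues these calls through rpc_cmd):

#!/usr/bin/env bash
# Equivalent of the destroy_subsystems trace above: remove each NVMe-oF
# subsystem, then delete the null bdev behind it.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of rpc.py
for sub in 0 1; do
  "$RPC" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
  "$RPC" bdev_null_delete "bdev_null$sub"
done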
************************************ 00:19:16.563 END TEST fio_dif_1_multi_subsystems 00:19:16.563 ************************************ 00:19:16.563 15:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.563 00:19:16.563 real 0m11.174s 00:19:16.563 user 0m18.625s 00:19:16.563 sys 0m2.167s 00:19:16.563 15:23:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:16.563 15:23:25 -- common/autotest_common.sh@10 -- # set +x 00:19:16.563 15:23:25 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:16.563 15:23:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:16.563 15:23:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:16.563 15:23:25 -- common/autotest_common.sh@10 -- # set +x 00:19:16.563 ************************************ 00:19:16.563 START TEST fio_dif_rand_params 00:19:16.563 ************************************ 00:19:16.563 15:23:25 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:19:16.563 15:23:25 -- target/dif.sh@100 -- # local NULL_DIF 00:19:16.563 15:23:25 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:16.563 15:23:25 -- target/dif.sh@103 -- # NULL_DIF=3 00:19:16.563 15:23:25 -- target/dif.sh@103 -- # bs=128k 00:19:16.563 15:23:25 -- target/dif.sh@103 -- # numjobs=3 00:19:16.563 15:23:25 -- target/dif.sh@103 -- # iodepth=3 00:19:16.563 15:23:25 -- target/dif.sh@103 -- # runtime=5 00:19:16.563 15:23:25 -- target/dif.sh@105 -- # create_subsystems 0 00:19:16.563 15:23:25 -- target/dif.sh@28 -- # local sub 00:19:16.563 15:23:25 -- target/dif.sh@30 -- # for sub in "$@" 00:19:16.563 15:23:25 -- target/dif.sh@31 -- # create_subsystem 0 00:19:16.563 15:23:25 -- target/dif.sh@18 -- # local sub_id=0 00:19:16.563 15:23:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:16.563 15:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.563 15:23:25 -- common/autotest_common.sh@10 -- # set +x 00:19:16.563 bdev_null0 00:19:16.563 15:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.563 15:23:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:16.563 15:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.563 15:23:25 -- common/autotest_common.sh@10 -- # set +x 00:19:16.563 15:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.563 15:23:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:16.563 15:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.563 15:23:25 -- common/autotest_common.sh@10 -- # set +x 00:19:16.563 15:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.563 15:23:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:16.563 15:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.563 15:23:25 -- common/autotest_common.sh@10 -- # set +x 00:19:16.563 [2024-04-24 15:23:25.623987] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.563 15:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.563 15:23:25 -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:16.563 15:23:25 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:16.563 15:23:25 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:16.563 15:23:25 -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:16.563 15:23:25 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:16.563 15:23:25 -- nvmf/common.sh@521 -- # config=() 00:19:16.563 15:23:25 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:19:16.563 15:23:25 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:16.563 15:23:25 -- nvmf/common.sh@521 -- # local subsystem config 00:19:16.563 15:23:25 -- common/autotest_common.sh@1325 -- # local sanitizers 00:19:16.563 15:23:25 -- target/dif.sh@82 -- # gen_fio_conf 00:19:16.563 15:23:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:16.563 15:23:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:16.563 { 00:19:16.563 "params": { 00:19:16.563 "name": "Nvme$subsystem", 00:19:16.563 "trtype": "$TEST_TRANSPORT", 00:19:16.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:16.563 "adrfam": "ipv4", 00:19:16.563 "trsvcid": "$NVMF_PORT", 00:19:16.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:16.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:16.563 "hdgst": ${hdgst:-false}, 00:19:16.563 "ddgst": ${ddgst:-false} 00:19:16.563 }, 00:19:16.563 "method": "bdev_nvme_attach_controller" 00:19:16.563 } 00:19:16.563 EOF 00:19:16.563 )") 00:19:16.563 15:23:25 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:16.563 15:23:25 -- common/autotest_common.sh@1327 -- # shift 00:19:16.563 15:23:25 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:19:16.563 15:23:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:16.563 15:23:25 -- target/dif.sh@54 -- # local file 00:19:16.563 15:23:25 -- target/dif.sh@56 -- # cat 00:19:16.563 15:23:25 -- nvmf/common.sh@543 -- # cat 00:19:16.563 15:23:25 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:16.563 15:23:25 -- common/autotest_common.sh@1331 -- # grep libasan 00:19:16.563 15:23:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:16.563 15:23:25 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:16.563 15:23:25 -- target/dif.sh@72 -- # (( file <= files )) 00:19:16.563 15:23:25 -- nvmf/common.sh@545 -- # jq . 
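The ldd | grep | awk steps above decide what to LD_PRELOAD before launching fio. Below is a minimal sketch of that detection and the resulting invocation, assuming the plugin and fio paths shown in the trace; dif.json and dif.fio are hypothetical file names standing in for the /dev/fd/62 and /dev/fd/61 descriptors the test passes. The usual reason for this dance is that a plugin built with ASan needs the sanitizer runtime preloaded when fio itself is not ASan-instrumented.

#!/usr/bin/env bash
# Detect an ASan runtime linked into the spdk_bdev fio plugin (the ldd |
# grep | awk pattern above) and preload it together with the plugin.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
  lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
  if [[ -n "$lib" ]]; then
    asan_lib="$lib"
    break
  fi
done
# dif.json / dif.fio are hypothetical stand-ins for /dev/fd/62 and /dev/fd/61.
LD_PRELOAD="$asan_lib $plugin" \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=dif.json dif.fio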
00:19:16.563 15:23:25 -- nvmf/common.sh@546 -- # IFS=, 00:19:16.563 15:23:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:16.563 "params": { 00:19:16.563 "name": "Nvme0", 00:19:16.563 "trtype": "tcp", 00:19:16.563 "traddr": "10.0.0.2", 00:19:16.563 "adrfam": "ipv4", 00:19:16.563 "trsvcid": "4420", 00:19:16.563 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:16.563 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:16.563 "hdgst": false, 00:19:16.563 "ddgst": false 00:19:16.563 }, 00:19:16.563 "method": "bdev_nvme_attach_controller" 00:19:16.563 }' 00:19:16.563 15:23:25 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:16.563 15:23:25 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:16.563 15:23:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:16.563 15:23:25 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:16.563 15:23:25 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:19:16.563 15:23:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:16.563 15:23:25 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:16.563 15:23:25 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:16.563 15:23:25 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:16.563 15:23:25 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:16.822 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:16.822 ... 00:19:16.822 fio-3.35 00:19:16.822 Starting 3 threads 00:19:23.393 00:19:23.393 filename0: (groupid=0, jobs=1): err= 0: pid=79777: Wed Apr 24 15:23:31 2024 00:19:23.393 read: IOPS=260, BW=32.6MiB/s (34.2MB/s)(163MiB/5008msec) 00:19:23.393 slat (nsec): min=7196, max=44170, avg=11112.07, stdev=5038.69 00:19:23.393 clat (usec): min=8014, max=12029, avg=11483.09, stdev=209.04 00:19:23.393 lat (usec): min=8023, max=12046, avg=11494.20, stdev=209.19 00:19:23.393 clat percentiles (usec): 00:19:23.393 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11338], 00:19:23.393 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:19:23.393 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11600], 95.00th=[11731], 00:19:23.393 | 99.00th=[11863], 99.50th=[11863], 99.90th=[11994], 99.95th=[11994], 00:19:23.393 | 99.99th=[11994] 00:19:23.393 bw ( KiB/s): min=32958, max=33792, per=33.33%, avg=33324.60, stdev=402.77, samples=10 00:19:23.393 iops : min= 257, max= 264, avg=260.30, stdev= 3.20, samples=10 00:19:23.393 lat (msec) : 10=0.23%, 20=99.77% 00:19:23.393 cpu : usr=90.93%, sys=8.45%, ctx=9, majf=0, minf=9 00:19:23.393 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.393 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.393 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:23.393 filename0: (groupid=0, jobs=1): err= 0: pid=79778: Wed Apr 24 15:23:31 2024 00:19:23.393 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(163MiB/5007msec) 00:19:23.393 slat (nsec): min=7166, max=40199, avg=12737.57, stdev=5008.77 00:19:23.393 clat (usec): min=11273, max=17932, avg=11504.92, stdev=329.25 00:19:23.393 lat (usec): min=11294, max=17956, avg=11517.66, stdev=329.47 
00:19:23.393 clat percentiles (usec): 00:19:23.393 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11338], 00:19:23.393 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:19:23.393 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11600], 95.00th=[11731], 00:19:23.393 | 99.00th=[11863], 99.50th=[11994], 99.90th=[17957], 99.95th=[17957], 00:19:23.393 | 99.99th=[17957] 00:19:23.393 bw ( KiB/s): min=32256, max=33792, per=33.26%, avg=33247.80, stdev=522.03, samples=10 00:19:23.393 iops : min= 252, max= 264, avg=259.70, stdev= 4.11, samples=10 00:19:23.393 lat (msec) : 20=100.00% 00:19:23.393 cpu : usr=91.27%, sys=8.17%, ctx=42, majf=0, minf=9 00:19:23.393 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.393 issued rwts: total=1302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.393 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:23.393 filename0: (groupid=0, jobs=1): err= 0: pid=79779: Wed Apr 24 15:23:31 2024 00:19:23.393 read: IOPS=260, BW=32.6MiB/s (34.1MB/s)(163MiB/5009msec) 00:19:23.393 slat (nsec): min=7326, max=47029, avg=12670.84, stdev=5252.76 00:19:23.393 clat (usec): min=8333, max=12036, avg=11484.25, stdev=189.61 00:19:23.393 lat (usec): min=8343, max=12055, avg=11496.92, stdev=189.67 00:19:23.393 clat percentiles (usec): 00:19:23.393 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11338], 00:19:23.393 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:19:23.393 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11600], 95.00th=[11731], 00:19:23.393 | 99.00th=[11863], 99.50th=[11863], 99.90th=[11994], 99.95th=[11994], 00:19:23.393 | 99.99th=[11994] 00:19:23.393 bw ( KiB/s): min=32761, max=33792, per=33.31%, avg=33304.90, stdev=426.80, samples=10 00:19:23.393 iops : min= 255, max= 264, avg=260.10, stdev= 3.48, samples=10 00:19:23.393 lat (msec) : 10=0.23%, 20=99.77% 00:19:23.393 cpu : usr=91.39%, sys=8.07%, ctx=10, majf=0, minf=0 00:19:23.393 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.393 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.393 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:23.393 00:19:23.393 Run status group 0 (all jobs): 00:19:23.393 READ: bw=97.6MiB/s (102MB/s), 32.5MiB/s-32.6MiB/s (34.1MB/s-34.2MB/s), io=489MiB (513MB), run=5007-5009msec 00:19:23.393 15:23:31 -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:23.393 15:23:31 -- target/dif.sh@43 -- # local sub 00:19:23.393 15:23:31 -- target/dif.sh@45 -- # for sub in "$@" 00:19:23.393 15:23:31 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:23.393 15:23:31 -- target/dif.sh@36 -- # local sub_id=0 00:19:23.393 15:23:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:23.393 15:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.393 15:23:31 -- common/autotest_common.sh@10 -- # set +x 00:19:23.393 15:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.393 15:23:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:23.393 15:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.393 15:23:31 -- 
common/autotest_common.sh@10 -- # set +x 00:19:23.393 15:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.393 15:23:31 -- target/dif.sh@109 -- # NULL_DIF=2 00:19:23.393 15:23:31 -- target/dif.sh@109 -- # bs=4k 00:19:23.393 15:23:31 -- target/dif.sh@109 -- # numjobs=8 00:19:23.393 15:23:31 -- target/dif.sh@109 -- # iodepth=16 00:19:23.393 15:23:31 -- target/dif.sh@109 -- # runtime= 00:19:23.393 15:23:31 -- target/dif.sh@109 -- # files=2 00:19:23.393 15:23:31 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:23.393 15:23:31 -- target/dif.sh@28 -- # local sub 00:19:23.393 15:23:31 -- target/dif.sh@30 -- # for sub in "$@" 00:19:23.393 15:23:31 -- target/dif.sh@31 -- # create_subsystem 0 00:19:23.393 15:23:31 -- target/dif.sh@18 -- # local sub_id=0 00:19:23.393 15:23:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:23.393 15:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.393 15:23:31 -- common/autotest_common.sh@10 -- # set +x 00:19:23.393 bdev_null0 00:19:23.394 15:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.394 15:23:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:23.394 15:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.394 15:23:31 -- common/autotest_common.sh@10 -- # set +x 00:19:23.394 15:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.394 15:23:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:23.394 15:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.394 15:23:31 -- common/autotest_common.sh@10 -- # set +x 00:19:23.394 15:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.394 15:23:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:23.394 15:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.394 15:23:31 -- common/autotest_common.sh@10 -- # set +x 00:19:23.394 [2024-04-24 15:23:31.640962] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.394 15:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.394 15:23:31 -- target/dif.sh@30 -- # for sub in "$@" 00:19:23.394 15:23:31 -- target/dif.sh@31 -- # create_subsystem 1 00:19:23.394 15:23:31 -- target/dif.sh@18 -- # local sub_id=1 00:19:23.394 15:23:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:23.394 15:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.394 15:23:31 -- common/autotest_common.sh@10 -- # set +x 00:19:23.394 bdev_null1 00:19:23.394 15:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.394 15:23:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:23.394 15:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.394 15:23:31 -- common/autotest_common.sh@10 -- # set +x 00:19:23.394 15:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.394 15:23:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:23.394 15:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.394 15:23:31 -- common/autotest_common.sh@10 -- # set +x 00:19:23.394 15:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:19:23.394 15:23:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:23.394 15:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.394 15:23:31 -- common/autotest_common.sh@10 -- # set +x 00:19:23.394 15:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.394 15:23:31 -- target/dif.sh@30 -- # for sub in "$@" 00:19:23.394 15:23:31 -- target/dif.sh@31 -- # create_subsystem 2 00:19:23.394 15:23:31 -- target/dif.sh@18 -- # local sub_id=2 00:19:23.394 15:23:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:23.394 15:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.394 15:23:31 -- common/autotest_common.sh@10 -- # set +x 00:19:23.394 bdev_null2 00:19:23.394 15:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.394 15:23:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:23.394 15:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.394 15:23:31 -- common/autotest_common.sh@10 -- # set +x 00:19:23.394 15:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.394 15:23:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:23.394 15:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.394 15:23:31 -- common/autotest_common.sh@10 -- # set +x 00:19:23.394 15:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.394 15:23:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:23.394 15:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.394 15:23:31 -- common/autotest_common.sh@10 -- # set +x 00:19:23.394 15:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.394 15:23:31 -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:23.394 15:23:31 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:23.394 15:23:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:23.394 15:23:31 -- nvmf/common.sh@521 -- # config=() 00:19:23.394 15:23:31 -- nvmf/common.sh@521 -- # local subsystem config 00:19:23.394 15:23:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:23.394 15:23:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:23.394 15:23:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:23.394 { 00:19:23.394 "params": { 00:19:23.394 "name": "Nvme$subsystem", 00:19:23.394 "trtype": "$TEST_TRANSPORT", 00:19:23.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.394 "adrfam": "ipv4", 00:19:23.394 "trsvcid": "$NVMF_PORT", 00:19:23.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.394 "hdgst": ${hdgst:-false}, 00:19:23.394 "ddgst": ${ddgst:-false} 00:19:23.394 }, 00:19:23.394 "method": "bdev_nvme_attach_controller" 00:19:23.394 } 00:19:23.394 EOF 00:19:23.394 )") 00:19:23.394 15:23:31 -- target/dif.sh@82 -- # gen_fio_conf 00:19:23.394 15:23:31 -- target/dif.sh@54 -- # local file 00:19:23.394 15:23:31 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:23.394 15:23:31 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:19:23.394 15:23:31 -- common/autotest_common.sh@1325 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:19:23.394 15:23:31 -- target/dif.sh@56 -- # cat 00:19:23.394 15:23:31 -- nvmf/common.sh@543 -- # cat 00:19:23.394 15:23:31 -- common/autotest_common.sh@1325 -- # local sanitizers 00:19:23.394 15:23:31 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.394 15:23:31 -- common/autotest_common.sh@1327 -- # shift 00:19:23.394 15:23:31 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:19:23.394 15:23:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:23.394 15:23:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:23.394 15:23:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:23.394 15:23:31 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.394 15:23:31 -- target/dif.sh@72 -- # (( file <= files )) 00:19:23.394 15:23:31 -- common/autotest_common.sh@1331 -- # grep libasan 00:19:23.394 15:23:31 -- target/dif.sh@73 -- # cat 00:19:23.394 15:23:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:23.394 { 00:19:23.394 "params": { 00:19:23.394 "name": "Nvme$subsystem", 00:19:23.394 "trtype": "$TEST_TRANSPORT", 00:19:23.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.394 "adrfam": "ipv4", 00:19:23.394 "trsvcid": "$NVMF_PORT", 00:19:23.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.394 "hdgst": ${hdgst:-false}, 00:19:23.394 "ddgst": ${ddgst:-false} 00:19:23.394 }, 00:19:23.394 "method": "bdev_nvme_attach_controller" 00:19:23.394 } 00:19:23.394 EOF 00:19:23.394 )") 00:19:23.394 15:23:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:23.394 15:23:31 -- nvmf/common.sh@543 -- # cat 00:19:23.394 15:23:31 -- target/dif.sh@72 -- # (( file++ )) 00:19:23.394 15:23:31 -- target/dif.sh@72 -- # (( file <= files )) 00:19:23.394 15:23:31 -- target/dif.sh@73 -- # cat 00:19:23.394 15:23:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:23.394 15:23:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:23.394 { 00:19:23.394 "params": { 00:19:23.394 "name": "Nvme$subsystem", 00:19:23.394 "trtype": "$TEST_TRANSPORT", 00:19:23.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.394 "adrfam": "ipv4", 00:19:23.394 "trsvcid": "$NVMF_PORT", 00:19:23.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.394 "hdgst": ${hdgst:-false}, 00:19:23.394 "ddgst": ${ddgst:-false} 00:19:23.394 }, 00:19:23.394 "method": "bdev_nvme_attach_controller" 00:19:23.394 } 00:19:23.394 EOF 00:19:23.394 )") 00:19:23.394 15:23:31 -- target/dif.sh@72 -- # (( file++ )) 00:19:23.394 15:23:31 -- target/dif.sh@72 -- # (( file <= files )) 00:19:23.394 15:23:31 -- nvmf/common.sh@543 -- # cat 00:19:23.394 15:23:31 -- nvmf/common.sh@545 -- # jq . 
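The three subsystems this run reads from were created a few steps back with one repeated rpc_cmd sequence: a 64 MiB null bdev with 512-byte blocks, 16-byte metadata and DIF type 2, then a subsystem, a namespace and a TCP listener per bdev. The same setup as a standalone sketch against scripts/rpc.py, using only the arguments visible in the trace (the rpc.py path is assumed; the TCP transport itself is created earlier in the test and omitted here):

#!/usr/bin/env bash
# Recreate the three DIF-protected targets this run reads from: a 64 MiB null
# bdev with 512+16 formatting and DIF type 2, one subsystem and namespace per
# bdev, and a TCP listener on 10.0.0.2:4420.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of rpc.py
for sub in 0 1 2; do
  "$RPC" bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
  "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
      --serial-number "53313233-$sub" --allow-any-host
  "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
  "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
      -t tcp -a 10.0.0.2 -s 4420
done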
00:19:23.394 15:23:31 -- nvmf/common.sh@546 -- # IFS=, 00:19:23.394 15:23:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:23.394 "params": { 00:19:23.394 "name": "Nvme0", 00:19:23.394 "trtype": "tcp", 00:19:23.394 "traddr": "10.0.0.2", 00:19:23.394 "adrfam": "ipv4", 00:19:23.394 "trsvcid": "4420", 00:19:23.394 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:23.394 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:23.394 "hdgst": false, 00:19:23.394 "ddgst": false 00:19:23.394 }, 00:19:23.394 "method": "bdev_nvme_attach_controller" 00:19:23.394 },{ 00:19:23.394 "params": { 00:19:23.394 "name": "Nvme1", 00:19:23.394 "trtype": "tcp", 00:19:23.394 "traddr": "10.0.0.2", 00:19:23.394 "adrfam": "ipv4", 00:19:23.394 "trsvcid": "4420", 00:19:23.394 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.394 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.394 "hdgst": false, 00:19:23.394 "ddgst": false 00:19:23.394 }, 00:19:23.394 "method": "bdev_nvme_attach_controller" 00:19:23.394 },{ 00:19:23.394 "params": { 00:19:23.394 "name": "Nvme2", 00:19:23.394 "trtype": "tcp", 00:19:23.394 "traddr": "10.0.0.2", 00:19:23.394 "adrfam": "ipv4", 00:19:23.394 "trsvcid": "4420", 00:19:23.394 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:23.394 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:23.394 "hdgst": false, 00:19:23.394 "ddgst": false 00:19:23.394 }, 00:19:23.394 "method": "bdev_nvme_attach_controller" 00:19:23.394 }' 00:19:23.394 15:23:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:23.394 15:23:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:23.394 15:23:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:23.394 15:23:31 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.394 15:23:31 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:19:23.395 15:23:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:23.395 15:23:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:23.395 15:23:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:23.395 15:23:31 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:23.395 15:23:31 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:23.395 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:23.395 ... 00:19:23.395 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:23.395 ... 00:19:23.395 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:23.395 ... 
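The job definitions echoed above come from the job file gen_fio_conf writes for NULL_DIF=2, bs=4k, numjobs=8, iodepth=16 with three target bdevs. A hedged sketch of a job file that would produce the same three randread job headers and the 24 threads started below; the Nvme0n1/Nvme1n1/Nvme2n1 filenames are assumptions inferred from the attach-controller names, not values shown in the trace.

#!/usr/bin/env bash
# A job file that would reproduce the three headers above: randread, 4 KiB
# blocks, queue depth 16, 8 threads per section -> 24 threads in total.
cat > dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=16
numjobs=8

[filename0]
# assumed bdev name for the controller attached as "Nvme0"
filename=Nvme0n1

[filename1]
# assumed bdev name for "Nvme1"
filename=Nvme1n1

[filename2]
# assumed bdev name for "Nvme2"
filename=Nvme2n1
EOF

fio multiplies each [filenameN] section by numjobs, which is where the "Starting 24 threads" line below comes from.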
00:19:23.395 fio-3.35 00:19:23.395 Starting 24 threads 00:19:35.591 00:19:35.591 filename0: (groupid=0, jobs=1): err= 0: pid=79874: Wed Apr 24 15:23:42 2024 00:19:35.591 read: IOPS=168, BW=674KiB/s (691kB/s)(6756KiB/10017msec) 00:19:35.591 slat (usec): min=4, max=8038, avg=37.50, stdev=360.25 00:19:35.592 clat (msec): min=18, max=168, avg=94.64, stdev=23.36 00:19:35.592 lat (msec): min=18, max=168, avg=94.67, stdev=23.36 00:19:35.592 clat percentiles (msec): 00:19:35.592 | 1.00th=[ 43], 5.00th=[ 61], 10.00th=[ 68], 20.00th=[ 72], 00:19:35.592 | 30.00th=[ 78], 40.00th=[ 84], 50.00th=[ 100], 60.00th=[ 107], 00:19:35.592 | 70.00th=[ 112], 80.00th=[ 117], 90.00th=[ 123], 95.00th=[ 129], 00:19:35.592 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 169], 99.95th=[ 169], 00:19:35.592 | 99.99th=[ 169] 00:19:35.592 bw ( KiB/s): min= 528, max= 888, per=4.17%, avg=671.10, stdev=86.49, samples=20 00:19:35.592 iops : min= 132, max= 222, avg=167.75, stdev=21.58, samples=20 00:19:35.592 lat (msec) : 20=0.41%, 50=1.18%, 100=50.27%, 250=48.13% 00:19:35.592 cpu : usr=41.60%, sys=1.71%, ctx=1268, majf=0, minf=9 00:19:35.592 IO depths : 1=0.1%, 2=2.0%, 4=7.8%, 8=75.5%, 16=14.7%, 32=0.0%, >=64=0.0% 00:19:35.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.592 complete : 0=0.0%, 4=89.0%, 8=9.3%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.592 issued rwts: total=1689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.592 filename0: (groupid=0, jobs=1): err= 0: pid=79875: Wed Apr 24 15:23:42 2024 00:19:35.592 read: IOPS=179, BW=720KiB/s (737kB/s)(7200KiB/10004msec) 00:19:35.592 slat (usec): min=4, max=4038, avg=23.62, stdev=133.72 00:19:35.592 clat (msec): min=5, max=153, avg=88.80, stdev=26.16 00:19:35.592 lat (msec): min=5, max=153, avg=88.83, stdev=26.17 00:19:35.592 clat percentiles (msec): 00:19:35.592 | 1.00th=[ 10], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 71], 00:19:35.592 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 99], 00:19:35.592 | 70.00th=[ 109], 80.00th=[ 115], 90.00th=[ 121], 95.00th=[ 125], 00:19:35.592 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 153], 99.95th=[ 155], 00:19:35.592 | 99.99th=[ 155] 00:19:35.592 bw ( KiB/s): min= 616, max= 992, per=4.34%, avg=699.26, stdev=86.76, samples=19 00:19:35.592 iops : min= 154, max= 248, avg=174.79, stdev=21.70, samples=19 00:19:35.592 lat (msec) : 10=1.22%, 20=0.56%, 50=5.39%, 100=53.83%, 250=39.00% 00:19:35.592 cpu : usr=43.33%, sys=1.65%, ctx=1014, majf=0, minf=9 00:19:35.592 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:35.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.592 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.592 issued rwts: total=1800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.592 filename0: (groupid=0, jobs=1): err= 0: pid=79876: Wed Apr 24 15:23:42 2024 00:19:35.592 read: IOPS=174, BW=700KiB/s (717kB/s)(7012KiB/10019msec) 00:19:35.592 slat (usec): min=7, max=8033, avg=27.90, stdev=287.07 00:19:35.592 clat (msec): min=18, max=145, avg=91.28, stdev=22.59 00:19:35.592 lat (msec): min=18, max=145, avg=91.31, stdev=22.59 00:19:35.592 clat percentiles (msec): 00:19:35.592 | 1.00th=[ 38], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 72], 00:19:35.592 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 87], 60.00th=[ 100], 00:19:35.592 | 70.00th=[ 108], 80.00th=[ 116], 
90.00th=[ 121], 95.00th=[ 121], 00:19:35.592 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 146], 99.95th=[ 146], 00:19:35.592 | 99.99th=[ 146] 00:19:35.592 bw ( KiB/s): min= 608, max= 912, per=4.31%, avg=694.75, stdev=69.41, samples=20 00:19:35.592 iops : min= 152, max= 228, avg=173.65, stdev=17.31, samples=20 00:19:35.592 lat (msec) : 20=0.40%, 50=2.74%, 100=56.99%, 250=39.87% 00:19:35.592 cpu : usr=36.76%, sys=1.82%, ctx=1058, majf=0, minf=9 00:19:35.592 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=81.1%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:35.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.592 complete : 0=0.0%, 4=87.5%, 8=11.9%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.592 issued rwts: total=1753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.592 filename0: (groupid=0, jobs=1): err= 0: pid=79877: Wed Apr 24 15:23:42 2024 00:19:35.592 read: IOPS=141, BW=568KiB/s (582kB/s)(5696KiB/10029msec) 00:19:35.592 slat (usec): min=4, max=4052, avg=25.31, stdev=166.88 00:19:35.592 clat (msec): min=47, max=181, avg=112.41, stdev=23.55 00:19:35.592 lat (msec): min=47, max=181, avg=112.44, stdev=23.55 00:19:35.592 clat percentiles (msec): 00:19:35.592 | 1.00th=[ 51], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 99], 00:19:35.592 | 30.00th=[ 107], 40.00th=[ 109], 50.00th=[ 112], 60.00th=[ 117], 00:19:35.592 | 70.00th=[ 121], 80.00th=[ 125], 90.00th=[ 142], 95.00th=[ 155], 00:19:35.592 | 99.00th=[ 171], 99.50th=[ 174], 99.90th=[ 182], 99.95th=[ 182], 00:19:35.592 | 99.99th=[ 182] 00:19:35.592 bw ( KiB/s): min= 384, max= 880, per=3.50%, avg=563.00, stdev=114.47, samples=20 00:19:35.592 iops : min= 96, max= 220, avg=140.70, stdev=28.61, samples=20 00:19:35.592 lat (msec) : 50=0.14%, 100=22.05%, 250=77.81% 00:19:35.592 cpu : usr=43.20%, sys=2.07%, ctx=1256, majf=0, minf=9 00:19:35.592 IO depths : 1=0.1%, 2=6.2%, 4=24.9%, 8=56.2%, 16=12.5%, 32=0.0%, >=64=0.0% 00:19:35.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.592 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.592 issued rwts: total=1424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.592 filename0: (groupid=0, jobs=1): err= 0: pid=79878: Wed Apr 24 15:23:42 2024 00:19:35.592 read: IOPS=174, BW=699KiB/s (715kB/s)(6992KiB/10007msec) 00:19:35.592 slat (usec): min=4, max=8039, avg=28.69, stdev=287.78 00:19:35.592 clat (msec): min=6, max=144, avg=91.47, stdev=23.51 00:19:35.592 lat (msec): min=10, max=144, avg=91.49, stdev=23.51 00:19:35.592 clat percentiles (msec): 00:19:35.592 | 1.00th=[ 36], 5.00th=[ 59], 10.00th=[ 63], 20.00th=[ 72], 00:19:35.592 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 92], 60.00th=[ 103], 00:19:35.592 | 70.00th=[ 109], 80.00th=[ 117], 90.00th=[ 121], 95.00th=[ 123], 00:19:35.592 | 99.00th=[ 133], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:19:35.592 | 99.99th=[ 144] 00:19:35.592 bw ( KiB/s): min= 608, max= 912, per=4.25%, avg=684.63, stdev=69.81, samples=19 00:19:35.592 iops : min= 152, max= 228, avg=171.16, stdev=17.45, samples=19 00:19:35.592 lat (msec) : 10=0.06%, 20=0.74%, 50=2.92%, 100=55.43%, 250=40.85% 00:19:35.592 cpu : usr=34.84%, sys=1.79%, ctx=951, majf=0, minf=9 00:19:35.592 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=81.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:35.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.592 complete : 0=0.0%, 4=87.6%, 8=11.8%, 
16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.592 issued rwts: total=1748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.592 filename0: (groupid=0, jobs=1): err= 0: pid=79879: Wed Apr 24 15:23:42 2024 00:19:35.592 read: IOPS=171, BW=685KiB/s (702kB/s)(6888KiB/10053msec) 00:19:35.592 slat (usec): min=5, max=7523, avg=23.42, stdev=205.38 00:19:35.592 clat (msec): min=2, max=155, avg=93.14, stdev=29.42 00:19:35.592 lat (msec): min=2, max=155, avg=93.16, stdev=29.42 00:19:35.592 clat percentiles (msec): 00:19:35.592 | 1.00th=[ 6], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 72], 00:19:35.592 | 30.00th=[ 79], 40.00th=[ 85], 50.00th=[ 101], 60.00th=[ 108], 00:19:35.592 | 70.00th=[ 112], 80.00th=[ 120], 90.00th=[ 123], 95.00th=[ 128], 00:19:35.592 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:19:35.592 | 99.99th=[ 157] 00:19:35.592 bw ( KiB/s): min= 512, max= 1408, per=4.24%, avg=682.15, stdev=198.24, samples=20 00:19:35.592 iops : min= 128, max= 352, avg=170.50, stdev=49.54, samples=20 00:19:35.592 lat (msec) : 4=0.93%, 10=1.86%, 20=0.93%, 50=3.77%, 100=41.93% 00:19:35.592 lat (msec) : 250=50.58% 00:19:35.592 cpu : usr=38.04%, sys=1.69%, ctx=1228, majf=0, minf=9 00:19:35.592 IO depths : 1=0.2%, 2=1.3%, 4=4.8%, 8=77.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:35.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.592 complete : 0=0.0%, 4=88.9%, 8=10.1%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.592 issued rwts: total=1722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.592 filename0: (groupid=0, jobs=1): err= 0: pid=79880: Wed Apr 24 15:23:42 2024 00:19:35.592 read: IOPS=183, BW=732KiB/s (750kB/s)(7324KiB/10001msec) 00:19:35.592 slat (usec): min=3, max=8042, avg=28.18, stdev=282.36 00:19:35.592 clat (usec): min=1190, max=143935, avg=87260.68, stdev=27361.74 00:19:35.592 lat (usec): min=1198, max=143944, avg=87288.86, stdev=27368.84 00:19:35.592 clat percentiles (msec): 00:19:35.592 | 1.00th=[ 4], 5.00th=[ 46], 10.00th=[ 58], 20.00th=[ 70], 00:19:35.592 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 96], 00:19:35.592 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 121], 95.00th=[ 125], 00:19:35.592 | 99.00th=[ 133], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:19:35.592 | 99.99th=[ 144] 00:19:35.592 bw ( KiB/s): min= 616, max= 998, per=4.37%, avg=704.84, stdev=88.62, samples=19 00:19:35.593 iops : min= 154, max= 249, avg=176.16, stdev=22.08, samples=19 00:19:35.593 lat (msec) : 2=0.33%, 4=0.71%, 10=1.04%, 20=0.71%, 50=5.13% 00:19:35.593 lat (msec) : 100=53.74%, 250=38.34% 00:19:35.593 cpu : usr=32.05%, sys=1.19%, ctx=903, majf=0, minf=9 00:19:35.593 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:35.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.593 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.593 issued rwts: total=1831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.593 filename0: (groupid=0, jobs=1): err= 0: pid=79881: Wed Apr 24 15:23:42 2024 00:19:35.593 read: IOPS=163, BW=653KiB/s (668kB/s)(6556KiB/10047msec) 00:19:35.593 slat (usec): min=4, max=8025, avg=30.42, stdev=278.08 00:19:35.593 clat (msec): min=9, max=176, avg=97.77, stdev=24.46 00:19:35.593 lat (msec): min=9, max=176, avg=97.80, stdev=24.47 00:19:35.593 clat 
percentiles (msec): 00:19:35.593 | 1.00th=[ 13], 5.00th=[ 61], 10.00th=[ 70], 20.00th=[ 75], 00:19:35.593 | 30.00th=[ 84], 40.00th=[ 99], 50.00th=[ 105], 60.00th=[ 109], 00:19:35.593 | 70.00th=[ 114], 80.00th=[ 118], 90.00th=[ 125], 95.00th=[ 129], 00:19:35.593 | 99.00th=[ 140], 99.50th=[ 150], 99.90th=[ 178], 99.95th=[ 178], 00:19:35.593 | 99.99th=[ 178] 00:19:35.593 bw ( KiB/s): min= 528, max= 1136, per=4.04%, avg=651.80, stdev=128.42, samples=20 00:19:35.593 iops : min= 132, max= 284, avg=162.90, stdev=32.10, samples=20 00:19:35.593 lat (msec) : 10=0.85%, 20=1.10%, 50=0.98%, 100=41.37%, 250=55.70% 00:19:35.593 cpu : usr=41.59%, sys=1.96%, ctx=1367, majf=0, minf=9 00:19:35.593 IO depths : 1=0.1%, 2=2.5%, 4=9.9%, 8=72.6%, 16=14.9%, 32=0.0%, >=64=0.0% 00:19:35.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.593 complete : 0=0.0%, 4=90.0%, 8=7.8%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.593 issued rwts: total=1639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.593 filename1: (groupid=0, jobs=1): err= 0: pid=79882: Wed Apr 24 15:23:42 2024 00:19:35.593 read: IOPS=151, BW=607KiB/s (622kB/s)(6088KiB/10027msec) 00:19:35.593 slat (usec): min=4, max=4033, avg=23.73, stdev=177.42 00:19:35.593 clat (msec): min=37, max=171, avg=105.11, stdev=24.71 00:19:35.593 lat (msec): min=37, max=171, avg=105.14, stdev=24.72 00:19:35.593 clat percentiles (msec): 00:19:35.593 | 1.00th=[ 44], 5.00th=[ 65], 10.00th=[ 66], 20.00th=[ 82], 00:19:35.593 | 30.00th=[ 102], 40.00th=[ 107], 50.00th=[ 109], 60.00th=[ 113], 00:19:35.593 | 70.00th=[ 118], 80.00th=[ 121], 90.00th=[ 130], 95.00th=[ 153], 00:19:35.593 | 99.00th=[ 159], 99.50th=[ 161], 99.90th=[ 171], 99.95th=[ 171], 00:19:35.593 | 99.99th=[ 171] 00:19:35.593 bw ( KiB/s): min= 496, max= 896, per=3.74%, avg=602.30, stdev=103.71, samples=20 00:19:35.593 iops : min= 124, max= 224, avg=150.55, stdev=25.89, samples=20 00:19:35.593 lat (msec) : 50=2.63%, 100=26.81%, 250=70.57% 00:19:35.593 cpu : usr=43.98%, sys=1.97%, ctx=1478, majf=0, minf=9 00:19:35.593 IO depths : 1=0.1%, 2=4.6%, 4=18.3%, 8=63.5%, 16=13.5%, 32=0.0%, >=64=0.0% 00:19:35.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.593 complete : 0=0.0%, 4=92.5%, 8=3.5%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.593 issued rwts: total=1522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.593 filename1: (groupid=0, jobs=1): err= 0: pid=79883: Wed Apr 24 15:23:42 2024 00:19:35.593 read: IOPS=164, BW=657KiB/s (673kB/s)(6600KiB/10043msec) 00:19:35.593 slat (usec): min=7, max=8026, avg=20.68, stdev=197.35 00:19:35.593 clat (msec): min=17, max=162, avg=97.15, stdev=23.59 00:19:35.593 lat (msec): min=17, max=162, avg=97.17, stdev=23.59 00:19:35.593 clat percentiles (msec): 00:19:35.593 | 1.00th=[ 37], 5.00th=[ 57], 10.00th=[ 65], 20.00th=[ 74], 00:19:35.593 | 30.00th=[ 84], 40.00th=[ 96], 50.00th=[ 105], 60.00th=[ 108], 00:19:35.593 | 70.00th=[ 112], 80.00th=[ 120], 90.00th=[ 122], 95.00th=[ 130], 00:19:35.593 | 99.00th=[ 138], 99.50th=[ 157], 99.90th=[ 161], 99.95th=[ 163], 00:19:35.593 | 99.99th=[ 163] 00:19:35.593 bw ( KiB/s): min= 520, max= 1008, per=4.06%, avg=653.40, stdev=106.10, samples=20 00:19:35.593 iops : min= 130, max= 252, avg=163.30, stdev=26.51, samples=20 00:19:35.593 lat (msec) : 20=0.97%, 50=2.73%, 100=43.33%, 250=52.97% 00:19:35.593 cpu : usr=33.50%, sys=1.38%, ctx=1224, majf=0, minf=9 
00:19:35.593 IO depths : 1=0.2%, 2=1.0%, 4=3.3%, 8=79.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:35.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.593 complete : 0=0.0%, 4=88.5%, 8=10.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.593 issued rwts: total=1650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.593 filename1: (groupid=0, jobs=1): err= 0: pid=79884: Wed Apr 24 15:23:42 2024 00:19:35.593 read: IOPS=173, BW=693KiB/s (710kB/s)(6952KiB/10033msec) 00:19:35.593 slat (usec): min=3, max=8044, avg=34.69, stdev=384.36 00:19:35.593 clat (msec): min=22, max=168, avg=92.17, stdev=24.78 00:19:35.593 lat (msec): min=22, max=168, avg=92.20, stdev=24.80 00:19:35.593 clat percentiles (msec): 00:19:35.593 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 71], 00:19:35.593 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 95], 60.00th=[ 107], 00:19:35.593 | 70.00th=[ 109], 80.00th=[ 117], 90.00th=[ 121], 95.00th=[ 127], 00:19:35.593 | 99.00th=[ 134], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 169], 00:19:35.593 | 99.99th=[ 169] 00:19:35.593 bw ( KiB/s): min= 568, max= 1032, per=4.27%, avg=688.35, stdev=125.12, samples=20 00:19:35.593 iops : min= 142, max= 258, avg=172.05, stdev=31.27, samples=20 00:19:35.593 lat (msec) : 50=5.98%, 100=48.45%, 250=45.57% 00:19:35.593 cpu : usr=31.42%, sys=1.69%, ctx=959, majf=0, minf=9 00:19:35.593 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.7%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:35.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.593 complete : 0=0.0%, 4=87.5%, 8=12.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.593 issued rwts: total=1738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.593 filename1: (groupid=0, jobs=1): err= 0: pid=79885: Wed Apr 24 15:23:42 2024 00:19:35.593 read: IOPS=172, BW=689KiB/s (706kB/s)(6928KiB/10053msec) 00:19:35.593 slat (usec): min=4, max=4047, avg=22.97, stdev=136.87 00:19:35.593 clat (msec): min=4, max=153, avg=92.58, stdev=27.74 00:19:35.593 lat (msec): min=4, max=153, avg=92.60, stdev=27.74 00:19:35.593 clat percentiles (msec): 00:19:35.593 | 1.00th=[ 7], 5.00th=[ 48], 10.00th=[ 63], 20.00th=[ 72], 00:19:35.593 | 30.00th=[ 80], 40.00th=[ 86], 50.00th=[ 97], 60.00th=[ 107], 00:19:35.593 | 70.00th=[ 112], 80.00th=[ 117], 90.00th=[ 123], 95.00th=[ 128], 00:19:35.593 | 99.00th=[ 132], 99.50th=[ 142], 99.90th=[ 155], 99.95th=[ 155], 00:19:35.593 | 99.99th=[ 155] 00:19:35.593 bw ( KiB/s): min= 528, max= 1520, per=4.26%, avg=686.20, stdev=205.00, samples=20 00:19:35.593 iops : min= 132, max= 380, avg=171.50, stdev=51.27, samples=20 00:19:35.593 lat (msec) : 10=2.66%, 20=1.04%, 50=2.02%, 100=44.46%, 250=49.83% 00:19:35.593 cpu : usr=43.21%, sys=1.89%, ctx=1285, majf=0, minf=9 00:19:35.593 IO depths : 1=0.1%, 2=1.5%, 4=5.9%, 8=76.9%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:35.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.593 complete : 0=0.0%, 4=89.0%, 8=9.7%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.593 issued rwts: total=1732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.593 filename1: (groupid=0, jobs=1): err= 0: pid=79886: Wed Apr 24 15:23:42 2024 00:19:35.593 read: IOPS=160, BW=644KiB/s (659kB/s)(6448KiB/10014msec) 00:19:35.593 slat (usec): min=6, max=8039, avg=29.38, stdev=299.51 00:19:35.593 clat (msec): min=18, max=157, 
avg=99.21, stdev=23.19 00:19:35.593 lat (msec): min=19, max=157, avg=99.24, stdev=23.19 00:19:35.593 clat percentiles (msec): 00:19:35.593 | 1.00th=[ 47], 5.00th=[ 64], 10.00th=[ 72], 20.00th=[ 77], 00:19:35.593 | 30.00th=[ 83], 40.00th=[ 93], 50.00th=[ 108], 60.00th=[ 109], 00:19:35.593 | 70.00th=[ 114], 80.00th=[ 121], 90.00th=[ 125], 95.00th=[ 134], 00:19:35.593 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 157], 99.95th=[ 159], 00:19:35.593 | 99.99th=[ 159] 00:19:35.593 bw ( KiB/s): min= 512, max= 814, per=3.98%, avg=640.35, stdev=80.33, samples=20 00:19:35.593 iops : min= 128, max= 203, avg=160.05, stdev=20.01, samples=20 00:19:35.593 lat (msec) : 20=0.43%, 50=1.12%, 100=44.29%, 250=54.16% 00:19:35.593 cpu : usr=37.71%, sys=1.69%, ctx=1265, majf=0, minf=9 00:19:35.593 IO depths : 1=0.1%, 2=2.3%, 4=9.3%, 8=73.6%, 16=14.8%, 32=0.0%, >=64=0.0% 00:19:35.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.593 complete : 0=0.0%, 4=89.7%, 8=8.3%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.593 issued rwts: total=1612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.593 filename1: (groupid=0, jobs=1): err= 0: pid=79887: Wed Apr 24 15:23:42 2024 00:19:35.593 read: IOPS=154, BW=618KiB/s (633kB/s)(6200KiB/10034msec) 00:19:35.593 slat (usec): min=6, max=8025, avg=24.96, stdev=270.47 00:19:35.593 clat (msec): min=14, max=168, avg=103.34, stdev=25.46 00:19:35.593 lat (msec): min=14, max=168, avg=103.36, stdev=25.46 00:19:35.593 clat percentiles (msec): 00:19:35.593 | 1.00th=[ 34], 5.00th=[ 66], 10.00th=[ 71], 20.00th=[ 80], 00:19:35.593 | 30.00th=[ 93], 40.00th=[ 107], 50.00th=[ 108], 60.00th=[ 112], 00:19:35.593 | 70.00th=[ 120], 80.00th=[ 121], 90.00th=[ 132], 95.00th=[ 144], 00:19:35.593 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 00:19:35.593 | 99.99th=[ 169] 00:19:35.593 bw ( KiB/s): min= 400, max= 1024, per=3.81%, avg=613.20, stdev=129.38, samples=20 00:19:35.593 iops : min= 100, max= 256, avg=153.25, stdev=32.32, samples=20 00:19:35.593 lat (msec) : 20=0.90%, 50=1.81%, 100=31.29%, 250=66.00% 00:19:35.593 cpu : usr=31.59%, sys=1.69%, ctx=913, majf=0, minf=9 00:19:35.593 IO depths : 1=0.1%, 2=4.3%, 4=17.2%, 8=64.8%, 16=13.7%, 32=0.0%, >=64=0.0% 00:19:35.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.593 complete : 0=0.0%, 4=92.1%, 8=4.1%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.593 issued rwts: total=1550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.593 filename1: (groupid=0, jobs=1): err= 0: pid=79888: Wed Apr 24 15:23:42 2024 00:19:35.593 read: IOPS=165, BW=662KiB/s (678kB/s)(6624KiB/10007msec) 00:19:35.593 slat (usec): min=5, max=6848, avg=39.10, stdev=313.91 00:19:35.594 clat (msec): min=10, max=185, avg=96.45, stdev=24.43 00:19:35.594 lat (msec): min=10, max=185, avg=96.49, stdev=24.42 00:19:35.594 clat percentiles (msec): 00:19:35.594 | 1.00th=[ 20], 5.00th=[ 62], 10.00th=[ 66], 20.00th=[ 73], 00:19:35.594 | 30.00th=[ 80], 40.00th=[ 90], 50.00th=[ 103], 60.00th=[ 108], 00:19:35.594 | 70.00th=[ 112], 80.00th=[ 117], 90.00th=[ 124], 95.00th=[ 131], 00:19:35.594 | 99.00th=[ 148], 99.50th=[ 150], 99.90th=[ 186], 99.95th=[ 186], 00:19:35.594 | 99.99th=[ 186] 00:19:35.594 bw ( KiB/s): min= 512, max= 880, per=4.01%, avg=645.05, stdev=86.17, samples=19 00:19:35.594 iops : min= 128, max= 220, avg=161.26, stdev=21.54, samples=19 00:19:35.594 lat (msec) : 
20=1.15%, 50=1.21%, 100=46.07%, 250=51.57% 00:19:35.594 cpu : usr=42.08%, sys=1.96%, ctx=1334, majf=0, minf=9 00:19:35.594 IO depths : 1=0.1%, 2=2.7%, 4=10.6%, 8=72.4%, 16=14.3%, 32=0.0%, >=64=0.0% 00:19:35.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.594 complete : 0=0.0%, 4=89.9%, 8=7.8%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.594 issued rwts: total=1656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.594 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.594 filename1: (groupid=0, jobs=1): err= 0: pid=79889: Wed Apr 24 15:23:42 2024 00:19:35.594 read: IOPS=165, BW=661KiB/s (677kB/s)(6612KiB/10007msec) 00:19:35.594 slat (usec): min=7, max=8040, avg=26.53, stdev=279.06 00:19:35.594 clat (msec): min=10, max=180, avg=96.68, stdev=24.50 00:19:35.594 lat (msec): min=10, max=180, avg=96.71, stdev=24.52 00:19:35.594 clat percentiles (msec): 00:19:35.594 | 1.00th=[ 20], 5.00th=[ 62], 10.00th=[ 65], 20.00th=[ 72], 00:19:35.594 | 30.00th=[ 82], 40.00th=[ 88], 50.00th=[ 105], 60.00th=[ 108], 00:19:35.594 | 70.00th=[ 112], 80.00th=[ 120], 90.00th=[ 123], 95.00th=[ 132], 00:19:35.594 | 99.00th=[ 148], 99.50th=[ 148], 99.90th=[ 180], 99.95th=[ 180], 00:19:35.594 | 99.99th=[ 180] 00:19:35.594 bw ( KiB/s): min= 512, max= 880, per=4.01%, avg=645.05, stdev=83.45, samples=19 00:19:35.594 iops : min= 128, max= 220, avg=161.26, stdev=20.86, samples=19 00:19:35.594 lat (msec) : 20=1.15%, 50=1.09%, 100=46.64%, 250=51.12% 00:19:35.594 cpu : usr=34.55%, sys=1.55%, ctx=1154, majf=0, minf=9 00:19:35.594 IO depths : 1=0.1%, 2=2.5%, 4=9.9%, 8=73.0%, 16=14.6%, 32=0.0%, >=64=0.0% 00:19:35.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.594 complete : 0=0.0%, 4=89.8%, 8=8.0%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.594 issued rwts: total=1653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.594 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.594 filename2: (groupid=0, jobs=1): err= 0: pid=79890: Wed Apr 24 15:23:42 2024 00:19:35.594 read: IOPS=172, BW=690KiB/s (707kB/s)(6912KiB/10017msec) 00:19:35.594 slat (usec): min=3, max=8056, avg=44.95, stdev=472.09 00:19:35.594 clat (msec): min=37, max=153, avg=92.48, stdev=22.28 00:19:35.594 lat (msec): min=37, max=153, avg=92.52, stdev=22.29 00:19:35.594 clat percentiles (msec): 00:19:35.594 | 1.00th=[ 44], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 72], 00:19:35.594 | 30.00th=[ 75], 40.00th=[ 84], 50.00th=[ 93], 60.00th=[ 106], 00:19:35.594 | 70.00th=[ 108], 80.00th=[ 117], 90.00th=[ 121], 95.00th=[ 123], 00:19:35.594 | 99.00th=[ 132], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 155], 00:19:35.594 | 99.99th=[ 155] 00:19:35.594 bw ( KiB/s): min= 584, max= 934, per=4.27%, avg=687.45, stdev=79.95, samples=20 00:19:35.594 iops : min= 146, max= 233, avg=171.80, stdev=19.85, samples=20 00:19:35.594 lat (msec) : 50=2.89%, 100=53.53%, 250=43.58% 00:19:35.594 cpu : usr=31.43%, sys=1.70%, ctx=960, majf=0, minf=9 00:19:35.594 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.6%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:35.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.594 complete : 0=0.0%, 4=87.5%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.594 issued rwts: total=1728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.594 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.594 filename2: (groupid=0, jobs=1): err= 0: pid=79891: Wed Apr 24 15:23:42 2024 00:19:35.594 read: IOPS=172, BW=692KiB/s (708kB/s)(6928KiB/10017msec) 
00:19:35.594 slat (usec): min=4, max=8051, avg=28.36, stdev=255.70 00:19:35.594 clat (msec): min=27, max=149, avg=92.38, stdev=23.66 00:19:35.594 lat (msec): min=27, max=149, avg=92.41, stdev=23.67 00:19:35.594 clat percentiles (msec): 00:19:35.594 | 1.00th=[ 32], 5.00th=[ 54], 10.00th=[ 64], 20.00th=[ 71], 00:19:35.594 | 30.00th=[ 78], 40.00th=[ 84], 50.00th=[ 93], 60.00th=[ 106], 00:19:35.594 | 70.00th=[ 111], 80.00th=[ 117], 90.00th=[ 121], 95.00th=[ 125], 00:19:35.594 | 99.00th=[ 131], 99.50th=[ 140], 99.90th=[ 148], 99.95th=[ 150], 00:19:35.594 | 99.99th=[ 150] 00:19:35.594 bw ( KiB/s): min= 560, max= 1021, per=4.26%, avg=686.15, stdev=108.12, samples=20 00:19:35.594 iops : min= 140, max= 255, avg=171.50, stdev=26.97, samples=20 00:19:35.594 lat (msec) : 50=3.75%, 100=51.44%, 250=44.80% 00:19:35.594 cpu : usr=41.16%, sys=1.76%, ctx=1179, majf=0, minf=9 00:19:35.594 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.2%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:35.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.594 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.594 issued rwts: total=1732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.594 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.594 filename2: (groupid=0, jobs=1): err= 0: pid=79892: Wed Apr 24 15:23:42 2024 00:19:35.594 read: IOPS=176, BW=707KiB/s (724kB/s)(7088KiB/10031msec) 00:19:35.594 slat (usec): min=4, max=10026, avg=32.60, stdev=324.05 00:19:35.594 clat (msec): min=23, max=144, avg=90.38, stdev=24.09 00:19:35.594 lat (msec): min=23, max=149, avg=90.42, stdev=24.11 00:19:35.594 clat percentiles (msec): 00:19:35.594 | 1.00th=[ 36], 5.00th=[ 49], 10.00th=[ 61], 20.00th=[ 71], 00:19:35.594 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 88], 60.00th=[ 104], 00:19:35.594 | 70.00th=[ 109], 80.00th=[ 116], 90.00th=[ 121], 95.00th=[ 125], 00:19:35.594 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:19:35.594 | 99.99th=[ 144] 00:19:35.594 bw ( KiB/s): min= 608, max= 1056, per=4.36%, avg=702.35, stdev=103.32, samples=20 00:19:35.594 iops : min= 152, max= 264, avg=175.55, stdev=25.81, samples=20 00:19:35.594 lat (msec) : 50=6.04%, 100=52.54%, 250=41.42% 00:19:35.594 cpu : usr=39.95%, sys=1.78%, ctx=1119, majf=0, minf=9 00:19:35.594 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.3%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:35.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.594 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.594 issued rwts: total=1772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.594 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.594 filename2: (groupid=0, jobs=1): err= 0: pid=79893: Wed Apr 24 15:23:42 2024 00:19:35.594 read: IOPS=168, BW=675KiB/s (692kB/s)(6784KiB/10046msec) 00:19:35.594 slat (usec): min=3, max=8067, avg=44.70, stdev=476.58 00:19:35.594 clat (msec): min=9, max=166, avg=94.44, stdev=25.41 00:19:35.594 lat (msec): min=9, max=166, avg=94.48, stdev=25.41 00:19:35.594 clat percentiles (msec): 00:19:35.594 | 1.00th=[ 16], 5.00th=[ 51], 10.00th=[ 62], 20.00th=[ 72], 00:19:35.594 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 100], 60.00th=[ 108], 00:19:35.594 | 70.00th=[ 111], 80.00th=[ 118], 90.00th=[ 121], 95.00th=[ 124], 00:19:35.594 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 167], 00:19:35.594 | 99.99th=[ 167] 00:19:35.594 bw ( KiB/s): min= 528, max= 1136, per=4.17%, avg=671.80, stdev=130.62, samples=20 00:19:35.594 iops : 
min= 132, max= 284, avg=167.90, stdev=32.65, samples=20 00:19:35.594 lat (msec) : 10=0.83%, 20=1.06%, 50=3.01%, 100=46.05%, 250=49.06% 00:19:35.594 cpu : usr=31.61%, sys=1.55%, ctx=964, majf=0, minf=9 00:19:35.594 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=79.9%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:35.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.594 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.594 issued rwts: total=1696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.594 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.594 filename2: (groupid=0, jobs=1): err= 0: pid=79894: Wed Apr 24 15:23:42 2024 00:19:35.594 read: IOPS=164, BW=657KiB/s (672kB/s)(6576KiB/10015msec) 00:19:35.594 slat (usec): min=3, max=8035, avg=21.64, stdev=197.98 00:19:35.594 clat (msec): min=14, max=155, avg=97.32, stdev=21.95 00:19:35.594 lat (msec): min=14, max=155, avg=97.34, stdev=21.96 00:19:35.594 clat percentiles (msec): 00:19:35.594 | 1.00th=[ 48], 5.00th=[ 63], 10.00th=[ 71], 20.00th=[ 73], 00:19:35.594 | 30.00th=[ 83], 40.00th=[ 89], 50.00th=[ 103], 60.00th=[ 109], 00:19:35.594 | 70.00th=[ 111], 80.00th=[ 120], 90.00th=[ 123], 95.00th=[ 129], 00:19:35.594 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:19:35.594 | 99.99th=[ 157] 00:19:35.594 bw ( KiB/s): min= 512, max= 768, per=4.06%, avg=653.15, stdev=66.14, samples=20 00:19:35.594 iops : min= 128, max= 192, avg=163.25, stdev=16.47, samples=20 00:19:35.594 lat (msec) : 20=0.06%, 50=1.40%, 100=47.93%, 250=50.61% 00:19:35.594 cpu : usr=31.64%, sys=1.55%, ctx=917, majf=0, minf=9 00:19:35.594 IO depths : 1=0.1%, 2=2.3%, 4=9.2%, 8=73.9%, 16=14.5%, 32=0.0%, >=64=0.0% 00:19:35.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.594 complete : 0=0.0%, 4=89.4%, 8=8.5%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.594 issued rwts: total=1644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.594 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.594 filename2: (groupid=0, jobs=1): err= 0: pid=79895: Wed Apr 24 15:23:42 2024 00:19:35.594 read: IOPS=173, BW=694KiB/s (710kB/s)(6944KiB/10012msec) 00:19:35.594 slat (usec): min=4, max=4053, avg=22.92, stdev=127.20 00:19:35.594 clat (msec): min=35, max=155, avg=92.12, stdev=22.44 00:19:35.594 lat (msec): min=35, max=155, avg=92.15, stdev=22.43 00:19:35.594 clat percentiles (msec): 00:19:35.594 | 1.00th=[ 44], 5.00th=[ 61], 10.00th=[ 65], 20.00th=[ 72], 00:19:35.594 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 88], 60.00th=[ 104], 00:19:35.594 | 70.00th=[ 109], 80.00th=[ 115], 90.00th=[ 122], 95.00th=[ 126], 00:19:35.594 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 157], 99.95th=[ 157], 00:19:35.594 | 99.99th=[ 157] 00:19:35.594 bw ( KiB/s): min= 576, max= 896, per=4.29%, avg=690.30, stdev=72.10, samples=20 00:19:35.594 iops : min= 144, max= 224, avg=172.55, stdev=18.00, samples=20 00:19:35.595 lat (msec) : 50=1.96%, 100=56.16%, 250=41.88% 00:19:35.595 cpu : usr=39.43%, sys=1.80%, ctx=1258, majf=0, minf=9 00:19:35.595 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.9%, 16=15.0%, 32=0.0%, >=64=0.0% 00:19:35.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.595 complete : 0=0.0%, 4=88.0%, 8=11.0%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.595 issued rwts: total=1736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.595 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.595 filename2: (groupid=0, jobs=1): err= 0: pid=79896: Wed Apr 24 
15:23:42 2024 00:19:35.595 read: IOPS=178, BW=715KiB/s (732kB/s)(7152KiB/10006msec) 00:19:35.595 slat (usec): min=4, max=8023, avg=31.78, stdev=268.02 00:19:35.595 clat (msec): min=5, max=144, avg=89.40, stdev=24.76 00:19:35.595 lat (msec): min=5, max=144, avg=89.43, stdev=24.76 00:19:35.595 clat percentiles (msec): 00:19:35.595 | 1.00th=[ 11], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 72], 00:19:35.595 | 30.00th=[ 74], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 101], 00:19:35.595 | 70.00th=[ 109], 80.00th=[ 115], 90.00th=[ 121], 95.00th=[ 122], 00:19:35.595 | 99.00th=[ 132], 99.50th=[ 134], 99.90th=[ 144], 99.95th=[ 144], 00:19:35.595 | 99.99th=[ 144] 00:19:35.595 bw ( KiB/s): min= 616, max= 968, per=4.32%, avg=695.37, stdev=77.26, samples=19 00:19:35.595 iops : min= 154, max= 242, avg=173.79, stdev=19.31, samples=19 00:19:35.595 lat (msec) : 10=0.73%, 20=0.89%, 50=4.03%, 100=55.03%, 250=39.32% 00:19:35.595 cpu : usr=40.47%, sys=1.74%, ctx=1107, majf=0, minf=9 00:19:35.595 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:35.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.595 complete : 0=0.0%, 4=87.1%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.595 issued rwts: total=1788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.595 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.595 filename2: (groupid=0, jobs=1): err= 0: pid=79897: Wed Apr 24 15:23:42 2024 00:19:35.595 read: IOPS=164, BW=657KiB/s (673kB/s)(6608KiB/10056msec) 00:19:35.595 slat (usec): min=5, max=4030, avg=17.67, stdev=99.08 00:19:35.595 clat (msec): min=5, max=188, avg=97.12, stdev=29.45 00:19:35.595 lat (msec): min=5, max=188, avg=97.14, stdev=29.45 00:19:35.595 clat percentiles (msec): 00:19:35.595 | 1.00th=[ 7], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:19:35.595 | 30.00th=[ 85], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 110], 00:19:35.595 | 70.00th=[ 113], 80.00th=[ 121], 90.00th=[ 122], 95.00th=[ 132], 00:19:35.595 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 188], 99.95th=[ 188], 00:19:35.595 | 99.99th=[ 188] 00:19:35.595 bw ( KiB/s): min= 464, max= 1513, per=4.06%, avg=653.55, stdev=221.23, samples=20 00:19:35.595 iops : min= 116, max= 378, avg=163.30, stdev=55.27, samples=20 00:19:35.595 lat (msec) : 10=2.91%, 20=0.97%, 50=1.51%, 100=33.11%, 250=61.50% 00:19:35.595 cpu : usr=39.10%, sys=1.63%, ctx=1305, majf=0, minf=9 00:19:35.595 IO depths : 1=0.1%, 2=3.0%, 4=12.0%, 8=70.0%, 16=14.8%, 32=0.0%, >=64=0.0% 00:19:35.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.595 complete : 0=0.0%, 4=90.9%, 8=6.4%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.595 issued rwts: total=1652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.595 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:35.595 00:19:35.595 Run status group 0 (all jobs): 00:19:35.595 READ: bw=15.7MiB/s (16.5MB/s), 568KiB/s-732KiB/s (582kB/s-750kB/s), io=158MiB (166MB), run=10001-10056msec 00:19:35.595 15:23:42 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:35.595 15:23:42 -- target/dif.sh@43 -- # local sub 00:19:35.595 15:23:42 -- target/dif.sh@45 -- # for sub in "$@" 00:19:35.595 15:23:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:35.595 15:23:42 -- target/dif.sh@36 -- # local sub_id=0 00:19:35.595 15:23:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:35.595 15:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.595 15:23:42 -- 
common/autotest_common.sh@10 -- # set +x 00:19:35.595 15:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.595 15:23:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:35.595 15:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.595 15:23:42 -- common/autotest_common.sh@10 -- # set +x 00:19:35.595 15:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.595 15:23:42 -- target/dif.sh@45 -- # for sub in "$@" 00:19:35.595 15:23:42 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:35.595 15:23:42 -- target/dif.sh@36 -- # local sub_id=1 00:19:35.595 15:23:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:35.595 15:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.595 15:23:42 -- common/autotest_common.sh@10 -- # set +x 00:19:35.595 15:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.595 15:23:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:35.595 15:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.595 15:23:42 -- common/autotest_common.sh@10 -- # set +x 00:19:35.595 15:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.595 15:23:42 -- target/dif.sh@45 -- # for sub in "$@" 00:19:35.595 15:23:42 -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:35.595 15:23:42 -- target/dif.sh@36 -- # local sub_id=2 00:19:35.595 15:23:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:35.595 15:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.595 15:23:42 -- common/autotest_common.sh@10 -- # set +x 00:19:35.595 15:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.595 15:23:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:35.595 15:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.595 15:23:42 -- common/autotest_common.sh@10 -- # set +x 00:19:35.595 15:23:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.595 15:23:43 -- target/dif.sh@115 -- # NULL_DIF=1 00:19:35.595 15:23:43 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:35.595 15:23:43 -- target/dif.sh@115 -- # numjobs=2 00:19:35.595 15:23:43 -- target/dif.sh@115 -- # iodepth=8 00:19:35.595 15:23:43 -- target/dif.sh@115 -- # runtime=5 00:19:35.595 15:23:43 -- target/dif.sh@115 -- # files=1 00:19:35.595 15:23:43 -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:35.595 15:23:43 -- target/dif.sh@28 -- # local sub 00:19:35.595 15:23:43 -- target/dif.sh@30 -- # for sub in "$@" 00:19:35.595 15:23:43 -- target/dif.sh@31 -- # create_subsystem 0 00:19:35.595 15:23:43 -- target/dif.sh@18 -- # local sub_id=0 00:19:35.595 15:23:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:35.595 15:23:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.595 15:23:43 -- common/autotest_common.sh@10 -- # set +x 00:19:35.595 bdev_null0 00:19:35.595 15:23:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.595 15:23:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:35.595 15:23:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.595 15:23:43 -- common/autotest_common.sh@10 -- # set +x 00:19:35.595 15:23:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.595 15:23:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:35.595 15:23:43 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.595 15:23:43 -- common/autotest_common.sh@10 -- # set +x 00:19:35.595 15:23:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.595 15:23:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:35.595 15:23:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.595 15:23:43 -- common/autotest_common.sh@10 -- # set +x 00:19:35.595 [2024-04-24 15:23:43.031622] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.595 15:23:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.595 15:23:43 -- target/dif.sh@30 -- # for sub in "$@" 00:19:35.595 15:23:43 -- target/dif.sh@31 -- # create_subsystem 1 00:19:35.595 15:23:43 -- target/dif.sh@18 -- # local sub_id=1 00:19:35.595 15:23:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:35.595 15:23:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.595 15:23:43 -- common/autotest_common.sh@10 -- # set +x 00:19:35.595 bdev_null1 00:19:35.595 15:23:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.595 15:23:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:35.595 15:23:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.595 15:23:43 -- common/autotest_common.sh@10 -- # set +x 00:19:35.595 15:23:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.595 15:23:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:35.596 15:23:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.596 15:23:43 -- common/autotest_common.sh@10 -- # set +x 00:19:35.596 15:23:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.596 15:23:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:35.596 15:23:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.596 15:23:43 -- common/autotest_common.sh@10 -- # set +x 00:19:35.596 15:23:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.596 15:23:43 -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:35.596 15:23:43 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:35.596 15:23:43 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:35.596 15:23:43 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:35.596 15:23:43 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:35.596 15:23:43 -- nvmf/common.sh@521 -- # config=() 00:19:35.596 15:23:43 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:19:35.596 15:23:43 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:35.596 15:23:43 -- nvmf/common.sh@521 -- # local subsystem config 00:19:35.596 15:23:43 -- target/dif.sh@82 -- # gen_fio_conf 00:19:35.596 15:23:43 -- common/autotest_common.sh@1325 -- # local sanitizers 00:19:35.596 15:23:43 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:35.596 15:23:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:35.596 15:23:43 -- common/autotest_common.sh@1327 -- # shift 00:19:35.596 15:23:43 -- target/dif.sh@54 -- # local file 
00:19:35.596 15:23:43 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:19:35.596 15:23:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:35.596 { 00:19:35.596 "params": { 00:19:35.596 "name": "Nvme$subsystem", 00:19:35.596 "trtype": "$TEST_TRANSPORT", 00:19:35.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.596 "adrfam": "ipv4", 00:19:35.596 "trsvcid": "$NVMF_PORT", 00:19:35.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.596 "hdgst": ${hdgst:-false}, 00:19:35.596 "ddgst": ${ddgst:-false} 00:19:35.596 }, 00:19:35.596 "method": "bdev_nvme_attach_controller" 00:19:35.596 } 00:19:35.596 EOF 00:19:35.596 )") 00:19:35.596 15:23:43 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:35.596 15:23:43 -- target/dif.sh@56 -- # cat 00:19:35.596 15:23:43 -- nvmf/common.sh@543 -- # cat 00:19:35.596 15:23:43 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:35.596 15:23:43 -- common/autotest_common.sh@1331 -- # grep libasan 00:19:35.596 15:23:43 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:35.596 15:23:43 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:35.596 15:23:43 -- target/dif.sh@72 -- # (( file <= files )) 00:19:35.596 15:23:43 -- target/dif.sh@73 -- # cat 00:19:35.596 15:23:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:35.596 15:23:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:35.596 { 00:19:35.596 "params": { 00:19:35.596 "name": "Nvme$subsystem", 00:19:35.596 "trtype": "$TEST_TRANSPORT", 00:19:35.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.596 "adrfam": "ipv4", 00:19:35.596 "trsvcid": "$NVMF_PORT", 00:19:35.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.596 "hdgst": ${hdgst:-false}, 00:19:35.596 "ddgst": ${ddgst:-false} 00:19:35.596 }, 00:19:35.596 "method": "bdev_nvme_attach_controller" 00:19:35.596 } 00:19:35.596 EOF 00:19:35.596 )") 00:19:35.596 15:23:43 -- nvmf/common.sh@543 -- # cat 00:19:35.596 15:23:43 -- target/dif.sh@72 -- # (( file++ )) 00:19:35.596 15:23:43 -- target/dif.sh@72 -- # (( file <= files )) 00:19:35.596 15:23:43 -- nvmf/common.sh@545 -- # jq . 
00:19:35.596 15:23:43 -- nvmf/common.sh@546 -- # IFS=, 00:19:35.596 15:23:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:35.596 "params": { 00:19:35.596 "name": "Nvme0", 00:19:35.596 "trtype": "tcp", 00:19:35.596 "traddr": "10.0.0.2", 00:19:35.596 "adrfam": "ipv4", 00:19:35.596 "trsvcid": "4420", 00:19:35.596 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:35.596 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:35.596 "hdgst": false, 00:19:35.596 "ddgst": false 00:19:35.596 }, 00:19:35.596 "method": "bdev_nvme_attach_controller" 00:19:35.596 },{ 00:19:35.596 "params": { 00:19:35.596 "name": "Nvme1", 00:19:35.596 "trtype": "tcp", 00:19:35.596 "traddr": "10.0.0.2", 00:19:35.596 "adrfam": "ipv4", 00:19:35.596 "trsvcid": "4420", 00:19:35.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:35.596 "hdgst": false, 00:19:35.596 "ddgst": false 00:19:35.596 }, 00:19:35.596 "method": "bdev_nvme_attach_controller" 00:19:35.596 }' 00:19:35.596 15:23:43 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:35.596 15:23:43 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:35.596 15:23:43 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:35.596 15:23:43 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:35.596 15:23:43 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:19:35.596 15:23:43 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:35.596 15:23:43 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:35.596 15:23:43 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:35.596 15:23:43 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:35.596 15:23:43 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:35.596 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:35.596 ... 00:19:35.596 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:35.596 ... 
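The trace above shows target/dif.sh assembling a per-subsystem JSON configuration (one bdev_nvme_attach_controller entry per null-bdev subsystem) and handing it to fio's spdk_bdev ioengine through /dev/fd. A rough standalone sketch of the same flow, assuming the SPDK fio plugin path shown in the trace, a target already listening on 10.0.0.2:4420, and an attached-controller bdev named Nvme0n1 (the file name and job parameters below are illustrative, not copied from the test scripts):

# minimal SPDK bdev config: attach one NVMe-oF/TCP controller as Nvme0 (bdev Nvme0n1)
cat > /tmp/nvme0_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# run fio against that bdev via the SPDK fio plugin (the plugin requires thread=1)
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev --thread=1 \
  --spdk_json_conf=/tmp/nvme0_bdev.json --filename=Nvme0n1 \
  --rw=randread --bs=8k --iodepth=8 --numjobs=2 --runtime=5 --time_based=1

In the test itself both the JSON config and the fio job file are generated on the fly and passed as /dev/fd/62 and /dev/fd/61, which is why no files appear on disk in the trace.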
00:19:35.596 fio-3.35 00:19:35.596 Starting 4 threads 00:19:39.899 00:19:39.899 filename0: (groupid=0, jobs=1): err= 0: pid=80042: Wed Apr 24 15:23:48 2024 00:19:39.899 read: IOPS=1807, BW=14.1MiB/s (14.8MB/s)(70.7MiB/5004msec) 00:19:39.899 slat (nsec): min=6279, max=89326, avg=18560.55, stdev=9248.88 00:19:39.899 clat (usec): min=1245, max=7329, avg=4361.80, stdev=996.88 00:19:39.899 lat (usec): min=1259, max=7353, avg=4380.36, stdev=994.41 00:19:39.899 clat percentiles (usec): 00:19:39.899 | 1.00th=[ 2008], 5.00th=[ 2212], 10.00th=[ 2606], 20.00th=[ 3163], 00:19:39.899 | 30.00th=[ 4359], 40.00th=[ 4686], 50.00th=[ 4883], 60.00th=[ 4948], 00:19:39.899 | 70.00th=[ 5014], 80.00th=[ 5014], 90.00th=[ 5080], 95.00th=[ 5145], 00:19:39.899 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 6128], 99.95th=[ 6652], 00:19:39.899 | 99.99th=[ 7308] 00:19:39.899 bw ( KiB/s): min=12416, max=17824, per=22.81%, avg=14672.00, stdev=2194.49, samples=9 00:19:39.899 iops : min= 1552, max= 2228, avg=1834.00, stdev=274.31, samples=9 00:19:39.899 lat (msec) : 2=1.00%, 4=22.79%, 10=76.22% 00:19:39.899 cpu : usr=91.78%, sys=6.98%, ctx=8, majf=0, minf=0 00:19:39.899 IO depths : 1=0.2%, 2=15.1%, 4=55.4%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:39.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.899 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.899 issued rwts: total=9045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.899 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:39.899 filename0: (groupid=0, jobs=1): err= 0: pid=80043: Wed Apr 24 15:23:48 2024 00:19:39.899 read: IOPS=2057, BW=16.1MiB/s (16.9MB/s)(80.4MiB/5002msec) 00:19:39.899 slat (nsec): min=7404, max=75422, avg=19523.95, stdev=8961.76 00:19:39.899 clat (usec): min=1536, max=7423, avg=3836.61, stdev=1059.01 00:19:39.899 lat (usec): min=1550, max=7437, avg=3856.14, stdev=1057.64 00:19:39.899 clat percentiles (usec): 00:19:39.899 | 1.00th=[ 2147], 5.00th=[ 2442], 10.00th=[ 2474], 20.00th=[ 2671], 00:19:39.899 | 30.00th=[ 2737], 40.00th=[ 3064], 50.00th=[ 4359], 60.00th=[ 4686], 00:19:39.899 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 4948], 95.00th=[ 5014], 00:19:39.899 | 99.00th=[ 5145], 99.50th=[ 5276], 99.90th=[ 5735], 99.95th=[ 5866], 00:19:39.899 | 99.99th=[ 7046] 00:19:39.899 bw ( KiB/s): min=13840, max=17376, per=25.44%, avg=16362.67, stdev=1300.54, samples=9 00:19:39.899 iops : min= 1730, max= 2172, avg=2045.33, stdev=162.57, samples=9 00:19:39.899 lat (msec) : 2=0.41%, 4=44.94%, 10=54.66% 00:19:39.899 cpu : usr=93.64%, sys=5.40%, ctx=6, majf=0, minf=10 00:19:39.899 IO depths : 1=0.4%, 2=5.2%, 4=60.9%, 8=33.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:39.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.899 complete : 0=0.0%, 4=98.1%, 8=1.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.899 issued rwts: total=10290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.899 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:39.899 filename1: (groupid=0, jobs=1): err= 0: pid=80044: Wed Apr 24 15:23:48 2024 00:19:39.899 read: IOPS=2058, BW=16.1MiB/s (16.9MB/s)(80.4MiB/5001msec) 00:19:39.899 slat (nsec): min=7180, max=70874, avg=18684.65, stdev=8424.66 00:19:39.899 clat (usec): min=893, max=7022, avg=3836.48, stdev=1101.46 00:19:39.899 lat (usec): min=903, max=7036, avg=3855.16, stdev=1099.71 00:19:39.899 clat percentiles (usec): 00:19:39.899 | 1.00th=[ 1254], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2671], 00:19:39.899 | 30.00th=[ 
2737], 40.00th=[ 3097], 50.00th=[ 4359], 60.00th=[ 4686], 00:19:39.899 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 4948], 95.00th=[ 5014], 00:19:39.899 | 99.00th=[ 5342], 99.50th=[ 5473], 99.90th=[ 5866], 99.95th=[ 5866], 00:19:39.899 | 99.99th=[ 6128] 00:19:39.899 bw ( KiB/s): min=13840, max=17376, per=25.46%, avg=16371.78, stdev=1307.22, samples=9 00:19:39.899 iops : min= 1730, max= 2172, avg=2046.44, stdev=163.39, samples=9 00:19:39.899 lat (usec) : 1000=0.27% 00:19:39.899 lat (msec) : 2=1.58%, 4=43.73%, 10=54.42% 00:19:39.899 cpu : usr=93.64%, sys=5.38%, ctx=14, majf=0, minf=9 00:19:39.899 IO depths : 1=0.2%, 2=5.6%, 4=60.7%, 8=33.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:39.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.899 complete : 0=0.0%, 4=97.8%, 8=2.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.899 issued rwts: total=10293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.899 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:39.899 filename1: (groupid=0, jobs=1): err= 0: pid=80045: Wed Apr 24 15:23:48 2024 00:19:39.899 read: IOPS=2118, BW=16.5MiB/s (17.4MB/s)(82.8MiB/5003msec) 00:19:39.899 slat (usec): min=7, max=171, avg=18.70, stdev= 9.25 00:19:39.899 clat (usec): min=832, max=9733, avg=3728.69, stdev=1108.15 00:19:39.899 lat (usec): min=840, max=9777, avg=3747.39, stdev=1106.62 00:19:39.899 clat percentiles (usec): 00:19:39.899 | 1.00th=[ 1958], 5.00th=[ 2212], 10.00th=[ 2442], 20.00th=[ 2638], 00:19:39.899 | 30.00th=[ 2704], 40.00th=[ 2868], 50.00th=[ 3949], 60.00th=[ 4555], 00:19:39.899 | 70.00th=[ 4752], 80.00th=[ 4883], 90.00th=[ 4948], 95.00th=[ 5080], 00:19:39.899 | 99.00th=[ 5342], 99.50th=[ 5473], 99.90th=[ 6063], 99.95th=[ 6521], 00:19:39.899 | 99.99th=[ 7308] 00:19:39.899 bw ( KiB/s): min=14112, max=18368, per=26.29%, avg=16910.22, stdev=1228.01, samples=9 00:19:39.899 iops : min= 1764, max= 2296, avg=2113.78, stdev=153.50, samples=9 00:19:39.899 lat (usec) : 1000=0.06% 00:19:39.900 lat (msec) : 2=1.41%, 4=49.11%, 10=49.42% 00:19:39.900 cpu : usr=92.76%, sys=5.82%, ctx=631, majf=0, minf=9 00:19:39.900 IO depths : 1=0.2%, 2=3.2%, 4=61.9%, 8=34.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:39.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.900 complete : 0=0.0%, 4=98.8%, 8=1.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.900 issued rwts: total=10598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.900 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:39.900 00:19:39.900 Run status group 0 (all jobs): 00:19:39.900 READ: bw=62.8MiB/s (65.9MB/s), 14.1MiB/s-16.5MiB/s (14.8MB/s-17.4MB/s), io=314MiB (330MB), run=5001-5004msec 00:19:39.900 15:23:49 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:39.900 15:23:49 -- target/dif.sh@43 -- # local sub 00:19:39.900 15:23:49 -- target/dif.sh@45 -- # for sub in "$@" 00:19:39.900 15:23:49 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:39.900 15:23:49 -- target/dif.sh@36 -- # local sub_id=0 00:19:39.900 15:23:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:39.900 15:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.900 15:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:39.900 15:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.900 15:23:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:39.900 15:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.900 15:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:39.900 
15:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.900 15:23:49 -- target/dif.sh@45 -- # for sub in "$@" 00:19:39.900 15:23:49 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:39.900 15:23:49 -- target/dif.sh@36 -- # local sub_id=1 00:19:39.900 15:23:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:39.900 15:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.900 15:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:39.900 15:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.900 15:23:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:39.900 15:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.900 15:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:39.900 ************************************ 00:19:39.900 END TEST fio_dif_rand_params 00:19:39.900 ************************************ 00:19:39.900 15:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.900 00:19:39.900 real 0m23.537s 00:19:39.900 user 2m5.326s 00:19:39.900 sys 0m7.406s 00:19:39.900 15:23:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:39.900 15:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:40.159 15:23:49 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:40.159 15:23:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:40.159 15:23:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:40.159 15:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:40.159 ************************************ 00:19:40.159 START TEST fio_dif_digest 00:19:40.159 ************************************ 00:19:40.159 15:23:49 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:19:40.159 15:23:49 -- target/dif.sh@123 -- # local NULL_DIF 00:19:40.159 15:23:49 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:40.159 15:23:49 -- target/dif.sh@125 -- # local hdgst ddgst 00:19:40.159 15:23:49 -- target/dif.sh@127 -- # NULL_DIF=3 00:19:40.159 15:23:49 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:40.159 15:23:49 -- target/dif.sh@127 -- # numjobs=3 00:19:40.159 15:23:49 -- target/dif.sh@127 -- # iodepth=3 00:19:40.159 15:23:49 -- target/dif.sh@127 -- # runtime=10 00:19:40.159 15:23:49 -- target/dif.sh@128 -- # hdgst=true 00:19:40.159 15:23:49 -- target/dif.sh@128 -- # ddgst=true 00:19:40.159 15:23:49 -- target/dif.sh@130 -- # create_subsystems 0 00:19:40.159 15:23:49 -- target/dif.sh@28 -- # local sub 00:19:40.159 15:23:49 -- target/dif.sh@30 -- # for sub in "$@" 00:19:40.159 15:23:49 -- target/dif.sh@31 -- # create_subsystem 0 00:19:40.159 15:23:49 -- target/dif.sh@18 -- # local sub_id=0 00:19:40.159 15:23:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:40.159 15:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.159 15:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:40.159 bdev_null0 00:19:40.159 15:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.159 15:23:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:40.159 15:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.159 15:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:40.159 15:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.159 15:23:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 
00:19:40.159 15:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.159 15:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:40.159 15:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.159 15:23:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:40.159 15:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.159 15:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:40.159 [2024-04-24 15:23:49.277304] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.159 15:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.159 15:23:49 -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:40.159 15:23:49 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:40.159 15:23:49 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:40.159 15:23:49 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:40.159 15:23:49 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:40.159 15:23:49 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:19:40.159 15:23:49 -- target/dif.sh@82 -- # gen_fio_conf 00:19:40.159 15:23:49 -- target/dif.sh@54 -- # local file 00:19:40.159 15:23:49 -- nvmf/common.sh@521 -- # config=() 00:19:40.159 15:23:49 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:40.159 15:23:49 -- nvmf/common.sh@521 -- # local subsystem config 00:19:40.159 15:23:49 -- target/dif.sh@56 -- # cat 00:19:40.159 15:23:49 -- common/autotest_common.sh@1325 -- # local sanitizers 00:19:40.159 15:23:49 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:40.159 15:23:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.159 15:23:49 -- common/autotest_common.sh@1327 -- # shift 00:19:40.159 15:23:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.159 { 00:19:40.159 "params": { 00:19:40.159 "name": "Nvme$subsystem", 00:19:40.159 "trtype": "$TEST_TRANSPORT", 00:19:40.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.159 "adrfam": "ipv4", 00:19:40.159 "trsvcid": "$NVMF_PORT", 00:19:40.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.159 "hdgst": ${hdgst:-false}, 00:19:40.159 "ddgst": ${ddgst:-false} 00:19:40.159 }, 00:19:40.159 "method": "bdev_nvme_attach_controller" 00:19:40.159 } 00:19:40.159 EOF 00:19:40.159 )") 00:19:40.159 15:23:49 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:19:40.159 15:23:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:40.159 15:23:49 -- nvmf/common.sh@543 -- # cat 00:19:40.159 15:23:49 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:40.159 15:23:49 -- common/autotest_common.sh@1331 -- # grep libasan 00:19:40.159 15:23:49 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:40.159 15:23:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:40.159 15:23:49 -- target/dif.sh@72 -- # (( file <= files )) 00:19:40.159 15:23:49 -- nvmf/common.sh@545 -- # jq . 
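For the fio_dif_digest run, the trace above provisions a null bdev with a 16-byte metadata region and DIF type 3 and exports it as nqn.2016-06.io.spdk:cnode0 over NVMe/TCP. A minimal sketch of the same target-side setup issued directly with scripts/rpc.py, assuming a running nvmf_tgt with the TCP transport already created earlier in the test:

# standalone form of the RPCs the rpc_cmd calls above issue
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

On the initiator side the only change from the earlier runs is in the generated attach parameters, which set "hdgst": true and "ddgst": true so the NVMe/TCP connection is established with header and data digests enabled, as shown in the JSON printed just below.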
00:19:40.159 15:23:49 -- nvmf/common.sh@546 -- # IFS=, 00:19:40.159 15:23:49 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:40.159 "params": { 00:19:40.159 "name": "Nvme0", 00:19:40.159 "trtype": "tcp", 00:19:40.159 "traddr": "10.0.0.2", 00:19:40.159 "adrfam": "ipv4", 00:19:40.159 "trsvcid": "4420", 00:19:40.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:40.159 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:40.159 "hdgst": true, 00:19:40.159 "ddgst": true 00:19:40.159 }, 00:19:40.159 "method": "bdev_nvme_attach_controller" 00:19:40.159 }' 00:19:40.159 15:23:49 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:40.159 15:23:49 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:40.159 15:23:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:40.159 15:23:49 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:40.159 15:23:49 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:19:40.159 15:23:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:40.159 15:23:49 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:40.159 15:23:49 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:40.159 15:23:49 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:40.159 15:23:49 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:40.418 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:40.418 ... 00:19:40.418 fio-3.35 00:19:40.418 Starting 3 threads 00:19:52.753 00:19:52.753 filename0: (groupid=0, jobs=1): err= 0: pid=80155: Wed Apr 24 15:24:00 2024 00:19:52.753 read: IOPS=228, BW=28.5MiB/s (29.9MB/s)(286MiB/10009msec) 00:19:52.753 slat (nsec): min=7325, max=69689, avg=16868.07, stdev=4905.67 00:19:52.753 clat (usec): min=9547, max=15686, avg=13096.59, stdev=238.29 00:19:52.753 lat (usec): min=9555, max=15711, avg=13113.46, stdev=238.97 00:19:52.753 clat percentiles (usec): 00:19:52.753 | 1.00th=[12911], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:19:52.753 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13042], 60.00th=[13042], 00:19:52.753 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435], 00:19:52.753 | 99.00th=[13566], 99.50th=[13829], 99.90th=[15664], 99.95th=[15664], 00:19:52.753 | 99.99th=[15664] 00:19:52.753 bw ( KiB/s): min=28416, max=29952, per=33.38%, avg=29264.68, stdev=430.49, samples=19 00:19:52.753 iops : min= 222, max= 234, avg=228.58, stdev= 3.42, samples=19 00:19:52.753 lat (msec) : 10=0.13%, 20=99.87% 00:19:52.753 cpu : usr=91.74%, sys=7.70%, ctx=10, majf=0, minf=0 00:19:52.753 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.753 issued rwts: total=2286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.753 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:52.753 filename0: (groupid=0, jobs=1): err= 0: pid=80156: Wed Apr 24 15:24:00 2024 00:19:52.753 read: IOPS=228, BW=28.6MiB/s (29.9MB/s)(286MiB/10008msec) 00:19:52.753 slat (nsec): min=7505, max=67860, avg=16718.17, stdev=5129.50 00:19:52.753 clat (usec): min=9517, max=13872, avg=13095.26, stdev=181.60 00:19:52.753 lat (usec): min=9531, max=13897, avg=13111.98, stdev=182.19 00:19:52.753 
clat percentiles (usec): 00:19:52.753 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:19:52.753 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13042], 60.00th=[13042], 00:19:52.753 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435], 00:19:52.753 | 99.00th=[13566], 99.50th=[13698], 99.90th=[13829], 99.95th=[13829], 00:19:52.753 | 99.99th=[13829] 00:19:52.753 bw ( KiB/s): min=28416, max=29952, per=33.38%, avg=29261.74, stdev=353.39, samples=19 00:19:52.753 iops : min= 222, max= 234, avg=228.58, stdev= 2.78, samples=19 00:19:52.753 lat (msec) : 10=0.13%, 20=99.87% 00:19:52.753 cpu : usr=91.69%, sys=7.77%, ctx=13, majf=0, minf=0 00:19:52.753 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.753 issued rwts: total=2286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.753 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:52.753 filename0: (groupid=0, jobs=1): err= 0: pid=80157: Wed Apr 24 15:24:00 2024 00:19:52.753 read: IOPS=228, BW=28.5MiB/s (29.9MB/s)(285MiB/10001msec) 00:19:52.753 slat (nsec): min=7143, max=99914, avg=16152.07, stdev=5773.50 00:19:52.753 clat (usec): min=12882, max=16771, avg=13105.59, stdev=186.57 00:19:52.753 lat (usec): min=12897, max=16795, avg=13121.75, stdev=187.32 00:19:52.753 clat percentiles (usec): 00:19:52.753 | 1.00th=[12911], 5.00th=[12911], 10.00th=[13042], 20.00th=[13042], 00:19:52.753 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13042], 60.00th=[13042], 00:19:52.753 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435], 00:19:52.753 | 99.00th=[13566], 99.50th=[13698], 99.90th=[16712], 99.95th=[16712], 00:19:52.753 | 99.99th=[16712] 00:19:52.753 bw ( KiB/s): min=28416, max=29952, per=33.38%, avg=29261.74, stdev=436.37, samples=19 00:19:52.753 iops : min= 222, max= 234, avg=228.58, stdev= 3.42, samples=19 00:19:52.753 lat (msec) : 20=100.00% 00:19:52.753 cpu : usr=91.69%, sys=7.76%, ctx=49, majf=0, minf=0 00:19:52.753 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.753 issued rwts: total=2283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.753 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:52.753 00:19:52.753 Run status group 0 (all jobs): 00:19:52.753 READ: bw=85.6MiB/s (89.8MB/s), 28.5MiB/s-28.6MiB/s (29.9MB/s-29.9MB/s), io=857MiB (898MB), run=10001-10009msec 00:19:52.753 15:24:00 -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:52.753 15:24:00 -- target/dif.sh@43 -- # local sub 00:19:52.753 15:24:00 -- target/dif.sh@45 -- # for sub in "$@" 00:19:52.753 15:24:00 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:52.753 15:24:00 -- target/dif.sh@36 -- # local sub_id=0 00:19:52.753 15:24:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:52.753 15:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.753 15:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:52.753 15:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.753 15:24:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:52.753 15:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.753 15:24:00 -- 
common/autotest_common.sh@10 -- # set +x 00:19:52.753 ************************************ 00:19:52.753 END TEST fio_dif_digest 00:19:52.753 ************************************ 00:19:52.753 15:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.753 00:19:52.753 real 0m11.023s 00:19:52.753 user 0m28.195s 00:19:52.753 sys 0m2.597s 00:19:52.753 15:24:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:52.753 15:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:52.753 15:24:00 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:52.753 15:24:00 -- target/dif.sh@147 -- # nvmftestfini 00:19:52.753 15:24:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:52.753 15:24:00 -- nvmf/common.sh@117 -- # sync 00:19:52.753 15:24:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:52.753 15:24:00 -- nvmf/common.sh@120 -- # set +e 00:19:52.753 15:24:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:52.753 15:24:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:52.753 rmmod nvme_tcp 00:19:52.753 rmmod nvme_fabrics 00:19:52.753 rmmod nvme_keyring 00:19:52.753 15:24:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:52.753 15:24:00 -- nvmf/common.sh@124 -- # set -e 00:19:52.753 15:24:00 -- nvmf/common.sh@125 -- # return 0 00:19:52.753 15:24:00 -- nvmf/common.sh@478 -- # '[' -n 79381 ']' 00:19:52.753 15:24:00 -- nvmf/common.sh@479 -- # killprocess 79381 00:19:52.753 15:24:00 -- common/autotest_common.sh@936 -- # '[' -z 79381 ']' 00:19:52.753 15:24:00 -- common/autotest_common.sh@940 -- # kill -0 79381 00:19:52.753 15:24:00 -- common/autotest_common.sh@941 -- # uname 00:19:52.753 15:24:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:52.753 15:24:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79381 00:19:52.753 killing process with pid 79381 00:19:52.753 15:24:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:52.753 15:24:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:52.753 15:24:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79381' 00:19:52.753 15:24:00 -- common/autotest_common.sh@955 -- # kill 79381 00:19:52.753 15:24:00 -- common/autotest_common.sh@960 -- # wait 79381 00:19:52.753 15:24:00 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:19:52.753 15:24:00 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:52.753 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:52.753 Waiting for block devices as requested 00:19:52.753 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:52.753 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:52.753 15:24:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:52.753 15:24:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:52.753 15:24:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:52.753 15:24:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:52.753 15:24:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.753 15:24:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:52.753 15:24:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.753 15:24:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:52.753 00:19:52.753 real 1m0.111s 00:19:52.753 user 3m48.920s 00:19:52.753 sys 0m19.007s 00:19:52.753 15:24:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:52.753 
************************************ 00:19:52.753 END TEST nvmf_dif 00:19:52.753 ************************************ 00:19:52.753 15:24:01 -- common/autotest_common.sh@10 -- # set +x 00:19:52.754 15:24:01 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:52.754 15:24:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:52.754 15:24:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:52.754 15:24:01 -- common/autotest_common.sh@10 -- # set +x 00:19:52.754 ************************************ 00:19:52.754 START TEST nvmf_abort_qd_sizes 00:19:52.754 ************************************ 00:19:52.754 15:24:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:52.754 * Looking for test storage... 00:19:52.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:52.754 15:24:01 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:52.754 15:24:01 -- nvmf/common.sh@7 -- # uname -s 00:19:52.754 15:24:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.754 15:24:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.754 15:24:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.754 15:24:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.754 15:24:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.754 15:24:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.754 15:24:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.754 15:24:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.754 15:24:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.754 15:24:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.754 15:24:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:19:52.754 15:24:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:19:52.754 15:24:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.754 15:24:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.754 15:24:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:52.754 15:24:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.754 15:24:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:52.754 15:24:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.754 15:24:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.754 15:24:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.754 15:24:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.754 15:24:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.754 15:24:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.754 15:24:01 -- paths/export.sh@5 -- # export PATH 00:19:52.754 15:24:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.754 15:24:01 -- nvmf/common.sh@47 -- # : 0 00:19:52.754 15:24:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:52.754 15:24:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:52.754 15:24:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.754 15:24:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.754 15:24:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.754 15:24:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:52.754 15:24:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:52.754 15:24:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:52.754 15:24:01 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:19:52.754 15:24:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:52.754 15:24:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.754 15:24:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:52.754 15:24:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:52.754 15:24:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:52.754 15:24:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.754 15:24:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:52.754 15:24:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.754 15:24:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:52.754 15:24:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:52.754 15:24:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:52.754 15:24:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:52.754 15:24:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:52.754 15:24:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:52.754 15:24:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.754 15:24:01 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.754 15:24:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:52.754 15:24:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:52.754 15:24:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:52.754 15:24:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 
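With NET_TYPE=virt, nvmftestinit builds a purely virtual test network: a veth pair for the initiator kept in the root namespace, veth pairs for the target moved into the nvmf_tgt_ns_spdk namespace, and all peer ends enslaved to the nvmf_br bridge. A condensed sketch of the commands that follow in the trace (the full sequence also brings each link up and adds the second target interface at 10.0.0.3):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2        # initiator -> target sanity check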
00:19:52.754 15:24:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:52.754 15:24:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.754 15:24:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:52.754 15:24:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:52.754 15:24:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:52.754 15:24:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:52.754 15:24:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:52.754 15:24:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:52.754 Cannot find device "nvmf_tgt_br" 00:19:52.754 15:24:01 -- nvmf/common.sh@155 -- # true 00:19:52.754 15:24:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:52.754 Cannot find device "nvmf_tgt_br2" 00:19:52.754 15:24:01 -- nvmf/common.sh@156 -- # true 00:19:52.754 15:24:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:52.754 15:24:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:52.754 Cannot find device "nvmf_tgt_br" 00:19:52.754 15:24:01 -- nvmf/common.sh@158 -- # true 00:19:52.754 15:24:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:52.754 Cannot find device "nvmf_tgt_br2" 00:19:52.754 15:24:01 -- nvmf/common.sh@159 -- # true 00:19:52.754 15:24:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:52.754 15:24:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:52.754 15:24:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:52.754 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.754 15:24:01 -- nvmf/common.sh@162 -- # true 00:19:52.754 15:24:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:52.754 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.754 15:24:01 -- nvmf/common.sh@163 -- # true 00:19:52.754 15:24:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:52.754 15:24:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:52.754 15:24:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:52.754 15:24:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:52.754 15:24:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:52.754 15:24:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:52.754 15:24:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:52.754 15:24:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:52.754 15:24:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:52.754 15:24:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:52.754 15:24:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:52.754 15:24:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:52.754 15:24:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:52.754 15:24:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:52.754 15:24:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:52.754 15:24:01 -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:52.754 15:24:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:52.754 15:24:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:52.754 15:24:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:52.754 15:24:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:52.754 15:24:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:52.754 15:24:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:52.754 15:24:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:52.754 15:24:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:52.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:19:52.754 00:19:52.754 --- 10.0.0.2 ping statistics --- 00:19:52.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.754 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:19:52.754 15:24:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:52.754 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:52.754 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:19:52.754 00:19:52.754 --- 10.0.0.3 ping statistics --- 00:19:52.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.754 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:52.754 15:24:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:52.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:52.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:19:52.754 00:19:52.754 --- 10.0.0.1 ping statistics --- 00:19:52.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.754 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:52.754 15:24:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.754 15:24:01 -- nvmf/common.sh@422 -- # return 0 00:19:52.754 15:24:01 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:19:52.754 15:24:01 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:53.320 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:53.578 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:53.578 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:53.578 15:24:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.578 15:24:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:53.578 15:24:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:53.578 15:24:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.578 15:24:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:53.578 15:24:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:53.578 15:24:02 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:19:53.578 15:24:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:53.578 15:24:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:53.578 15:24:02 -- common/autotest_common.sh@10 -- # set +x 00:19:53.578 15:24:02 -- nvmf/common.sh@470 -- # nvmfpid=80761 00:19:53.578 15:24:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:53.578 15:24:02 -- nvmf/common.sh@471 -- # waitforlisten 80761 00:19:53.578 15:24:02 -- 
common/autotest_common.sh@817 -- # '[' -z 80761 ']' 00:19:53.578 15:24:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.578 15:24:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:53.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.578 15:24:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.578 15:24:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:53.578 15:24:02 -- common/autotest_common.sh@10 -- # set +x 00:19:53.578 [2024-04-24 15:24:02.821644] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:19:53.578 [2024-04-24 15:24:02.821735] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.836 [2024-04-24 15:24:02.961247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:54.093 [2024-04-24 15:24:03.092780] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.093 [2024-04-24 15:24:03.092857] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.093 [2024-04-24 15:24:03.092881] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.093 [2024-04-24 15:24:03.092899] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:54.093 [2024-04-24 15:24:03.092913] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
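For orientation, the nvmf_veth_init sequence traced above is plain iproute2 and iptables work. The sketch below is condensed from the commands visible in the trace (same interface names and addresses); it is a reading aid under that assumption, not the nvmf/common.sh source:

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per endpoint: initiator, target, second target interface
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # the target-side ends move into the namespace the nvmf_tgt app will run in
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side veth ends together so initiator and target can talk
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # admit NVMe/TCP traffic (port 4420), let the bridge forward, then sanity-ping all three addresses
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched through NVMF_TARGET_NS_CMD (ip netns exec nvmf_tgt_ns_spdk), so 10.0.0.2 and 10.0.0.3 are the target-side listen addresses while 10.0.0.1 is the initiator side.
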
00:19:54.093 [2024-04-24 15:24:03.093076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.093 [2024-04-24 15:24:03.093205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.093 [2024-04-24 15:24:03.093616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:54.093 [2024-04-24 15:24:03.093638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.696 15:24:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:54.696 15:24:03 -- common/autotest_common.sh@850 -- # return 0 00:19:54.696 15:24:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:54.696 15:24:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:54.696 15:24:03 -- common/autotest_common.sh@10 -- # set +x 00:19:54.696 15:24:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.696 15:24:03 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:54.696 15:24:03 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:19:54.696 15:24:03 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:19:54.696 15:24:03 -- scripts/common.sh@309 -- # local bdf bdfs 00:19:54.696 15:24:03 -- scripts/common.sh@310 -- # local nvmes 00:19:54.696 15:24:03 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:19:54.696 15:24:03 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:54.696 15:24:03 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:19:54.696 15:24:03 -- scripts/common.sh@295 -- # local bdf= 00:19:54.696 15:24:03 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:19:54.696 15:24:03 -- scripts/common.sh@230 -- # local class 00:19:54.696 15:24:03 -- scripts/common.sh@231 -- # local subclass 00:19:54.696 15:24:03 -- scripts/common.sh@232 -- # local progif 00:19:54.696 15:24:03 -- scripts/common.sh@233 -- # printf %02x 1 00:19:54.696 15:24:03 -- scripts/common.sh@233 -- # class=01 00:19:54.696 15:24:03 -- scripts/common.sh@234 -- # printf %02x 8 00:19:54.696 15:24:03 -- scripts/common.sh@234 -- # subclass=08 00:19:54.696 15:24:03 -- scripts/common.sh@235 -- # printf %02x 2 00:19:54.696 15:24:03 -- scripts/common.sh@235 -- # progif=02 00:19:54.696 15:24:03 -- scripts/common.sh@237 -- # hash lspci 00:19:54.696 15:24:03 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:19:54.696 15:24:03 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:19:54.696 15:24:03 -- scripts/common.sh@240 -- # grep -i -- -p02 00:19:54.696 15:24:03 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:54.696 15:24:03 -- scripts/common.sh@242 -- # tr -d '"' 00:19:54.696 15:24:03 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:54.696 15:24:03 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:19:54.696 15:24:03 -- scripts/common.sh@15 -- # local i 00:19:54.696 15:24:03 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:19:54.696 15:24:03 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:54.696 15:24:03 -- scripts/common.sh@24 -- # return 0 00:19:54.696 15:24:03 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:19:54.696 15:24:03 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:54.696 15:24:03 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:19:54.696 15:24:03 -- scripts/common.sh@15 -- # local i 00:19:54.696 15:24:03 -- scripts/common.sh@18 -- # [[ =~ 
0000:00:11.0 ]] 00:19:54.696 15:24:03 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:54.696 15:24:03 -- scripts/common.sh@24 -- # return 0 00:19:54.696 15:24:03 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:19:54.696 15:24:03 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:19:54.696 15:24:03 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:19:54.696 15:24:03 -- scripts/common.sh@320 -- # uname -s 00:19:54.696 15:24:03 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:19:54.696 15:24:03 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:19:54.696 15:24:03 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:19:54.696 15:24:03 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:19:54.696 15:24:03 -- scripts/common.sh@320 -- # uname -s 00:19:54.696 15:24:03 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:19:54.696 15:24:03 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:19:54.696 15:24:03 -- scripts/common.sh@325 -- # (( 2 )) 00:19:54.696 15:24:03 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:19:54.696 15:24:03 -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:19:54.696 15:24:03 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:19:54.696 15:24:03 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:19:54.696 15:24:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:54.696 15:24:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:54.696 15:24:03 -- common/autotest_common.sh@10 -- # set +x 00:19:54.954 ************************************ 00:19:54.954 START TEST spdk_target_abort 00:19:54.954 ************************************ 00:19:54.954 15:24:04 -- common/autotest_common.sh@1111 -- # spdk_target 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:19:54.954 15:24:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.954 15:24:04 -- common/autotest_common.sh@10 -- # set +x 00:19:54.954 spdk_targetn1 00:19:54.954 15:24:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:54.954 15:24:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.954 15:24:04 -- common/autotest_common.sh@10 -- # set +x 00:19:54.954 [2024-04-24 15:24:04.079731] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.954 15:24:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:19:54.954 15:24:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.954 15:24:04 -- common/autotest_common.sh@10 -- # set +x 00:19:54.954 15:24:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:19:54.954 15:24:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.954 15:24:04 -- common/autotest_common.sh@10 -- # set +x 00:19:54.954 15:24:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:19:54.954 15:24:04 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.954 15:24:04 -- common/autotest_common.sh@10 -- # set +x 00:19:54.954 [2024-04-24 15:24:04.111896] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.954 15:24:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:54.954 15:24:04 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:58.243 Initializing NVMe Controllers 00:19:58.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:19:58.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:58.243 Initialization complete. Launching workers. 
00:19:58.243 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10823, failed: 0 00:19:58.243 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1030, failed to submit 9793 00:19:58.243 success 801, unsuccess 229, failed 0 00:19:58.243 15:24:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:58.243 15:24:07 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:01.531 Initializing NVMe Controllers 00:20:01.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:01.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:01.531 Initialization complete. Launching workers. 00:20:01.531 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8976, failed: 0 00:20:01.531 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1167, failed to submit 7809 00:20:01.531 success 386, unsuccess 781, failed 0 00:20:01.531 15:24:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:01.531 15:24:10 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:04.824 Initializing NVMe Controllers 00:20:04.824 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:04.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:04.824 Initialization complete. Launching workers. 00:20:04.824 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31180, failed: 0 00:20:04.824 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2308, failed to submit 28872 00:20:04.824 success 409, unsuccess 1899, failed 0 00:20:04.824 15:24:13 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:20:04.824 15:24:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.824 15:24:13 -- common/autotest_common.sh@10 -- # set +x 00:20:04.824 15:24:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.824 15:24:13 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:04.824 15:24:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.824 15:24:13 -- common/autotest_common.sh@10 -- # set +x 00:20:05.391 15:24:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.391 15:24:14 -- target/abort_qd_sizes.sh@61 -- # killprocess 80761 00:20:05.391 15:24:14 -- common/autotest_common.sh@936 -- # '[' -z 80761 ']' 00:20:05.391 15:24:14 -- common/autotest_common.sh@940 -- # kill -0 80761 00:20:05.391 15:24:14 -- common/autotest_common.sh@941 -- # uname 00:20:05.391 15:24:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:05.391 15:24:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80761 00:20:05.391 15:24:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:05.391 15:24:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:05.391 15:24:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80761' 00:20:05.391 killing process with pid 80761 00:20:05.391 15:24:14 -- common/autotest_common.sh@955 -- # kill 80761 
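Condensed, the spdk_target_abort flow traced above claims the first userspace NVMe device, exports it over the veth network, and races abort commands against I/O at three queue depths. This is a sketch assembled from the traced rpc_cmd and rabort calls (rpc_cmd is the framework's wrapper around scripts/rpc.py talking to the nvmf_tgt started earlier), not the abort_qd_sizes.sh source:

  # target side: build a TCP subsystem backed by the local PCIe NVMe device
  rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target    # exposes spdk_targetn1
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

  # initiator side: the abort example once per queue depth (this is the rabort loop)
  for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done

  # teardown mirrors the setup
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
  rpc_cmd bdev_nvme_detach_controller spdk_target

In the example's summary lines, 'I/O completed' counts the reads/writes issued at that queue depth, while 'abort submitted', 'success' and 'unsuccess' tally the abort commands raced against them; an 'unsuccess' abort is, roughly, one whose target I/O had already completed before the abort could take effect, not an error.
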
00:20:05.391 15:24:14 -- common/autotest_common.sh@960 -- # wait 80761 00:20:05.650 ************************************ 00:20:05.650 END TEST spdk_target_abort 00:20:05.650 ************************************ 00:20:05.650 00:20:05.650 real 0m10.749s 00:20:05.650 user 0m43.385s 00:20:05.650 sys 0m2.330s 00:20:05.650 15:24:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:05.650 15:24:14 -- common/autotest_common.sh@10 -- # set +x 00:20:05.650 15:24:14 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:20:05.650 15:24:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:05.650 15:24:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:05.650 15:24:14 -- common/autotest_common.sh@10 -- # set +x 00:20:05.650 ************************************ 00:20:05.650 START TEST kernel_target_abort 00:20:05.650 ************************************ 00:20:05.650 15:24:14 -- common/autotest_common.sh@1111 -- # kernel_target 00:20:05.650 15:24:14 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:05.650 15:24:14 -- nvmf/common.sh@717 -- # local ip 00:20:05.650 15:24:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:05.650 15:24:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:05.650 15:24:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.650 15:24:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.650 15:24:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:05.650 15:24:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.650 15:24:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:05.650 15:24:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:05.650 15:24:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:05.650 15:24:14 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:05.650 15:24:14 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:05.650 15:24:14 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:20:05.650 15:24:14 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:05.650 15:24:14 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:05.650 15:24:14 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:05.650 15:24:14 -- nvmf/common.sh@628 -- # local block nvme 00:20:05.650 15:24:14 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:05.650 15:24:14 -- nvmf/common.sh@631 -- # modprobe nvmet 00:20:05.909 15:24:14 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:05.909 15:24:14 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:06.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:06.168 Waiting for block devices as requested 00:20:06.168 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:06.426 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:06.426 15:24:15 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:06.426 15:24:15 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:06.426 15:24:15 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:20:06.426 15:24:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:06.426 15:24:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:06.426 15:24:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:06.426 15:24:15 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:20:06.426 15:24:15 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:06.426 15:24:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:06.426 No valid GPT data, bailing 00:20:06.426 15:24:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:06.426 15:24:15 -- scripts/common.sh@391 -- # pt= 00:20:06.426 15:24:15 -- scripts/common.sh@392 -- # return 1 00:20:06.426 15:24:15 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:20:06.426 15:24:15 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:06.426 15:24:15 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:06.426 15:24:15 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:20:06.426 15:24:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:06.426 15:24:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:06.426 15:24:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:06.426 15:24:15 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:20:06.426 15:24:15 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:06.426 15:24:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:06.426 No valid GPT data, bailing 00:20:06.426 15:24:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:06.426 15:24:15 -- scripts/common.sh@391 -- # pt= 00:20:06.426 15:24:15 -- scripts/common.sh@392 -- # return 1 00:20:06.427 15:24:15 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:20:06.427 15:24:15 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:06.427 15:24:15 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:06.427 15:24:15 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:20:06.427 15:24:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:06.427 15:24:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:06.427 15:24:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:06.427 15:24:15 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:20:06.427 15:24:15 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:06.427 15:24:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:06.686 No valid GPT data, bailing 00:20:06.686 15:24:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value 
/dev/nvme0n3 00:20:06.686 15:24:15 -- scripts/common.sh@391 -- # pt= 00:20:06.686 15:24:15 -- scripts/common.sh@392 -- # return 1 00:20:06.686 15:24:15 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:20:06.686 15:24:15 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:06.686 15:24:15 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:06.686 15:24:15 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:20:06.686 15:24:15 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:06.686 15:24:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:06.686 15:24:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:06.686 15:24:15 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:20:06.686 15:24:15 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:06.686 15:24:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:06.686 No valid GPT data, bailing 00:20:06.686 15:24:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:06.686 15:24:15 -- scripts/common.sh@391 -- # pt= 00:20:06.686 15:24:15 -- scripts/common.sh@392 -- # return 1 00:20:06.686 15:24:15 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:20:06.686 15:24:15 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:20:06.686 15:24:15 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:06.686 15:24:15 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:06.686 15:24:15 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:06.686 15:24:15 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:06.686 15:24:15 -- nvmf/common.sh@656 -- # echo 1 00:20:06.686 15:24:15 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:20:06.686 15:24:15 -- nvmf/common.sh@658 -- # echo 1 00:20:06.686 15:24:15 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:20:06.686 15:24:15 -- nvmf/common.sh@661 -- # echo tcp 00:20:06.686 15:24:15 -- nvmf/common.sh@662 -- # echo 4420 00:20:06.686 15:24:15 -- nvmf/common.sh@663 -- # echo ipv4 00:20:06.686 15:24:15 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:06.686 15:24:15 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab --hostid=bfc76e61-7421-4d42-8df7-48fb051b4cab -a 10.0.0.1 -t tcp -s 4420 00:20:06.686 00:20:06.686 Discovery Log Number of Records 2, Generation counter 2 00:20:06.686 =====Discovery Log Entry 0====== 00:20:06.686 trtype: tcp 00:20:06.686 adrfam: ipv4 00:20:06.686 subtype: current discovery subsystem 00:20:06.686 treq: not specified, sq flow control disable supported 00:20:06.686 portid: 1 00:20:06.686 trsvcid: 4420 00:20:06.686 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:06.686 traddr: 10.0.0.1 00:20:06.686 eflags: none 00:20:06.686 sectype: none 00:20:06.686 =====Discovery Log Entry 1====== 00:20:06.686 trtype: tcp 00:20:06.686 adrfam: ipv4 00:20:06.686 subtype: nvme subsystem 00:20:06.686 treq: not specified, sq flow control disable supported 00:20:06.686 portid: 1 00:20:06.686 trsvcid: 4420 00:20:06.686 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:06.686 traddr: 10.0.0.1 00:20:06.686 eflags: none 00:20:06.686 sectype: none 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:20:06.686 
15:24:15 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:06.686 15:24:15 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:09.969 Initializing NVMe Controllers 00:20:09.969 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:09.969 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:09.969 Initialization complete. Launching workers. 00:20:09.969 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32253, failed: 0 00:20:09.969 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32253, failed to submit 0 00:20:09.969 success 0, unsuccess 32253, failed 0 00:20:09.969 15:24:19 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:09.969 15:24:19 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:13.298 Initializing NVMe Controllers 00:20:13.298 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:13.298 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:13.298 Initialization complete. Launching workers. 
00:20:13.298 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69292, failed: 0 00:20:13.298 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30017, failed to submit 39275 00:20:13.299 success 0, unsuccess 30017, failed 0 00:20:13.299 15:24:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:13.299 15:24:22 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:16.582 Initializing NVMe Controllers 00:20:16.582 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:16.582 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:16.582 Initialization complete. Launching workers. 00:20:16.582 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79013, failed: 0 00:20:16.582 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19722, failed to submit 59291 00:20:16.582 success 0, unsuccess 19722, failed 0 00:20:16.582 15:24:25 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:20:16.582 15:24:25 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:16.582 15:24:25 -- nvmf/common.sh@675 -- # echo 0 00:20:16.582 15:24:25 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:16.582 15:24:25 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:16.582 15:24:25 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:16.582 15:24:25 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:16.582 15:24:25 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:20:16.582 15:24:25 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:20:16.582 15:24:25 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:17.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:18.523 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:18.781 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:18.781 ************************************ 00:20:18.781 END TEST kernel_target_abort 00:20:18.781 ************************************ 00:20:18.781 00:20:18.781 real 0m13.018s 00:20:18.781 user 0m6.312s 00:20:18.781 sys 0m4.034s 00:20:18.781 15:24:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:18.781 15:24:27 -- common/autotest_common.sh@10 -- # set +x 00:20:18.781 15:24:27 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:18.781 15:24:27 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:20:18.781 15:24:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:18.781 15:24:27 -- nvmf/common.sh@117 -- # sync 00:20:18.781 15:24:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:18.781 15:24:27 -- nvmf/common.sh@120 -- # set +e 00:20:18.781 15:24:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:18.781 15:24:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:18.781 rmmod nvme_tcp 00:20:18.781 rmmod nvme_fabrics 00:20:18.781 rmmod nvme_keyring 00:20:18.781 15:24:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:18.781 15:24:28 -- nvmf/common.sh@124 -- # set -e 00:20:18.781 
Process with pid 80761 is not found 00:20:18.781 15:24:28 -- nvmf/common.sh@125 -- # return 0 00:20:18.781 15:24:28 -- nvmf/common.sh@478 -- # '[' -n 80761 ']' 00:20:18.781 15:24:28 -- nvmf/common.sh@479 -- # killprocess 80761 00:20:18.781 15:24:28 -- common/autotest_common.sh@936 -- # '[' -z 80761 ']' 00:20:18.781 15:24:28 -- common/autotest_common.sh@940 -- # kill -0 80761 00:20:18.781 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (80761) - No such process 00:20:18.781 15:24:28 -- common/autotest_common.sh@963 -- # echo 'Process with pid 80761 is not found' 00:20:18.781 15:24:28 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:20:18.781 15:24:28 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:19.348 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:19.348 Waiting for block devices as requested 00:20:19.348 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:19.348 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:19.607 15:24:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:19.607 15:24:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:19.607 15:24:28 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:19.607 15:24:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:19.607 15:24:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.607 15:24:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:19.607 15:24:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.607 15:24:28 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:19.607 00:20:19.607 real 0m27.229s 00:20:19.607 user 0m50.917s 00:20:19.607 sys 0m7.804s 00:20:19.607 15:24:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:19.607 15:24:28 -- common/autotest_common.sh@10 -- # set +x 00:20:19.607 ************************************ 00:20:19.607 END TEST nvmf_abort_qd_sizes 00:20:19.607 ************************************ 00:20:19.607 15:24:28 -- spdk/autotest.sh@293 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:19.607 15:24:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:19.607 15:24:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:19.607 15:24:28 -- common/autotest_common.sh@10 -- # set +x 00:20:19.607 ************************************ 00:20:19.607 START TEST keyring_file 00:20:19.607 ************************************ 00:20:19.607 15:24:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:19.607 * Looking for test storage... 
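The kernel_target_abort section above drives the same abort workload against an in-kernel nvmet target instead of SPDK's, backed by /dev/nvme1n1 (the first namespace that passed the zoned and GPT checks). The configuration is done entirely through configfs. The sketch below is reconstructed from the traced modprobe/mkdir/echo/ln calls; xtrace does not show redirection targets, so the configfs attribute paths are the standard nvmet names and should be read as assumptions rather than values copied from the log:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=$nvmet/ports/1

  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"      # destination attribute assumed
  echo 1             > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1  > "$subsys/namespaces/1/device_path"
  echo 1             > "$subsys/namespaces/1/enable"
  echo 10.0.0.1      > "$port/addr_traddr"
  echo tcp           > "$port/addr_trtype"
  echo 4420          > "$port/addr_trsvcid"
  echo ipv4          > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                                # export the subsystem on the port

The 'nvme discover' output in the trace then confirms that both the discovery subsystem and nqn.2016-06.io.spdk:testnqn are reachable on 10.0.0.1:4420 before the abort runs start, and clean_kernel_target later removes the symlink and directories and unloads nvmet_tcp/nvmet, as shown at the end of that section.
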
00:20:19.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:19.607 15:24:28 -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:19.607 15:24:28 -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:19.607 15:24:28 -- nvmf/common.sh@7 -- # uname -s 00:20:19.607 15:24:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.607 15:24:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.607 15:24:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.607 15:24:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.607 15:24:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.607 15:24:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.607 15:24:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.607 15:24:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.607 15:24:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.607 15:24:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.607 15:24:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bfc76e61-7421-4d42-8df7-48fb051b4cab 00:20:19.607 15:24:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=bfc76e61-7421-4d42-8df7-48fb051b4cab 00:20:19.607 15:24:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.607 15:24:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.607 15:24:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:19.607 15:24:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.608 15:24:28 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:19.608 15:24:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.608 15:24:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.608 15:24:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.608 15:24:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.608 15:24:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.608 15:24:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.608 15:24:28 -- paths/export.sh@5 -- # export PATH 00:20:19.608 15:24:28 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.608 15:24:28 -- nvmf/common.sh@47 -- # : 0 00:20:19.608 15:24:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:19.608 15:24:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:19.608 15:24:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.608 15:24:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.608 15:24:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.608 15:24:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:19.608 15:24:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:19.608 15:24:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:19.608 15:24:28 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:19.608 15:24:28 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:19.608 15:24:28 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:19.608 15:24:28 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:20:19.608 15:24:28 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:20:19.608 15:24:28 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:20:19.608 15:24:28 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:19.608 15:24:28 -- keyring/common.sh@15 -- # local name key digest path 00:20:19.608 15:24:28 -- keyring/common.sh@17 -- # name=key0 00:20:19.608 15:24:28 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:19.608 15:24:28 -- keyring/common.sh@17 -- # digest=0 00:20:19.608 15:24:28 -- keyring/common.sh@18 -- # mktemp 00:20:19.896 15:24:28 -- keyring/common.sh@18 -- # path=/tmp/tmp.SpGph4vPNk 00:20:19.897 15:24:28 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:19.897 15:24:28 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:19.897 15:24:28 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:19.897 15:24:28 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:20:19.897 15:24:28 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:20:19.897 15:24:28 -- nvmf/common.sh@693 -- # digest=0 00:20:19.897 15:24:28 -- nvmf/common.sh@694 -- # python - 00:20:19.897 15:24:28 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SpGph4vPNk 00:20:19.897 15:24:28 -- keyring/common.sh@23 -- # echo /tmp/tmp.SpGph4vPNk 00:20:19.897 15:24:28 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.SpGph4vPNk 00:20:19.897 15:24:28 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:20:19.897 15:24:28 -- keyring/common.sh@15 -- # local name key digest path 00:20:19.897 15:24:28 -- keyring/common.sh@17 -- # name=key1 00:20:19.897 15:24:28 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:19.897 15:24:28 -- keyring/common.sh@17 -- # digest=0 00:20:19.897 15:24:28 -- keyring/common.sh@18 -- # mktemp 00:20:19.897 15:24:28 -- keyring/common.sh@18 -- # path=/tmp/tmp.cFKgxAgHlE 00:20:19.897 15:24:28 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:20:19.897 15:24:28 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:20:19.897 15:24:28 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:19.897 15:24:28 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:20:19.897 15:24:28 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:20:19.897 15:24:28 -- nvmf/common.sh@693 -- # digest=0 00:20:19.897 15:24:28 -- nvmf/common.sh@694 -- # python - 00:20:19.897 15:24:28 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cFKgxAgHlE 00:20:19.897 15:24:28 -- keyring/common.sh@23 -- # echo /tmp/tmp.cFKgxAgHlE 00:20:19.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.897 15:24:28 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.cFKgxAgHlE 00:20:19.897 15:24:28 -- keyring/file.sh@30 -- # tgtpid=81648 00:20:19.897 15:24:28 -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:19.897 15:24:28 -- keyring/file.sh@32 -- # waitforlisten 81648 00:20:19.897 15:24:28 -- common/autotest_common.sh@817 -- # '[' -z 81648 ']' 00:20:19.897 15:24:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.897 15:24:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:19.897 15:24:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.897 15:24:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:19.897 15:24:28 -- common/autotest_common.sh@10 -- # set +x 00:20:19.897 [2024-04-24 15:24:29.032593] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:20:19.897 [2024-04-24 15:24:29.032919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81648 ] 00:20:20.155 [2024-04-24 15:24:29.167017] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.155 [2024-04-24 15:24:29.315024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.869 15:24:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:20.869 15:24:30 -- common/autotest_common.sh@850 -- # return 0 00:20:20.869 15:24:30 -- keyring/file.sh@33 -- # rpc_cmd 00:20:20.869 15:24:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.869 15:24:30 -- common/autotest_common.sh@10 -- # set +x 00:20:20.869 [2024-04-24 15:24:30.052070] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.869 null0 00:20:20.869 [2024-04-24 15:24:30.084024] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:20.869 [2024-04-24 15:24:30.084257] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:20.869 [2024-04-24 15:24:30.092029] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:20.869 15:24:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.869 15:24:30 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:20.869 15:24:30 -- common/autotest_common.sh@638 -- # local es=0 00:20:20.869 15:24:30 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:20.869 15:24:30 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:20.869 15:24:30 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:20.869 15:24:30 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:20.869 15:24:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:20.869 15:24:30 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:20.869 15:24:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.869 15:24:30 -- common/autotest_common.sh@10 -- # set +x 00:20:20.869 [2024-04-24 15:24:30.104024] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:20:20.869 { 00:20:20.869 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:20:20.869 "secure_channel": false, 00:20:20.869 "listen_address": { 00:20:20.869 "trtype": "tcp", 00:20:20.869 "traddr": "127.0.0.1", 00:20:20.869 "trsvcid": "4420" 00:20:20.869 }, 00:20:20.869 "method": "nvmf_subsystem_add_listener", 00:20:20.869 "req_id": 1 00:20:20.869 } 00:20:20.869 Got JSON-RPC error response 00:20:20.869 response: 00:20:20.869 { 00:20:20.869 "code": -32602, 00:20:20.869 "message": "Invalid parameters" 00:20:20.869 } 00:20:20.869 15:24:30 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:20.869 15:24:30 -- common/autotest_common.sh@641 -- # es=1 00:20:20.869 15:24:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:20.869 15:24:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:20.869 15:24:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:21.128 15:24:30 -- keyring/file.sh@46 -- # bperfpid=81664 00:20:21.128 15:24:30 -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:20:21.128 15:24:30 -- keyring/file.sh@48 -- # waitforlisten 81664 /var/tmp/bperf.sock 00:20:21.128 15:24:30 -- common/autotest_common.sh@817 -- # '[' -z 81664 ']' 00:20:21.128 15:24:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:21.128 15:24:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:21.128 15:24:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:21.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:21.128 15:24:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:21.128 15:24:30 -- common/autotest_common.sh@10 -- # set +x 00:20:21.128 [2024-04-24 15:24:30.161292] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 
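The keyring_file suite above starts by materialising the two TLS PSKs on disk. As traced, prep_key amounts to the following condensed sketch (format_interchange_psk is the nvmf/common.sh helper, visible in the trace as a short python invocation, that wraps the raw hex key into the NVMeTLSkey-1 interchange form):

  prep_key() {                         # prep_key <name> <hex key> <digest>
    local name=$1 key=$2 digest=$3 path
    path=$(mktemp)                     # e.g. /tmp/tmp.SpGph4vPNk for key0
    format_interchange_psk "$key" "$digest" > "$path"
    chmod 0600 "$path"                 # the file keyring expects owner-only permissions,
                                       # which the later 'chmod 0660' negative test relies on
    echo "$path"
  }

  key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0)   # the working key in the attach tests below
  key1path=$(prep_key key1 112233445566778899aabbccddeeff00 0)   # deliberately mismatched key for the negative test
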
00:20:21.128 [2024-04-24 15:24:30.161590] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81664 ] 00:20:21.128 [2024-04-24 15:24:30.296878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.385 [2024-04-24 15:24:30.448089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.951 15:24:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:21.951 15:24:31 -- common/autotest_common.sh@850 -- # return 0 00:20:21.951 15:24:31 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SpGph4vPNk 00:20:21.951 15:24:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SpGph4vPNk 00:20:22.209 15:24:31 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.cFKgxAgHlE 00:20:22.209 15:24:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.cFKgxAgHlE 00:20:22.468 15:24:31 -- keyring/file.sh@51 -- # get_key key0 00:20:22.468 15:24:31 -- keyring/file.sh@51 -- # jq -r .path 00:20:22.468 15:24:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:22.468 15:24:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:22.468 15:24:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:22.727 15:24:31 -- keyring/file.sh@51 -- # [[ /tmp/tmp.SpGph4vPNk == \/\t\m\p\/\t\m\p\.\S\p\G\p\h\4\v\P\N\k ]] 00:20:22.727 15:24:31 -- keyring/file.sh@52 -- # get_key key1 00:20:22.727 15:24:31 -- keyring/file.sh@52 -- # jq -r .path 00:20:22.727 15:24:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:22.727 15:24:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:22.727 15:24:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:23.049 15:24:32 -- keyring/file.sh@52 -- # [[ /tmp/tmp.cFKgxAgHlE == \/\t\m\p\/\t\m\p\.\c\F\K\g\x\A\g\H\l\E ]] 00:20:23.049 15:24:32 -- keyring/file.sh@53 -- # get_refcnt key0 00:20:23.049 15:24:32 -- keyring/common.sh@12 -- # get_key key0 00:20:23.049 15:24:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:23.049 15:24:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:23.049 15:24:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:23.049 15:24:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:23.307 15:24:32 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:20:23.307 15:24:32 -- keyring/file.sh@54 -- # get_refcnt key1 00:20:23.307 15:24:32 -- keyring/common.sh@12 -- # get_key key1 00:20:23.307 15:24:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:23.307 15:24:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:23.307 15:24:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:23.307 15:24:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:23.568 15:24:32 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:20:23.568 15:24:32 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
key0 00:20:23.568 15:24:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:23.829 [2024-04-24 15:24:32.853157] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.829 nvme0n1 00:20:23.829 15:24:32 -- keyring/file.sh@59 -- # get_refcnt key0 00:20:23.829 15:24:32 -- keyring/common.sh@12 -- # get_key key0 00:20:23.829 15:24:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:23.829 15:24:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:23.829 15:24:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:23.829 15:24:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:24.087 15:24:33 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:20:24.087 15:24:33 -- keyring/file.sh@60 -- # get_refcnt key1 00:20:24.087 15:24:33 -- keyring/common.sh@12 -- # get_key key1 00:20:24.087 15:24:33 -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:24.087 15:24:33 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:24.087 15:24:33 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:24.087 15:24:33 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:24.345 15:24:33 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:20:24.345 15:24:33 -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:24.604 Running I/O for 1 seconds... 00:20:25.538 00:20:25.538 Latency(us) 00:20:25.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.538 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:20:25.538 nvme0n1 : 1.01 11420.33 44.61 0.00 0.00 11168.05 5659.93 22163.08 00:20:25.538 =================================================================================================================== 00:20:25.538 Total : 11420.33 44.61 0.00 0.00 11168.05 5659.93 22163.08 00:20:25.538 0 00:20:25.538 15:24:34 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:25.538 15:24:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:25.796 15:24:34 -- keyring/file.sh@65 -- # get_refcnt key0 00:20:25.796 15:24:34 -- keyring/common.sh@12 -- # get_key key0 00:20:25.796 15:24:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:25.796 15:24:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:25.796 15:24:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:25.796 15:24:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:26.054 15:24:35 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:20:26.054 15:24:35 -- keyring/file.sh@66 -- # get_refcnt key1 00:20:26.054 15:24:35 -- keyring/common.sh@12 -- # get_key key1 00:20:26.054 15:24:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:26.054 15:24:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:26.054 15:24:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:26.054 15:24:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:26.314 
15:24:35 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:20:26.314 15:24:35 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:26.314 15:24:35 -- common/autotest_common.sh@638 -- # local es=0 00:20:26.314 15:24:35 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:26.314 15:24:35 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:20:26.314 15:24:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:26.314 15:24:35 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:20:26.314 15:24:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:26.314 15:24:35 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:26.314 15:24:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:26.573 [2024-04-24 15:24:35.649616] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96a3c0 (107):[2024-04-24 15:24:35.649615] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:26.573 Transport endpoint is not connected 00:20:26.573 [2024-04-24 15:24:35.650604] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96a3c0 (9): Bad file descriptor 00:20:26.573 [2024-04-24 15:24:35.651600] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:26.573 [2024-04-24 15:24:35.651625] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:20:26.573 [2024-04-24 15:24:35.651637] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:20:26.573 request: 00:20:26.573 { 00:20:26.573 "name": "nvme0", 00:20:26.573 "trtype": "tcp", 00:20:26.573 "traddr": "127.0.0.1", 00:20:26.573 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:26.573 "adrfam": "ipv4", 00:20:26.573 "trsvcid": "4420", 00:20:26.573 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:26.573 "psk": "key1", 00:20:26.573 "method": "bdev_nvme_attach_controller", 00:20:26.573 "req_id": 1 00:20:26.573 } 00:20:26.573 Got JSON-RPC error response 00:20:26.573 response: 00:20:26.573 { 00:20:26.573 "code": -32602, 00:20:26.573 "message": "Invalid parameters" 00:20:26.573 } 00:20:26.573 15:24:35 -- common/autotest_common.sh@641 -- # es=1 00:20:26.573 15:24:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:26.573 15:24:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:26.573 15:24:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:26.573 15:24:35 -- keyring/file.sh@71 -- # get_refcnt key0 00:20:26.573 15:24:35 -- keyring/common.sh@12 -- # get_key key0 00:20:26.573 15:24:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:26.573 15:24:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:26.573 15:24:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:26.573 15:24:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:26.831 15:24:35 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:20:26.831 15:24:35 -- keyring/file.sh@72 -- # get_refcnt key1 00:20:26.831 15:24:35 -- keyring/common.sh@12 -- # get_key key1 00:20:26.831 15:24:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:26.831 15:24:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:26.831 15:24:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:26.831 15:24:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:27.089 15:24:36 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:20:27.089 15:24:36 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:20:27.089 15:24:36 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:27.347 15:24:36 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:20:27.347 15:24:36 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:20:27.604 15:24:36 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:20:27.604 15:24:36 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:27.604 15:24:36 -- keyring/file.sh@77 -- # jq length 00:20:27.864 15:24:36 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:20:27.864 15:24:36 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.SpGph4vPNk 00:20:27.864 15:24:36 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.SpGph4vPNk 00:20:27.864 15:24:36 -- common/autotest_common.sh@638 -- # local es=0 00:20:27.864 15:24:36 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.SpGph4vPNk 00:20:27.864 15:24:36 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:20:27.864 15:24:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:27.864 15:24:36 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:20:27.864 15:24:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:27.864 15:24:36 -- 
common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SpGph4vPNk 00:20:27.864 15:24:36 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SpGph4vPNk 00:20:28.130 [2024-04-24 15:24:37.171564] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.SpGph4vPNk': 0100660 00:20:28.130 [2024-04-24 15:24:37.171622] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:28.130 request: 00:20:28.130 { 00:20:28.130 "name": "key0", 00:20:28.130 "path": "/tmp/tmp.SpGph4vPNk", 00:20:28.130 "method": "keyring_file_add_key", 00:20:28.130 "req_id": 1 00:20:28.130 } 00:20:28.131 Got JSON-RPC error response 00:20:28.131 response: 00:20:28.131 { 00:20:28.131 "code": -1, 00:20:28.131 "message": "Operation not permitted" 00:20:28.131 } 00:20:28.131 15:24:37 -- common/autotest_common.sh@641 -- # es=1 00:20:28.131 15:24:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:28.131 15:24:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:28.131 15:24:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:28.131 15:24:37 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.SpGph4vPNk 00:20:28.131 15:24:37 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SpGph4vPNk 00:20:28.131 15:24:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SpGph4vPNk 00:20:28.388 15:24:37 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.SpGph4vPNk 00:20:28.388 15:24:37 -- keyring/file.sh@88 -- # get_refcnt key0 00:20:28.388 15:24:37 -- keyring/common.sh@12 -- # get_key key0 00:20:28.388 15:24:37 -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:28.389 15:24:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:28.389 15:24:37 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:28.389 15:24:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:28.646 15:24:37 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:20:28.646 15:24:37 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:28.646 15:24:37 -- common/autotest_common.sh@638 -- # local es=0 00:20:28.646 15:24:37 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:28.646 15:24:37 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:20:28.646 15:24:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:28.646 15:24:37 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:20:28.646 15:24:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:28.646 15:24:37 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:28.646 15:24:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:28.904 [2024-04-24 15:24:37.939715] keyring.c: 
29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.SpGph4vPNk': No such file or directory 00:20:28.904 [2024-04-24 15:24:37.939761] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:20:28.904 [2024-04-24 15:24:37.939788] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:20:28.904 [2024-04-24 15:24:37.939797] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:28.904 [2024-04-24 15:24:37.939806] bdev_nvme.c:6191:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:20:28.904 request: 00:20:28.904 { 00:20:28.904 "name": "nvme0", 00:20:28.904 "trtype": "tcp", 00:20:28.904 "traddr": "127.0.0.1", 00:20:28.904 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:28.904 "adrfam": "ipv4", 00:20:28.904 "trsvcid": "4420", 00:20:28.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:28.904 "psk": "key0", 00:20:28.904 "method": "bdev_nvme_attach_controller", 00:20:28.904 "req_id": 1 00:20:28.904 } 00:20:28.904 Got JSON-RPC error response 00:20:28.904 response: 00:20:28.904 { 00:20:28.904 "code": -19, 00:20:28.904 "message": "No such device" 00:20:28.904 } 00:20:28.904 15:24:37 -- common/autotest_common.sh@641 -- # es=1 00:20:28.904 15:24:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:28.904 15:24:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:28.904 15:24:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:28.904 15:24:37 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:20:28.904 15:24:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:29.162 15:24:38 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:29.162 15:24:38 -- keyring/common.sh@15 -- # local name key digest path 00:20:29.162 15:24:38 -- keyring/common.sh@17 -- # name=key0 00:20:29.162 15:24:38 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:29.162 15:24:38 -- keyring/common.sh@17 -- # digest=0 00:20:29.162 15:24:38 -- keyring/common.sh@18 -- # mktemp 00:20:29.162 15:24:38 -- keyring/common.sh@18 -- # path=/tmp/tmp.6L9VsiBmHb 00:20:29.162 15:24:38 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:29.162 15:24:38 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:29.162 15:24:38 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:29.162 15:24:38 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:20:29.162 15:24:38 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:20:29.162 15:24:38 -- nvmf/common.sh@693 -- # digest=0 00:20:29.162 15:24:38 -- nvmf/common.sh@694 -- # python - 00:20:29.162 15:24:38 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6L9VsiBmHb 00:20:29.162 15:24:38 -- keyring/common.sh@23 -- # echo /tmp/tmp.6L9VsiBmHb 00:20:29.162 15:24:38 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.6L9VsiBmHb 00:20:29.162 15:24:38 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6L9VsiBmHb 00:20:29.162 15:24:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6L9VsiBmHb 00:20:29.420 15:24:38 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 
00:20:29.420 15:24:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:29.678 nvme0n1 00:20:29.678 15:24:38 -- keyring/file.sh@99 -- # get_refcnt key0 00:20:29.678 15:24:38 -- keyring/common.sh@12 -- # get_key key0 00:20:29.678 15:24:38 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:29.678 15:24:38 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:29.678 15:24:38 -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:29.678 15:24:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:29.936 15:24:39 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:20:29.936 15:24:39 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:20:29.936 15:24:39 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:30.194 15:24:39 -- keyring/file.sh@101 -- # jq -r .removed 00:20:30.194 15:24:39 -- keyring/file.sh@101 -- # get_key key0 00:20:30.194 15:24:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:30.194 15:24:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:30.194 15:24:39 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:30.451 15:24:39 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:20:30.451 15:24:39 -- keyring/file.sh@102 -- # get_refcnt key0 00:20:30.451 15:24:39 -- keyring/common.sh@12 -- # get_key key0 00:20:30.451 15:24:39 -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:30.451 15:24:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:30.451 15:24:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:30.451 15:24:39 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:30.709 15:24:39 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:20:30.709 15:24:39 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:30.709 15:24:39 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:30.966 15:24:40 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:20:30.966 15:24:40 -- keyring/file.sh@104 -- # jq length 00:20:30.966 15:24:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:31.311 15:24:40 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:20:31.311 15:24:40 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6L9VsiBmHb 00:20:31.311 15:24:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6L9VsiBmHb 00:20:31.589 15:24:40 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.cFKgxAgHlE 00:20:31.589 15:24:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.cFKgxAgHlE 00:20:31.885 15:24:40 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:31.885 15:24:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:32.143 nvme0n1 00:20:32.143 15:24:41 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:20:32.143 15:24:41 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:20:32.404 15:24:41 -- keyring/file.sh@112 -- # config='{ 00:20:32.404 "subsystems": [ 00:20:32.404 { 00:20:32.404 "subsystem": "keyring", 00:20:32.404 "config": [ 00:20:32.404 { 00:20:32.404 "method": "keyring_file_add_key", 00:20:32.404 "params": { 00:20:32.404 "name": "key0", 00:20:32.404 "path": "/tmp/tmp.6L9VsiBmHb" 00:20:32.404 } 00:20:32.404 }, 00:20:32.404 { 00:20:32.404 "method": "keyring_file_add_key", 00:20:32.404 "params": { 00:20:32.404 "name": "key1", 00:20:32.404 "path": "/tmp/tmp.cFKgxAgHlE" 00:20:32.404 } 00:20:32.404 } 00:20:32.405 ] 00:20:32.405 }, 00:20:32.405 { 00:20:32.405 "subsystem": "iobuf", 00:20:32.405 "config": [ 00:20:32.405 { 00:20:32.405 "method": "iobuf_set_options", 00:20:32.405 "params": { 00:20:32.405 "small_pool_count": 8192, 00:20:32.405 "large_pool_count": 1024, 00:20:32.405 "small_bufsize": 8192, 00:20:32.405 "large_bufsize": 135168 00:20:32.405 } 00:20:32.405 } 00:20:32.405 ] 00:20:32.405 }, 00:20:32.405 { 00:20:32.405 "subsystem": "sock", 00:20:32.405 "config": [ 00:20:32.405 { 00:20:32.405 "method": "sock_impl_set_options", 00:20:32.405 "params": { 00:20:32.405 "impl_name": "uring", 00:20:32.405 "recv_buf_size": 2097152, 00:20:32.405 "send_buf_size": 2097152, 00:20:32.405 "enable_recv_pipe": true, 00:20:32.405 "enable_quickack": false, 00:20:32.405 "enable_placement_id": 0, 00:20:32.405 "enable_zerocopy_send_server": false, 00:20:32.405 "enable_zerocopy_send_client": false, 00:20:32.405 "zerocopy_threshold": 0, 00:20:32.405 "tls_version": 0, 00:20:32.405 "enable_ktls": false 00:20:32.405 } 00:20:32.405 }, 00:20:32.405 { 00:20:32.405 "method": "sock_impl_set_options", 00:20:32.405 "params": { 00:20:32.405 "impl_name": "posix", 00:20:32.405 "recv_buf_size": 2097152, 00:20:32.405 "send_buf_size": 2097152, 00:20:32.405 "enable_recv_pipe": true, 00:20:32.405 "enable_quickack": false, 00:20:32.405 "enable_placement_id": 0, 00:20:32.405 "enable_zerocopy_send_server": true, 00:20:32.405 "enable_zerocopy_send_client": false, 00:20:32.405 "zerocopy_threshold": 0, 00:20:32.405 "tls_version": 0, 00:20:32.405 "enable_ktls": false 00:20:32.405 } 00:20:32.405 }, 00:20:32.405 { 00:20:32.405 "method": "sock_impl_set_options", 00:20:32.405 "params": { 00:20:32.405 "impl_name": "ssl", 00:20:32.405 "recv_buf_size": 4096, 00:20:32.405 "send_buf_size": 4096, 00:20:32.405 "enable_recv_pipe": true, 00:20:32.405 "enable_quickack": false, 00:20:32.405 "enable_placement_id": 0, 00:20:32.405 "enable_zerocopy_send_server": true, 00:20:32.405 "enable_zerocopy_send_client": false, 00:20:32.405 "zerocopy_threshold": 0, 00:20:32.405 "tls_version": 0, 00:20:32.405 "enable_ktls": false 00:20:32.405 } 00:20:32.405 } 00:20:32.405 ] 00:20:32.405 }, 00:20:32.405 { 00:20:32.405 "subsystem": "vmd", 00:20:32.405 "config": [] 00:20:32.405 }, 00:20:32.405 { 00:20:32.405 "subsystem": "accel", 00:20:32.405 "config": [ 00:20:32.405 { 00:20:32.405 "method": "accel_set_options", 00:20:32.405 "params": { 00:20:32.405 "small_cache_size": 128, 00:20:32.405 "large_cache_size": 16, 00:20:32.405 "task_count": 2048, 00:20:32.405 "sequence_count": 2048, 00:20:32.405 "buf_count": 2048 00:20:32.405 } 00:20:32.405 } 
00:20:32.405 ] 00:20:32.405 }, 00:20:32.405 { 00:20:32.405 "subsystem": "bdev", 00:20:32.405 "config": [ 00:20:32.405 { 00:20:32.405 "method": "bdev_set_options", 00:20:32.405 "params": { 00:20:32.405 "bdev_io_pool_size": 65535, 00:20:32.405 "bdev_io_cache_size": 256, 00:20:32.405 "bdev_auto_examine": true, 00:20:32.405 "iobuf_small_cache_size": 128, 00:20:32.405 "iobuf_large_cache_size": 16 00:20:32.405 } 00:20:32.405 }, 00:20:32.405 { 00:20:32.405 "method": "bdev_raid_set_options", 00:20:32.405 "params": { 00:20:32.405 "process_window_size_kb": 1024 00:20:32.405 } 00:20:32.405 }, 00:20:32.405 { 00:20:32.405 "method": "bdev_iscsi_set_options", 00:20:32.405 "params": { 00:20:32.405 "timeout_sec": 30 00:20:32.405 } 00:20:32.405 }, 00:20:32.405 { 00:20:32.405 "method": "bdev_nvme_set_options", 00:20:32.405 "params": { 00:20:32.405 "action_on_timeout": "none", 00:20:32.405 "timeout_us": 0, 00:20:32.405 "timeout_admin_us": 0, 00:20:32.405 "keep_alive_timeout_ms": 10000, 00:20:32.405 "arbitration_burst": 0, 00:20:32.405 "low_priority_weight": 0, 00:20:32.405 "medium_priority_weight": 0, 00:20:32.405 "high_priority_weight": 0, 00:20:32.405 "nvme_adminq_poll_period_us": 10000, 00:20:32.405 "nvme_ioq_poll_period_us": 0, 00:20:32.405 "io_queue_requests": 512, 00:20:32.405 "delay_cmd_submit": true, 00:20:32.405 "transport_retry_count": 4, 00:20:32.405 "bdev_retry_count": 3, 00:20:32.405 "transport_ack_timeout": 0, 00:20:32.405 "ctrlr_loss_timeout_sec": 0, 00:20:32.405 "reconnect_delay_sec": 0, 00:20:32.405 "fast_io_fail_timeout_sec": 0, 00:20:32.405 "disable_auto_failback": false, 00:20:32.405 "generate_uuids": false, 00:20:32.405 "transport_tos": 0, 00:20:32.405 "nvme_error_stat": false, 00:20:32.405 "rdma_srq_size": 0, 00:20:32.405 "io_path_stat": false, 00:20:32.405 "allow_accel_sequence": false, 00:20:32.405 "rdma_max_cq_size": 0, 00:20:32.405 "rdma_cm_event_timeout_ms": 0, 00:20:32.405 "dhchap_digests": [ 00:20:32.405 "sha256", 00:20:32.405 "sha384", 00:20:32.405 "sha512" 00:20:32.405 ], 00:20:32.405 "dhchap_dhgroups": [ 00:20:32.405 "null", 00:20:32.405 "ffdhe2048", 00:20:32.405 "ffdhe3072", 00:20:32.405 "ffdhe4096", 00:20:32.405 "ffdhe6144", 00:20:32.405 "ffdhe8192" 00:20:32.405 ] 00:20:32.405 } 00:20:32.405 }, 00:20:32.405 { 00:20:32.405 "method": "bdev_nvme_attach_controller", 00:20:32.405 "params": { 00:20:32.405 "name": "nvme0", 00:20:32.405 "trtype": "TCP", 00:20:32.405 "adrfam": "IPv4", 00:20:32.405 "traddr": "127.0.0.1", 00:20:32.405 "trsvcid": "4420", 00:20:32.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:32.405 "prchk_reftag": false, 00:20:32.405 "prchk_guard": false, 00:20:32.405 "ctrlr_loss_timeout_sec": 0, 00:20:32.405 "reconnect_delay_sec": 0, 00:20:32.405 "fast_io_fail_timeout_sec": 0, 00:20:32.405 "psk": "key0", 00:20:32.405 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:32.405 "hdgst": false, 00:20:32.405 "ddgst": false 00:20:32.405 } 00:20:32.405 }, 00:20:32.405 { 00:20:32.405 "method": "bdev_nvme_set_hotplug", 00:20:32.405 "params": { 00:20:32.405 "period_us": 100000, 00:20:32.405 "enable": false 00:20:32.405 } 00:20:32.405 }, 00:20:32.405 { 00:20:32.405 "method": "bdev_wait_for_examine" 00:20:32.405 } 00:20:32.405 ] 00:20:32.405 }, 00:20:32.405 { 00:20:32.405 "subsystem": "nbd", 00:20:32.405 "config": [] 00:20:32.405 } 00:20:32.405 ] 00:20:32.405 }' 00:20:32.405 15:24:41 -- keyring/file.sh@114 -- # killprocess 81664 00:20:32.405 15:24:41 -- common/autotest_common.sh@936 -- # '[' -z 81664 ']' 00:20:32.405 15:24:41 -- common/autotest_common.sh@940 -- # kill -0 81664 
00:20:32.405 15:24:41 -- common/autotest_common.sh@941 -- # uname 00:20:32.405 15:24:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:32.405 15:24:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81664 00:20:32.405 killing process with pid 81664 00:20:32.405 Received shutdown signal, test time was about 1.000000 seconds 00:20:32.405 00:20:32.405 Latency(us) 00:20:32.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.405 =================================================================================================================== 00:20:32.405 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.405 15:24:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:32.405 15:24:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:32.405 15:24:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81664' 00:20:32.405 15:24:41 -- common/autotest_common.sh@955 -- # kill 81664 00:20:32.405 15:24:41 -- common/autotest_common.sh@960 -- # wait 81664 00:20:32.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:32.735 15:24:41 -- keyring/file.sh@117 -- # bperfpid=81913 00:20:32.735 15:24:41 -- keyring/file.sh@119 -- # waitforlisten 81913 /var/tmp/bperf.sock 00:20:32.735 15:24:41 -- common/autotest_common.sh@817 -- # '[' -z 81913 ']' 00:20:32.735 15:24:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:32.735 15:24:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:32.735 15:24:41 -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:20:32.735 15:24:41 -- keyring/file.sh@115 -- # echo '{ 00:20:32.735 "subsystems": [ 00:20:32.735 { 00:20:32.735 "subsystem": "keyring", 00:20:32.735 "config": [ 00:20:32.735 { 00:20:32.735 "method": "keyring_file_add_key", 00:20:32.735 "params": { 00:20:32.735 "name": "key0", 00:20:32.735 "path": "/tmp/tmp.6L9VsiBmHb" 00:20:32.735 } 00:20:32.735 }, 00:20:32.735 { 00:20:32.735 "method": "keyring_file_add_key", 00:20:32.735 "params": { 00:20:32.735 "name": "key1", 00:20:32.735 "path": "/tmp/tmp.cFKgxAgHlE" 00:20:32.735 } 00:20:32.735 } 00:20:32.735 ] 00:20:32.735 }, 00:20:32.735 { 00:20:32.735 "subsystem": "iobuf", 00:20:32.735 "config": [ 00:20:32.735 { 00:20:32.735 "method": "iobuf_set_options", 00:20:32.735 "params": { 00:20:32.735 "small_pool_count": 8192, 00:20:32.735 "large_pool_count": 1024, 00:20:32.735 "small_bufsize": 8192, 00:20:32.735 "large_bufsize": 135168 00:20:32.735 } 00:20:32.735 } 00:20:32.735 ] 00:20:32.735 }, 00:20:32.735 { 00:20:32.735 "subsystem": "sock", 00:20:32.735 "config": [ 00:20:32.735 { 00:20:32.735 "method": "sock_impl_set_options", 00:20:32.735 "params": { 00:20:32.735 "impl_name": "uring", 00:20:32.735 "recv_buf_size": 2097152, 00:20:32.735 "send_buf_size": 2097152, 00:20:32.735 "enable_recv_pipe": true, 00:20:32.735 "enable_quickack": false, 00:20:32.735 "enable_placement_id": 0, 00:20:32.735 "enable_zerocopy_send_server": false, 00:20:32.735 "enable_zerocopy_send_client": false, 00:20:32.735 "zerocopy_threshold": 0, 00:20:32.735 "tls_version": 0, 00:20:32.735 "enable_ktls": false 00:20:32.735 } 00:20:32.735 }, 00:20:32.735 { 00:20:32.735 "method": "sock_impl_set_options", 00:20:32.735 "params": { 00:20:32.735 "impl_name": "posix", 00:20:32.735 "recv_buf_size": 2097152, 00:20:32.735 "send_buf_size": 2097152, 00:20:32.735 
"enable_recv_pipe": true, 00:20:32.735 "enable_quickack": false, 00:20:32.735 "enable_placement_id": 0, 00:20:32.735 "enable_zerocopy_send_server": true, 00:20:32.735 "enable_zerocopy_send_client": false, 00:20:32.735 "zerocopy_threshold": 0, 00:20:32.735 "tls_version": 0, 00:20:32.735 "enable_ktls": false 00:20:32.735 } 00:20:32.735 }, 00:20:32.735 { 00:20:32.735 "method": "sock_impl_set_options", 00:20:32.735 "params": { 00:20:32.735 "impl_name": "ssl", 00:20:32.735 "recv_buf_size": 4096, 00:20:32.735 "send_buf_size": 4096, 00:20:32.735 "enable_recv_pipe": true, 00:20:32.735 "enable_quickack": false, 00:20:32.735 "enable_placement_id": 0, 00:20:32.735 "enable_zerocopy_send_server": true, 00:20:32.735 "enable_zerocopy_send_client": false, 00:20:32.735 "zerocopy_threshold": 0, 00:20:32.735 "tls_version": 0, 00:20:32.735 "enable_ktls": false 00:20:32.735 } 00:20:32.735 } 00:20:32.735 ] 00:20:32.735 }, 00:20:32.735 { 00:20:32.735 "subsystem": "vmd", 00:20:32.735 "config": [] 00:20:32.735 }, 00:20:32.735 { 00:20:32.735 "subsystem": "accel", 00:20:32.735 "config": [ 00:20:32.735 { 00:20:32.735 "method": "accel_set_options", 00:20:32.735 "params": { 00:20:32.735 "small_cache_size": 128, 00:20:32.735 "large_cache_size": 16, 00:20:32.735 "task_count": 2048, 00:20:32.735 "sequence_count": 2048, 00:20:32.735 "buf_count": 2048 00:20:32.735 } 00:20:32.736 } 00:20:32.736 ] 00:20:32.736 }, 00:20:32.736 { 00:20:32.736 "subsystem": "bdev", 00:20:32.736 "config": [ 00:20:32.736 { 00:20:32.736 "method": "bdev_set_options", 00:20:32.736 "params": { 00:20:32.736 "bdev_io_pool_size": 65535, 00:20:32.736 "bdev_io_cache_size": 256, 00:20:32.736 "bdev_auto_examine": true, 00:20:32.736 "iobuf_small_cache_size": 128, 00:20:32.736 "iobuf_large_cache_size": 16 00:20:32.736 } 00:20:32.736 }, 00:20:32.736 { 00:20:32.736 "method": "bdev_raid_set_options", 00:20:32.736 "params": { 00:20:32.736 "process_window_size_kb": 1024 00:20:32.736 } 00:20:32.736 }, 00:20:32.736 { 00:20:32.736 "method": "bdev_iscsi_set_options", 00:20:32.736 "params": { 00:20:32.736 "timeout_sec": 30 00:20:32.736 } 00:20:32.736 }, 00:20:32.736 { 00:20:32.736 "method": "bdev_nvme_set_options", 00:20:32.736 "params": { 00:20:32.736 "action_on_timeout": "none", 00:20:32.736 "timeout_us": 0, 00:20:32.736 "timeout_admin_us": 0, 00:20:32.736 "keep_alive_timeout_ms": 10000, 00:20:32.736 "arbitration_burst": 0, 00:20:32.736 "low_priority_weight": 0, 00:20:32.736 "medium_priority_weight": 0, 00:20:32.736 "high_priority_weight": 0, 00:20:32.736 "nvme_adminq_poll_period_us": 10000, 00:20:32.736 "nvme_ioq_poll_period_us": 0, 00:20:32.736 "io_queue_requests": 512, 00:20:32.736 "delay_cmd_submit": true, 00:20:32.736 "transport_retry_count": 4, 00:20:32.736 "bdev_retry_count": 3, 00:20:32.736 "transport_ack_timeout": 0, 00:20:32.736 "ctrlr_loss_timeout_sec": 0, 00:20:32.736 "reconnect_delay_sec": 0, 00:20:32.736 "fast_io_fail_timeout_sec": 0, 00:20:32.736 "disable_auto_failback": false, 00:20:32.736 "generate_uuids": false, 00:20:32.736 "transport_tos": 0, 00:20:32.736 "nvme_error_stat": false, 00:20:32.736 "rdma_srq_size": 0, 00:20:32.736 "io_path_stat": false, 00:20:32.736 "allow_accel_sequence": false, 00:20:32.736 "rdma_max_cq_size": 0, 00:20:32.736 "rdma_cm_event_timeout_ms": 0, 00:20:32.736 "dhchap_digests": [ 00:20:32.736 "sha256", 00:20:32.736 "sha384", 00:20:32.736 "sha512" 00:20:32.736 ], 00:20:32.736 "dhchap_dhgroups": [ 00:20:32.736 "null", 00:20:32.736 "ffdhe2048", 00:20:32.736 "ffdhe3072", 00:20:32.736 "ffdhe4096", 00:20:32.736 "ffdhe6144", 
00:20:32.736 "ffdhe8192" 00:20:32.736 ] 00:20:32.736 } 00:20:32.736 }, 00:20:32.736 { 00:20:32.736 "method": "bdev_nvme_attach_controller", 00:20:32.736 "params": { 00:20:32.736 "name": "nvme0", 00:20:32.736 "trtype": "TCP", 00:20:32.736 "adrfam": "IPv4", 00:20:32.736 "traddr": "127.0.0.1", 00:20:32.736 "trsvcid": "4420", 00:20:32.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:32.736 "prchk_reftag": false, 00:20:32.736 "prchk_guard": false, 00:20:32.736 "ctrlr_loss_timeout_sec": 0, 00:20:32.736 "reconnect_delay_sec": 0, 00:20:32.736 "fast_io_fail_timeout_sec": 0, 00:20:32.736 "psk": "key0", 00:20:32.736 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:32.736 "hdgst": false, 00:20:32.736 "ddgst": false 00:20:32.736 } 00:20:32.736 }, 00:20:32.736 { 00:20:32.736 "method": "bdev_nvme_set_hotplug", 00:20:32.736 "params": { 00:20:32.736 "period_us": 100000, 00:20:32.736 "enable": false 00:20:32.736 } 00:20:32.736 }, 00:20:32.736 { 00:20:32.736 "method": "bdev_wait_for_examine" 00:20:32.736 } 00:20:32.736 ] 00:20:32.736 }, 00:20:32.736 { 00:20:32.736 "subsystem": "nbd", 00:20:32.736 "config": [] 00:20:32.736 } 00:20:32.736 ] 00:20:32.736 }' 00:20:32.736 15:24:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:32.736 15:24:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:32.736 15:24:41 -- common/autotest_common.sh@10 -- # set +x 00:20:32.736 [2024-04-24 15:24:41.862323] Starting SPDK v24.05-pre git sha1 0d1f30fbf / DPDK 23.11.0 initialization... 00:20:32.736 [2024-04-24 15:24:41.862720] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81913 ] 00:20:33.017 [2024-04-24 15:24:41.996915] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.017 [2024-04-24 15:24:42.113674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.347 [2024-04-24 15:24:42.299460] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.646 15:24:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:33.646 15:24:42 -- common/autotest_common.sh@850 -- # return 0 00:20:33.646 15:24:42 -- keyring/file.sh@120 -- # jq length 00:20:33.646 15:24:42 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:20:33.646 15:24:42 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:33.970 15:24:43 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:20:33.970 15:24:43 -- keyring/file.sh@121 -- # get_refcnt key0 00:20:33.970 15:24:43 -- keyring/common.sh@12 -- # get_key key0 00:20:33.970 15:24:43 -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:33.970 15:24:43 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:33.970 15:24:43 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:33.970 15:24:43 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:34.231 15:24:43 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:20:34.231 15:24:43 -- keyring/file.sh@122 -- # get_refcnt key1 00:20:34.231 15:24:43 -- keyring/common.sh@12 -- # get_key key1 00:20:34.231 15:24:43 -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:34.231 15:24:43 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:34.231 15:24:43 -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:34.231 15:24:43 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:34.488 15:24:43 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:20:34.488 15:24:43 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:20:34.488 15:24:43 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:20:34.488 15:24:43 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:20:34.746 15:24:43 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:20:34.746 15:24:43 -- keyring/file.sh@1 -- # cleanup 00:20:34.746 15:24:43 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.6L9VsiBmHb /tmp/tmp.cFKgxAgHlE 00:20:34.746 15:24:43 -- keyring/file.sh@20 -- # killprocess 81913 00:20:34.746 15:24:43 -- common/autotest_common.sh@936 -- # '[' -z 81913 ']' 00:20:34.746 15:24:43 -- common/autotest_common.sh@940 -- # kill -0 81913 00:20:34.746 15:24:43 -- common/autotest_common.sh@941 -- # uname 00:20:34.746 15:24:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:34.746 15:24:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81913 00:20:34.746 killing process with pid 81913 00:20:34.746 Received shutdown signal, test time was about 1.000000 seconds 00:20:34.746 00:20:34.746 Latency(us) 00:20:34.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.746 =================================================================================================================== 00:20:34.746 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:34.746 15:24:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:34.746 15:24:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:34.746 15:24:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81913' 00:20:34.746 15:24:43 -- common/autotest_common.sh@955 -- # kill 81913 00:20:34.746 15:24:43 -- common/autotest_common.sh@960 -- # wait 81913 00:20:35.004 15:24:44 -- keyring/file.sh@21 -- # killprocess 81648 00:20:35.004 15:24:44 -- common/autotest_common.sh@936 -- # '[' -z 81648 ']' 00:20:35.004 15:24:44 -- common/autotest_common.sh@940 -- # kill -0 81648 00:20:35.004 15:24:44 -- common/autotest_common.sh@941 -- # uname 00:20:35.004 15:24:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:35.004 15:24:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81648 00:20:35.004 killing process with pid 81648 00:20:35.004 15:24:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:35.004 15:24:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:35.004 15:24:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81648' 00:20:35.004 15:24:44 -- common/autotest_common.sh@955 -- # kill 81648 00:20:35.004 [2024-04-24 15:24:44.225351] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:35.004 15:24:44 -- common/autotest_common.sh@960 -- # wait 81648 00:20:35.570 ************************************ 00:20:35.571 END TEST keyring_file 00:20:35.571 ************************************ 00:20:35.571 00:20:35.571 real 0m15.917s 00:20:35.571 user 0m39.480s 00:20:35.571 sys 0m3.084s 00:20:35.571 15:24:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:35.571 15:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.571 
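
Condensing the steps the keyring_file test just walked through: against a running bdevperf instance, the PSK-based attach flow comes down to the sequence below. RPC names, socket, and NQNs are the ones used above; the key file path is a hypothetical placeholder, and it must be mode 0600, since keyring_file_add_key rejects group- or world-accessible files with "Operation not permitted".

  #!/usr/bin/env bash
  # Rough outline of the keyring + PSK attach flow exercised by keyring/file.sh.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  keyfile=/tmp/psk.key   # hypothetical path; must already hold an interchange-format key

  chmod 0600 "$keyfile"

  # 1. Register the key file with the keyring.
  "$rpc" -s "$sock" keyring_file_add_key key0 "$keyfile"

  # 2. Attach an NVMe-oF/TCP controller that authenticates with that PSK.
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

  # 3. Drive I/O through bdevperf against the new bdev.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests

  # 4. Tear down: detach the controller and drop the key.
  "$rpc" -s "$sock" bdev_nvme_detach_controller nvme0
  "$rpc" -s "$sock" keyring_file_remove_key key0
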
15:24:44 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:20:35.571 15:24:44 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:20:35.571 15:24:44 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:20:35.571 15:24:44 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:20:35.571 15:24:44 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:35.571 15:24:44 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:20:35.571 15:24:44 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:35.571 15:24:44 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:20:35.571 15:24:44 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:20:35.571 15:24:44 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:20:35.571 15:24:44 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:35.571 15:24:44 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:20:35.571 15:24:44 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:20:35.571 15:24:44 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:20:35.571 15:24:44 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:20:35.571 15:24:44 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:20:35.571 15:24:44 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:20:35.571 15:24:44 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:20:35.571 15:24:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:35.571 15:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:35.571 15:24:44 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:20:35.571 15:24:44 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:20:35.571 15:24:44 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:20:35.571 15:24:44 -- common/autotest_common.sh@10 -- # set +x 00:20:37.471 INFO: APP EXITING 00:20:37.471 INFO: killing all VMs 00:20:37.471 INFO: killing vhost app 00:20:37.471 INFO: EXIT DONE 00:20:37.764 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:37.764 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:20:37.764 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:20:38.703 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:38.703 Cleaning 00:20:38.703 Removing: /var/run/dpdk/spdk0/config 00:20:38.703 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:38.703 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:38.703 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:38.703 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:38.703 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:38.703 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:38.703 Removing: /var/run/dpdk/spdk1/config 00:20:38.703 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:38.703 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:38.703 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:20:38.703 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:38.703 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:38.703 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:38.703 Removing: /var/run/dpdk/spdk2/config 00:20:38.703 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:38.703 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:38.703 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:38.703 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:38.703 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:38.703 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:38.703 Removing: /var/run/dpdk/spdk3/config 00:20:38.703 
Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:38.703 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:38.703 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:38.703 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:38.703 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:38.703 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:38.703 Removing: /var/run/dpdk/spdk4/config 00:20:38.703 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:38.703 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:38.703 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:38.703 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:38.703 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:38.703 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:38.703 Removing: /dev/shm/nvmf_trace.0 00:20:38.703 Removing: /dev/shm/spdk_tgt_trace.pid58353 00:20:38.703 Removing: /var/run/dpdk/spdk0 00:20:38.703 Removing: /var/run/dpdk/spdk1 00:20:38.703 Removing: /var/run/dpdk/spdk2 00:20:38.703 Removing: /var/run/dpdk/spdk3 00:20:38.703 Removing: /var/run/dpdk/spdk4 00:20:38.703 Removing: /var/run/dpdk/spdk_pid58185 00:20:38.703 Removing: /var/run/dpdk/spdk_pid58353 00:20:38.703 Removing: /var/run/dpdk/spdk_pid58584 00:20:38.703 Removing: /var/run/dpdk/spdk_pid58669 00:20:38.703 Removing: /var/run/dpdk/spdk_pid58702 00:20:38.703 Removing: /var/run/dpdk/spdk_pid58826 00:20:38.703 Removing: /var/run/dpdk/spdk_pid58844 00:20:38.703 Removing: /var/run/dpdk/spdk_pid58972 00:20:38.703 Removing: /var/run/dpdk/spdk_pid59168 00:20:38.703 Removing: /var/run/dpdk/spdk_pid59315 00:20:38.703 Removing: /var/run/dpdk/spdk_pid59385 00:20:38.703 Removing: /var/run/dpdk/spdk_pid59466 00:20:38.703 Removing: /var/run/dpdk/spdk_pid59562 00:20:38.703 Removing: /var/run/dpdk/spdk_pid59648 00:20:38.703 Removing: /var/run/dpdk/spdk_pid59691 00:20:38.703 Removing: /var/run/dpdk/spdk_pid59730 00:20:38.703 Removing: /var/run/dpdk/spdk_pid59798 00:20:38.703 Removing: /var/run/dpdk/spdk_pid59922 00:20:38.703 Removing: /var/run/dpdk/spdk_pid60358 00:20:38.703 Removing: /var/run/dpdk/spdk_pid60414 00:20:38.703 Removing: /var/run/dpdk/spdk_pid60469 00:20:38.703 Removing: /var/run/dpdk/spdk_pid60485 00:20:38.703 Removing: /var/run/dpdk/spdk_pid60562 00:20:38.703 Removing: /var/run/dpdk/spdk_pid60578 00:20:38.703 Removing: /var/run/dpdk/spdk_pid60649 00:20:38.703 Removing: /var/run/dpdk/spdk_pid60665 00:20:38.703 Removing: /var/run/dpdk/spdk_pid60720 00:20:38.703 Removing: /var/run/dpdk/spdk_pid60738 00:20:38.703 Removing: /var/run/dpdk/spdk_pid60783 00:20:38.703 Removing: /var/run/dpdk/spdk_pid60801 00:20:38.703 Removing: /var/run/dpdk/spdk_pid60938 00:20:38.703 Removing: /var/run/dpdk/spdk_pid60978 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61058 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61124 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61152 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61223 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61268 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61306 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61345 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61390 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61424 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61468 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61507 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61545 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61591 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61629 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61668 00:20:38.703 Removing: 
/var/run/dpdk/spdk_pid61708 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61746 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61785 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61830 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61869 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61911 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61952 00:20:38.703 Removing: /var/run/dpdk/spdk_pid61991 00:20:38.703 Removing: /var/run/dpdk/spdk_pid62036 00:20:38.703 Removing: /var/run/dpdk/spdk_pid62111 00:20:38.703 Removing: /var/run/dpdk/spdk_pid62214 00:20:38.703 Removing: /var/run/dpdk/spdk_pid62538 00:20:38.703 Removing: /var/run/dpdk/spdk_pid62555 00:20:38.703 Removing: /var/run/dpdk/spdk_pid62601 00:20:38.703 Removing: /var/run/dpdk/spdk_pid62609 00:20:38.703 Removing: /var/run/dpdk/spdk_pid62630 00:20:38.703 Removing: /var/run/dpdk/spdk_pid62649 00:20:38.703 Removing: /var/run/dpdk/spdk_pid62668 00:20:38.703 Removing: /var/run/dpdk/spdk_pid62688 00:20:38.703 Removing: /var/run/dpdk/spdk_pid62708 00:20:38.703 Removing: /var/run/dpdk/spdk_pid62722 00:20:38.703 Removing: /var/run/dpdk/spdk_pid62743 00:20:38.963 Removing: /var/run/dpdk/spdk_pid62762 00:20:38.963 Removing: /var/run/dpdk/spdk_pid62781 00:20:38.963 Removing: /var/run/dpdk/spdk_pid62796 00:20:38.963 Removing: /var/run/dpdk/spdk_pid62821 00:20:38.963 Removing: /var/run/dpdk/spdk_pid62834 00:20:38.963 Removing: /var/run/dpdk/spdk_pid62851 00:20:38.963 Removing: /var/run/dpdk/spdk_pid62870 00:20:38.963 Removing: /var/run/dpdk/spdk_pid62889 00:20:38.963 Removing: /var/run/dpdk/spdk_pid62910 00:20:38.963 Removing: /var/run/dpdk/spdk_pid62950 00:20:38.963 Removing: /var/run/dpdk/spdk_pid62958 00:20:38.963 Removing: /var/run/dpdk/spdk_pid62993 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63066 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63104 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63108 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63146 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63161 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63163 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63218 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63237 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63271 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63279 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63294 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63304 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63313 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63327 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63332 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63347 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63385 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63417 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63426 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63465 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63475 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63482 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63532 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63544 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63580 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63587 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63595 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63608 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63614 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63623 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63636 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63638 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63722 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63775 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63895 00:20:38.963 Removing: /var/run/dpdk/spdk_pid63940 
00:20:38.963 Removing: /var/run/dpdk/spdk_pid63987 00:20:38.963 Removing: /var/run/dpdk/spdk_pid64007 00:20:38.963 Removing: /var/run/dpdk/spdk_pid64024 00:20:38.963 Removing: /var/run/dpdk/spdk_pid64044 00:20:38.963 Removing: /var/run/dpdk/spdk_pid64081 00:20:38.963 Removing: /var/run/dpdk/spdk_pid64096 00:20:38.963 Removing: /var/run/dpdk/spdk_pid64182 00:20:38.963 Removing: /var/run/dpdk/spdk_pid64198 00:20:38.963 Removing: /var/run/dpdk/spdk_pid64246 00:20:38.963 Removing: /var/run/dpdk/spdk_pid64327 00:20:38.963 Removing: /var/run/dpdk/spdk_pid64388 00:20:38.963 Removing: /var/run/dpdk/spdk_pid64417 00:20:38.963 Removing: /var/run/dpdk/spdk_pid64520 00:20:38.963 Removing: /var/run/dpdk/spdk_pid64572 00:20:38.963 Removing: /var/run/dpdk/spdk_pid64614 00:20:38.963 Removing: /var/run/dpdk/spdk_pid64874 00:20:38.963 Removing: /var/run/dpdk/spdk_pid64987 00:20:38.963 Removing: /var/run/dpdk/spdk_pid65024 00:20:38.963 Removing: /var/run/dpdk/spdk_pid65364 00:20:38.963 Removing: /var/run/dpdk/spdk_pid65402 00:20:38.963 Removing: /var/run/dpdk/spdk_pid65717 00:20:38.963 Removing: /var/run/dpdk/spdk_pid66143 00:20:38.963 Removing: /var/run/dpdk/spdk_pid66427 00:20:38.963 Removing: /var/run/dpdk/spdk_pid67231 00:20:38.963 Removing: /var/run/dpdk/spdk_pid68060 00:20:38.963 Removing: /var/run/dpdk/spdk_pid68182 00:20:38.963 Removing: /var/run/dpdk/spdk_pid68244 00:20:38.963 Removing: /var/run/dpdk/spdk_pid69522 00:20:38.963 Removing: /var/run/dpdk/spdk_pid69738 00:20:38.963 Removing: /var/run/dpdk/spdk_pid70052 00:20:38.963 Removing: /var/run/dpdk/spdk_pid70161 00:20:38.963 Removing: /var/run/dpdk/spdk_pid70294 00:20:38.963 Removing: /var/run/dpdk/spdk_pid70324 00:20:38.963 Removing: /var/run/dpdk/spdk_pid70346 00:20:38.963 Removing: /var/run/dpdk/spdk_pid70379 00:20:38.963 Removing: /var/run/dpdk/spdk_pid70471 00:20:38.963 Removing: /var/run/dpdk/spdk_pid70606 00:20:38.963 Removing: /var/run/dpdk/spdk_pid70761 00:20:38.963 Removing: /var/run/dpdk/spdk_pid70842 00:20:38.963 Removing: /var/run/dpdk/spdk_pid71035 00:20:38.963 Removing: /var/run/dpdk/spdk_pid71118 00:20:38.963 Removing: /var/run/dpdk/spdk_pid71211 00:20:38.963 Removing: /var/run/dpdk/spdk_pid71518 00:20:38.963 Removing: /var/run/dpdk/spdk_pid71902 00:20:39.221 Removing: /var/run/dpdk/spdk_pid71909 00:20:39.221 Removing: /var/run/dpdk/spdk_pid72189 00:20:39.221 Removing: /var/run/dpdk/spdk_pid72203 00:20:39.221 Removing: /var/run/dpdk/spdk_pid72223 00:20:39.221 Removing: /var/run/dpdk/spdk_pid72252 00:20:39.221 Removing: /var/run/dpdk/spdk_pid72258 00:20:39.221 Removing: /var/run/dpdk/spdk_pid72543 00:20:39.221 Removing: /var/run/dpdk/spdk_pid72590 00:20:39.221 Removing: /var/run/dpdk/spdk_pid72869 00:20:39.221 Removing: /var/run/dpdk/spdk_pid73071 00:20:39.221 Removing: /var/run/dpdk/spdk_pid73459 00:20:39.221 Removing: /var/run/dpdk/spdk_pid73956 00:20:39.221 Removing: /var/run/dpdk/spdk_pid74553 00:20:39.221 Removing: /var/run/dpdk/spdk_pid74555 00:20:39.221 Removing: /var/run/dpdk/spdk_pid76497 00:20:39.221 Removing: /var/run/dpdk/spdk_pid76567 00:20:39.221 Removing: /var/run/dpdk/spdk_pid76627 00:20:39.221 Removing: /var/run/dpdk/spdk_pid76687 00:20:39.221 Removing: /var/run/dpdk/spdk_pid76812 00:20:39.221 Removing: /var/run/dpdk/spdk_pid76877 00:20:39.221 Removing: /var/run/dpdk/spdk_pid76933 00:20:39.221 Removing: /var/run/dpdk/spdk_pid76993 00:20:39.221 Removing: /var/run/dpdk/spdk_pid77318 00:20:39.221 Removing: /var/run/dpdk/spdk_pid78492 00:20:39.221 Removing: /var/run/dpdk/spdk_pid78633 00:20:39.222 Removing: 
/var/run/dpdk/spdk_pid78881 00:20:39.222 Removing: /var/run/dpdk/spdk_pid79442 00:20:39.222 Removing: /var/run/dpdk/spdk_pid79605 00:20:39.222 Removing: /var/run/dpdk/spdk_pid79773 00:20:39.222 Removing: /var/run/dpdk/spdk_pid79870 00:20:39.222 Removing: /var/run/dpdk/spdk_pid80033 00:20:39.222 Removing: /var/run/dpdk/spdk_pid80146 00:20:39.222 Removing: /var/run/dpdk/spdk_pid80817 00:20:39.222 Removing: /var/run/dpdk/spdk_pid80847 00:20:39.222 Removing: /var/run/dpdk/spdk_pid80888 00:20:39.222 Removing: /var/run/dpdk/spdk_pid81147 00:20:39.222 Removing: /var/run/dpdk/spdk_pid81182 00:20:39.222 Removing: /var/run/dpdk/spdk_pid81216 00:20:39.222 Removing: /var/run/dpdk/spdk_pid81648 00:20:39.222 Removing: /var/run/dpdk/spdk_pid81664 00:20:39.222 Removing: /var/run/dpdk/spdk_pid81913 00:20:39.222 Clean 00:20:39.222 15:24:48 -- common/autotest_common.sh@1437 -- # return 0 00:20:39.222 15:24:48 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:20:39.222 15:24:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:39.222 15:24:48 -- common/autotest_common.sh@10 -- # set +x 00:20:39.479 15:24:48 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:20:39.480 15:24:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:39.480 15:24:48 -- common/autotest_common.sh@10 -- # set +x 00:20:39.480 15:24:48 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:39.480 15:24:48 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:39.480 15:24:48 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:39.480 15:24:48 -- spdk/autotest.sh@389 -- # hash lcov 00:20:39.480 15:24:48 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:20:39.480 15:24:48 -- spdk/autotest.sh@391 -- # hostname 00:20:39.480 15:24:48 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:39.738 geninfo: WARNING: invalid characters removed from testname! 
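
The coverage capture above and the merge/filter commands that follow are the standard lcov flow; condensed, with the long --rc and --no-external option lists omitted, it amounts to:

  #!/usr/bin/env bash
  # Condensed sketch of the lcov post-processing performed in this run.
  cd /home/vagrant/spdk_repo/spdk
  out=../output

  lcov -q -c -d . -t "$(hostname)" -o "$out/cov_test.info"                          # capture test counters
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"  # merge with baseline

  # Strip external and helper code from the combined tracefile.
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
  done
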
00:21:06.267 15:25:14 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:08.797 15:25:17 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:11.326 15:25:20 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:14.606 15:25:23 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:16.592 15:25:25 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:19.883 15:25:28 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:21.786 15:25:31 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:21:22.044 15:25:31 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:21:22.044 15:25:31 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]]
00:21:22.044 15:25:31 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:22.044 15:25:31 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:22.044 15:25:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:22.044 15:25:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:22.044 15:25:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:22.044 15:25:31 -- paths/export.sh@5 -- $ export PATH
00:21:22.044 15:25:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:22.044 15:25:31 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:21:22.044 15:25:31 -- common/autobuild_common.sh@435 -- $ date +%s
00:21:22.044 15:25:31 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713972331.XXXXXX
00:21:22.044 15:25:31 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713972331.Ev3nNU
00:21:22.044 15:25:31 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:21:22.044 15:25:31 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:21:22.044 15:25:31 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:21:22.044 15:25:31 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:21:22.044 15:25:31 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:21:22.044 15:25:31 -- common/autobuild_common.sh@451 -- $ get_config_params
00:21:22.044 15:25:31 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:21:22.044 15:25:31 -- common/autotest_common.sh@10 -- $ set +x
00:21:22.044 15:25:31 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring'
00:21:22.044 15:25:31 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:21:22.044 15:25:31 -- pm/common@17 -- $ local monitor
00:21:22.044 15:25:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:21:22.044 15:25:31 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=83632
00:21:22.044 15:25:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:21:22.044 15:25:31 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=83634
00:21:22.044 15:25:31 -- pm/common@21 -- $ date +%s
00:21:22.044 15:25:31 -- pm/common@26 -- $ sleep 1
00:21:22.044 15:25:31 -- pm/common@21 -- $ date +%s
00:21:22.044 15:25:31 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713972331
00:21:22.044 15:25:31 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713972331
00:21:22.044 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713972331_collect-vmstat.pm.log
00:21:22.044 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713972331_collect-cpu-load.pm.log
00:21:22.986 15:25:32 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:21:22.986 15:25:32 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:21:22.986 15:25:32 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:21:22.986 15:25:32 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:21:22.986 15:25:32 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:21:22.986 15:25:32 -- spdk/autopackage.sh@19 -- $ timing_finish
00:21:22.986 15:25:32 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:21:22.986 15:25:32 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:21:22.986 15:25:32 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:22.986 15:25:32 -- spdk/autopackage.sh@20 -- $ exit 0
00:21:22.986 15:25:32 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:21:22.986 15:25:32 -- pm/common@30 -- $ signal_monitor_resources TERM
00:21:22.986 15:25:32 -- pm/common@41 -- $ local monitor pid pids signal=TERM
00:21:22.986 15:25:32 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:21:22.986 15:25:32 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:21:22.986 15:25:32 -- pm/common@45 -- $ pid=83642
00:21:22.986 15:25:32 -- pm/common@52 -- $ sudo kill -TERM 83642
00:21:22.986 15:25:32 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:21:22.986 15:25:32 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:21:22.986 15:25:32 -- pm/common@45 -- $ pid=83643
00:21:22.986 15:25:32 -- pm/common@52 -- $ sudo kill -TERM 83643
00:21:23.244 + [[ -n 5256 ]]
00:21:23.244 + sudo kill 5256
00:21:23.256 [Pipeline] }
00:21:23.277 [Pipeline] // timeout
00:21:23.284 [Pipeline] }
00:21:23.302 [Pipeline] // stage
00:21:23.308 [Pipeline] }
00:21:23.326 [Pipeline] // catchError
00:21:23.337 [Pipeline] stage
00:21:23.343 [Pipeline] { (Stop VM)
00:21:23.359 [Pipeline] sh
00:21:23.703 + vagrant halt
00:21:27.888 ==> default: Halting domain...
00:21:34.540 [Pipeline] sh
00:21:34.818 + vagrant destroy -f
00:21:38.368 ==> default: Removing domain...
00:21:38.639 [Pipeline] sh
00:21:38.942 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:21:38.951 [Pipeline] }
00:21:38.969 [Pipeline] // stage
00:21:38.976 [Pipeline] }
00:21:38.992 [Pipeline] // dir
00:21:38.999 [Pipeline] }
00:21:39.017 [Pipeline] // wrap
00:21:39.024 [Pipeline] }
00:21:39.040 [Pipeline] // catchError
00:21:39.050 [Pipeline] stage
00:21:39.052 [Pipeline] { (Epilogue)
00:21:39.067 [Pipeline] sh
00:21:39.349 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:21:45.933 [Pipeline] catchError
00:21:45.935 [Pipeline] {
00:21:45.947 [Pipeline] sh
00:21:46.223 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:21:46.223 Artifacts sizes are good
00:21:46.231 [Pipeline] }
00:21:46.248 [Pipeline] // catchError
00:21:46.258 [Pipeline] archiveArtifacts
00:21:46.263 Archiving artifacts
00:21:46.419 [Pipeline] cleanWs
00:21:46.431 [WS-CLEANUP] Deleting project workspace...
00:21:46.431 [WS-CLEANUP] Deferred wipeout is used...
00:21:46.437 [WS-CLEANUP] done
00:21:46.439 [Pipeline] }
00:21:46.455 [Pipeline] // stage
00:21:46.461 [Pipeline] }
00:21:46.477 [Pipeline] // node
00:21:46.482 [Pipeline] End of Pipeline
00:21:46.522 Finished: SUCCESS