[00:06:49] a more appropriate question might be: is there support for SPDK on the ESXi hypervisor?
[00:07:02] i could find hits for QEMU/KVM though
[00:11:47] *** Quits: sidspdk (10f2ea16@gateway/web/freenode/ip.16.242.234.22) (Ping timeout: 256 seconds)
[00:13:00] sidspdk: hey. If you're talking about vhost (https://spdk.io/doc/vhost.html) then it's only suitable for type 2 hypervisors right now
[00:13:58] I'm not aware of any hypervisor supporting vhost other than QEMU, and ESXi is definitely out of the game
[00:16:04] you could probably do PCI passthrough in ESXi and run SPDK inside one of its VMs
[00:16:32] but it all depends on what you're trying to achieve
[01:03:24] *** Joins: sidspdk (10f2ea15@gateway/web/freenode/ip.16.242.234.21)
[01:12:58] *** Joins: travis-ci (~travis-ci@ec2-54-147-172-152.compute-1.amazonaws.com)
[01:12:59] (spdk/master) setup.sh: cleanup bdevperf trace files (Darek Stojaczyk)
[01:12:59] Diff URL: https://github.com/spdk/spdk/compare/2772d86c2314...a17d17de3e59
[01:12:59] *** Parts: travis-ci (~travis-ci@ec2-54-147-172-152.compute-1.amazonaws.com) ()
[02:36:51] *** Quits: sidspdk (10f2ea15@gateway/web/freenode/ip.16.242.234.21) (Ping timeout: 256 seconds)
[07:10:00] *** Joins: lhodev (~lhodev@66-90-218-190.dyn.grandenetworks.net)
[07:10:00] *** Quits: alekseymmm (050811aa@gateway/web/freenode/ip.5.8.17.170) (Quit: Page closed)
[07:44:42] is "retriger" working for testpool or only for jenkins?
[07:45:10] i think it's only jenkins currently
[07:45:58] can you retriger https://review.gerrithub.io/#/c/spdk/spdk/+/429911/ for me pls
[07:46:19] done
[07:46:26] (fyi - it's "retrigger")
[07:46:46] maybe klateck should have jenkins accept "retriger" too :)
[08:32:31] *** Joins: johnmeneghini (~johnmeneg@pool-108-20-29-249.bstnma.fios.verizon.net)
[08:53:21] where is the wiki or blog page that documents Ben's proposed threading APIs? He put this up on the screen last week at the meetup, but I can't find it.
[09:23:35] *** Quits: johnmeneghini (~johnmeneg@pool-108-20-29-249.bstnma.fios.verizon.net) (Quit: Leaving.)
[09:30:26] https://spdk.io/doc/thread_8h.html
[09:30:34] it was just the public API of the current code
[10:08:06] *** Joins: Tracy35 (~Tracy35@12.218.82.130)
[10:09:59] Good morning, @bwalker. It was nice meeting you at the dev meetup last week. Any patch on the nvmf disconnect handling you want me to test for you?
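For context on the thread_8h.html link above: it is the doxygen for SPDK's public threading API, which is built around message passing between threads rather than locking. Below is a minimal sketch of that model; the function names come from that header, but exact signatures and thread setup vary between SPDK releases, so treat it as illustrative only.

```c
/*
 * Illustrative sketch of SPDK's message-passing threading model, per the
 * public API in spdk/thread.h. Names are from that header; signatures and
 * thread registration differ across SPDK releases.
 */
#include <stdint.h>
#include "spdk/thread.h"

struct counter_ctx {
	uint64_t count;		/* owned exclusively by one SPDK thread */
};

/* Runs on the thread that owns ctx; delivered via that thread's message
 * ring, so no lock is needed around the shared state. */
static void
increment_on_owner(void *arg)
{
	struct counter_ctx *ctx = arg;

	ctx->count++;
}

/* Instead of locking, any thread asks the owning thread to do the work. */
static void
request_increment(struct spdk_thread *owner, struct counter_ctx *ctx)
{
	spdk_thread_send_msg(owner, increment_on_owner, ctx);
}
```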
[10:10:44] I have some patches out that I thought did what the Mellanox folks said I should be doing, but they don't pass our tests
[10:11:15] the first problem I hit was that it does not work the way they indicate it should for Soft RoCE, and I sent a note to the linux-rdma mailing list about it
[10:11:26] Sagi confirmed it's a bug in Soft RoCE and gave me a patch this morning that I need to try
[10:13:01] the second problem is that the code also does not appear to work the way they indicated it should on real Mellanox NICs, but the failure is much less obvious
[10:13:12] and I'm still debugging it to figure out if I messed it up or if I found a real bug
[10:13:52] and I'm on libibverbs 1.1.16.2, just the one that's packaged with Fedora 28
[10:14:01] so I may need to try it on the Mellanox OFED distribution
[10:14:11] or I may need to try it on a ConnectX-5 too
[10:15:18] https://review.gerrithub.io/#/c/spdk/spdk/+/429709/
[10:15:25] if you want to try that on your hardware, that's the patch
[10:59:45] *** Joins: travis-ci (~travis-ci@ec2-184-73-19-111.compute-1.amazonaws.com)
[10:59:46] (spdk/master) vhost/blk: check against hotremoved bdev in GET_CONFIG handler (Darek Stojaczyk)
[10:59:47] Diff URL: https://github.com/spdk/spdk/compare/decb59575b32...3f12b5fa1e74
[10:59:47] *** Parts: travis-ci (~travis-ci@ec2-184-73-19-111.compute-1.amazonaws.com) ()
[11:03:20] *** Joins: travis-ci (~travis-ci@ec2-54-80-114-29.compute-1.amazonaws.com)
[11:03:21] (spdk/master) iscsi: Fix double dequeue of the primary write task in error handling (Shuhei Matsumoto)
[11:03:22] Diff URL: https://github.com/spdk/spdk/compare/3f12b5fa1e74...64a268e2e110
[11:03:22] *** Parts: travis-ci (~travis-ci@ec2-54-80-114-29.compute-1.amazonaws.com) ()
[11:05:57] *** Joins: travis-ci (~travis-ci@ec2-54-157-22-243.compute-1.amazonaws.com)
[11:05:59] (spdk/master) Makefile: add ldflags target to PHONY (Darek Stojaczyk)
[11:05:59] Diff URL: https://github.com/spdk/spdk/compare/4f11c593d69c...89f567a763f9
[11:05:59] *** Parts: travis-ci (~travis-ci@ec2-54-157-22-243.compute-1.amazonaws.com) ()
[11:06:43] Thanks, @bwalker, for the information. I can help test it on ConnectX-5 + OFED. Hopefully it can be done by today or tomorrow morning. Will update you once it's done.
[11:06:54] *** Joins: travis-ci (~travis-ci@ec2-54-161-94-58.compute-1.amazonaws.com)
[11:06:55] (spdk/master) virtio: support dynamic memory registrations (Dariusz Stojaczyk)
[11:06:55] Diff URL: https://github.com/spdk/spdk/compare/89f567a763f9...23670d8db75f
[11:06:55] *** Parts: travis-ci (~travis-ci@ec2-54-161-94-58.compute-1.amazonaws.com) ()
[11:11:36] thank you!
[11:12:01] I'm actually doing a quick update to it - apparently I'm only allowed to post send operations after a qp has been transitioned to the error state
[11:12:05] not both sends and receives
[11:12:17] so I'll do the dummy operation for the send side
[11:12:26] and for the receive side I'll wait until I've hit a cq drained condition at least once
[11:18:51] Hope waiting until the cq drained condition is hit will help the receive side.
[11:20:37] yeah, the patch is in the test queue now
[15:26:50] Tracy35: I just figured out what I think the problem is; going to do some different patches and test it out
[15:50:04] *** Joins: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97)
[16:28:13] jimharris: ping
[16:28:26] hey
[16:28:41] Can we "revisit" https://review.gerrithub.io/c/spdk/spdk/+/429963 ?
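The qp-drain sequence bwalker describes above (transition the qp to the error state, post a dummy send, then poll the cq until the flushed completions drain) would look roughly like the following in libibverbs. This is a sketch only: the sentinel wr_id and the exact drain check are assumptions, and error handling is trimmed.

```c
/*
 * Rough sketch of the drain sequence discussed above. Uses standard
 * libibverbs calls; DRAIN_WR_ID is a hypothetical sentinel.
 */
#include <infiniband/verbs.h>
#include <string.h>

#define DRAIN_WR_ID 0xdeadbeefULL	/* hypothetical sentinel value */

static int
drain_qp(struct ibv_qp *qp, struct ibv_cq *cq)
{
	struct ibv_qp_attr attr;
	struct ibv_send_wr wr, *bad_wr;
	struct ibv_wc wc;
	int rc;

	/* 1. Transition the qp to the error state; outstanding work
	 *    requests complete with status IBV_WC_WR_FLUSH_ERR. */
	memset(&attr, 0, sizeof(attr));
	attr.qp_state = IBV_QPS_ERR;
	rc = ibv_modify_qp(qp, &attr, IBV_QP_STATE);
	if (rc != 0) {
		return rc;
	}

	/* 2. Post a zero-length, signaled dummy send - per the discussion
	 *    above, only sends (not receives) may be posted in this state. */
	memset(&wr, 0, sizeof(wr));
	wr.wr_id = DRAIN_WR_ID;
	wr.opcode = IBV_WR_SEND;
	wr.send_flags = IBV_SEND_SIGNALED;
	rc = ibv_post_send(qp, &wr, &bad_wr);
	if (rc != 0) {
		return rc;
	}

	/* 3. Poll until the dummy's completion arrives; every send posted
	 *    before it has been flushed by then. */
	do {
		rc = ibv_poll_cq(cq, 1, &wc);
	} while (rc == 0 || (rc > 0 && wc.wr_id != DRAIN_WR_ID));

	return rc < 0 ? rc : 0;
}
```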
[16:29:31] In particular, I grok your idea about providing an INSTALL_APPNAME, but then that, well, doesn't follow the naming convention that DPDK uses.
[16:33:08] spdk_example_nvmeperf wouldn't work?
[16:34:46] Well... that's a little bit more than a mouthful, but, um, yeah, that "works". And it's not without precedent; i.e. there are some other hefty long-named entities in /usr/bin
[16:35:22] it is a mouthful - but with multiple hello_world applications in our examples directory, we need something to differentiate them
[16:37:59] Adding another indirection for creating the name beyond prepending "spdk_example_" is certainly doable, but do we want to add that complexity? If we want to name it nvmeperf, then shouldn't we just rename it to build that way instead of just "perf"?
[17:18:25] lhodev: i'm open to that - it will take some people getting used to the new name but it's probably the right decision
[17:19:41] @bwalker, that is good news. Will wait for your update to do the test.
[17:21:29] jimharris: I totally get that for historical reasons the name change may ruffle some feathers. There'd also be many places in the documentation and test files that would need to change in kind.
[17:22:52] are you hoping to get the examples packaging in for 18.10?
[17:23:15] or can we defer this to 19.01?
[17:24:54] I was really hoping to get the spdk example binaries installed for 18.10. OK to postpone the example source/build stuff until later, but I was definitely eager to get the binaries there for 18.10.
[17:26:00] If we only did the "spdk_example_" prepend and didn't rename perf to nvmeperf, then we wouldn't have the huge changes sprinkled all over to worry about. Just sayin' ;-) ;-)
[17:27:00] we have to do some kind of renaming - there are four different hello_world apps in examples/
[17:28:37] what is the minimal set of example binaries you would want installed for 18.10?
[17:29:22] i need to run - let's pick this up tomorrow - bwalker and pwodkowx will want to weigh in for sure
[17:30:02] jimharris: Understood. Catch up tomorrow.
[20:06:19] *** Quits: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97) (Ping timeout: 256 seconds)
[20:27:47] *** Quits: Tracy35 (~Tracy35@12.218.82.130) (Ping timeout: 240 seconds)
[23:31:06] *** Joins: sidspdk (10f2ea15@gateway/web/freenode/ip.16.242.234.21)
[23:31:34] hi darsto, thanks for your response.
[23:32:34] I have a server box running the bare-metal ESXi hypervisor
[23:33:01] I have created an ubuntu VM, and i have connected a bunch of NVMe disks to the server box
[23:33:37] was wondering, can i carve out a volume from the 4 nvme drives i have and then write a hello-world kind of app to exercise it?
[23:34:33] the ubuntu vm is hosted on top of the ESXi hypervisor
[23:35:05] looks like i can install the spdk drivers inside the ubuntu box and carve out a volume from the 4 nvme devices
[23:35:09] correct me if otherwise
[23:35:39] provided i expose the NVMe devices as passthrough
[23:36:20] can someone throw some light on this basic question? ...thanks for your time
[23:39:32] that's correct, as long as ESXi offers a way to pass through PCI devices
[23:41:05] I'd expect it to be possible, but I'm not an ESXi expert
[23:44:23] yes it has that provision; i am able to present PCI passthrough and detect it inside ubuntu
[23:44:45] dumb question ...do i just have to install the spdk driver packages and i'm ready to go?
[23:45:00] does it provide RAID functionality as well?
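To make the hello-world question above concrete: once the passed-through NVMe devices are unbound from the kernel driver (e.g. via scripts/setup.sh), a minimal probe app looks roughly like the sketch below, modeled loosely on examples/nvme/hello_world in the SPDK tree. Treat it as illustrative rather than a verbatim copy of that example.

```c
/*
 * Minimal "hello world" style probe of locally attached (or passed-through)
 * NVMe devices, loosely modeled on examples/nvme/hello_world. Assumes the
 * devices have already been unbound from the kernel nvme driver.
 */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("probing %s\n", trid->traddr);
	return true;	/* attach to every controller we find */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("attached to %s, %u namespace(s)\n", trid->traddr,
	       spdk_nvme_ctrlr_get_num_ns(ctrlr));
}

int
main(void)
{
	struct spdk_env_opts env_opts;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "hello_nvme";
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "failed to initialize SPDK env\n");
		return 1;
	}

	/* A NULL transport ID enumerates all local PCIe NVMe controllers. */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		fprintf(stderr, "spdk_nvme_probe() failed\n");
		return 1;
	}
	return 0;
}
```

On the RAID question at the end: combining the four devices into one volume is normally done a layer up, in SPDK's bdev layer (which around this time offered striping/RAID0 and logical volume modules), rather than in the NVMe driver itself; the current bdev documentation is the place to confirm what's available.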