[00:34:29] *** Quits: guerby (~guerby@april/board/guerby) (Read error: Connection reset by peer)
[00:35:18] *** Joins: guerby (~guerby@april/board/guerby)
[09:54:24] jimharris, wrt stubs that memset a struct, I think the easiest is to not use a macro to declare the func, unless you know of a slick way to do what I was trying to do?
[10:00:13] could we do a per-member initialization instead of the memset?
[10:00:27] I don't know - maybe that's no simpler
[10:00:55] I'm OK just punting and not using the macro for those functions
[10:01:41] jimharris, yeah I think there's a quick point of diminishing returns with these things...
[10:11:22] I think you should remove the {val} and go back to just val in the macro, then the user can pass in {0} to initialize structs
[10:11:31] or even {.member1 = blah, .member2 = blah}
[10:25:53] bwalker, good idea, I'll try that
[10:25:56] thanks
[12:47:07] drv: concerned that the new vhost-blk tests add 1 minute to the per-patch test time
[12:53:28] hmm, wkb-fedora-08 doesn't seem that much longer (~4m total), but vm-fedora-03 takes significantly longer (~7m total)
[12:54:04] not sure there's much we can do about it right now
[12:54:32] maybe tricks like starting the VMs in parallel, or starting a VM with both a virtio-blk and a virtio-scsi device
[12:54:53] yeah, that might be an interesting test case anyway
[13:01:40] this bdev stuff is leading me down all kinds of dark paths
[13:02:49] :)
[13:03:36] it probably deserves to have some light shined on it
[13:04:12] sounds like a counseling phrase my wife would use
[13:05:04] so wanting to just let the bdev registration stuff happen in the background - i.e. don't hold up other subsystem initialization etc.
[13:06:05] but...bdevio does this thing where one reactor thread is doing CUnit stuff and blocking, waiting for completions from the other reactor thread
[13:06:51] so once the main reactor thread running the CUnit stuff starts, it never returns back up to the reactor loop
[13:07:12] hmm, bdevio using CUnit was always a bit weird anyway - maybe we should rework it to be async?
[13:07:26] *** Quits: sethhowe (~sethhowe@134.134.139.72) (Remote host closed the connection)
[13:07:34] but that function starts before gpt has had a chance to complete its I/O, so its channel is still open when we do the reset test
[13:07:54] and the reset test hangs, because we can't propagate the abort message to that channel that's still open on the blocking thread
[13:08:01] yeah - that's what I'm looking at next
[13:09:27] *** Joins: sethhowe (~sethhowe@192.55.54.42)
[13:25:18] *** Joins: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl)
[13:28:30] jimharris: is your current patch series going to implement that idea of virtual bdevs getting to re-scan after close somehow?
[13:28:49] e.g. to see updated GPT partition tables when mounted via the NBD test app without restarting it
[13:30:24] eventually - probably 3289 patches from now :)
[13:30:53] heh, ok, no rush
[13:31:21] I want to add a test that calls the get_bdevs RPC with a GPT bdev active, so we can test out e.g. https://review.gerrithub.io/#/c/368610/
[13:31:42] maybe I'll just spin up the nbd app a second time to run that test
[13:34:21] oh wow - here's a doozy
[13:34:49] ok, so the gpt module currently is only set up to check for a GPT on bdevs registered during init (not via RPC)
[13:35:01] and i'm changing that with my patches
[13:35:26] the ceph rbd tests create an rbd bdev via RPC
[13:36:20] the rbd poller runs - it completes the gpt I/O to the vbdev gpt module, which sees that there is no GPT and immediately deletes its channel and closes its bdev desc
[13:37:00] of course that returns back to the rbd poller, which tries to check for the next I/O completion, but we've already deleted the rbd context when the channel got destroyed
[15:00:13] sethhowe, do you have a list of the package names (script somewhere you can point me to) for CentOS? Got an email from someone asking about pre-req issues with packages not being found, so I assume the names are slightly different
[15:05:34] yes, I just need one second to find my list. Since it's an older version of the kernel, there are a couple of differences. Are they trying to run the build pool or just spdk?
[15:07:12] spdk only
[15:07:56] I'll fire up a VM here also just to mess with it. He's using CentOS 7 with kernel 4.8.0-rc5.
[15:15:31] gotta love vagrant, centos VM up 8 minutes later :)
[15:22:43] I just sent you the list of dependencies. It's basically the same as the list from the build pool bootstrap script. If I remember right from when I set it up, there weren't any issues with CentOS 7 and the dependencies.
[15:25:17] hmmm, OK. I'll match his versions and try it. thanks man
[15:38:32] peluse: just thought of this. the version of gcc that yum provides on CentOS is older and doesn't contain ubsan. So you may need to make sure they aren't adding the ubsan option to their configuration when building.
[15:39:19] if they are calling autobuild to build the source, that flag is turned on by default and they need to include an autorun-spdk.conf file to disable it.
[15:41:57] K, the VM I just fired up couldn't find cunit-devel for some reason, that's one that he wasn't getting also
[15:43:22] *** Joins: anbib (6c1ac849@gateway/web/freenode/ip.108.26.200.73)
[15:49:21] *** Quits: anbib (6c1ac849@gateway/web/freenode/ip.108.26.200.73) (Quit: Page closed)
[15:56:00] That's odd. Try sudo yum install epel-release, then sudo yum install CUnit-devel.
[15:57:40] I had to install epel-release to get access to tmux for the build pool, but I didn't think it was applicable for any of the other packages actually required for spdk.
[17:09:24] *** Joins: ziyeyang_ (~ziyeyang@134.134.139.76)
[17:11:27] *** Quits: ziyeyang_ (~ziyeyang@134.134.139.76) (Client Quit)
[17:11:39] hmm, OK
[17:18:14] that did it, thanks
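The fix that worked at the end of the log, as a setup fragment (assuming CentOS 7; note the package name is case-sensitive, `CUnit-devel`, and is served from EPEL rather than the base repos):

```shell
# CUnit-devel was not found in the base CentOS 7 repos; enable EPEL first.
sudo yum install -y epel-release
sudo yum install -y CUnit-devel
```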