[00:33:21] *** Joins: tkulasek (~tkulasek@192.55.54.40)
[01:03:06] *** Joins: felipef (~felipef@cpc92310-cmbg19-2-0-cust421.5-4.cable.virginm.net)
[01:08:05] hey all, I'm doing some tests with v17.10 and, when submitting a larger number of outstanding requests in parallel, am getting -ENOMEM back from spdk_nvme_ns_cmd_writev(), which suggests _nvme_ns_cmd_rw() is failing somehow. Does anyone have obvious tips before I resort to some creative breakpoints?
[01:23:49] *** Quits: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl) (Read error: Connection reset by peer)
[01:24:50] *** Joins: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl)
[01:26:45] felipef: either the NVMe queue is full, or the NVMe driver ran out of its internal buffers
[01:27:28] you can see how the bdev layer handles this case - it queues the I/O to be resent after some other I/O completes
[01:28:51] In bdev_nvme_queue_cmd() I can see it logging an error if that happens.
[01:30:01] And actually returning an error for the I/O. Is there a way to avoid this happening altogether? Can I make the internal buffers or the nvme queues bigger?
[01:30:05] let me check the code
[01:30:12] (at least on v17.10)
[01:30:50] yes. There are a couple of size #defines you can increase
[01:31:27] but you still need to handle the -ENOMEM case
[01:32:22] Maybe I should put in the time to figure out exactly what I'm hitting before fiddling with things.
[01:33:17] Well, is there a runtime condition where _nvme_ns_cmd_rw() returns NULL which I can't avoid by pre-calculating my queue sizes correctly?
[01:33:27] the logging in bdev_nvme_queue_cmd you mention has a condition: if (rc != 0 && rc != -ENOMEM) {
[01:36:19] DEFAULT_IO_QUEUE_REQUESTS in nvme_internal.h
[01:36:58] I can't give you the details on how to tweak it though - maybe @jimharris could
[01:37:40] I was also looking at bdev_nvme_get_buf_cb(), which apparently just completes the I/O with SPDK_BDEV_IO_STATUS_NOMEM.
[01:38:01] (and bdev_nvme_submit_request())
[01:38:40] yes, and that schedules the I/O to be resent later
[01:40:10] But you are right, I can see various other paths handling the ENOMEM. I just think I should avoid that if at all possible by making sure my queues are big enough.
[01:40:42] I'll try to work out what's causing it and report back...
[01:51:10] At one point I tried increasing DEFAULT_IO_QUEUE_REQUESTS to 16384. It makes us allocate 2MB per qpair
[01:51:17] and nothing exploded
[01:55:22] :)
[01:56:37] Ultimately I'll need to handle the ENOMEM. Just wondering whether I should do that now and forget about this, or put the effort into working out why it's happening.
[02:07:45] *** Quits: tomzawadzki (tomzawadzk@nat/intel/x-hwqejthksggadckl) (Ping timeout: 248 seconds)
[03:19:05] I think I'm convinced that there's little point in trying to submit more requests down the pipe. I've hacked up a retry mechanism in my layer and it seems to be working.
[03:19:11] Thanks for the help!
[03:27:00] nice
[04:21:19] *** Quits: dlw (~Thunderbi@114.255.44.143) (Ping timeout: 256 seconds)
[04:49:16] *** Joins: dlw (~Thunderbi@114.255.44.141)
[05:02:07] *** Joins: dlw1 (~Thunderbi@114.255.44.141)
[05:02:07] *** Quits: dlw (~Thunderbi@114.255.44.141) (Read error: Connection reset by peer)
[05:02:09] *** dlw1 is now known as dlw
[06:44:07] darsto, not sure, can you email me the ZNC server info (IP at least)? I'll check it out, thanks!
[06:49:22] sent on priv
[06:49:48] there's also a command to clear a buffer manually
[06:50:04] i don't remember it now, let me check
[06:50:59] . /msg *status clearbuffer darsto
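Going back to the -ENOMEM discussion from earlier this morning, a minimal sketch of the queue-and-retry pattern the bdev layer uses and that felipef ended up reimplementing in his own layer: park the I/O on a pending list when spdk_nvme_ns_cmd_writev() returns -ENOMEM, and resubmit it from a completion callback once a request object frees up. The struct my_io, the pending list, and the single-buffer SGL callbacks are illustrative application code, not SPDK API.

    #include <errno.h>

    #include "spdk/nvme.h"
    #include "spdk/queue.h"

    /* Illustrative per-I/O context owned by the application. */
    struct my_io {
        struct spdk_nvme_ns     *ns;
        struct spdk_nvme_qpair  *qpair;
        uint64_t                lba;
        uint32_t                lba_count;
        void                    *buf;           /* single contiguous data buffer */
        uint32_t                buf_len;
        uint32_t                sgl_offset;
        TAILQ_ENTRY(my_io)      link;
    };

    static TAILQ_HEAD(, my_io) g_pending = TAILQ_HEAD_INITIALIZER(g_pending);

    /* Trivial SGL callbacks covering one contiguous buffer. */
    static void
    my_reset_sgl(void *cb_arg, uint32_t offset)
    {
        struct my_io *io = cb_arg;

        io->sgl_offset = offset;
    }

    static int
    my_next_sge(void *cb_arg, void **address, uint32_t *length)
    {
        struct my_io *io = cb_arg;

        *address = (uint8_t *)io->buf + io->sgl_offset;
        *length = io->buf_len - io->sgl_offset;
        return 0;
    }

    static void write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl);

    static int
    submit_writev(struct my_io *io)
    {
        return spdk_nvme_ns_cmd_writev(io->ns, io->qpair, io->lba, io->lba_count,
                                       write_done, io, 0, my_reset_sgl, my_next_sge);
    }

    static int
    submit_or_queue(struct my_io *io)
    {
        int rc = submit_writev(io);

        if (rc == -ENOMEM) {
            /* NVMe queue full or the driver is out of request objects:
             * park the I/O and retry after a completion. */
            TAILQ_INSERT_TAIL(&g_pending, io, link);
            return 0;
        }
        return rc;
    }

    static void
    write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        struct my_io *retry;

        /* ... handle completion of cb_arg, e.g. check spdk_nvme_cpl_is_error(cpl) ... */

        /* A request object just freed up; drain as much of the backlog as fits. */
        while ((retry = TAILQ_FIRST(&g_pending)) != NULL) {
            TAILQ_REMOVE(&g_pending, retry, link);
            if (submit_writev(retry) == -ENOMEM) {
                /* Still full: put it back and wait for the next completion. */
                TAILQ_INSERT_HEAD(&g_pending, retry, link);
                break;
            }
        }
    }

On v17.10 the per-qpair request pool is sized by DEFAULT_IO_QUEUE_REQUESTS in nvme_internal.h, so raising that reduces how often this path is taken, but as noted above the -ENOMEM case still has to be handled.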
[06:51:05] OK, just checked the option. Will exit and restart and see what happens
[06:51:09] (no dot, of course)
[06:51:50] the above will clear the conversation with darsto
[06:52:51] darsto, that did it!!! thanks, man, that was annoying the hell out of me. bwalker check it out...
[06:56:37] *** Quits: dlw (~Thunderbi@114.255.44.141) (Quit: dlw)
[07:12:26] did anyone by any chance look into the nightly test failure from the night before last? Last night looked OK https://ci.spdk.io/spdk/nightly_status.html
[07:36:47] the rocksdb nightly test case has failed again https://ci.spdk.io/spdk/builds/review/8ed528a7c5fb01a26f2acfac25819fcfee0280ab.1524054229/fedora-04/build.log
[07:41:48] there is a patch that moves this test case to RUN_NIGHTLY_FAILING https://review.gerrithub.io/#/c/406975/
[07:47:54] *** Quits: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl) (Ping timeout: 256 seconds)
[07:48:39] pniedzwx, thanks, yeah that's not last night though, right - that's from the night before last
[07:49:04] and thanks for pointing out that last patch
[08:42:49] *** Quits: lhodev (~lhodev@66-90-218-190.dyn.grandenetworks.net) (Quit: My MacBook has gone to sleep. ZZZzzz…)
[08:46:55] *** Quits: felipef (~felipef@cpc92310-cmbg19-2-0-cust421.5-4.cable.virginm.net) (Quit: Leaving...)
[08:53:57] *** Joins: tkulasek_ (~tkulasek@134.134.139.72)
[08:53:58] *** Quits: tkulasek (~tkulasek@192.55.54.40) (Ping timeout: 265 seconds)
[09:31:21] jimharris, bwalker: any thoughts on changpe1's updated perf request allocation patch? https://review.gerrithub.io/#/c/403256/
[09:32:16] i +2'd pniedzwx's patch - i'll keep debugging this
[09:36:07] drv: done - I agree with you, we calculate the number of requests we need in register_ns() - we should save that and then specify it in the io_qpair_opts when allocating the io qpair
[09:36:30] yeah, instead of setting it in probe_cb, we can do it later when we actually allocate the io qpair
[09:36:36] that way we have the real io queue size available
[09:47:37] *** Quits: tkulasek_ (~tkulasek@134.134.139.72) (Ping timeout: 265 seconds)
[10:05:27] *** Quits: mphardy (~mphardy@pool-72-83-7-2.washdc.fios.verizon.net) (Ping timeout: 240 seconds)
[10:07:28] *** Joins: mphardy (~mphardy@pool-72-83-7-2.washdc.fios.verizon.net)
[10:18:24] *** Joins: lhodev (~lhodev@inet-hqmc01-o.oracle.com)
[10:27:35] jimharris: bwalker updated the first patch in his static analysis fix series: https://review.gerrithub.io/#/c/408240/
[10:27:41] once that's reviewed, we can merge the whole series
[10:36:52] *** Quits: lhodev (~lhodev@inet-hqmc01-o.oracle.com) (Quit: My MacBook has gone to sleep. ZZZzzz…)
[10:45:53] *** Joins: lhodev (~lhodev@inet-hqmc03-o.oracle.com)
[10:51:37] *** Joins: travis-ci (~travis-ci@ec2-54-157-245-206.compute-1.amazonaws.com)
[10:51:38] (spdk/master) bdev/qos: add RPC method to set QoS at runtime (GangCao)
[10:51:38] Diff URL: https://github.com/spdk/spdk/compare/c1f7f02cfefb...ffba4fdbc327
[10:51:38] *** Parts: travis-ci (~travis-ci@ec2-54-157-245-206.compute-1.amazonaws.com) ()
[10:53:10] *** Joins: travis-ci (~travis-ci@ec2-54-234-35-129.compute-1.amazonaws.com)
[10:53:11] (spdk/master) github: Add issue tracker template (Paul Luse)
[10:53:11] Diff URL: https://github.com/spdk/spdk/compare/ffba4fdbc327...b67f1afe8e48
[10:53:11] *** Parts: travis-ci (~travis-ci@ec2-54-234-35-129.compute-1.amazonaws.com) ()
[10:55:53] doesn't this also need to check if the admin command belongs to the process calling check_timeout()?
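A sketch of the direction discussed for the perf request allocation patch: size the request pool when allocating the qpair instead of editing DEFAULT_IO_QUEUE_REQUESTS. It assumes the io_queue_requests member of spdk_nvme_io_qpair_opts referred to above; queue_depth and the 2x headroom factor are illustrative, not values from the patch.

    #include "spdk/nvme.h"

    static struct spdk_nvme_qpair *
    alloc_sized_io_qpair(struct spdk_nvme_ctrlr *ctrlr, uint32_t queue_depth)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));

        /* Every queued command consumes at least one request object, and
         * splitting (e.g. for max transfer size) can consume more, so leave
         * some headroom above the intended queue depth. */
        if (opts.io_queue_requests < queue_depth * 2) {
            opts.io_queue_requests = queue_depth * 2;
        }

        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }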
[10:57:06] that's a good point - I think the original "make timeout function per process" patch should have done that
[10:57:12] hmm, yes you are probably right
[10:57:25] agreed - it's not really associated with this patch, but I think it's a rather simple thing to fix here
[10:57:31] can we check this one in as a fix for the "don't return admin qpair" and then do another patch?
[10:57:34] is there somewhere that the originating process is stored?
[10:57:36] to avoid having to re-run the whole series again
[10:57:36] ok with me
[10:57:40] yep
[10:57:47] we put the pid in the request
[10:58:43] oh, easy
[10:58:47] k, will fix
[10:59:02] i'm fine doing it as a separate patch at the end of this series
[11:22:06] *** Quits: lhodev (~lhodev@inet-hqmc03-o.oracle.com) (Remote host closed the connection)
[11:44:14] i finished reviews on tomek k's snapshot/blob patches - overall they are in really good shape but need a few more changes
[12:15:21] *** Quits: sethhowe (~sethhowe@192.55.54.42) (Remote host closed the connection)
[12:17:06] *** Joins: sethhowe (~sethhowe@134.134.139.73)
[12:40:43] *** Joins: travis-ci (~travis-ci@ec2-54-145-111-70.compute-1.amazonaws.com)
[12:40:44] (spdk/master) iscsi: fix nonsensical poll_group asserts (Daniel Verkamp)
[12:40:44] Diff URL: https://github.com/spdk/spdk/compare/b67f1afe8e48...ec013016ed64
[12:40:44] *** Parts: travis-ci (~travis-ci@ec2-54-145-111-70.compute-1.amazonaws.com) ()
[12:46:48] *** Joins: param (~param@157.49.239.59)
[12:52:06] *** Joins: travis-ci (~travis-ci@ec2-54-145-111-70.compute-1.amazonaws.com)
[12:52:07] (spdk/master) vhost: Fix negative array index use in spdk_vhost_set_socket_path (Ben Walker)
[12:52:07] Diff URL: https://github.com/spdk/spdk/compare/ec013016ed64...0692768e69d3
[12:52:07] *** Parts: travis-ci (~travis-ci@ec2-54-145-111-70.compute-1.amazonaws.com) ()
[13:07:41] *** Quits: param (~param@157.49.239.59) (Quit: Going offline, see ya! (www.adiirc.com))
[13:07:58] *** Joins: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl)
[13:53:02] Hi all, in about 1 hour, at UTC 22:00, we have our scheduled build pool update. This month's update will be lighter since we are getting ready for the release. I will be installing VPP on several test machines and updating test flags to re-balance testing time.
[14:07:52] *** Joins: lhodev (~lhodev@66-90-218-190.dyn.grandenetworks.net)
[14:40:49] bwalker: can you review these two patches: https://review.gerrithub.io/#/c/408388/
[14:40:52] *** Quits: sethhowe (~sethhowe@134.134.139.73) (Remote host closed the connection)
[14:43:03] *** Joins: sethhowe (~sethhowe@192.55.54.42)
[14:43:16] just did
[14:57:05] *** Joins: travis-ci (~travis-ci@ec2-54-234-35-129.compute-1.amazonaws.com)
[14:57:06] (spdk/master) nvme: require trid to be valid in nvme_ctrlr_probe (Daniel Verkamp)
[14:57:06] Diff URL: https://github.com/spdk/spdk/compare/0692768e69d3...3fa7c33ac120
[14:57:06] *** Parts: travis-ci (~travis-ci@ec2-54-234-35-129.compute-1.amazonaws.com) ()
[15:04:42] I will be taking the spdk build pool down in about 15 minutes, after "lvol/doc: update lvol documentation" finishes running.
[15:15:38] bwalker: if QoS is not enabled on a bdev, this patch will report 0 - do we want that, or should it just omit it in that case?
[15:16:18] i'm fine either way
[15:16:43] i see us changing this when we start doing QoS by BW (not IOPs) and read v. write anyways
[15:17:20] which patch
[15:17:27] the rpc to show qos iops?
[15:18:28] I think so far we've taken 0 to mean unlimited (i.e. qos disabled)
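To illustrate the per-process check_timeout() fix agreed on above (the pid stored in each request decides whose poller may report the timeout), a sketch using a stand-in struct rather than the driver's internal struct nvme_request:

    #include <stdbool.h>
    #include <stdint.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Stand-in for the per-request bookkeeping the driver keeps. */
    struct tracked_request {
        pid_t       pid;            /* recorded when the command was submitted */
        uint64_t    submit_tick;    /* tick count at submission */
    };

    static bool
    request_timed_out_for_caller(const struct tracked_request *req,
                                 uint64_t now_tick, uint64_t timeout_ticks)
    {
        if (req->pid != getpid()) {
            /* Submitted by another process; that process's own poller
             * should run the timeout callback for it, not this one. */
            return false;
        }

        return now_tick > req->submit_tick + timeout_ticks;
    }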
[15:22:23] is it considered an error if the user calls spdk_put_io_channel on a channel for which there is outstanding I/O?
[15:22:29] or should that operation attempt to abort?
[15:22:40] specifically, in the bdev layer
[15:26:56] yes
[15:27:14] sorry
[15:27:22] "the rpc to show qos iops?" - yes
[15:27:39] and yes to your other question too
[15:28:22] drv and/or bwalker: could you take a look at Shuhei's https://review.gerrithub.io/#/c/407397/ - it's the RPC state patch - I think he's going to have to include event.h in a bunch of files that didn't need it before, but I'm not sure there's a better option
[15:30:54] yeah, I think it should be event.h
[15:35:57] that means all the bdev modules depend on the event framework again
[15:36:01] at least in terms of header files
[15:36:25] hmmm
[15:36:41] and iscsi, nvmf, vhost, etc.
[15:37:27] so maybe it's not event.h - but all of these modules do have these implicit assumptions that there is (or at least can be) a separate initialization path that is different from normal execution
[15:37:51] the iscsi/nvmf/vhost layers for sure
[15:38:19] the RPCs should all really be part of the event library, from my point of view
[15:38:19] nvme driver too in some respects
[15:38:27] I know a lot of them don't work that way right now
[15:38:29] yes - it's implicitly tangled up if not explicitly
[15:38:56] if you put them all into the event framework, you'd have to move all of these rpc files to the event framework too
[15:39:02] into lib/event/whatever
[15:39:05] yes
[15:39:08] but I think if someone was going to use bdev modules (for example) outside of our event framework, they should still be able to use the RPCs as is
[15:39:20] ideally RPC implementations should all use only public API from the modules they are built on
[15:39:21] -2 :)
[15:39:35] well, we could go either way, but currently it's all intermixed
[15:40:05] can we define the states in a non-event-framework-specific way?
[15:40:44] i think that's hard if we want to enable Pawel's idea of specifying DPDK initialization parameters via RPC
[15:41:10] I don't know if that part is going to work
[15:41:21] DPDK is abstracted behind the environment library
[15:41:31] so the code generally doesn't even know about DPDK parameters
[15:41:39] it does know about event framework parameters
[15:41:50] ok - so technically not DPDK initialization parameters, but spdk_app_opts parameters
[15:42:19] so that means that the rpc listener comes up prior to the app actually starting?
[15:42:28] it would have to
[15:42:47] I mean, what polls the socket?
[15:42:50] rpc poller stuff would have to be reworked somehow
[15:43:02] spawn a thread and block or something?
[15:43:15] yeah, I don't know how Pawel's approach was going to handle that
[15:43:26] well I think based on this we have justification to punt on the env/DPDK init via RPC for now
[15:43:27] here's what I'd recommend - spdk event framework stuff is specified on the command line. Everything else is started via RPC
[15:44:37] so the target application framework comes up, but doesn't initialize any of the modules
[15:44:43] and we add RPCs to initialize them with parameters
[15:44:46] we still use a state bit mask, but rpc.h itself defines the first two bits
[15:44:55] what's the state mask necessary for?
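For the spdk_put_io_channel question above, a sketch of the bookkeeping a bdev-layer consumer can keep so the channel is only released after its outstanding I/O has drained. struct my_ctx and its fields are hypothetical application state, not SPDK API.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    #include "spdk/bdev.h"
    #include "spdk/io_channel.h"

    struct my_ctx {
        struct spdk_bdev_desc   *desc;
        struct spdk_io_channel  *ch;
        uint64_t                outstanding;    /* incremented on each successful submit */
        bool                    closing;
    };

    static void
    teardown_if_idle(struct my_ctx *ctx)
    {
        if (ctx->closing && ctx->outstanding == 0) {
            spdk_put_io_channel(ctx->ch);
            spdk_bdev_close(ctx->desc);
        }
    }

    /* Completion callback passed to the spdk_bdev_read/write calls. */
    static void
    io_done(struct spdk_bdev_io *bdev_io, bool success, void *cb_arg)
    {
        struct my_ctx *ctx = cb_arg;

        spdk_bdev_free_io(bdev_io);
        assert(ctx->outstanding > 0);
        ctx->outstanding--;
        teardown_if_idle(ctx);
    }

    /* Called when the application wants to shut this channel down; the actual
     * put/close is deferred until the last completion arrives. */
    static void
    request_teardown(struct my_ctx *ctx)
    {
        ctx->closing = true;
        teardown_if_idle(ctx);
    }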
[15:45:19] if you get the RPC to start the bdev layer, call spdk_bdev_initialize
[15:45:25] if it is already initialized, it should fail
[15:45:43] still allows us to specify RPCs that can be called before and after subsystem initialization - or both
[15:45:55] i think we just have one RPC that starts all of the subsystems
[15:46:08] can't we just track that internally to each module?
[15:46:13] so you do iscsi_initialize, nvmf_initialize, bdev_initialize, start_subsystems
[15:46:28] what if i do them in the wrong order?
[15:46:29] like in nvmf, if you send the nvmf initialize RPC, I'll check if it's already initialized
[15:46:40] or if you send the create subsystem RPC, I'll check if the nvmf subsystem is initialized
[15:47:14] so even if there is just one start_subsystems rpc
[15:47:27] we have dependencies between subsystems though - in what order their init calls are run
[15:47:33] if I get an RPC to, for instance, configure the memory pool size for NVMf
[15:47:50] I check my global NVMf flag to see if it has been initialized yet. If not, set the parameter
[15:47:56] if yes, fail the RPC
[15:48:17] sure - that part is fine, but when do you actually allocate the memory pool?
[15:48:24] in the same RPC?
[15:48:38] in the start subsystem rpc
[15:48:47] the other rpc just sets a global or whatever
[15:48:54] oh ok - i think we're on the same page then
[15:49:02] but you don't need a generic flag for this
[15:49:11] you just put that logic in each individual subsystem
[15:49:22] but just so I'm clear, are you suggesting one start_subsystem RPC, or a separate one per subsystem?
[15:49:41] now I'm saying just one start subsystem RPC
[15:49:48] ok - yes, I agree
[15:50:15] what about RPCs that can run both before and after subsystems are started?
[15:50:23] yeah, I think the question is whether we need the state enforcement to be in the generic RPC layer or within each RPC method implementation
[15:50:24] like get_rpc_methods
[15:50:42] *** Joins: travis-ci (~travis-ci@ec2-54-234-35-129.compute-1.amazonaws.com)
[15:50:43] (spdk/master) vhost/nvme: remove pointless task NULL check (Daniel Verkamp)
[15:50:43] Diff URL: https://github.com/spdk/spdk/compare/3fa7c33ac120...f6fa387f5bb7
[15:50:43] *** Parts: travis-ci (~travis-ci@ec2-54-234-35-129.compute-1.amazonaws.com) ()
[15:50:52] they don't check and just run - I don't see a problem with those
[15:51:15] I think the key code change that actually needed to happen was that the event framework needed to be started up without initializing subsystems
[15:51:17] this just seems like a lot of extra code we have to add now
[15:51:31] I hardly think it's any code at all
[15:51:34] because now for every RPC, we have to go and check if the subsystem has actually been started
[15:51:46] isn't that a 3 line if statement?
[15:51:52] in 70 RPCs?
[15:52:02] and not every RPC will actually care
[15:52:03] is it the same 3 lines though?
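The "3 line if statement" being debated would look roughly like this inside each library's RPC handler. The method name, the g_nvmf_tgt_initialized flag, and the choice of JSON-RPC error code are illustrative, not existing SPDK symbols.

    #include <stdbool.h>

    #include "spdk/jsonrpc.h"
    #include "spdk/rpc.h"

    static bool g_nvmf_tgt_initialized = false;

    static void
    spdk_rpc_set_nvmf_tgt_options(struct spdk_jsonrpc_request *request,
                                  const struct spdk_json_val *params)
    {
        if (g_nvmf_tgt_initialized) {
            /* Refuse configuration changes once the library has been started. */
            spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                                             "NVMe-oF target already initialized");
            return;
        }

        /* ... decode params and stash them for the later start RPC ... */
    }
    SPDK_RPC_REGISTER("set_nvmf_tgt_options", spdk_rpc_set_nvmf_tgt_options)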
[15:52:24] they could all potentially have more complex rules
[15:52:33] with different stages of initialization and such
[15:52:40] when things are allowed at different times
[15:52:56] I don't think we'll succeed in making that generic, and if we do it will take a ton of code anyway
[15:53:17] I'd rather keep the delineation between our modules and do the checking individually
[15:53:47] i'm still not following why we won't succeed in making it generic
[15:53:52] and with very little code
[15:54:14] so what if iscsi has like 4 different states it can be in, with different RPCs allowed during each state
[15:54:17] i want to understand - please help me :)
[15:54:34] maybe you can still change some parameters while it is running if no one is connected, for instance
[15:55:25] i'm just proposing for now that we have two states - pre-subsystem-init and post-subsystem-init
[15:55:42] but the word subsystem itself is an event framework construct
[15:56:43] now maybe that maps nicely to all of our libraries - to before and after spdk_bdev_initialize for that library, to before and after spdk_nvmf_tgt_initialize, etc.
[15:56:49] i'm not clear if you're arguing for or against a state mask now
[15:56:53] against
[15:57:00] not necessary, I don't think
[15:57:09] put the rules into the individual libraries defining the RPCs
[15:57:39] *** Joins: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97)
[15:58:16] it sounds like a lot of work, but maybe i'm wrong
[15:58:39] I think it's less work than trying to make it generic in the long run
[15:58:45] yes, you start with only two states
[15:58:50] and that is probably less total code
[15:58:55] but I don't think the rules are really that simple
[15:59:06] but that's what we have today
[15:59:15] it's just that all of the RPCs can only run after subsystem init
[15:59:47] yes, but new initialization RPCs will get added
[15:59:57] and those can only be run before subsystem init
[16:00:06] I doubt it stays that simple
[16:00:18] for right now, sure
[16:00:34] but for the NVMf ones, I can see additional states where some things can be modified again
[16:01:11] *** Joins: KenneF (cf8c2b51@gateway/web/freenode/ip.207.140.43.81)
[16:01:12] but having pre/post states in the generic RPC layer doesn't preclude individual RPCs from doing their own thing
[16:01:34] sure - but why encode that into the RPC layer if we're going to additionally need to do our own thing anyway?
[16:01:55] but we don't have any cases today that need to do their own thing - it's theoretical at this point
[16:02:33] so you're saying add the concept of "subsystem init" to the generic RPC code
[16:02:45] even though subsystem is a construct from the event framework
[16:02:50] but we name it such that it doesn't sound specific to subsystems
[16:03:03] ok - we can do that for now
[16:03:13] my money is on someone proposing new states in the next 6 months
[16:03:46] i'm not taking that bet - you'll propose one yourself in the next 6 minutes
[16:03:58] yeah - I can see it coming for nvmf
[16:04:09] these libraries already have "initialization" routines
[16:05:16] so I think for each library, the RPCs all expect that they can only be called after its respective library's initialization routine has been called
[16:09:47] so i guess the question is whether it's a truly generic state mask, or something specific to pre and/or post initialization
[16:10:49] most will be one or the other, but some could be both - get_rpc_methods, log_level and trace_flag related RPCs
[16:19:25] Hi guys, a question regarding memory allocation
[16:19:48] I see in the documentation that all memory passed to spdk must be allocated via spdk_dma_malloc()
[16:20:04] I have an nvdimm I would like to use
[16:20:11] What is the best way to register it with spdk?
[16:35:00] ben, jim, daniel: thank you for the discussion about RPC for subsystem init, I would like to hear your thoughts if possible.
[16:35:03] When we support JSON RPC to initialize subsystems, do you think the iscsi/NVMf-tgt state machine should wait for the RPC to get options in the middle of it?
[16:35:15] The iscsi subsystem needs options to create its resource pool. Options will be sent by the new RPC. So,
[16:35:26] should the iscsi subsystem start its initialization but wait for options in the middle of initialization? or
[16:35:40] should the new RPC create the resource pool first, before iscsi subsystem initialization starts, so that creating the resource pool is later skipped during its initialization?
[16:35:59] which is more reasonable?
[16:36:06] on the other hand, for the nvmf subsystem,
[16:36:13] the nvmf subsystem needs options to create the nvmf-tgt.
[16:36:24] should the nvmf subsystem start its initialization but wait for options in the middle of the state machine, e.g. at NVMF_TGT_INIT_PARSE_CONFIG or something like that? or
[16:36:35] should the new RPC create the nvmf-tgt first, before nvmf subsystem initialization starts, and only later run the nvmf subsystem's state machine?
[16:37:01] and in the state machine, creating the nvmf-tgt is skipped because it already exists?
[16:37:53] Also, if you are OK with it, I will continue to update the patch based on your feedback.
[16:43:43] About the first two questions: I first proposed "waiting for the RPC in the middle" but changed to "create the pool first, wait for the RPC, and then skip creation" according to the discussion.
[16:51:42] Reading it again, it looks OK to go forward for now.
[16:52:16] bwalker, if you didn't see it, check out the ZNC setting mentioned earlier - it works like a champ!
[16:53:14] hi Shuhei
[16:53:59] i think for iscsi (for example), there is one RPC used to set things like the size of resource pools, but it does *not* actually create those resource pools
[16:54:26] once the user has called all of the different subsystem initialization RPCs, they call a new spdk_start_subsystems RPC
[16:55:55] which effectively just calls spdk_subsystem_init() - the same thing that spdk_app_start calls today
[16:56:00] OK, so hold the received options and use them after that?
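A sketch of the flow just described: the options RPC only records values, and the resource pools are created later when the single start-subsystems RPC drives spdk_subsystem_init(). All names below (iscsi_opts, g_iscsi_opts, the default sizes) are illustrative, not the real iscsi library code.

    #include "spdk/stdinc.h"

    struct iscsi_opts {
        uint32_t    max_sessions;
        uint32_t    max_queue_depth;
    };

    /* Defaults apply if the options RPC is never called before startup. */
    static struct iscsi_opts g_iscsi_opts = {
        .max_sessions = 128,
        .max_queue_depth = 64,
    };

    static void *g_session_pool;

    /* Body of the (hypothetical) pre-init options RPC: only store values. */
    static void
    iscsi_hold_options(uint32_t max_sessions, uint32_t max_queue_depth)
    {
        g_iscsi_opts.max_sessions = max_sessions;
        g_iscsi_opts.max_queue_depth = max_queue_depth;
    }

    /* Runs later from the start-subsystems path (spdk_subsystem_init()):
     * only now are the resource pools actually allocated, using whatever
     * the RPC (or the defaults) left in g_iscsi_opts. */
    static int
    iscsi_subsystem_init_sketch(void)
    {
        g_session_pool = calloc(g_iscsi_opts.max_sessions,
                                1024 /* illustrative per-session footprint */);
        return g_session_pool != NULL ? 0 : -ENOMEM;
    }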
[16:56:03] yes
[16:56:22] should the options be allocated by malloc or be static?
[16:56:24] I think we may want to change spdk_subsystem_init() to spdk_subsystem_start()
[16:56:32] or we can do that later
[16:56:56] it could be either - each subsystem can decide
[16:57:10] OK.
[16:57:22] renaming can be done later.
[16:57:53] Sorry, one more question.
[16:58:09] and i think bwalker and i agreed that we would still use a bit mask, but make the states generic so they can stay in rpc.h
[16:58:36] I'm moving RPC_POST_SUBSYS_START to event.h - should I stop that work?
[16:58:58] moving the definition of all states to rpc.h?
[16:59:18] correct
[16:59:55] thank you for the helpful reply.
[16:59:59] we will just do two for now - we don't have agreement on if/how enabling DPDK initialization via RPC would work, so we want to defer that
[17:00:42] OK, I will add two states for now. I haven't read Pawel's new patches yet either.
[17:01:15] we shouldn't really use "subsystem" in the state names since they are event framework terms
[17:01:26] but i don't have good alternative names yet :)
[17:03:03] RPC_APP_STATE_INITIALIZE and RPC_APP_STATE_STARTED?
[17:03:14] bwalker - ^^^
[17:03:45] OK, I will fix the logic first.
[17:04:42] unrelated failure on https://review.gerrithub.io/#/c/408256/ if someone gets a chance to restart
[17:06:10] peluse: done - i just reverted the two patches that have been causing this intermittent failure
[17:06:25] it has hit at least 10 times now since the patches were checked in yesterday :(
[17:08:30] *** Joins: travis-ci (~travis-ci@ec2-54-157-245-206.compute-1.amazonaws.com)
[17:08:31] (spdk/master) Revert "test/virtio: test support for kernel vhost-scsi device" (Jim Harris)
[17:08:31] Diff URL: https://github.com/spdk/spdk/compare/f6fa387f5bb7...4ecb2e1d3355
[17:08:31] *** Parts: travis-ci (~travis-ci@ec2-54-157-245-206.compute-1.amazonaws.com) ()
[17:21:37] jimharris, nice!
[17:22:40] for any hard rockers out there... I'll be out of the office tomorrow because of this: http://98kupd.com/events_and_concerts/ufest-2018/ Oh Yeah!
[17:28:40] jimharris: I pushed a backport of the getpid change for v18.01.1: https://review.gerrithub.io/#/c/408406/
[17:28:52] needs a couple of test fixes from master to get the build working on that branch again
[17:31:49] we should probably set up a nightly test run of each release branch that we are currently supporting, to make sure they all still build/pass the tests
[17:32:01] since the test pool configuration can change
[17:38:24] jim: two states are now defined in spdk/rpc.h, hence I can use SPDK_RPC_REGISTER as is.
[17:38:55] For new RPCs to initialize subsystems, SPDK_EVT_RPC_REGISTER or SPDK_SI_RPC_REGISTER can be used.
[17:39:24] Do you prefer adding a state_mask to SPDK_RPC_REGISTER?
[17:44:06] Also, about the names of the states: how about SPDK_EVT_RPC and SPDK_LIB_RPC? Anyway, I'll try to propose other names and get reviews.
[18:10:51] *** Joins: dlw (~Thunderbi@114.255.44.143)
[20:26:07] *** Joins: dlw1 (~Thunderbi@114.255.44.143)
[20:28:01] *** Quits: dlw (~Thunderbi@114.255.44.143) (Ping timeout: 248 seconds)
[20:28:01] *** dlw1 is now known as dlw
[20:41:19] *** Quits: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97) (Ping timeout: 260 seconds)
[21:23:10] *** Joins: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97)
[22:19:19] *** Quits: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97) (Ping timeout: 260 seconds)
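Closing out the RPC state discussion, a sketch of what the two generic bits in rpc.h and the dispatcher-side check might look like, using the RPC_APP_STATE_* names proposed above. The shapes here are illustrative only; the actual patch may differ.

    #include <stdbool.h>
    #include <stdint.h>

    #define RPC_APP_STATE_INITIALIZE    0x1     /* before the start-subsystems RPC */
    #define RPC_APP_STATE_STARTED       0x2     /* after subsystems are running */

    struct rpc_method_entry {
        const char  *name;
        uint32_t    state_mask;     /* states in which the method may run */
    };

    static uint32_t g_rpc_state = RPC_APP_STATE_INITIALIZE;

    /* A method registered with both bits (e.g. get_rpc_methods) is callable at
     * any time; init-only RPCs would register with just RPC_APP_STATE_INITIALIZE. */
    static bool
    rpc_method_allowed(const struct rpc_method_entry *method)
    {
        return (method->state_mask & g_rpc_state) != 0;
    }

    /* The start-subsystems RPC flips the state once spdk_subsystem_init()
     * has completed. */
    static void
    rpc_mark_started(void)
    {
        g_rpc_state = RPC_APP_STATE_STARTED;
    }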