[00:15:23] *** Quits: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[00:17:56] *** Joins: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl)
[02:56:38] *** Quits: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[03:10:03] *** Joins: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl)
[04:07:53] *** Quits: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[04:31:33] *** Joins: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl)
[06:34:16] *** Quits: tomzawadzki (~tzawadzk@192.55.54.38) (Ping timeout: 240 seconds)
[09:08:13] *** Quits: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[09:28:12] https://review.gerrithub.io/#/c/371978/
[09:47:20] *** Joins: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl)
[09:47:22] anyone else seeing intermittent "server error" messages from GerritHub?
[09:48:35] I actually think GitHub is down
[09:48:36] partially
[09:48:42] drv was just showing me
[09:49:12] at the top of hacker news: https://status.github.com/messages?ts=mon-jul-31-2017
[09:58:29] after all of these sgl/prp changes, I'm now reasonably confident that no one is going to write a different user space NVMe driver and actually get all of these cases right
[10:02:58] *** Quits: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[10:03:06] that's how it always goes, ain't it...
[10:09:30] bwalker: agreed, it's easy to get something working, but then there's getting it WORKING
[10:09:43] can I push my 4 patches then? bwalker, drv?
[10:10:20] I signed off on them
[10:10:57] ok - I pushed the four patches
[10:29:02] *** Joins: crane_ (77386ddb@gateway/web/freenode/ip.119.56.109.219)
[10:42:40] *** Quits: crane_ (77386ddb@gateway/web/freenode/ip.119.56.109.219) (Ping timeout: 260 seconds)
[12:02:22] *** Joins: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl)
[13:13:11] FYI, big pile of nvme UT patches that have been basting in the sun, just rebased and ready for review :)
[13:14:25] I just ran 15 patches through the test pool that were all rebased on top of the vhost fixes
[13:14:31] not a single one failed (in vhost)
[13:21:21] yay!
[13:21:43] I have 12 pending so we'll see how MY luck is, that's the real test :)
[13:34:07] *** Quits: guerby (~guerby@april/board/guerby) (Ping timeout: 255 seconds)
[13:59:09] *** Joins: guerby (~guerby@ip165.tetaneutral.net)
[13:59:09] *** Quits: guerby (~guerby@ip165.tetaneutral.net) (Changing host)
[13:59:09] *** Joins: guerby (~guerby@april/board/guerby)
[14:26:55] so I just did a quick ~5 line patch to DPDK
[14:27:04] and now I can run the nvme identify example without hugepages
[14:27:11] *requires an IOMMU
[14:29:26] so just curious since everything else requires hugepages, why make the change?
[14:29:56] after this patch nothing requires hugepages
[14:30:44] it's totally a half-baked solution though - it assumes the user is using vfio and doesn't actually check that
[14:30:53] and it breaks multiprocess support
[14:31:06] both of those are solvable things I think - I just did the quick patch to prove to myself that it works
[14:31:18] I'd ask the DPDK people to maybe think about making it a "real" solution
[14:36:01] ahhh, got it. thanks
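(Editor's context for the 14:26-14:31 exchange: with an IOMMU, the device addresses memory through IOVAs programmed via vfio rather than raw physical addresses, so DMA buffers no longer need to be physically contiguous hugepage memory. Below is a minimal sketch of that mechanism, not the actual DPDK patch; it assumes a VFIO type1 container fd has already been set up, and the function name is made up for illustration.)

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Map an ordinary page-aligned buffer for DMA through the IOMMU,
     * using its virtual address as the IOVA (the "IOVA == VA" scheme).
     * container_fd is a VFIO container already configured with
     * VFIO_SET_IOMMU / VFIO_TYPE1_IOMMU. */
    static void *dma_alloc_no_hugepages(int container_fd, size_t len)
    {
        void *buf;
        struct vfio_iommu_type1_dma_map dma_map;

        /* Plain malloc-style memory; no hugepages required. */
        if (posix_memalign(&buf, 4096, len) != 0)
            return NULL;
        memset(buf, 0, len); /* fault the pages in before mapping */

        memset(&dma_map, 0, sizeof(dma_map));
        dma_map.argsz = sizeof(dma_map);
        dma_map.vaddr = (uint64_t)(uintptr_t)buf;
        dma_map.iova  = (uint64_t)(uintptr_t)buf; /* IOVA == VA */
        dma_map.size  = len;
        dma_map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;

        /* VFIO_IOMMU_MAP_DMA also pins the pages, so they cannot be
         * swapped out from under the device. */
        if (ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &dma_map) != 0) {
            free(buf);
            return NULL;
        }
        return buf;
    }

(The multiprocess breakage mentioned at 14:30 presumably follows from this scheme: an IOVA == VA mapping of private malloc'd memory cannot be reproduced at the same address in a secondary process the way shared hugepage mappings can.)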
[14:46:13] *** Quits: gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[15:49:33] wow, OK I had 13 or 14 patches go through today w/o any vhost issues. One rebase foul-up and one test timeout that I don't think I've seen before: all the tests passed but the overall status was timeout...
[16:02:41] peluse: was that the first one of your patches that ran? I think wkb-fedora-03 failed on the previous build and didn't reboot in time for your test (paging sethhowe_)
[16:06:09] yeah - I think the vhost patches we checked in today resolved those intermittent vhost failures
[16:13:34] drv: thanks for the heads up. It looks like it restarted itself and then immediately went back offline (hit some error state). I fixed one of the issues that was causing this to happen a couple of weeks ago, but it looks like I missed something else. back to the drawing board...
[16:48:31] I pushed a review to add the Windows virtio-blk driver bug to the known issues: https://review.gerrithub.io/#/c/372011/
[16:48:35] please review for the release
[16:50:37] drv: done
[16:50:58] thanks
[16:57:10] drv, cool, thanks
[23:36:42] *** Joins: tomzawadzki (~tzawadzk@134.134.139.77)
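(Editor's context for the sgl/prp remark at 09:58 above: much of what makes a user-space NVMe driver hard to get exactly right is the PRP entry rules from the NVMe 1.x spec. A simplified illustration of just the PRP2 decision follows; this is not SPDK's implementation, and the names are made up. PRP1 may carry a page offset, PRP2 is unused if the data fits in the rest of PRP1's page, PRP2 is a direct second page pointer if exactly one more page is needed, and otherwise PRP2 points to a PRP list, where long lists chain through the last slot of each list page.)

    #include <stdint.h>

    #define NVME_PAGE 4096ull

    enum prp2_kind { PRP2_UNUSED, PRP2_SECOND_PAGE, PRP2_LIST_POINTER };

    /* Classify how PRP2 must be used for a transfer of 'len' bytes
     * whose first PRP entry is 'prp1' (simplified NVMe PRP rules). */
    static enum prp2_kind classify_prp2(uint64_t prp1, uint64_t len)
    {
        /* PRP1 is the only entry allowed a nonzero page offset. */
        uint64_t in_first_page = NVME_PAGE - (prp1 & (NVME_PAGE - 1));

        if (len <= in_first_page)
            return PRP2_UNUSED;

        /* Full pages still needed after the first (possibly partial) one. */
        uint64_t remaining = len - in_first_page;
        uint64_t extra_pages = (remaining + NVME_PAGE - 1) / NVME_PAGE;

        return extra_pages == 1 ? PRP2_SECOND_PAGE : PRP2_LIST_POINTER;
    }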