[00:55:04] *** Quits: ziyeyang_ (~ziyeyang@134.134.139.82) (Remote host closed the connection)
[00:55:23] *** Joins: ziyeyang_ (ziyeyang@nat/intel/x-burxhfaykmxofyco)
[01:15:48] *** Quits: ziyeyang_ (ziyeyang@nat/intel/x-burxhfaykmxofyco) (Ping timeout: 272 seconds)
[01:34:01] *** Joins: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl)
[01:34:25] *** Joins: ziyeyang_ (~ziyeyang@134.134.139.82)
[01:44:51] jimharris: the same happened when I added a unit test for thin provisioning.
[01:47:35] *** Quits: Shuhei (caf6fc61@gateway/web/freenode/ip.202.246.252.97) (Ping timeout: 260 seconds)
[02:04:37] *** Joins: tomzawadzki (~tomzawadz@134.134.139.72)
[02:13:53] *** Quits: ziyeyang_ (~ziyeyang@134.134.139.82) (Remote host closed the connection)
[02:14:11] *** Joins: ziyeyang_ (~ziyeyang@134.134.139.82)
[02:21:08] drv: could you recompile the docs for https://review.gerrithub.io/c/391603/?
[02:29:00] *** Quits: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[03:14:30] *** Joins: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl)
[03:50:09] *** Quits: ziyeyang_ (~ziyeyang@134.134.139.82) (Ping timeout: 264 seconds)
[04:08:41] *** Quits: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[04:26:44] *** Joins: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl)
[05:28:28] *** Quits: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[05:35:11] *** Joins: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl)
[05:52:13] *** Quits: gila (~gila@5ED4D9C8.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[08:36:28] *** Quits: tomzawadzki (~tomzawadz@134.134.139.72) (Ping timeout: 265 seconds)
[09:09:04] darsto: the docs will be generated on spdk.io once it's checked in, but until then, it will get built as part of the standard build
[09:09:23] oh, your commit message has RFC, so it won't get built
[09:09:56] drv: ah, nvm then
[09:13:58] it is probably a good idea to just reupload with the RFC removed so we can see the proposed output
[09:14:24] (that is in the Documentation link on the build output page)
[09:21:46] drv: done, thanks
[09:25:16] bwalker: assigned two patches to you that precede cunyinch's nvmf hotplug patch
[09:28:33] curious, anyone doing apps have issues with getopt under FreeBSD? blobcli, being used to test blobstore, is puking on the FreeBSD machine with a segfault in getopt(). About to debug it, but figured I'd ask in case there are any known issues with getopt under FreeBSD
[09:31:39] and it's not like it just doesn't work at all, it's some specific condition, so if nothing comes to mind, just ignore me....
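A quick aside on the getopt() report above: a couple of portability differences between GNU and BSD getopt commonly bite code developed against glibc. The sketch below is illustrative only -- it is not the actual blobcli option parsing, and the option letters are invented -- it just shows a defensive option loop, with the relevant GNU/BSD differences called out in the comments.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/*
 * Illustrative only -- not the actual blobcli code. Portability
 * differences that can surface as crashes on FreeBSD:
 *   - GNU getopt() permutes argv so options may appear after non-option
 *     arguments; BSD getopt() stops at the first non-option argument,
 *     so anything after it is silently left unparsed.
 *   - The two-colon optional-argument syntax ("x::") is a GNU extension
 *     and is not portable.
 *   - Rescanning argv requires setting optreset = 1 (plus optind = 1)
 *     on the BSDs, while glibc re-initializes when optind = 0; getting
 *     this wrong can leave getopt's internal state pointing into a
 *     previous argv.
 */
int
main(int argc, char **argv)
{
	int op;

	while ((op = getopt(argc, argv, "b:x")) != -1) {
		switch (op) {
		case 'b':
			/* guard against a NULL optarg before dereferencing */
			if (optarg == NULL) {
				fprintf(stderr, "-b requires an argument\n");
				return EXIT_FAILURE;
			}
			printf("bdev name: %s\n", optarg);
			break;
		case 'x':
			printf("-x flag set\n");
			break;
		default:
			fprintf(stderr, "usage: %s [-b name] [-x]\n", argv[0]);
			return EXIT_FAILURE;
		}
	}
	return EXIT_SUCCESS;
}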
[09:31:42] darsto: you still around?
[09:33:11] jimharris: yep
[09:33:27] looking at https://review.gerrithub.io/#/c/391028/9/lib/bdev/virtio/bdev_virtio.c
[09:33:33] send_scan_io()
[09:33:53] this sequence of adding the iovs - isn't that a "write" sequence?
[09:34:01] i.e. req then data iov then resp
[09:34:11] but these scan IOs are effectively reads
[09:35:12] the tests all passed though - probably because we're not checking the data direction of the scsi task in the scsi_bdev translation code
[09:35:16] on the target side
[09:36:06] I'll make a comment on the review - just wanted to check if there was something I was missing
[09:36:07] right, payload should be write-only
[09:38:28] patch looks good otherwise
[09:38:59] k, I'll reupload in a sec
[09:43:54] darsto: doesn't it need to go after the response iov?
[09:45:14] oh wow, how does it even work now?
[09:45:58] i think it worked in the previous patch because even though the vhost target interpreted it as a "read", the scsi_bdev layer wasn't enforcing data direction "write" on inquiry, read cap, etc.
[09:46:10] this version you just pushed should fail though, I think
[09:46:18] unless you get a fix in before the test pool starts it :)
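For reference, the iov-ordering rule at issue in the exchange above comes from the virtio spec: within a descriptor chain, every device-readable buffer must precede every device-writable one. A scan command such as INQUIRY or READ CAPACITY is a read from the device, so both the response and the data-in payload are device-writable, and the data iov therefore belongs after the response iov. A minimal sketch follows; the virtqueue_* helpers here are hypothetical placeholders, not the real SPDK virtio API.

#include <sys/uio.h>

struct virtqueue; /* opaque placeholder for the real queue type */

/* hypothetical helpers -- the actual SPDK virtio functions differ */
void virtqueue_add_readable(struct virtqueue *vq, struct iovec *iov);
void virtqueue_add_writable(struct virtqueue *vq, struct iovec *iov);
void virtqueue_kick(struct virtqueue *vq);

static void
send_scan_read(struct virtqueue *vq, struct iovec *iov_req,
	       struct iovec *iov_resp, struct iovec *iov_data)
{
	/* command header (virtio_scsi_cmd_req): written by the driver,
	 * read by the device, so it goes first */
	virtqueue_add_readable(vq, iov_req);
	/* response (virtio_scsi_cmd_resp): written by the device */
	virtqueue_add_writable(vq, iov_resp);
	/* data-in payload: also written by the device, so it must come
	 * after every device-readable buffer, i.e. after the response */
	virtqueue_add_writable(vq, iov_data);
	virtqueue_kick(vq);
}

The write-direction case is the mirror image: a data-out payload is device-readable and would be queued between the request header and the response, which is why the original (req, data, resp) ordering reads as a write sequence.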
[10:06:58] jimharris: I reviewed those two patches for cunyinch - just had one comment and I'll fix it up for him
[10:32:27] drv: test pool times with my test/rocksdb patch: https://ci.spdk.io/builds/review/8283ba4f82a0c7bfc640082ced33936fd406b285.1513299839/
[10:32:42] fedora-06 (not the rocksdb system) is still the long pole in the tent
[10:32:55] https://ci.spdk.io/builds/review/8283ba4f82a0c7bfc640082ced33936fd406b285.1513299839/fedora-06/timing.svg
[10:33:34] yeah, iscsi_lvol is pretty long
[10:34:06] and there's some unaccounted-for time inside the 'autotest' bucket that we should track down
[10:34:33] why does the iscsi_lvol setup step take so long?
[10:34:47] why are we doing iscsi_pmem? Can't we test pmem through the bdev tests?
[10:34:56] iscsi_lvol/setup is 26 seconds - but looking at the test script I have no idea why
[10:35:05] Do we even have a test suite that uses all of the bdevs through the fio plugin for coverage?
[10:35:36] or bdevio/bdevperf, plus fio bdev plugin
[10:35:47] that could all be moved to a single machine as a separate test suite
[10:36:06] and then iscsi can just do lvol on nvme probably, since that's the most complicated one
[10:36:21] this iscsi nvme remote test can go away too
[10:36:22] iscsi_lvol/setup is probably taking so long because it's creating 10 128MB malloc LUNs and constructing an lvolstore on each
[10:36:22] iscsi_lvol is building an lvol on a malloc disk, so it should presumably be very fast
[10:36:30] oh, hmm
[10:36:49] bwalker: that was the goal behind test/lib/bdev/blockdev.sh, but it doesn't cover all blockdev modules yet
[10:37:04] I have a patch out to add RBD to it, but the RBD bdev first needs to be fixed to support iovecs
[10:37:05] so expanding that could reduce test time on the iscsi test system
[10:37:26] also why are there nvmf_tgt tests running on fedora-06?
[10:37:33] and lvol should be added there too - I commented on that in the review that added the separate lvol blockdev.sh, but I think it might have been misunderstood
[10:37:50] fedora-06 is supposed to be iSCSI + NVMe-oF
[10:38:13] so we can get things like the iSCSI target backed by a NVMe-oF bdev connecting to our nvmf_tgt
[10:38:15] iscsi_lvol/fio is just a 10 second run of fio, but it takes 24 seconds
[10:38:17] although that might not be strictly necessary
[10:38:34] well i think iscsi_pmem can be moved immediately to nightly
[10:38:44] if we're testing the nvmf bdev in blockdev.sh, we don't need to do that test
[10:38:58] for iscsi_lvol, we could run this test with 2 malloc bdevs per-patch, and 10 malloc bdevs nightly
[10:39:06] yeah, that should help quite a bit
[10:40:59] 10 x 10 lvols is definitely overkill for functional testing
[10:41:08] that's probably why fio takes so long too - just startup overhead for 100 threads
[10:41:30] I think we should consider defining a new flag like nightly we can use
[10:41:35] maybe called "smoke test"
[10:41:40] that just hits a smattering of everything
[10:41:46] but runs quickly
[10:41:56] and use that on centos 6/7 and ubuntu 16/17
[10:42:09] i.e. all systems that are in the pool just to test OS compatibility
[10:42:21] those aren't the long poles anyway, though
[10:42:36] then on the core fedora systems, they should be more tightly focused
[10:42:37] they finish in 2-3 minutes already
[10:42:57] I think they need to be longer though - need more tests on the different OSes
[10:43:08] should do a simple iscsi, nvmf, vhost test on each
[10:43:10] a quick one
[10:43:46] on the fedora systems we need one doing in-depth blockdev tests, one doing in-depth nvme-of, one iscsi, and one vhost
[10:43:59] but I count 3 doing vhost things
[10:44:05] one doing iscsi and nvmf
[10:44:35] so I think organizing the tests is a good strategy for getting the run time down and the parallelism up
[10:46:34] https://review.gerrithub.io/#/c/391976/ - moved it to the front of the queue
[11:01:00] darsto: it looks like the machine that's supposed to be building the docs is missing the doxygen package currently - we're fixing it up
[11:01:07] so your patch ran but didn't build the docs this time
[13:34:14] do we need to rbd_setup if SPDK_TEST_RBD is not defined?
[13:34:37] or rather, if SPDK_TEST_RBD -eq 0
[13:36:17] we shouldn't run it in that case
[13:36:31] what is it checking right now? just the existence of the ceph binary?
[13:38:16] yes
[13:38:32] looks like ceph is installed on some systems where we're not actually running any rbd-related tests
[13:38:49] yeah, it should be installed everywhere now, in theory, since we have a single consistent setup script run on every VM
[13:38:59] sethhowe: looking at some of the time taken up outside of the test systems themselves
[13:39:35] looks like once the status page says a patch passed, the next patch does not start running for about 20 seconds
[13:39:47] sethhowe just left, FYI, but we can still discuss and point him at the logs later
[13:39:52] ok
[13:40:00] some of that time is copying the source tree around, I think
[13:40:15] it seems like there's quite a bit of time taken up outside of just the test systems - as much as 2 minutes per patch if I'm looking at this correctly
[13:40:17] and there's some possibility for optimizing that, because all of the VM test agents are running on one host
[13:40:33] but each VM is currently getting its own copy of the source from the main pool machine
[13:40:53] are they getting source via git fetch or by rsync?
[13:41:02] copying over sshfs
[13:41:03] iscsi test reduction patch: https://review.gerrithub.io/#/c/391976/
[13:41:16] gets us under 7 minutes
[13:41:19] because git fetching just the revision they want from the main pool machine, given they already have a copy of the base repo, is faster
[13:42:14] we had some issues previously due to caching bugs in (older versions of?) sshfs, so it is currently doing a full copy of the source tree each build
[13:42:32] could probably be tweaked to improve performance
[13:42:46] drv or bwalker: can one of you look at this one? then I'm going to rebase some of shuhei's other perror cleanup patches
[13:42:57] I already +2'd it
[13:43:01] again
[13:43:23] I think we should work toward just deleting the iscsi_pmem test entirely, but add a bdev pmem test into blockdev.sh in its place
[13:43:42] LGTM
[13:44:04] https://review.gerrithub.io/#/c/391684/
[13:44:08] sorry - didn't post the link
[13:50:02] if one of you looks at https://review.gerrithub.io/#/c/391699/ we can merge the nvmf hotplug series
[14:07:49] tests are definitely moving faster now
[14:08:05] but I agree there is probably some time to be saved in the turnover between tests
[14:10:14] drv: can you re-review this one? https://review.gerrithub.io/#/c/388295/
[14:10:21] you gave it +2, but then I found a bug and -1'd myself
[14:13:26] done
[16:52:56] *** Joins: pmacarth (~pmacarth@2606:4100:3880:1240:e916:c959:ba63:b62c)
[16:53:26] *** Quits: pmacarth (~pmacarth@2606:4100:3880:1240:e916:c959:ba63:b62c) (Client Quit)
[16:53:43] *** Joins: pmacarth (~patrickma@2606:4100:3880:1240:e916:c959:ba63:b62c)
[16:54:08] *** Quits: pmacarth (~patrickma@2606:4100:3880:1240:e916:c959:ba63:b62c) (Client Quit)
[16:54:27] *** Joins: patrickmacarthur (~patrickma@2606:4100:3880:1240:e916:c959:ba63:b62c)
[16:55:05] *** Quits: patrickmacarthur (~patrickma@2606:4100:3880:1240:e916:c959:ba63:b62c) (Client Quit)
[16:55:33] *** Joins: patrickmacarthur (~patrickma@2606:4100:3880:1240:e916:c959:ba63:b62c)