[00:16:15] Project autotest-nightly-failing build #311: STILL FAILING in 1 hr 16 min. See https://ci.spdk.io/spdk-jenkins for results.
[01:18:45] *** Joins: gila (~gila@5ED4D979.cm-7-5d.dynamic.ziggo.nl)
[02:00:13] *** Joins: travis-ci (~travis-ci@ec2-52-205-207-162.compute-1.amazonaws.com)
[02:00:14] (spdk/master) virtio: switch to spdk_*malloc(). (Darek Stojaczyk)
[02:00:14] Diff URL: https://github.com/spdk/spdk/compare/2edc65291325...6e9eca7874d4
[02:00:14] *** Parts: travis-ci (~travis-ci@ec2-52-205-207-162.compute-1.amazonaws.com) ()
[04:16:23] *** Joins: zhouhui (~wzh@114.255.44.140)
[04:16:31] *** Quits: zhouhui (~wzh@114.255.44.140) (Client Quit)
[05:27:07] *** Joins: felipef (~felipef@62.254.189.133)
[07:45:22] *** Joins: travis-ci (~travis-ci@ec2-3-80-36-176.compute-1.amazonaws.com)
[07:45:23] (spdk/master) bdev/raid: remove unnecessary assert. (yidong0635)
[07:45:23] Diff URL: https://github.com/spdk/spdk/compare/6e9eca7874d4...0d076e264eae
[07:45:23] *** Parts: travis-ci (~travis-ci@ec2-3-80-36-176.compute-1.amazonaws.com) ()
[07:50:00] PSA: Hi All, I am going to restart the script to apply some changes to it here in a moment. You may notice a short blackout in the status page updating. All future updates should happen inline without any disruption to service.
[08:21:52] *** Joins: travis-ci (~travis-ci@ec2-3-87-81-182.compute-1.amazonaws.com)
[08:21:53] (spdk/master) lib/ftl: Internal IO retry mechanism in case ENOMEM from nvme layer (Wojciech Malikowski)
[08:21:53] Diff URL: https://github.com/spdk/spdk/compare/23d7ff31fc1f...04814b72a85a
[08:21:53] *** Parts: travis-ci (~travis-ci@ec2-3-87-81-182.compute-1.amazonaws.com) ()
[08:58:59] *** Joins: travis-ci (~travis-ci@ec2-3-82-63-17.compute-1.amazonaws.com)
[08:59:00] (spdk/master) env/dpdk: add spdk_pci_fini() (Darek Stojaczyk)
[08:59:00] Diff URL: https://github.com/spdk/spdk/compare/04814b72a85a...fb51565a59f5
[08:59:00] *** Parts: travis-ci (~travis-ci@ec2-3-82-63-17.compute-1.amazonaws.com) ()
[09:04:25] *** Joins: travis-ci (~travis-ci@ec2-54-234-208-179.compute-1.amazonaws.com)
[09:04:26] (spdk/master) vhost: change vsession->lcore only within that lcore (Darek Stojaczyk)
[09:04:27] Diff URL: https://github.com/spdk/spdk/compare/0d076e264eae...23d7ff31fc1f
[09:04:27] *** Parts: travis-ci (~travis-ci@ec2-54-234-208-179.compute-1.amazonaws.com) ()
[10:21:22] *** Quits: felipef (~felipef@62.254.189.133) (Remote host closed the connection)
[10:48:19] *** Joins: felipef (~felipef@cpc92310-cmbg19-2-0-cust421.5-4.cable.virginm.net)
[10:52:43] *** Quits: felipef (~felipef@cpc92310-cmbg19-2-0-cust421.5-4.cable.virginm.net) (Ping timeout: 246 seconds)
[12:13:26] what do we think about some Gerrithub patch house cleaning?
[12:14:15] i was going to send an e-mail to the mailing list - any patch 6 months or older will be abandoned X number of weeks from now
[12:14:36] patch owner can indicate desire to keep the patch open by rebasing on master which will reset the clock in Gerrit
[12:15:16] when we abandon the patch, we can put a nice reply on how to restore it
[12:15:42] jimharris, +1
[12:16:42] is 6 months the right number?
[12:17:00] jimharris, so I looked at your comments on the fix delete patch, understood them, please take a look at my reply and let me know if that's cool or not. The next two patches after that one address proper unloading via API destruct or hotremove callback
[12:17:10] i wouldn't be opposed to a shorter amount of time - 3 or 4 months
[12:17:12] I would also say 3 months
[12:17:13] yeah
[12:17:24] peluse: will do
[12:17:45] if someone hasn't touched a patch for 3 months they've implicitly abandoned it :)
[12:18:32] 3 months seems fine
[12:18:57] peluse: I have a bunch of half-year-old patches that I'd still like to rebase one day :)
[12:19:00] ben's zero copy patch was one that made me think though - it's a patch he sent out first more than a year ago but had to set it aside for a while
[12:19:30] let's abandon it then, LOL!
[12:19:56] maybe in the note tell people to at least rebase periodically to show that they are still working on it or intend to work on it
[12:20:12] i wasn't sure if bwalker found it helpful to see that patch in his Outgoing list
[12:21:03] yeah - i like that idea - if we warn people about this, and tell them to rebase, bwalker would have rebased his patch and it wouldn't get abandoned
[12:58:05] jimharris, hey why did you include "struct spdk_reduce_backing_dev *dev" in the comp/decomp callbacks? I don't need it - I don't think anyways
[13:00:47] my zcopy ones were rebased recently so they won't get cleaned up
[13:01:01] I have a rocksdb patch that I'm holding for posterity that will get removed though
[13:11:45] jimharris, hmm, I need something else in the comp/decomp i/f. I keep the PMD device ID and the transform structures in my vbdev structure, need those to submit to DPDK. I give you a ctx when I submit for r/w to the vol (bdev_io); if you can give that back to me in the comp/decomp routines I can get what I need
[13:13:15] oh wait, I can get it from the arg I asked you about up above - never mind :)
[13:41:48] bwalker: yeah - they've been rebased recently, but with this three month policy they could have been swept up 3 or 4 months ago
[13:42:13] but it's fine now - if we would have done this 3 or 4 months ago, you would have just rebased them then
[13:45:56] I need to make an update to our rocksdb fork
[13:46:12] does anyone remember how that works? Do I just submit a patch to the spdk_rocks branch?
[13:46:46] beats me but sounds good
[13:48:25] the problem is that I need to modify a file in rocksdb itself to call an extra function
[13:48:45] so if I just update the branch now, I strongly suspect that all patches coming in will break
[13:48:51] because rocksdb won't compile
[13:49:13] so I just did a patch series to SPDK that adds a stub for the function - without the implementation
[13:49:19] because that's low risk and we can get it in
[13:49:25] i was just going to suggest that
[13:49:35] so if we merge that, then I update rocksdb
[13:49:44] but still everyone would need to make sure they are rebased
[13:50:02] it's a good lesson for everyone to learn anyways :)
[13:50:20] always rebase before pushing new versions of your patches
[13:50:33] i think overall everyone is doing that though
[13:51:21] yeah
[13:51:41] I don't actually know how the test machines are set up though - if I update the spdk_rocks branch do they just automatically pull it down?
[13:51:56] I feel like they do, but I don't see it in the test code atm
[13:53:20] I think I may need to do it manually
[13:55:39] now you're way out of my wheelhouse
[14:01:50] peluse: i responded to your comments on the delete patch
[14:02:04] i still think one of the comments could use some rewording but it's not critical - i added my +2
[14:05:15] OK, I'll fix it on the last one in the chain, thanks
[15:39:05] *** Quits: gila (~gila@5ED4D979.cm-7-5d.dynamic.ziggo.nl) (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[16:16:42] jimharris: https://review.gerrithub.io/c/spdk/spdk/+/449471
[16:16:49] that's the patch with the stub in it, so we can update rocksdb
[16:24:57] +2
[16:25:39] I probably should wait for another review before merging it
[16:32:23] @jimharris I just posted a new update on https://github.com/spdk/spdk/issues/725
[16:33:09] I haven't been able to reproduce it without our management daemon running and issuing all the rpc commands to remove and readd the drive to iscsi_tgtd
[16:38:56] thanks jrlusby
[16:39:55] i haven't had a chance to spend any time looking at this yet
[16:40:10] but it will definitely get some attention at the bug scrub next wednesday
[17:16:34] okay
[17:21:03] @jimharris side question, I noticed when I'm using fio with libaio through the kernel initiator to spdk and I kick the underlying drive, the fio process seems to continue to make progress and reports iops still happening
[17:21:44] which seems like complete gibberish and impossible to me
[17:22:15] I'm wondering if you have any thoughts on why it behaves the way it does
[19:39:01] *** Joins: travis-ci (~travis-ci@ec2-3-87-219-74.compute-1.amazonaws.com)
[19:39:02] (spdk/master) reduce: fix chunk_is_compressed calculation (Jim Harris)
[19:39:03] Diff URL: https://github.com/spdk/spdk/compare/fb51565a59f5...df082c931e5a
[19:39:03] *** Parts: travis-ci (~travis-ci@ec2-3-87-219-74.compute-1.amazonaws.com) ()
[19:40:56] *** Joins: travis-ci (~travis-ci@ec2-3-82-63-17.compute-1.amazonaws.com)
[19:40:57] (spdk/master) app, log: clarify how to enable log flags (Jim Harris)
[19:40:58] Diff URL: https://github.com/spdk/spdk/compare/df082c931e5a...11b38a585aa1
[19:40:58] *** Parts: travis-ci (~travis-ci@ec2-3-82-63-17.compute-1.amazonaws.com) ()
[19:43:01] *** Joins: travis-ci (~travis-ci@ec2-3-87-81-182.compute-1.amazonaws.com)
[19:43:02] (spdk/master) blobfs: Add the return value check for calling cache_append_buffer (Ziye Yang)
[19:43:03] Diff URL: https://github.com/spdk/spdk/compare/11b38a585aa1...b151999f06df
[19:43:03] *** Parts: travis-ci (~travis-ci@ec2-3-87-81-182.compute-1.amazonaws.com) ()
[21:46:16] *** Joins: travis-ci (~travis-ci@ec2-54-87-62-101.compute-1.amazonaws.com)
[21:46:17] (spdk/master) scripts/perf: Update results parsing in nvmf benchmark scripts (Karol Latecki)
[21:46:17] Diff URL: https://github.com/spdk/spdk/compare/82645a63d380...02d7812f461c
[21:46:17] *** Parts: travis-ci (~travis-ci@ec2-54-87-62-101.compute-1.amazonaws.com) ()
[21:48:04] *** Joins: travis-ci (~travis-ci@ec2-54-165-144-236.compute-1.amazonaws.com)
[21:48:05] (spdk/master) bdev/compress: insert vol unload into destruct path (paul luse)
[21:48:06] Diff URL: https://github.com/spdk/spdk/compare/02d7812f461c...29b446a1bc3c
[21:48:06] *** Parts: travis-ci (~travis-ci@ec2-54-165-144-236.compute-1.amazonaws.com) ()
[21:49:26] *** Joins: travis-ci (~travis-ci@ec2-54-83-143-203.compute-1.amazonaws.com)
[21:49:27] (spdk/master) bdev/ftl: defer bdev initialization (Konrad Sztyber)
[21:49:28] Diff URL: https://github.com/spdk/spdk/compare/29b446a1bc3c...1d9820817607
[21:49:29] *** Parts: travis-ci (~travis-ci@ec2-54-83-143-203.compute-1.amazonaws.com) ()
[23:55:31] Project autotest-nightly build #443: SUCCESS in 32 min. See https://ci.spdk.io/spdk-jenkins for results.