[00:11:18] *** Joins: gila (~gila@94-212-217-121.cable.dynamic.v4.ziggo.nl)
[00:14:34] Project autotest-nightly build #484: FAILURE in 38 min. See https://dqtibwqq6s6ux.cloudfront.net for results.
[00:34:12] *** Joins: tomzawadzki (uid327004@gateway/web/irccloud.com/x-tpazlcnmnpozkyct)
[01:50:34] Project autotest-nightly build #485: STILL FAILING in 31 min. See https://dqtibwqq6s6ux.cloudfront.net for results.
[02:39:03] Project autotest-nightly build #486: STILL FAILING in 25 min. See https://dqtibwqq6s6ux.cloudfront.net for results.
[07:48:05] *** Joins: travis-ci (~travis-ci@ec2-3-95-29-95.compute-1.amazonaws.com)
[07:48:05] (spdk/master) ut/blobstore: write and allocate clusters from multiple threads (Tomasz Zawadzki)
[07:48:05] Diff URL: https://github.com/spdk/spdk/compare/e600f0967665...b5f96b0ea5d2
[07:48:05] *** Parts: travis-ci (~travis-ci@ec2-3-95-29-95.compute-1.amazonaws.com) ()
[07:50:03] *** Quits: lhodev (~lhodev@66-90-218-190.dyn.grandenetworks.net) (Ping timeout: 245 seconds)
[07:51:15] *** Joins: lhodev (~lhodev@inet-hqmc01-o.oracle.com)
[07:53:17] *** Quits: lhodev (~lhodev@inet-hqmc01-o.oracle.com) (Remote host closed the connection)
[07:53:48] *** Joins: lhodev (~lhodev@inet-hqmc01-o.oracle.com)
[07:55:50] *** Quits: lhodev (~lhodev@inet-hqmc01-o.oracle.com) (Remote host closed the connection)
[07:56:22] *** Joins: lhodev (~lhodev@inet-hqmc01-o.oracle.com)
[07:58:24] *** Quits: lhodev (~lhodev@inet-hqmc01-o.oracle.com) (Remote host closed the connection)
[07:58:55] *** Joins: lhodev (~lhodev@inet-hqmc01-o.oracle.com)
[08:00:57] *** Quits: lhodev (~lhodev@inet-hqmc01-o.oracle.com) (Remote host closed the connection)
[08:12:21] *** Joins: felipef (~felipef@62.254.189.133)
[08:50:36] bwalker: do we want to include those in the changelog? nvme sq batching, i/oat sq&cq batching, PGO, new io_uring bdev module
[09:27:25] yes - the nvme batching is already in the changelog
[09:33:25] darsto: are you currently working on changelog patches or do you want me to finish them up?
[09:34:38] i'm done
[09:34:57] i'm not sure how to describe those things i've listed
[09:35:24] ok I'll get them
[09:38:05] jimharris: Can you review just the first 3 patches starting here: https://review.gerrithub.io/c/spdk/spdk/+/452636
[09:41:40] done - I reviewed the first 4 patches
[09:46:45] if you find something else that I should put into the changelog, feel free to ping me
[09:46:51] i'll be around for the rest of the day
[10:03:05] sethhowe: does Jenkins clone Rocksdb from github or gerrithub?
[10:03:17] i'm guessing gerrithub, since i don't see the rocksdb_commit_id on github
[10:04:26] Yeah, GerritHub
[10:04:57] But it's not doing a full clone every time since we are using the reference repo.
[10:16:00] darsto: are you going to make the github release page?
[10:16:07] I just uploaded the remaining changelog updates
[10:17:10] okay, sure
[10:18:53] 859 commits in 90 days
[10:19:46] 60 business days really
[10:24:05] that's the average
[10:24:20] we always have ~800 commits for every release
[10:24:20] yeah we seem fairly consistent on pacing
[10:54:33] *** Quits: felipef (~felipef@62.254.189.133) (Remote host closed the connection)
[11:08:06] *** Quits: tomzawadzki (uid327004@gateway/web/irccloud.com/x-tpazlcnmnpozkyct) (Quit: Connection closed for inactivity)
[11:50:03] the thread_local destructor isn't really working
[11:50:21] but we can suppress these specific leaks from asan
[12:02:37] i think this might be an ASAN bug
[12:03:09] it checks for the leaks before the destructor gets run - I can see the destructor running if I suppress the leaks (but without suppression I never see the destructor run)
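A minimal sketch of the pattern being discussed, with hypothetical names (illustrative code, not SPDK's): a per-thread buffer is freed by a pthread key destructor, but LeakSanitizer performs its end-of-process leak scan before that destructor has run, so the allocation gets reported as a leak. The `get_thread_buf` helper is invented for illustration; `__lsan_default_suppressions` is part of the sanitizer runtime's public interface for embedding default suppressions in a binary.

```c
/*
 * Hypothetical example (not SPDK code): a lazily allocated per-thread
 * buffer released by a pthread key destructor. LeakSanitizer can scan
 * for leaks before this destructor runs, so the allocation below may
 * be reported as a leak even though it is eventually freed.
 */
#include <pthread.h>
#include <stdlib.h>

static pthread_key_t g_buf_key;
static pthread_once_t g_buf_once = PTHREAD_ONCE_INIT;

static void
buf_destructor(void *buf)
{
	/* Runs at thread exit - potentially after LSan's leak scan. */
	free(buf);
}

static void
buf_key_init(void)
{
	(void)pthread_key_create(&g_buf_key, buf_destructor);
}

void *
get_thread_buf(void)
{
	void *buf;

	pthread_once(&g_buf_once, buf_key_init);
	buf = pthread_getspecific(g_buf_key);
	if (buf == NULL) {
		buf = malloc(4096);
		(void)pthread_setspecific(g_buf_key, buf);
	}
	return buf;
}

/*
 * One way to suppress the known-benign report, as suggested above:
 * embed a default LSan suppression in the binary. Equivalently, run
 * with LSAN_OPTIONS=suppressions=<file> where the file contains a
 * line such as "leak:get_thread_buf".
 */
const char *
__lsan_default_suppressions(void)
{
	return "leak:get_thread_buf\n";
}
```

Either suppression route only silences the report; the destructor itself still runs, consistent with the observation above that it is seen firing once the leaks are suppressed.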
[12:30:20] *** Joins: pawelkax (~pawelkax@134.134.139.83)
[12:32:47] darsto: you ready to hit tag?
[12:33:42] I can if you haven't drafted it yet
[12:34:07] the tip of master is 19.04 atm, ready for the github tag
[12:35:06] i'm getting stats for the announcement on the mailing list
[12:38:57] *** Joins: travis-ci (~travis-ci@ec2-3-87-12-86.compute-1.amazonaws.com)
[12:38:57] (spdk/master) SPDK 19.04 (Tomasz Zawadzki)
[12:38:57] Diff URL: https://github.com/spdk/spdk/compare/b5f96b0ea5d2...f19fea15c18b
[12:38:57] *** Parts: travis-ci (~travis-ci@ec2-3-87-12-86.compute-1.amazonaws.com) ()
[12:56:51] Any ideas what I should do about these test failures? https://dqtibwqq6s6ux.cloudfront.net/public_build/autotest-per-patch_29731.html
[12:58:06] Looks like some missing python library on centos and ubuntu16 (pexpect) and not enough memory on freebsd?
[12:59:27] AFAIK, I didn't introduce any new python libraries or tweak any memory settings for launching the targets
[12:59:29] weird, I wonder what happened to that python module
[12:59:42] there must have been an update to the test systems in the last hour
[13:00:12] I'll go through it and figure out what happened
[13:00:32] Ok, thanks bwalker
[13:00:56] it could be that a new VM instance was spun up that's not configured correctly, and the scheduler happened to run on that
[13:01:14] if you add a comment saying "retrigger", it will just run the tests again
[13:01:30] Ok, will do
[13:02:18] we're fighting through intermittent failures like this fairly often, with all sorts of root causes
[13:02:26] yeah - i was just looking at that failure - the freebsd one was especially odd
[13:02:29] so if you see things like that, always just ask here
[13:02:59] Right on
[13:03:51] *** Joins: lhodev (~lhodev@66-90-218-190.dyn.grandenetworks.net)
[13:03:51] it turns out running a cluster of this size with 100 test runs a day going through it is a pretty serious devops problem
[13:04:19] Definitely a challenge!
[13:04:45] managing OS updates and getting Jenkins to stop crashing and stuff are major items on our internal todo lists
[13:05:41] ok, I'm ready with the stats - I'll wait for Tomek and we'll do the release any time now
[13:05:47] sounds good
[13:06:12] Are the jenkins slaves all VMs, containers, etc?
[13:06:22] it's a mix of VMs and physical systems
[13:06:36] VMs for scaling out, physical systems to test specific hardware
[13:06:55] the hardest part of testing SPDK is that it contains device drivers, so you need real devices
[13:07:27] Yeah, makes sense. Not an easy problem to solve when you need real devices
[13:10:33] https://review.gerrithub.io/c/spdk/spdk/+/452274/2 retrigger look right? Should I expect a comment from the build bot saying it's kicking off a new build?
[13:12:24] I'm waiting for it to show up
[13:13:00] This is the status page for the CI system: https://dqtibwqq6s6ux.cloudfront.net/
[13:13:20] I expect it to show up in that pending approval section
[13:13:33] ah ok, thanks
[13:13:58] there - it's at the top of the Build Queue now
[13:14:16] Awesome
[13:14:30] fingers crossed
[13:15:08] sethhose: could you force-update the docs on spdk.io?
[13:15:58] we've just merged some extra content. it'd be nice to have it at the time of the release
[13:17:34] sethhowe
[13:26:50] bwalker: can do.
[13:50:54] *** Joins: travis-ci (~travis-ci@ec2-18-206-38-63.compute-1.amazonaws.com)
[13:50:54] (spdk/v19.04) SPDK 19.04 (Tomasz Zawadzki)
[13:50:54] Diff URL: https://github.com/spdk/spdk/compare/v19.04
[13:50:54] *** Parts: travis-ci (~travis-ci@ec2-18-206-38-63.compute-1.amazonaws.com) ()
[14:30:53] *** Joins: tomzawadzki (uid327004@gateway/web/irccloud.com/x-skqvkwcmocjwwhho)
[14:34:01] release looks done to me
[14:34:03] SPDK 19.04 released. Hooray!
[14:34:19] I grabbed the tag from github and put it on gerrithub
[14:34:21] merged blog post
[14:34:32] and merged the 19.07-pre to master
[14:36:01] that explains why i get "Everything up-to-date" when trying to push to gerrithub
[14:36:05] thanks
[14:57:15] bwalker, jimharris, darsto: for all Intel folks on the channel - are you OK if I bring the ZNC bouncer down for about an hour to do an update, or would you prefer that I schedule something tomorrow?
[14:58:54] I'm fine if you do that
[14:59:56] i'm fine with either
[15:00:08] especially since i don't use the intel bouncer ;)
[15:01:11] OK sounds good. I'll bring it down now since there isn't much activity, and everything should come back up fine after the update.