x/build/cmd/coordinator: tracking bug for running benchmarks in the coordinator. #19871
I anticipate that pre- and post-submit benchmark runs will be somewhat different (e.g. expected latency, # of iterations, etc.), and post-submit benchmark runs require trend analysis, so I am focusing on pre-submit trybot benchmark runs at the moment. We've already had several design discussions about this; I'm going to attempt to replay them into this issue for posterity.
For trybots: currently we: …
If machines die, they can resume later just from snapshots, bypassing make.bash. The untarring of the snapshot to the tmpfs is fast. I think benchmarks should work similarly: they should be additional work units, like unit tests, sharded over machines.
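For context, here's a minimal sketch of what such a snapshot-based benchmark work unit could look like against the x/build/buildlet client. The snapshot URL, target directory, and benchmark command are placeholders, and the Client method signatures are recalled from that era of the package, so they may not match exactly:

```go
package benchsketch

import (
	"io"

	"golang.org/x/build/buildlet"
)

// runBenchUnit is an illustrative sketch, not coordinator code: it
// restores a make.bash snapshot onto a fresh buildlet (skipping
// make.bash entirely) and runs one benchmark work unit, mirroring
// how sharded test units already work.
func runBenchUnit(bc *buildlet.Client, snapshotURL string, out io.Writer) error {
	// Untar the pre-built snapshot into the buildlet's work dir.
	if err := bc.PutTarFromURL(snapshotURL, "go"); err != nil {
		return err
	}
	// Run one benchmark as a single work unit. remoteErr is a failure
	// of the command itself; execErr is a failure to talk to the buildlet.
	remoteErr, execErr := bc.Exec("go/bin/go", buildlet.ExecOpts{
		Output: out,
		Args:   []string{"test", "-run=^$", "-bench=.", "runtime"},
	})
	if execErr != nil {
		return execErr
	}
	return remoteErr
}
```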
Perf Trybots Sketch (reviewed Mar 3 by @bradfitz) We want to collect benchmark data as part of the normal Gerrit CL process. To effect this, I propose the following modifications to the build coordinator:
Builder choice: Some builders are probably too slow to run benchmarks on. I suggest we initially run benchmarks on:
@bradfitz's comments:
- Q: there's no guarantee [the base commit's snapshot] exists. Is that a problem?
- Q: how many iterations are you going to run to get statistically interesting numbers?
- Q: [freebsd-amd64-gce101] is just another amd64, so I'm not sure it helps so much. Seems overkill for trybots. I even think Windows is a bit redundant, but justifiable due to its drastic difference. Linux and FreeBSD are relatively similar. Mostly, seeing three different architectures is the interesting bit. I think running builders on three different Intel CPU families (in different GCE zones would be even more interesting).
Email thread with @bradfitz about how to fit benchmarks into the existing work units for make and test.

@quentinmit: I know you have your goal of posting trybot results in <5m. Obviously, that's simply not enough time to also run benchmarks, and we can't (I think?) parallelize the benchmarks, because we'd like them to come from the same machine. Currently, with 5 iterations of the benchmarks, it's taking 10-20m to complete. So I think that leads me to "we should post make + test results first, and then follow up with bench results in a separate Gerrit comment". Do you think that's a reasonable UI? Alternatively, we could include in the existing Gerrit post "benchmarks are being run, ", and have the benchmark results just stream in at that page without making another post when they're done. The question then becomes how to fit that into trySet. There's an ominous comment in buildStatus.build: … Ideally, we would like to log the … Sorry for the long-winded e-mail. Does this all sound okay?

@bradfitz: There's been some progress towards that in the past year (unique build IDs and try IDs, used mostly for BigQuery), but if we finished that, then we'd have a great place to reference perf data too. So we'd record the Trybot-Result + comment on Gerrit and then keep running benchmarks. People interested in benchmarks can get to it from the original already-spammed comment on Gerrit. Perhaps it should only do a follow-up comment if the benchmark found a notable change.

@quentinmit: …

@bradfitz: I'd prefer it to be a logically separate work unit, but do the buildlet sharing as an optimization via the buildlet pool layer. There's an open bug about that somewhere: if a buildlet is no longer being used (especially ones that are slow to create), and it's never seen a failure, and has no new processes running from when it started, then add it back to the free pool, at least for everything but building a release, where we really want things to be hermetic.

@quentinmit: I could certainly build a mechanism to pass off the buildlet client to a logically separate work unit, but then I also run into the question of how to handle logging. Right now it seems like there's an assumption that there is only one buildStatus for each commit and buildlet type. I think I would have to break that assumption if we want to have the "bench" buildStatus separate from the "make and test" buildStatus. (Code-wise, the benchmark stuff is already logically separate; it has a single entry point of bs.runBench().)

@bradfitz: I'd rather have a simple design (benchmarks always start with a fresh buildlet, as a new work unit) and pay the cost of untarring the work that was just done a moment ago in the uncontended case. But in the contended case (like this morning), we'd free up the buildlets for important work (trybots, building the most recent commit of release-branch.go1.8, etc.) and catch up on benchmarking later when idle.

@quentinmit: Okay, with that, I believe that brings this issue up to the current state of discussion.
Okay, so, if I understand your proposal correctly, you want us to treat benchmarks as another type of helper buildlet, so we untar the snapshot built by make.bash and then run the benchmarks in a fresh buildlet. Understood, and that makes sense (bearing in mind the overhead of starting another VM, which can be mitigated with your reuse proposal). In the meantime, until we do that, I'd like to just put that logic here into … Now, given that, I'm still unsure of exactly how to tie that into the existing …
I want to treat each benchmark as a unit of work that needs to be done, just like a "go tool dist test --list" work unit. I don't care whether it runs on the initial buildlet or on one of the helper buildlets. Ideally they'd run on all N of them, including the original one, in parallel. (I assume we're running more than 1 benchmark, and running it more than 1 time.)
We already have the buildlets available. We just created ~N=5 to run unit tests on, and now they're idle. (Also, the Kubernetes buildlets create in seconds. There's not much overhead there.) You'd need to modify …
Let's not cut corners. I have to maintain this. If there's a dependency to do, do that first.
The logic for what?
Reusing the idle buildlets is fine. Other question: are you only doing benchmarks for the main repo ("go") or also for subrepos?
- What work does the benchmarking need to do, and what environment does it need? It needs the output from make.bash.
- How many benchmarks does it run, and which, and how many iterations?
- Can the iterations run on different VMs if they're the same processors/sizes?
- Does the parent commit need to be submitted?
- Can we use benchmark output computed previously for the parent, if it's from the same VM processors/sizes?
Okay, yes. I am starting with the assumption that we need to run all benchmarks on a single buildlet, so that the results are comparable. With the CL as currently implemented, we are running 4 separate benchmark binaries, and we are running each of them 5 times. At a minimum, I think we need to make sure we run the same set of benchmarks on each buildlet, so we could shard it 5 ways, but not 20 ways (see the sketch after this comment). I don't know yet how much variance that introduces.
I think this conflicts with the fact that we may need to wait for the parent commit to make + test, and we don't want to tie up buildlets while waiting.
The logic for deciding whether a buildlet can be reused for benchmarks.
At the moment, only the go repo is in scope. Running benchmarks in subrepos is also interesting, but I'm not yet tackling that because I haven't designed a way to decide which benchmarks to run.
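To make the "5 ways, but not 20 ways" constraint concrete, here is a minimal sketch (illustrative names, not actual coordinator code): each shard runs the complete set of benchmark binaries once, so every buildlet sees the same workload.

```go
package main

import "fmt"

// shardByIteration returns one shard per iteration. With 4 binaries
// and 5 iterations this yields 5 shards of 4 binaries each, never
// 20 shards of one (binary, iteration) pair.
func shardByIteration(binaries []string, iterations int) [][]string {
	shards := make([][]string, iterations)
	for i := range shards {
		shards[i] = append([]string(nil), binaries...)
	}
	return shards
}

func main() {
	// Hypothetical binary names, for illustration only.
	binaries := []string{"go1.test", "http.test", "json.test", "garbage.test"}
	for i, s := range shardByIteration(binaries, 5) {
		fmt.Printf("buildlet %d runs: %v\n", i, s)
	}
}
```

Sharding by binary instead would violate the same-set-per-buildlet requirement: per-buildlet variance would then bias individual benchmarks rather than whole iterations.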
We need to have built benchmark binaries (which could have been cross-compiled, I suppose), and then we just need to run them on a buildlet. We do require a …
I have the assumption that the iterations can't run on different VMs, but see above.
The parent commit does not need to be submitted. It does need to have been built and tested. We can't use previously computed benchmark output because there is far too much variation on GCE. (We could of course use a pre-compiled benchmark binary for the parent, however.)
Related: #19327
How long does each of the ~5 benchmark binaries take to run? For a given binary, do you want to run the old-vs-new binaries sequentially, like: old, old, old, new, new, new?
Or interleaved old/new, like: old, new, old, new, old, new?

I can see some advantage of interleaving them (see the sketch below) if the run times are long and something unrelated on the VM (a different customer) changes performance somehow.

Implementation-wise: if you want this as part of trybots, we really need to work to minimize overlap in what runs when. The benchmarks can be sharded, and since we already have sharded testing buildlets, that won't add much. I'm more concerned about the extra … Refactoring the helper buildlets code from being just about test sharding to being earlier and more general (and letting you ask for just 1 immediately vs. the timer creating all N helpers at once) will be the trickiest part.

As for sharding the benchmarks out over buildlets, I would treat them as normal test units, modifying … Then you'd modify …

I drew a picture of how the sharding works for tests earlier today, and @adams-sarah took a picture of it, but I don't think it was a great drawing on its own. I can probably make a Gantt chart of each buildlet and what it's doing if that'd help.
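For illustration, a hedged sketch of that interleaved ordering; runBinary stands in for however a buildlet would actually execute the binary, and nothing here is coordinator code:

```go
package main

import "fmt"

// runInterleaved alternates old/new runs of one benchmark binary so
// that a slow drift in VM performance (e.g. a noisy neighbor) biases
// both sides roughly equally, instead of penalizing whichever binary
// happened to run last.
func runInterleaved(runBinary func(path string) error, oldBin, newBin string, iterations int) error {
	for i := 0; i < iterations; i++ {
		if err := runBinary(oldBin); err != nil {
			return err
		}
		if err := runBinary(newBin); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	run := func(path string) error { fmt.Println("run", path); return nil }
	// Prints old, new, old, new, old, new rather than
	// old, old, old, new, new, new.
	_ = runInterleaved(run, "go1.test.old", "go1.test.new", 3)
}
```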
As they say, a caption is worth a thousand words.
I'll recreate it with words. Thanks, @adams-sarah.
I've been logging that with spans, and we have a fair amount of data now. Do you have any reporting tools, or should I just dig into the datastore myself? From a quick glance, it looks like the go1 binary is the worst, at ~60s per run. The other benchmark binaries are around 10-20s each. (go1 contains many benchmarks, of course, so we can split it in half to get 30s, or in three to get 20s; see the sketch after this comment.)
I agree, we definitely want to interleave them.
I don't know what you mean by "minimize overlap". Do you mean that we should try to schedule Kube pods on different hosts?
I don't think it makes sense to rebuild the parent commit if a snapshot is not available. We don't even necessarily know where to fetch it from (since it could be in a separate Gerrit CL, or on master). I think we should just wait until the parent commit has been built normally. This means we might take longer than 5m if a whole train of CLs is pushed at once, but I think that's a reasonable tradeoff (and we might wait a while in that case anyway before we can get a buildlet to run tests, too). When buildlet scheduling is implemented, we can teach the scheduling to run CLs in order from oldest to newest.
Yeah, that makes sense. Initially, I'd like to punt on this change and instead just use a completely separate buildlet for benchmarks in parallel with the sharded test buildlets. Then reusing the same buildlets just becomes an optimization.
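One way to split a large binary like go1 is by benchmark name, using the standard -test.bench regexp filter of compiled test binaries. A minimal sketch, with a few illustrative go1 benchmark names (the real list would come from enumerating the binary):

```go
package main

import (
	"fmt"
	"strings"
)

// shardArgs returns the -test.bench flag for shard i of n: each shard
// runs an explicit alternation of its share of the benchmarks, so the
// shards together cover the binary exactly once.
func shardArgs(benchNames []string, i, n int) string {
	var mine []string
	for j, name := range benchNames {
		if j%n == i {
			mine = append(mine, "^"+name+"$")
		}
	}
	return "-test.bench=" + strings.Join(mine, "|")
}

func main() {
	// A few real go1 benchmark names, as a stand-in for the full list.
	names := []string{"BenchmarkBinaryTree17", "BenchmarkFannkuch11", "BenchmarkGobDecode"}
	for i := 0; i < 2; i++ {
		// Splitting go1 in half would bring ~60s per run down to ~30s.
		fmt.Println(shardArgs(names, i, 2))
	}
}
```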
x/build/cmd/buildstats -sync will sync from datastore to bigquery, and then you can write SQL.
Yeah, we'll want to do that.
Er, sorry, that sentence made little sense. I meant we want to maximize overlap, to not have idle buildlets while we do many steps in serial. We want to take advantage of potential parallelism, as it currently does.
You just fetch it from the gitmirror, which will return it regardless of where it is.
Idling while waiting for a buildlet isn't acceptable. We're not going to do that.
That's actually not the priority that was proposed in #19178. Recent things are more important than things 5 commits back.
Sure, how about you use one buildlet (maybe one total for now, until #19178 can prioritize buildlet users) and do your work totally async with the trybots. The trybots can write "This will eventually be available at $URL", and users can just poll that URL, or your process can leave a comment saying the results are available, if it detected anything interesting.
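For illustration, the client side of that polling flow could be as simple as the sketch below; the URL and the 30-second interval are placeholders, not anything the coordinator defines:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Hypothetical stand-in for the $URL the trybot comment points at.
	const url = "https://example.com/bench/results"
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("benchmark results are ready:", url)
				return
			}
		}
		// Arbitrary poll interval; results appear whenever the async
		// benchmark run finishes.
		time.Sleep(30 * time.Second)
	}
}
```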
CL https://golang.org/cl/38306 mentions this issue.
Benchmarks are treated as unit tests and distributed to the test helpers, which allows them to fit in our 5m trybot budget. Currently we only run the go1 and x/benchmarks. Running package benchmarks is a TODO. This feature is disabled by default, and is enabled by the "farmer-run-bench" project attribute.

Updates golang/go#19178
Updates golang/go#19871

Change-Id: I9c3a14da60c3662e7e2cb4e71953060915cc4364
Reviewed-on: https://go-review.googlesource.com/38306
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
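The "farmer-run-bench" project attribute is a GCE metadata attribute. As a rough idea of how such a gate can be read, here's a hedged sketch using the cloud.google.com/go/compute/metadata package; the coordinator's actual gating code isn't shown in this thread, and treating the value "true" as "on" is an assumption:

```go
package main

import (
	"log"

	"cloud.google.com/go/compute/metadata"
)

// benchmarksEnabled reads the feature gate from GCE project metadata.
// An unset attribute (or running outside GCE) leaves the feature off,
// matching the disabled-by-default behavior described above.
func benchmarksEnabled() bool {
	v, err := metadata.ProjectAttributeValue("farmer-run-bench")
	if err != nil {
		return false
	}
	return v == "true" // assumed value convention, for illustration
}

func main() {
	log.Printf("benchmarks enabled: %v", benchmarksEnabled())
}
```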
CL https://golang.org/cl/43052 mentions this issue.
This was lost in one of the many refactorings of CL 38306.

Updates golang/go#19871

Change-Id: I4514d4350feb1420861eae9a9dd021fee5b5ba6f
Reviewed-on: https://go-review.googlesource.com/43052
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
CL https://golang.org/cl/43532 mentions this issue.
- Enable benchmarks on only linux-amd64, with 3 additional test helpers, to ensure we fit in the 5m trybot budget (disabling freebsd-amd64-gce101, linux-386, linux-arm, windows-amd64-2008, and windows-386-2008).
- Run benchmarks with GOMAXPROCS=2 to reduce variation in benchmarks. See https://perf.golang.org/search?q=upload%3A20170512.4+cores%3Aall+parallel%3A4+%7C+gomaxprocs%3A2+vs+gomaxprocs%3A16+vs+gomaxprocs%3A32

Updates golang/go#19871

Change-Id: I8a28127be86c0c8691605cae4c9d603e3c8122cc
Reviewed-on: https://go-review.googlesource.com/43532
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Avelino <t@avelino.xxx>
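For concreteness, running a benchmark binary under GOMAXPROCS=2 looks roughly like this sketch; the binary path and flags are placeholders:

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Hypothetical benchmark binary; -test.run=^$ skips tests and
	// -test.bench=. runs every benchmark in the binary.
	cmd := exec.Command("./go1.test", "-test.bench=.", "-test.run=^$")
	// Pin GOMAXPROCS regardless of the machine's core count, so runs
	// on differently sized buildlets remain comparable.
	cmd.Env = append(os.Environ(), "GOMAXPROCS=2")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```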
CL https://golang.org/cl/44175 mentions this issue.
CL https://golang.org/cl/44176 mentions this issue.
CL https://golang.org/cl/44173 mentions this issue.
CL https://golang.org/cl/44174 mentions this issue.
CL https://golang.org/cl/44211 mentions this issue.
This refactoring is in preparation for moving benchmarks.go to a separate package.

Updates golang/go#19871

Change-Id: I2b30bf5416937e52b603aec8102131fdccceee42
Reviewed-on: https://go-review.googlesource.com/44173
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This moves main.spanLogger and main.eventSpan to spanlog.Logger and spanlog.Span. This change is necessary to start pulling build and benchmark code out of coordinator. The interfaces cannot simply be copied because Logger contains a function whose return type is Span, so the interface must be defined in exactly one place.

Updates golang/go#19871

Change-Id: I0a48192e6a5c8f5d0445f4f3d3cce8d91c90f8d3
Reviewed-on: https://go-review.googlesource.com/44174
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This package contains the BuilderRev type, moved from cmd/coordinator. The rest of the CL is simply updating coordinator with the new exported names of the type and its fields. This refactoring is in preparation for moving the benchmark building and running code into a separate package. (Most of the diff could have been avoided with a type alias, but I assume we'd rather not do that.)

Updates golang/go#19871

Change-Id: Ib6ce49431c8529d6b4e72725d3cd652b9d0160db
Reviewed-on: https://go-review.googlesource.com/44175
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This moves RunMake from coordinator.go, allowing other packages to build Go on a buildlet.

Updates golang/go#19871

Change-Id: Ic0e7c056020ef434f0620ae816f52b9112ea1c8d
Reviewed-on: https://go-review.googlesource.com/44176
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This moves getSourceTgz* to a new package called sourcecache. The original functions couldn't be left in coordinator.go because that would cause a single process to have multiple caches. The code for building and running benchmarks is moved into the buildgo package, and the public API becomes GoBuilder.EnumerateBenchmarks and Run on the resulting BenchmarkItem objects.

Updates golang/go#19871

Change-Id: I28b660e1cdaa6d1c6b0378c08de30f5e58316cc6
Reviewed-on: https://go-review.googlesource.com/44211
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
CL https://golang.org/cl/45671 mentions this issue.
CL https://golang.org/cl/45672 mentions this issue.
CL https://golang.org/cl/45670 mentions this issue.
Change https://golang.org/cl/51590 mentions this issue:
Instead of polling Gerrit every 60 seconds, poll maintnerd every second. This should improve TryBot-requested to TryBot-running latency by 30 seconds on average. (Ideally it'd long poll for changes and not even have ~500ms delay, but this is an improvement.)

Also, in the process this does most of the work for golang/go#17626. And this cleans up the repo head tracking complication in the coordinator and just asks maintner for it instead.

Running benchmarks in the coordinator has been disabled since 2017-06-23, but this CL disables it further, removing some code it'd need to work. But considering that benchmarking would need code repairs to get working again anyway, I consider this acceptable in the short term. I left TODO notes.

Updates golang/go#17626
Updates golang/go#19871

Change-Id: Idab43f65a9cca861bffb37748b036a5c9baee864
Reviewed-on: https://go-review.googlesource.com/51590
Reviewed-by: Andrew Bonventre <andybons@golang.org>
Yup
(Split off #8930, "repair perf builders", the https://perf.golang.org/ work)
This is a tracking bug for running benchmarks in the coordinator, both pre-submit (trybots) and post-submit.