testing: add testing.B.Loop for iteration #61515
Comments
It's not clear to me which optimizations those would be. Certainly an unused return value should not be eliminated, but should function arguments be inlined? I have seen benchmarks that explicitly want to check inlining behavior, and also benchmarks that are accidentally inlinable. I expect that as inlining improves (#61502), the problem of accidentally-inlined arguments will only get worse — but if the … (I assume the user would have to create a closure, and then call that closure within the …
I think we would implicitly apply Keep to every function argument and result in the closure passed directly to Loop. I'm pretty sure that's what @rsc was thinking, too.
I definitely agree that this problem is only going to get worse. I'm not sure we need to inhibit inlining, but we need to inhibit optimizations that propagate information across this inlining boundary. That's why I think it works to think of this as applying an implicit Keep to every function argument and result in the closure, and to think of Keep as having a used and non-constant result. (Obviously that's not a very precise specification, though!)
Right. I think if the user specifically wants to benchmark the effect of constant propagation into an inlined function, they would add another layer of function call. We'd only apply the implicit Keep to the direct argument to Loop. That's fairly subtle, but I think such benchmarks are also rare. My other concern with the deoptimization aspect of this is what to do if Loop is called with something other than a function literal. We could say the implicit Keep only applies if it's called directly with a function literal, but that feels… even weirder. 😅 It may be that …
When b.Loop inlines, …
This proposal has been added to the active column of the proposals project.
The only real question about this was the auto-Keep. All the other benefits listed in the top comment are clear wins. Given that Keep is now likely accept, it seems like this one can be likely accept too.
Note that the current proposal is used as …
Based on the discussion above, this proposal seems like a likely accept.
b.Loop has real benefits separate from Keep, so it's worth doing even if we still have questions about Keep. So this seems like it can move to accept.
If #61405 is accepted it will be possible to range over integers. From the description: …

So there might soon be a new, better alternative for benchmark iteration. Does that argue for delaying to see whether another alternative is needed?
@timothy-king I don't think that matters here. Range-over-integer is fairly minor syntactic sugar. The issues that …
No change in consensus, so accepted. 🎉 |
I usually wrap a benchmarked op into a … It looks like passing a closure to the Loop function would eliminate that requirement, as the benchmarked body would be inside a function that can't be inlined anyway (unless the compiler handles Loop in some special way). (As opposed to being "inlined" inside the …
Update July 26, 2023: See this comment for the latest Loop API. The motivation and arguments for this proposal still otherwise apply in full, but the API has switched to the `for b.Loop() { ... }` form proposed in the Alternatives section.

Currently, Go benchmarks are required to repeat the body of the benchmark `(*testing.B).N` times. This approach minimizes measurement overhead, but it's error-prone and has many limitations:

- As we discovered in "cmd/vet: flag benchmarks that don't use b" (#38677), it's surprisingly common for benchmarks to simply forget to use `b.N`.
- While a vet check can pretty reliably detect forgotten uses of `b.N`, there's some evidence that many benchmarks use `b.N` incorrectly, such as using it to size the input to an algorithm rather than as an iteration count.
- Because the benchmark framework doesn't know when the `b.N` loop starts, if a benchmark has any non-trivial setup, it's important for it to use `(*testing.B).ResetTimer`. It's generally not clear what counts as non-trivial setup, and very hard to detect when `ResetTimer` is necessary.

Proposal
I propose that we add the following method to `testing.B` and encourage its use over `b.N`:

This API has several advantages over `b.N` loops:

- It cannot be misused for something other than an iteration count. It's still possible for a benchmark to forget entirely to use `b.Loop`, but that can be detected reliably by vet.
- The benchmarking framework can record time and other metrics around only the benchmarked operation, so benchmarks no longer need to use `ResetTimer` or be careful about their setup.
- Iteration ramp-up can be done entirely within `b.Loop`, which means that benchmark setup before `b.Loop` will happen once and only once, rather than at each ramp-up step. For benchmarks with non-trivial setup, this saves a lot of time. Notably, benchmarks with expensive setup can run for far longer than the specified `-benchtime` because of the large number of ramp-up steps (setup time is not counted toward the `-benchtime` threshold). It's also less error-prone than using a global `sync.Once` to reduce setup cost, which can have side effects on GC timing and other benchmarks if the computed results are large.
- As suggested by @rsc, `b.Loop` could be a clear signal to the compiler not to perform certain optimizations in the loop body that often quietly invalidate benchmark results.
- In the long term, we could collect distributions rather than just averages for benchmark metrics, which would enable deeper insights into benchmark results and far more powerful statistical methods, such as stationarity tests. The way this would work is that `b.Loop` would perform iteration ramp-up only to the point where it can amortize its measurement overhead (ramping up to, say, 1ms), and then repeat this short measurement loop many times until the total time reaches the specified `-benchtime`. For short benchmarks, this could easily gather 1,000 samples, rather than just a mean.

Alternatives
This proposal is complementary to `testing.Keep` (#61179). It's an alternative to `testing.B.Iterate` (originally in #48768, with discussion now merged into #61179), which essentially combines Keep and Loop. I believe Iterate could have all of the same benefits as Loop, but it's much clearer how to make Loop low-overhead. If Loop implicitly inhibits compiler optimizations in the body of its callback, then it has similar deoptimization benefits to Iterate. I would argue that Loop has a shallower learning curve than Iterate, though once users get used to either, they would probably have similar usability.

If #61405 (range-over-func) is accepted, it may be that we want the signature of `Loop` to be `Loop(op func() bool) bool`, which would allow benchmarks to be written as a range loop over `b.Loop`. It's not clear to me what this form should do if the body attempts to `break` or `return`.

Another option is to mimic `testing.PB.Next`. Here, the signature of Loop would be `Loop() bool` and it would be used as `for b.Loop() { ... }`. This is slightly harder to implement, but perhaps more ergonomic to use. It's more possible to misuse than the version of Loop that takes a callback (e.g., code could do something wrong with the result, or break out of the loop early). But unlike `b.N`, which is easy to misuse, this seems much harder to use incorrectly than to use correctly.

cc @bcmills @rsc