So, I wouldn't have noticed this, but I have a benchmark for a pathological case, and the thing about the pathological case is that it only shows up with reasonably large inputs -- but when it does, the runtime explodes into minutes. So, of course, I run with `-benchtime 1x`.

... And it runs twice.

Looking at the "got here" output, it's pretty obvious why:

N:1, previousN:0, previousDuration:0
=>
N:1, previousN:1, previousDuration:85948
The thing is, when I've explicitly specified the iteration count, I probably don't actually need the initial calibrating run.
There's a reasonable workaround (make it a "Test" function which does its own timing), but it'd be really handy to have all those convenient benchmark things, like reporting memory usage. I thought of trying to use testing.Benchmark(), but it does the same thing -- it runs the function twice.
The best alternative I can think of is to make the exploding parameter a b.N value, and run with -benchtime 1000000x or something like that, but this is completely incompatible with the way any other test/benchmark would want to run, not to mention all the very thoughtful guidance not to vary quadratically on b.N.
Reproduced with go1.15, but I suspect it's always been that way since -benchtime was introduced.
Sample code:
https://play.golang.org/p/It6iFOHZldA