
proposal: testing: add a flag to report benchmark metrics for each run #51218

Closed

BrennaEpp opened this issue Feb 16, 2022 · 3 comments

I would like the testing package to be able to report metrics for each run, not just an aggregate.

I want to analyze and graph the data using external tools, so I need the runtime (and other metrics) of each iteration of the benchmark. The built-in benchmarking is great, but I need to see the distribution across individual iterations.
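
For illustration, a hand-rolled workaround for capturing per-iteration samples might look like the sketch below. `workUnderTest`, the benchmark name, and the output file are placeholders, not part of the testing API, and the extra `time.Now` calls add their own overhead:

```go
package mypkg_test

import (
	"fmt"
	"os"
	"testing"
	"time"
)

// workUnderTest stands in for whatever code is being benchmarked.
func workUnderTest() {
	time.Sleep(10 * time.Microsecond)
}

func BenchmarkPerIteration(b *testing.B) {
	samples := make([]time.Duration, 0, b.N)
	for i := 0; i < b.N; i++ {
		start := time.Now()
		workUnderTest()
		samples = append(samples, time.Since(start))
	}
	b.StopTimer()

	// Dump the raw per-iteration samples for external graphing tools.
	f, err := os.Create("iterations.txt")
	if err != nil {
		b.Fatal(err)
	}
	defer f.Close()
	for _, d := range samples {
		fmt.Fprintln(f, d.Nanoseconds())
	}
}
```

Note that the harness may invoke the benchmark function several times with increasing b.N, so only the last pass's samples survive in the file.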

gopherbot added this to the Proposal milestone Feb 16, 2022
davecheney (Contributor) commented Feb 16, 2022

The reason the benchmark tool suppresses the intermediate benchmark runs is that they are not statistically significant. Even the final benchmark number is not accurate in and of itself; it should be combined with flags like -count and tools like benchstat to account for warmup and CPU throttling issues.
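
As a sketch of that workflow (BenchmarkFoo is a placeholder name; benchstat comes from golang.org/x/perf/cmd/benchstat):

```sh
# Run the benchmark 10 times so there is one sample per run,
# then let benchstat summarize the distribution across those runs.
go test -run='^$' -bench=BenchmarkFoo -count=10 | tee bench.txt
benchstat bench.txt
```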

ianlancetaylor added this to Incoming in Proposals (old) Feb 16, 2022
ianlancetaylor changed the title from "proposal: testing(benchmark): add a flag to report metrics for each run" to "proposal: testing: add a flag to report benchmark metrics for each run" Feb 16, 2022
rsc (Contributor) commented Feb 16, 2022

As @davecheney said, -count is the way to get multiple runs. You can also lower -benchtime below the default of 1s if you want many short runs.

There is no way to get the distribution across every iteration of a particular run, because the benchmark functions themselves run the loop from 0 to b.N; the testing framework only observes the total.
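
For reference, this is the standard benchmark shape being described (BenchmarkFoo and doWork are placeholder names, not from the issue):

```go
func BenchmarkFoo(b *testing.B) {
	// The testing package chooses b.N and times this whole loop as a
	// single measurement, so individual iterations are never observed.
	for i := 0; i < b.N; i++ {
		doWork()
	}
}
```

With -count (and optionally a lower -benchtime), each run produces one aggregate line, and those per-run lines are the samples benchstat summarizes.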

rsc moved this from Incoming to Declined in Proposals (old) Feb 16, 2022
rsc (Contributor) commented Feb 16, 2022

No change in consensus, so declined.
— rsc for the proposal review group

rsc closed this as completed Feb 16, 2022
golang locked and limited conversation to collaborators Feb 16, 2023