cmd/go: -cover stdout format is not consistent between failed & pass #39070
Comments
We wrote a tool, GoCop, to enable us to identify and track flaky tests. It is designed to parse the stdout of go test. When tests fail, the inconsistent placement of the coverage % output leads to data that the tool misses. When considering the flakiness of a package, the coverage info is something we want to track regardless of the package result.
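For illustration, here is a minimal sketch of the kind of parsing such a tool has to do, assuming the coverage figure appears somewhere in the go test output as a line containing "coverage: NN.N% of statements". The exact placement of that line is what this issue is about, and GoCop's real implementation is not shown here:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// coverageRe matches the "coverage: NN.N% of statements" fragment that
// `go test -cover` prints. For a passing package it is attached to the
// "ok" summary line; for a failing package it shows up elsewhere, which
// is the inconsistency discussed in this issue.
var coverageRe = regexp.MustCompile(`coverage: (\d+(?:\.\d+)?)% of statements`)

func main() {
	// Read `go test -cover` output from stdin, e.g.
	//   go test -cover ./... | parse-coverage
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if m := coverageRe.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("found coverage: %s%%\n", m[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
	}
}
```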
I submitted a change to address this issue; I will update the change with integration tests as requested.
Change https://golang.org/cl/233617 mentions this issue:
As I asked on the CL review: why do we believe that the coverage numbers are meaningful for failed tests? I suspect that in practice the coverage for failed tests will be systematically lower than for passing tests, because many tests stop executing early when they fail. So it generally will not be meaningful to compare coverage between two test runs with different failing tests or different failure modes; and if you have a flaky test with a consistent failure mode over time, why not fix the test instead of worrying about its coverage?
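To make the early-exit point concrete, here is a hypothetical test (not taken from the issue) where a failure stops the test immediately, so the code the remaining checks would have exercised never counts toward coverage:

```go
package example

import (
	"errors"
	"testing"
)

// Process is a stand-in for whatever the package under test does.
func Process(s string) (string, error) {
	if s == "" {
		return "", errors.New("empty input")
	}
	return "expected", nil
}

func TestProcess(t *testing.T) {
	got, err := Process("") // fails on purpose in this sketch
	if err != nil {
		// t.Fatal stops the test here; everything below is skipped,
		// so the code those checks would exercise is never counted
		// as covered.
		t.Fatal(err)
	}
	if got != "expected" {
		t.Errorf("Process(%q) = %q, want %q", "", got, "expected")
	}
}
```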
@bcmills Thanks for the question. The way I see it, the coverage data reported for a test run is unique to that run and should be recorded regardless of the outcome. The coverage potentially being lower for a failed run is no different from the coverage being lower for a passing run under different conditions. If the expectation is that the coverage % reflects the parameters and conditions of that particular test execution, then there is no logical reason for the reporting of this value to be inconsistent depending on the outcome of the execution.
FWIW, I agree that reporting coverage % on failure is more likely to be misleading than helpful. If the tests are failing, the coverage doesn't matter. And if they are failing early, the coverage will be somewhat meaningless. Doesn't seem worth the churn.
@rsc The same could then be said for execution duration, could it not?
Maybe, but "how long did it run before it failed?" is a more meaningful question than "what level of code coverage did it achieve before it failed?"

This discussion kind of trailed off, but I think the rough consensus here is that the output should be left as is.
What version of Go are you using (go version)?

Does this issue reproduce with the latest release?

Yes

What operating system and processor architecture are you using (go env)?

What did you do?

Run go test -cover on any set of packages including a failed test.

What did you expect to see?

What did you see instead?

The same behavior is observed via json formatting.
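For reference, the json case can be inspected with a small consumer of the event stream. A minimal sketch, assuming the event fields documented for go test -json / cmd/test2json (Time, Action, Package, Test, Elapsed, Output); how GoCop itself consumes this output is not shown in the issue. The coverage figure arrives as plain text inside an "output" event rather than as a structured field, so a consumer has to scan the Output strings for it:

```go
package main

import (
	"bufio"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"os"
	"strings"
	"time"
)

// testEvent mirrors the event shape documented for `go test -json`.
type testEvent struct {
	Time    time.Time
	Action  string
	Package string
	Test    string
	Elapsed float64
	Output  string
}

func main() {
	// Pipe `go test -cover -json ./...` into this program.
	dec := json.NewDecoder(bufio.NewReader(os.Stdin))
	for {
		var ev testEvent
		if err := dec.Decode(&ev); errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, "decode error:", err)
			return
		}
		// Look for the coverage text inside "output" events.
		if ev.Action == "output" && strings.Contains(ev.Output, "coverage:") {
			fmt.Printf("%s: %s", ev.Package, ev.Output)
		}
	}
}
```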