
proposal: testing: re-print failed test names before exiting #41748

Closed
michael-schaller opened this issue Oct 2, 2020 · 10 comments

@michael-schaller (Contributor) commented Oct 2, 2020

Issue:
Currently, when a unit test suite fails, it only prints FAIL at the very end.
With very large test suites that log a lot and make heavy use of t.Parallel(), it can be hard to find the actual test failures in the output.
(The test suite in my case produces about 6500 lines of output.)

Proposal:
Re-print the failed test names before exiting.
Then the test suite can be re-run with -test.run=... to debug the failed test(s).

Code location in question:
https://github.com/golang/go/blob/go1.15.2/src/testing/testing.go#L1363-L1367

Additional notes:

@gopherbot gopherbot added this to the Proposal milestone Oct 2, 2020
@ALTree (Member) commented Oct 2, 2020

Re-print the failed tests before exiting.

What if all of them (or 80% of them) failed?

@michael-schaller michael-schaller changed the title proposal: testing: re-print failed tests before exiting proposal: testing: re-print failed test names before exiting Oct 2, 2020
@michael-schaller (Contributor, Author)

Re-print the failed tests before exiting.

What if all of them (or 80% of them) failed?

Then I would re-print the names of all failed tests.
Something like this:

--- FAILED TESTS:
  TestABC
  TestCDE
  ...
FAIL

@ALTree (Member) commented Oct 2, 2020

Thanks. Note that proposals to print a test summary at the bottom have been discussed in the past; see e.g. #27755 or #30507. Often the answer was "we're not changing the output; use -json to wrap the output and change it if you wish".
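As an illustration of that -json route, here is a minimal sketch of a wrapper (this program is hypothetical, not an existing tool; the event struct mirrors only the Action and Test fields of the records emitted by `go test -json`, documented under `go doc test2json`). It reads events on stdin and re-prints the failed test names at the end:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event holds the two test2json fields needed here; `go test -json`
// emits one JSON record per line with these (and other) fields.
type event struct {
	Action string
	Test   string
}

// failedTests returns the names of tests whose Action was "fail",
// skipping the package-level "fail" event (which has an empty Test).
func failedTests(lines []string) []string {
	var failed []string
	for _, line := range lines {
		var ev event
		if json.Unmarshal([]byte(line), &ev) != nil {
			continue // ignore lines that are not JSON events
		}
		if ev.Action == "fail" && ev.Test != "" {
			failed = append(failed, ev.Test)
		}
	}
	return failed
}

func main() {
	var lines []string
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		lines = append(lines, sc.Text())
	}
	fmt.Println("--- FAILED TESTS:")
	for _, name := range failedTests(lines) {
		fmt.Println("  " + name)
	}
}
```

It would be used as a parent process, e.g. `go test -json ./... | failedtests` — which is exactly the kind of external wrapping this discussion is about.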

@michael-schaller (Contributor, Author)

@ALTree The comments in those issues are fair. However, running the test suite with -json isn't very helpful, as it requires a parent process that parses the JSON and then prints a summary. I would like to have everything self-contained in the test suite itself, but currently I can't even get stats on which tests failed from the testing.M struct, so I couldn't implement this in TestMain myself.

So if a re-print of failed test names at the end of the test suite isn't desired, would it at least be possible to provide these details via testing.M so that users can implement what they need?
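To make that ask concrete, one hypothetical shape for such a testing.M extension might look like the sketch below. Note that the FailedTestNames method does not exist in package testing; the name and signature are invented purely for illustration:

```go
// Hypothetical: suppose testing.M gained a method
//
//	func (m *M) FailedTestNames() []string
//
// TestMain could then implement the re-print itself:
func TestMain(m *testing.M) {
	code := m.Run()
	if code != 0 {
		fmt.Println("--- FAILED TESTS:")
		for _, name := range m.FailedTestNames() { // invented method, not real API
			fmt.Println("  " + name)
		}
	}
	os.Exit(code)
}
```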

@ALTree (Member) commented Oct 2, 2020

It's not up to me to decide on this, but I feel it would indeed be better to differentiate this from the previously rejected proposals by making it about testing.M. If you choose to do so, I suggest including concrete details about the change to testing.M you'd want; proposals that are very light on technical details are often not very well received.

@ianlancetaylor ianlancetaylor added this to Incoming in Proposals (old) Oct 2, 2020
@michael-schaller (Contributor, Author)

@ALTree If this proposal is rejected, I'll open a new proposal rather than clutter this one. ;-)

@rsc rsc moved this from Incoming to Active in Proposals (old) Oct 7, 2020
@dnephin (Contributor) commented Oct 9, 2020

With very large test suites that log a lot and heavily use t.Parallel() it can be hard to find the actual test failures in the output.

This is one of the first problems gotestsum (pkg.go.dev) was built to solve. It runs go test -json and prints a summary at the end.

To only list the failed tests you would use --no-summary=output,skipped.

$ gotestsum --no-summary=output,skipped ./testjson/internal/withfails/
✖  testjson/internal/withfails (12ms)

=== Failed
=== FAIL: testjson/internal/withfails TestFailed (0.00s)

=== FAIL: testjson/internal/withfails TestAlsoFailed (0.00s)

DONE 29 tests, 3 skipped, 2 failures in 0.168s

You can use the --format=standard-quiet or --format=standard-verbose options if you want the default go test output in addition to the summary. The custom gotestsum formats all have color highlighting for pass, skip, and fail.

Both the default summary and the --format=dots option are heavily influenced by the pytest -r summary output.

$ gotestsum --format=dots -- -count=1 ./testjson/internal/withfails/
[testjson/internal/withfails]···↷↷✖·✖····✖··✖·········↷···
=== Skipped
=== SKIP: testjson/internal/withfails TestSkipped (0.00s)
    fails_test.go:26: 

=== SKIP: testjson/internal/withfails TestSkippedWitLog (0.00s)
    fails_test.go:30: the skip message

=== SKIP: testjson/internal/withfails TestTimeout (0.00s)
    timeout_test.go:13: skipping slow test

=== Failed
=== FAIL: testjson/internal/withfails TestFailed (0.00s)
    fails_test.go:34: this failed

=== FAIL: testjson/internal/withfails TestFailedWithStderr (0.00s)
this is stderr
    fails_test.go:43: also failed

=== FAIL: testjson/internal/withfails TestNestedWithFailure/c (0.00s)
    fails_test.go:65: failed
    --- FAIL: TestNestedWithFailure/c (0.00s)

=== FAIL: testjson/internal/withfails TestNestedWithFailure (0.00s)

DONE 29 tests, 3 skipped, 4 failures in 0.161s

I understand you are looking for something in the test suite itself. Why is it a problem to use a different binary?

@michael-schaller (Contributor, Author)

Why is it a problem to use a different binary?

Using a different binary is fine for a single developer or a few developers. However, it does not scale well as more developers and automation get involved, since each developer and automation setup needs to diverge from its usual behavior. gotestsum achieves the goal of giving a sufficient test summary, but it is IMHO still an external workaround for a shortcoming of Go's testing framework. In that sense I would very much prefer a solution that fits better into Go's testing framework. For instance, if Go's test framework provided the needed stats after the tests have finished, then TestMain could print a custom summary that fits the respective needs.

@michael-schaller (Contributor, Author)

Overall I get the impression that this proposal wouldn't benefit enough people to be accepted.
However, I think Go's test framework should still provide the needed test stats so that a custom summary can be implemented in TestMain, so I will file a new proposal for that.

Thanks to everyone who provided feedback for this proposal. <3

@michael-schaller (Contributor, Author)

The new proposal is #41878.

@rsc rsc moved this from Active to Declined in Proposals (old) Oct 14, 2020
@golang golang locked and limited conversation to collaborators Oct 9, 2021