
proposal: testing: Provide TestMain with stats so that it can print a custom summary. #41878

Open
michael-schaller opened this issue Oct 9, 2020 · 6 comments


@michael-schaller
Contributor

Background

There have been several requests in the past for a summary at the end of a test suite run.

All of these requests have been declined, as it isn't clear which summary style/format would be clearly beneficial.

The suggested workaround was to run the tests with -json and use the output to generate a suitable summary.
For instance, gotestsum provides exactly that, but it is an external tool, so developers and automation need to diverge from their usual workflow in order to utilize it.

Proposal

Add test stats to testing.M so that TestMain can print a custom test results summary after m.Run().

The test stats should include:

  • List of all tests.
  • List of all examples.
  • List of all benchmarks.
  • List of successful tests.
  • List of successful examples.
  • List of successful benchmarks.
  • List of skipped tests.
  • List of skipped examples (in case of -failfast).
  • List of skipped benchmarks (in case tests or examples failed).
  • List of failed tests.
  • List of failed examples.
  • List of failed benchmarks.

This could be done by enhancing the runTests, runExamples and runBenchmarks functions to produce these lists.
All lists would be of type []InternalTest, []InternalExample or []InternalBenchmark.
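
As a rough sketch, TestMain usage could then look something like this. Note that Stats and its fields are illustrative names only; no such API exists in the testing package today:

package mypkg_test

import (
	"fmt"
	"os"
	"testing"
)

func TestMain(m *testing.M) {
	code := m.Run()

	// Hypothetical accessor returning the lists described above.
	stats := m.Stats()
	fmt.Printf("summary: %d tests, %d failed, %d skipped\n",
		len(stats.Tests), len(stats.FailedTests), len(stats.SkippedTests))

	os.Exit(code)
}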

@ianlancetaylor
Contributor

Perhaps we could instead give TestMain a way to get a list of the test functions, and have those functions report whether they succeed or fail. Then TestMain can do whatever it wants.
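
A minimal sketch of that alternative, with entirely hypothetical Tests and Run methods:

package mypkg_test

import (
	"fmt"
	"os"
	"testing"
)

func TestMain(m *testing.M) {
	failed := 0
	// m.Tests() is assumed to return the test functions; t.Run() is
	// assumed to execute one of them and report whether it passed.
	for _, t := range m.Tests() {
		if !t.Run() {
			failed++
			fmt.Printf("FAIL: %s\n", t.Name)
		}
	}
	if failed > 0 {
		fmt.Printf("%d tests failed\n", failed)
		os.Exit(1)
	}
}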

@ianlancetaylor ianlancetaylor added this to Incoming in Proposals (old) Oct 9, 2020
@dnephin
Contributor

dnephin commented Oct 10, 2020

I don't see how the proposed changes to TestMain or testing.M solve the problem. In most cases, especially those where there is enough output to benefit from a summary, the tests for more than one Go package will be run. There is a TestMain for each package, so the summary would be printed when a package ends, not when the test run ends. The summary will end up in the middle of the output and be just as hard to find as the original test failure.

If you write the summary out to a file and gather it at the end, you once again need a separate binary, so why not use the existing mechanism (-json and a tool like gotestsum)?

The crux of this argument seems to be

developers and automation need to diverge from their usual workflow in order to utilize gotestsum

That statement seems a bit exaggerated to me.

First, let's look at the developer experience. An external binary does add a one-time download step, that's true, but after that the workflow is essentially unchanged. Compare running go test to running gotestsum:

$ go test      [go test flags] [packages] [test binary flags]
$ gotestsum -- [go test flags] [packages] [test binary flags]

There are optional gotestsum flags which can be specified as env vars, but otherwise the commands are nearly identical. Any go test command line can be run using gotestsum.
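
For instance, the output format can come from an environment variable instead of a flag (GOTESTSUM_FORMAT is documented by gotestsum):

$ GOTESTSUM_FORMAT=testname gotestsum -- -race ./...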

Part of the argument seems to be that using another binary doesn't scale to larger teams. Any change to TestMain or testing.M would require every package in every project to import some library to print a summary. That seems like a much larger commitment than running go get once and being able to use the external binary for every package and every project.

Finally there is the question of automation. Many of the largest open source Go projects are already using gotestsum for CI: Kubernetes, Docker, many projects in the Docker and Kubernetes ecosystems, hashicorp/{terraform,vault,consul,nomad}, ipfs/go-ipfs, facebook/ent, influxdata/influxdb, prometheus/prometheus, etc.

If so many of these large projects are already using an external binary, I would suggest it may not actually be a problem. I suspect these projects are using gotestsum not only for the summary, but also because:

  1. from a single test run you are able to produce stdout for humans, a JUnit XML file for the CI system, and a test2json file to produce metrics or analytics (see the example command after this list).
  2. the standard go test and go test -v formats are not ideal for CI. go test does not print the names of passed or skipped tests, so if the environment is not set up correctly you could be unintentionally skipping tests that you thought you were running. go test -v gives you that information, but makes the log output essentially useless for verbose test suites. The testname format in gotestsum is a more appropriate middle ground: you get a single line per passed test, making it easy to visually verify which tests ran and in which order, with full output for failed or skipped tests.
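
As an aside on point 1, a single invocation that produces all three outputs looks like this (--junitfile and --jsonfile are existing gotestsum flags):

$ gotestsum --junitfile report.xml --jsonfile events.json ./...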

So even if this proposal were implemented, it seems unlikely to be a replacement for the solution that already exists.

@dnephin
Contributor

dnephin commented Oct 10, 2020

I have another solution to this problem that is a little unconventional. If you really don't want to install a separate binary, you could use a library to run go test and print the summary. I suspect that any change to TestMain would require roughly the same thing: you would need some library to aggregate and format the returned test cases, because presumably you have more than one package with tests.

As an example, you could create a file in the project at test/main.go with contents:

package main

import (
	"fmt"
	"os"

	"gotest.tools/gotestsum/cmd"
)

func main() {
	// Run gotestsum with the testname format, forwarding all
	// command line arguments to go test (after the "--" separator).
	name, args := os.Args[0], os.Args[1:]
	args = append([]string{"-f=testname", "--"}, args...)
	if err := cmd.Run(name, args); err != nil {
		fmt.Fprintln(os.Stderr, "ERROR ", err.Error())
		os.Exit(1)
	}
}

To run tests with a summary:

$ go run ./test [go test flags]

@josharian
Contributor

Perhaps we could instead give TestMain a way to get a list of the test functions, and have those functions report whether they succeed or fail. Then TestMain can do whatever it wants.

If we also provided it a list of examples and benchmarks, it could run those as well, controlling parallelism and setting b.N = 1, subsuming #41824.

@michael-schaller
Contributor Author

@dnephin

Please don't get me wrong on gotestsum. I think gotestsum is very useful in larger scenarios, as you rightfully mention in your first comment. So IMHO gotestsum isn't the issue; the issue is that it has to use -json to get the data from the test suite, instead of the test suite providing that data in a more easily accessible form.

I partially agree with your second comment, but I wouldn't run go test. Once TestMain has access to that data, one could directly import the summary printer of gotestsum... ;-)

@smw1218

smw1218 commented Mar 7, 2023

I would like to have access to the granular test results programmatically as well, so that I can collect per-test telemetry for every invocation. Since I can't always control the invocation to use -json, I would need to insert that telemetry collection into the code itself.

Looking at the testing package code, I think all the information I would need is readily available on the common struct. This could be exposed on testing.M as func (m *M) Results() []TestResult, where TestResult could be something like:

type TestResult interface {
	// Name returns the test function name.
	Name() string
	// Type returns the kind of test: Test, Benchmark, or Fuzz.
	Type() string
	// Package returns the enclosing package.
	Package() string
	// Start returns the start time of the test execution.
	Start() time.Time
	// Duration returns the execution duration.
	Duration() time.Duration
	// Result returns Passed, Failed, or Skipped.
	Result() string
	// Output returns any output printed during the test.
	Output() []string
	// SubTests returns the subtest results, or an empty slice if none.
	SubTests() []TestResult
}
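
Given such an API, per-test telemetry collection could then happen directly in TestMain. A rough sketch, where m.Results() is the hypothetical method above and emitTelemetry stands in for a real telemetry client:

package mypkg_test

import (
	"log"
	"os"
	"testing"
	"time"
)

// emitTelemetry is an illustrative placeholder for a real telemetry client.
func emitTelemetry(pkg, name, result string, d time.Duration) {
	log.Printf("telemetry: %s.%s result=%s duration=%s", pkg, name, result, d)
}

func TestMain(m *testing.M) {
	code := m.Run()
	for _, r := range m.Results() { // hypothetical accessor
		emitTelemetry(r.Package(), r.Name(), r.Result(), r.Duration())
	}
	os.Exit(code)
}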

I think this would provide the same data as the original proposal, but allow for more extensibility. I think something like this is what @ianlancetaylor was suggesting as well.
