
proposal: testing: add support for individual test function execution - expose testing.M.Tests #53701

Closed
mattdbush opened this issue Jul 6, 2022 · 20 comments


@mattdbush

Expose the collection of defined tests for individual test function execution, so that boilerplate code can be added around the execution of each test.

Example:

```go
func TestMain(m *testing.M) {
	setupAll()
	code := 0
	for _, t := range m.Tests { // proposed API: iterate over the defined tests
		setup(t)
		result := t.Run()
		if result.Passed {
			// handle passed test, discard resources, etc.
		}
		if result.Failed {
			code = 1
			// example scenario - forward log for immediate notification/action,
			// preserve/copy resources for troubleshooting inspection.
		}
		teardown(t)
	}
	teardownAll()
	os.Exit(code)
}
```
@gopherbot gopherbot added this to the Proposal milestone Jul 6, 2022
@seankhliao
Member

Duplicate of #27927

@seankhliao seankhliao marked this as a duplicate of #27927 Jul 6, 2022
@seankhliao seankhliao closed this as not planned (duplicate) Jul 6, 2022
@mattdbush
Author

mattdbush commented Jul 8, 2022

@seankhliao the original issue #27927 was closed only because of its age. It was, and still is, useful: many other languages and frameworks offer the feature requested in #53701, namely the ability to run code before and after each individual test. Why do you and the Go team feel this feature is not useful in the Go context?

@ianlancetaylor
Contributor

Reopening.

That said, this proposal needs to clarify the type of the elements of m.Tests. The sample code calls t.Run, but that is clearly not the method testing.T.Run.

@ianlancetaylor ianlancetaylor reopened this Jul 8, 2022
@ianlancetaylor ianlancetaylor added this to Incoming in Proposals (old) Jul 8, 2022
@rsc
Contributor

rsc commented Jul 13, 2022

We've talked in the past - #28592 (comment) - about exposing a list of tests and letting TestMain reorder and filter it. I'm not as sure about exposing .Run itself. What if the test calls t.Parallel? It doesn't seem like TestMain should be getting in the way of actual test execution.

Note that you can already get this kind of per-test setup and teardown by writing a single test - func Test, for example - and giving it subtests that you invoke via t.Run, checking the result. Parallel still throws a wrench into all of this.

@rsc
Contributor

rsc commented Jul 13, 2022

This proposal has been added to the active column of the proposals project
and will now be reviewed at the weekly proposal review meetings.
— rsc for the proposal review group

@rsc rsc moved this from Incoming to Active in Proposals (old) Jul 13, 2022
@rsc
Contributor

rsc commented Jul 20, 2022

If this issue is about per-test setup and teardown it seems like that is already provided by wrapping t.Run.
In that case it seems like there's nothing more to do here.

If that's not right, and there's more to do here, what exactly should we do? What API should we add and why?

@meling

meling commented Jul 20, 2022

Looking at the initial example, there is an idea for inspecting the result of individual tests. I’ve long wanted a way to do that for my autograder tool. Currently, I have to carefully insert code before every t.Error and t.Fatal. Having a way to inspect the result of a test programmatically would allow doing my thing only once… instead of for every test failure location.

Not sure if that’s what OP wanted with the proposal.

@rsc
Contributor

rsc commented Jul 27, 2022

@meling, if you call the tests with t.Run (as subtests), t.Run returns a boolean reporting whether the test passed.

@rsc
Contributor

rsc commented Jul 27, 2022

It sounds like maybe there's nothing to do here.

@mattdbush
Author

mattdbush commented Jul 28, 2022

@rsc t.Run is used to run sub-tests that are specific to a topic or functional area. The per-test startup and shutdown code this proposal asks for is meant for common, cross-cutting concerns, and we would want those functions applied to all unit tests and potentially all sub-tests.

For example, suppose we wanted to forward the test log for every individual test (or every failed test) to a communication service such as Slack, immediately as each test executes. At present we have to parse the log at the end of the run and forward it then. Rather than interacting with unit test output as one logged file, we could interact with each test's result and log output individually.

The benefit sought by this proposal is to avoid adding lots of boilerplate code inside our tests and around our sub-tests, and instead to define common behaviours at a top level, which improves testing effectiveness and would make our CI/CD smoother.

@rsc
Contributor

rsc commented Aug 3, 2022

> For example, suppose we wanted to forward the test log for every individual test (or every failed test) to a communication service such as Slack, immediately as each test executes. At present we have to parse the log at the end of the run and forward it then. Rather than interacting with unit test output as one logged file, we could interact with each test's result and log output individually.

It seems like for this kind of use, it would work to use 'go test -json', which does stream test results out as they happen, with a parent process that reads the JSON and does whatever notifications are appropriate.

@mattdbush
Author

@rsc I provided that only as one example. Other cross-cutting concerns include automatic memory-usage checks, goroutine leak detection, function timings, log stitching, and so on. What you describe happens at the command line, outside the running unit test code; there is a lot of useful work that can be done when interacting with each individual test (before and after it runs) from inside the running Go test process. Our unit tests call other servers/microservices, and we generate a unique trace ID for each unit test; it would be very useful for us to stitch those logs together and report them all with the one unit test log. Please understand this is just one use case/example.

Many, if not most, unit testing frameworks in other languages have this ability, so I am confused why this type of feature is not seen as important for Go.

@rsc
Contributor

rsc commented Aug 10, 2022

This proposal doesn't seem like it is converging to something concrete.

@mattdbush, can you state what the API is that you think we should add? And note that any API that appears to operate on individual tests needs to account for the Parallel method, which ends up letting multiple tests run at the same time.

@mattdbush
Author

mattdbush commented Aug 10, 2022

@rsc the proposal is to expose the internal list of test functions, defined as tests []InternalTest, so that they can be executed directly and explicitly.

At present the tests are passed to an internal runTests function and executed opaquely (testing.go:1810-1852).

Once the tests are exported as Tests, the following becomes possible:

```go
func TestMain(m *testing.M) {
	setupAll()
	// Proposed: expose the internal test list and opt in to explicit execution.
	var tests []testing.InternalTest = m.Tests
	m.EnableExplicitRun()
	code := 0
	for _, test := range tests {
		// individual test setup
		setup(test)
		if test.Run() { // proposed: run the test directly, reporting pass/fail
			// handle passed test, discard resources, etc.
		} else {
			code = 1
			// example scenario - forward log for immediate notification/action,
			// preserve/copy resources for troubleshooting inspection.
		}
		// individual test cleanup / post-test handling
		teardown(test)
	}
	teardownAll()
	os.Exit(code)
}
```

Go programmers could then still choose the internal, opaque execution, which is useful and adequate for many situations, but running the tests directly and explicitly gives extra value and control, and means Go code can be used rather than the earlier-suggested command-line approach of processing unit test output files.

@rsc
Contributor

rsc commented Aug 17, 2022

@mattdbush, you can already do setupAll/teardownAll in TestMain.
For setup and teardown per-test, it seems like those can be placed in the test.
For "forward log for immediate notification/action" it seems like go test -json is the answer.
And the loop still does not account for the complications of t.Parallel.

@rsc
Contributor

rsc commented Aug 17, 2022

Based on the discussion above, this proposal seems like a likely decline.
— rsc for the proposal review group

@mattdbush
Author

mattdbush commented Aug 17, 2022

If each individual test could optionally execute PreTest and PostTest functions (when assigned), those hooks would execute within the context of the t.Parallel execution, and we would not need to expose M's []InternalTest. If I created a proposal based on that, would it be more likely to be accepted @rsc?

This would enable Go tests to have these functions optionally assigned in the TestMain(m *testing.M) function.

The log forwarding was just one example, other examples/use cases are:

  • generation of per-test context.Context objects with individual test Ids.
  • capturing of start and end times and check durations against SLA values.
  • resetting/clearing data.
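A sketch of what such hooks might look like (entirely hypothetical: PreTest and PostTest do not exist in the testing package; this only illustrates the shape of the API being suggested):

```go
func TestMain(m *testing.M) {
	// Hypothetical hooks, run inside the framework's own loop so that
	// t.Parallel is handled by the testing package itself, not by TestMain.
	m.PreTest = func(t *testing.T) {
		// e.g. attach a per-test context.Context carrying a unique trace ID,
		// record the start time, reset shared fixtures.
	}
	m.PostTest = func(t *testing.T) {
		// e.g. compare elapsed time against an SLA, check for goroutine
		// leaks, forward the test's log if t.Failed().
	}
	os.Exit(m.Run())
}
```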

@apparentlymart

apparentlymart commented Aug 31, 2022

It seems like there's an assumption inherent in this proposal that all tests belonging to a particular package will have something in common and that it's profitable to factor out whatever that is into some common location.

If so: can you say more about why an entire package is the right granularity for whatever common behavior you want to factor out? Isn't it possible/likely that there will be at least a few tests that are somehow different than the others, for which the setup/teardown would be unnecessary or inappropriate?

I think there are some other possible options too, which would have different benefits and drawbacks:

  • Subtests of a single top-level test, using t.Run. This already works today, using the test function as the container for "tests that have something in common".
  • All tests across all packages in an entire module.
  • All tests across all packages in the entire program. (This seems like what would be needed for any sort of custom logging/tracing integrated into the test suite, if external integration via go test -json isn't appropriate, right?)
  • Some sort of grouping based on which tests could potentially run concurrently using t.Parallel? 🤷‍♂️
  • Some other grouping using a concept that doesn't exist in Go today, perhaps analogous to "test suites" in some test harness in other languages.

If there is some need to "program with tests" (to use test cases as data so that we can metaprogram them) then I think it would be good to establish what is the most useful level to do that at, rather than just assuming that a whole package is the right answer because it happens to align with the granularity of today's TestMain. It is possible that all tests in a single package is a good level to do this at, because indeed we already support doing single setup and teardown access across the entire suite in a package, but being more granular there seems to interfere with parallel testing, as discussed above.


Separately, the need to support t.Parallel makes me wonder about an inversion of control situation where e.g. the testing package calls a callback for each setup/teardown, with those potentially being called concurrently and the TestMain function therefore having to take some care with locking.

It seems like the current design hinges on the main loop belonging to M.Run rather than to TestMain. Furthermore, if TestMain had originally been able to own the main testing loop as you propose then it may not have been possible to add parallel testing as it exists today, which suggests that exposing the internals too much here may impede further evolution of the testing API in future.

@rsc
Contributor

rsc commented Sep 7, 2022

No change in consensus, so declined.
— rsc for the proposal review group

@mattdbush
Author

mattdbush commented Oct 11, 2022 via email

@golang golang locked and limited conversation to collaborators Oct 11, 2023