
proposal: testing: expose testing.T constructor so a test with subtests can be verified to fail #39903

Closed
eli-darkly opened this issue Jun 28, 2020 · 13 comments


eli-darkly commented Jun 28, 2020

This is the same proposal as #28021 - but I'd like to raise it again because the proposed solution (passing in a testing.TB instead of a *testing.T) isn't applicable to my use case.

1. Basic scenario (same as previous proposal)

I've defined a component interface which may have many implementations. All of the implementations are expected to adhere to the same contract for various standard operations. So I've created a base test suite that, given an implementation instance, runs a standard set of contract tests against it. Something like this:

func RunStandardTests(t *testing.T, implementationInstance MyInterface)

Now, if I'm going to tell everyone to use this test suite to validate their implementations, I want to make sure it is actually testing what it's supposed to be testing. So I have a test suite for the test suite (TSftTS for short). It creates a mock implementation of MyInterface which is guaranteed to adhere to the contract, and it runs RunStandardTests on that, which should pass.

However, I also need to make sure the test suite fails when it's supposed to fail, by instrumenting my mock implementation of MyInterface to break the contract in various ways. But I can't run this logic against the *testing.T that is passed into the TSftTS, because even though I would be able to detect the failure with t.Failed(), it would still cause the TSftTS itself to fail, which is the opposite of what I want.

2. The previously proposed solution

Change RunStandardTests to take a testing.TB instead of a *testing.T. Create a mock implementation of testing.TB which records failures, run the test suite against the deliberately-bad component with this mock implementation, and verify its state afterward.
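
For illustration, a rough sketch of what that mock might look like, assuming RunStandardTests has been changed to accept a testing.TB; the mock type and the test name are placeholders, and the embedded nil testing.TB is only there to satisfy the interface's unexported method:

type mockTB struct {
    testing.TB // embedding a nil testing.TB satisfies the interface's unexported method
    failed bool
}

// Only the methods the suite actually calls need real implementations;
// anything else would hit the nil embedded testing.TB.
func (m *mockTB) Errorf(format string, args ...interface{}) { m.failed = true }
func (m *mockTB) Fail()                                     { m.failed = true }
func (m *mockTB) Failed() bool                              { return m.failed }

func TestSuiteDetectsBrokenContract(t *testing.T) {
    badInstance := createImplementationInstanceThatBreaksTheContract()
    mock := &mockTB{}
    RunStandardTests(mock, badInstance) // assumes RunStandardTests now takes a testing.TB
    if !mock.failed {
        t.Error("expected the standard test suite to report a failure")
    }
}

This is workable for a flat suite; the problem described in the next section is that it breaks down once the suite uses t.Run.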

3. My new concern

There are many tests in this test suite. So, I've used Run to produce nicely organized results:

func RunStandardTests(t *testing.T, implementationInstance MyInterface) {
    t.Run("test operation A", func(t *testing.T) { ... })
    t.Run("a bunch of tests for operation B", func (t *testing.T) {
        t.Run("B scenario 1", func (t *testing.T) { ... }) // etc., etc.
    })
}

Unfortunately, for obvious reasons the testing.TB interface does not have a Run(string, func(testing.TB)) method.

Possible solutions

Preferred

What would be nicest from my point of view, as the original issue reporter requested, is a way to create a testing.T that is not coupled to the current test environment, so that if it fails, it sets its own Failed() state to true but does not cause the parent test to fail.

Workaround

One workaround would be to define yet another interface.

type MyTesting interface {
    testing.TB
    Run(string, func(MyTesting))
}

type RealTesting struct {
    testing.TB // embed the wrapped *testing.T so that testing.TB's unexported method is satisfied
    t *testing.T
}

type MockTesting struct {
    testing.TB // embed a nil testing.TB just to satisfy the unexported method; only the methods the suite calls are overridden
    failed bool
}

func (r *RealTesting) Errorf(format string, args ...interface{}) {
    r.t.Errorf(format, args...)
}
// (the other testing.TB methods are promoted from the embedded *testing.T)

func (r *RealTesting) Run(name string, action func(MyTesting)) {
    r.t.Run(name, func(tt *testing.T) { action(&RealTesting{TB: tt, t: tt}) })
}

func (m *MockTesting) Errorf(format string, args ...interface{}) {
    m.failed = true
}

func (m *MockTesting) Run(name string, action func(MyTesting)) {
    action(m)
}

func RunStandardTests(t *testing.T, implementationInstance MyInterface) {
    runStandardTestsInternal(&RealTesting{TB: t, t: t}, implementationInstance)
}

func runStandardTestsInternal(t MyTesting, implementationInstance MyInterface) {
    t.Run("test operation A", func(t MyTesting) { ... })
    t.Run("a bunch of tests for operation B", func(t MyTesting) {
        t.Run("B scenario 1", func(t MyTesting) { ... }) // etc., etc.
    })
}

func TestTheTestSuite(t *testing.T) {
    // now I want to make it fail
    badInstance := createImplementationInstanceThatBreaksTheContract()
    mockTest := &MockTesting{}
    runStandardTestsInternal(mockTest, badInstance)
    assert.True(t, mockTest.failed) // assert is github.com/stretchr/testify/assert
}

That ought to work, but it's ungainly. It also means that if I have any test helpers that take a *testing.T, which are also used by other test code that uses *testing.T, I would have to create new versions of them that take MyTesting instead.

(I'm using go1.13.7. The rest of my environment details aren't relevant)

gopherbot added this to the Proposal milestone Jun 28, 2020
@eli-darkly (Author)

Another limitation of the MyTesting workaround is that I can't use this with any test code that uses t.Parallel().

ianlancetaylor changed the title from "proposal: expose testing.T constructor so a test with subtests can be verified to fail" to "proposal: testing: expose testing.T constructor so a test with subtests can be verified to fail" Jun 28, 2020
ianlancetaylor added this to Incoming in Proposals (old) Jun 28, 2020
rsc moved this from Incoming to Active in Proposals (old) Jul 8, 2020

rsc commented Jul 15, 2020

To summarize briefly, #28021 was about how to test a testing helper that takes a *testing.T. The answer was to make it take a testing.TB instead, and then pass any mock implementation of testing.TB (embedding a nil testing.TB to satisfy the private method). Or define a custom interface that has only the methods needed (like Errorf, Fatal).

The problem raised in this issue is that the mock implementation strategy does not work for helpers that use t.Run, because there is no way to write an interface for Run compatible with testing.T's Run, unless Run takes a *testing.T.

There are at least two possibilities:

  1. Do nothing. Find other ways to write such code.
  2. Add direct support for a "mockable" *testing.T.

It's unclear to me exactly how much complexity option 2 would introduce, and how much benefit it would provide. Without that, it's hard to do the cost-benefit analysis to see whether we should do it.

What would a mockable *testing.T mean exactly? How many changes are involved?


eli-darkly commented Jul 15, 2020

What would a mockable *testing.T mean exactly? How many changes are involved?

I haven't tried to actually do this, and my understanding of the current implementation may not be correct, but my impression is that it might not involve any changes to testing.T itself. Instead there would be a new function like this:

func RunDetached(testFn func(t *testing.T)) *testing.T

I'm imagining that RunDetached would be implemented roughly like so:

  • Do more or less the same setup that's done here to create the base testing.T instance and its testContext. I think the match field of the testContext would be irrelevant in this case, since it wouldn't be doing any discovery of test methods.
  • Pass that to tRunner, just as we normally do, providing testFn as the second parameter. If I understand correctly, that takes care of all the business of running it on a separate goroutine, detecting early termination, etc.
  • Return the base testing.T instance that we created, which (I think) will now contain the relevant state information about whether the test failed, or was skipped.
  • Or maybe returning this object isn't a good idea, because maybe bad things would happen if the caller then called other methods on that instance (like Run or FailNow) outside of the scope of RunDetached. In that case it could just return those state properties by themselves.

@eli-darkly (Author)

So, my original description isn't quite right: this would not actually be a constructor, because you wouldn't be doing this:

    fakeTestingT := testing.NewMockTesting()
    MyTestFunction(fakeTestingT)
    if fakeTestingT.Failed() { ... }

Instead, it would be more like this:

    var result *testing.T = testing.RunDetached(MyTestFunction)
    if result.Failed() { ... }

Or, alternately:

    failed, skipped := testing.RunDetached(MyTestFunction)
    if failed { ... }


bcmills commented Jul 21, 2020

To verify that a test expected to fail actually fails, you can run the test binary as a subprocess and execute the subtest in question using the -test.run flag. (https://github.com/bcmills/unsafeslice/blob/master/safe_test.go is an example of one such test.)

That achieves the goal of running the test using a fresh *testing.T, without the need to worry about how the “independent” *testing.T should interact with the rest of the testing package (such as the testing.failfast flag).

It's not exactly a smooth API, but the general pattern could be encapsulated in a package or function to make it smoother, and it would sidestep a lot of other subtle details.


eli-darkly commented Jul 21, 2020

@bcmills Unless I'm misunderstanding, I don't see how you can encapsulate that in a way that can just be called from test code: it would require you to design your whole build around the fact that you're going to be using this, since you would need to have built a standalone test binary.

It also means that the subtest you're running would need to have its inputs baked in (assuming they aren't just simple values that could be passed in an environment variable). That is, suppose what you want is to verify that SomeTestHelper(t, param) causes the test to fail if param is A but not if it's B. With the approach that I proposed, you would have a test that calls testing.RunDetached(func(t *testing.T) { SomeTestHelper(t, X) }) and checks the result, once for X = A and once for X = B. But with your approach, I would need to hard-code a func SomeTestWithA(t) and a func SomeTestWithB(t) as top-level tests.

It would get the job done, but it's elaborate enough that I think there would no longer be an advantage over just writing the relevant tests against some other abstraction instead of testing.T.


eli-darkly commented Jul 21, 2020

@bcmills As for testing.failfast, you're right that that's one place where some special-casing would be required, but it's the only one I've seen so far. That is, I think that might be the only place where anything actually needs to be changed in any fields or methods of testing.T itself. Detached instances would need to have a boolean that tells them to ignore failfast. Either that, or have the failFast flag be part of testContext, instead of being global like it is now, and have it normally be set the same way it is now but be always false for a detached instance.

@eli-darkly (Author)

Well, there is at least one other relevant mutable global, numFailed. Personally I prefer to try to avoid using so many globals, as that pretty much locks you into the assumption that this stuff will be run in a single straight-through execution context, but maybe that is just too baked in at this point.


rsc commented Jul 22, 2020

@eli-darkly, the usual pattern that @bcmills was alluding to is to do something like:

  1. Define a flag

    var failFlag = flag.Bool("fail", false, "run tests expected to fail")

  2. Define a TestFooShouldFail that starts with

    if !*failFlag { t.Skip("skipping without -fail flag") }

  3. In another test, re-invoke os.Args[0] (same binary as the main test) with the arguments -test.run=FooShouldFail -fail.
    Check that it really does fail (a sketch of the whole pattern follows below).

One benefit of this is that you don't have to find all the ways a test might fail hard, like panicking in a newly started goroutine, which you can't recover from. The OS takes care of that.
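
To make that concrete, here is a minimal sketch of the pattern; the test names, the -fail flag, and the package name are the hypothetical ones from the steps above:

package mypackage_test

import (
    "flag"
    "os"
    "os/exec"
    "testing"
)

var failFlag = flag.Bool("fail", false, "run tests expected to fail")

// TestFooShouldFail is skipped in a normal run; it only executes (and
// deliberately fails) when the binary is re-invoked with -fail.
func TestFooShouldFail(t *testing.T) {
    if !*failFlag {
        t.Skip("skipping without -fail flag")
    }
    t.Error("deliberate failure, to be observed by the parent test")
}

// TestFooReallyFails re-runs this same test binary as a subprocess,
// selecting only TestFooShouldFail, and checks that it exits non-zero.
func TestFooReallyFails(t *testing.T) {
    cmd := exec.Command(os.Args[0], "-test.run=^TestFooShouldFail$", "-fail")
    out, err := cmd.CombinedOutput()
    if err == nil {
        t.Fatalf("expected subprocess to fail, but it passed:\n%s", out)
    }
}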


eli-darkly commented Jul 22, 2020

@rsc I understood the general idea, although I didn't realize that os.Args[0] can be relied upon during every test run to be a full compiled test binary. I think the rest of my point still stands though: it is clearly workable, but to me it is a clunky pattern that uses a pretty large-scale action with string-based semantics—invoking the whole test runner command from scratch with a test name filter—to do something that conceptually is very close to things that can be done in a nice statically verifiable way in the existing framework (that is, if I reference a test method from test code as an actual method, rather than a string, I know for sure at compile time that such a method exists and that no other tests will be selected).

Also, like I said, it means that you need a separate top-level test method for every kind of input that is supposed to cause a failure (unless it can be entirely represented in terms of environment variables), so you end up with a top-level TestFooShouldFail, TestBarShouldFail, etc. instead of being able to just call the target code with various parameters inside of a test. Also, you need to make sure that those tests do not get run during your regular test runs, since they will fail. Not terrible, but again, conceptually less clean, since that's the kind of thing you can very easily do in regular tests.

One benefit of this is that you don't have to find all the ways a test might fail hard, like panicking in a newly started goroutine, which you can't recover from

Well, at least with regard to panics, I don't consider that to be an advantage. Test code that I'm verifying in this way should not panic; if it does, I want that to bring everything down, so that I realize something's badly wrong. I don't want that to just look like a test failure.

I don't want to keep arguing this at length, just wanted to be clear about what I meant. I realize that for many people this is just not a big deal and the workarounds that have been suggested are fine; I just wanted to raise it as a possibility, in case I was right that it wouldn't be terribly hard to implement in the standard library. Thanks for the thoughts and I'll keep an eye on this issue.


rsc commented Aug 5, 2020

@eli-darkly, I agree it's a little bit of a pain to make a subprocess, but there really isn't a clear way to address the problem otherwise right now. Until we get a clearly right design, we typically leave well enough alone. That seems like the best thing to do right now here.


rsc commented Aug 12, 2020

Based on the discussion above, this seems like a likely decline.


rsc commented Aug 26, 2020

No change in consensus, so declined.

rsc closed this as completed Aug 26, 2020
rsc moved this from Likely Decline to Declined in Proposals (old) Aug 26, 2020
golang locked and limited conversation to collaborators Aug 26, 2021