
proposal: testing: add a flag to detect unnecessary skips #25951

Open
bcmills opened this issue Jun 18, 2018 · 12 comments

bcmills commented Jun 18, 2018

Motivation

We often use (*testing.T).Skip to skip tests that fail due to known bugs in the Go toolchain, the standard library, or a specific platform where the test runs.

When the underlying bugs are fixed, it is important that we remove the skips to prevent regressions.

However, bugs are sometimes fixed without us noticing, especially if they are platform bugs. It would be helpful to have a way to detect such situations short of editing and recompiling all of the tests in a project.

Here is a concrete example in the standard library: a t.Skip call that references an issue that was marked closed back in 2016 (#17065). Given how easy that call was to find, I expect that similarly outdated skips are pervasive.

Proposal

A new flag for the testing package, tentatively named -test.unskip, with the corresponding go test flag -unskip. -test.unskip takes a regular expression, which is matched against the formatted arguments of t.Skip or t.Skipf, or against the most recent test log entry before t.SkipNow.

If a Skip matches the regular expression, the test does not stop as it normally would: instead, it continues to run. If the skipped test passes, it is recorded as unnecessarily skipped and the test binary exits with a nonzero exit code. If the skipped test fails, the failure is ignored in the test log (and in the binary's exit code) and the run proceeds as usual.
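
As a sketch of the intended workflow (the -unskip flag is the proposal itself and does not exist today; the test below is hypothetical but compiles as-is):

package example

import (
	"runtime"
	"testing"
)

// TestSymlinks illustrates the kind of skip the proposed flag targets:
// a skip added for a known, possibly long-since-fixed platform limitation.
func TestSymlinks(t *testing.T) {
	if runtime.GOOS == "plan9" {
		t.Skipf("skipping on %s: known symlink limitation", runtime.GOOS)
	}
	// ... body of the test ...
}

Under this proposal, running go test -unskip='known symlink limitation' would ignore the matching Skipf and run the test body; if the test then passes, the binary exits nonzero to flag the skip as unnecessary.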

Comparison

Test frameworks in a few other languages provide an explicit mechanism for expecting a test to fail. For example, Python has @unittest.expectedFailure; Boost has an expected_failures decorator; RSpec has the pending keyword.

gopherbot added this to the Proposal milestone Jun 18, 2018
@ianlancetaylor

It's an interesting idea but I note that some tests are skipped not because they fail but because they impose excessive resource costs on some systems (beyond what -test.short blocks) or because they cause some systems to crash. So this will push us in the direction of adding additional ways to skip a test.


bcmills commented Jun 18, 2018

some tests are skipped not because they fail but because they impose excessive resource costs on some systems (beyond what -test.short blocks) or because they cause some systems to crash.

That sort of use-case is part of the reason I proposed a regular expression rather than a boolean flag.

However, it certainly remains a usability consideration: for example, we wouldn't want to encourage people to run -unskip=. on unfamiliar low-level code (or on machines that can't tolerate a crash or accidental forkbomb).

@ianlancetaylor

It would also be nice to handle flaky tests somehow. Some tests are skipped because they are flaky on some platforms. So the fact that they pass once doesn't indicate much one way or another.

For non-flaky tests I wonder if test.XFail("reason") would suffice to address this problem. It would run the test as usual but flip the meaning of the result: a failure is a pass, a pass is a failure. You mentioned this possibility in the original proposal; why not use it for Go?
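
For reference, a minimal sketch of the manual pattern available today in lieu of an XFail API (buggyAdd and its off-by-one bug are hypothetical): invert the check by hand so the test fails loudly once the bug is fixed.

package example

import "testing"

// buggyAdd stands in for code with a known, not-yet-fixed bug.
func buggyAdd(a, b int) int { return a + b + 1 }

func TestAddKnownBug(t *testing.T) {
	got := buggyAdd(1, 2)
	if got == 3 {
		t.Error("known bug appears to be fixed; delete this expected-failure workaround")
		return
	}
	t.Logf("still failing as expected: buggyAdd(1, 2) = %d, want 3", got)
}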


mvdan commented Jun 19, 2018

I agree that having an "expected failure" mode would be helpful.

A potentially related issue I've encountered in the past is making sure that CI doesn't skip tests when it shouldn't. For example, suppose that you write your tests to be skipped if a SQL database isn't running, and you set up one as part of CI. There currently isn't a way to tell go test never to skip those tests, other than doing go test -v and manually checking the SKIP output lines from time to time.
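
A minimal sketch of that scenario (all names here are hypothetical): the test skips itself when no database address is configured, so a misconfigured CI job silently skips it instead of failing, and today the only way to notice is to scan verbose output for SKIP lines.

package example

import (
	"net"
	"os"
	"testing"
)

func TestDatabaseQueries(t *testing.T) {
	addr := os.Getenv("TEST_DB_ADDR")
	if addr == "" {
		t.Skip("TEST_DB_ADDR not set; skipping database tests")
	}
	conn, err := net.Dial("tcp", addr)
	if err != nil {
		t.Fatalf("database configured at %s but unreachable: %v", addr, err)
	}
	conn.Close()
	// ... run queries against the database ...
}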


bcmills commented Jun 19, 2018

For non-flaky tests I wonder if test.XFail("reason") would suffice to address this problem.

That would probably suffice going forward, but introduces a migration problem: existing tests already use Skip.

That said, migrating Skips to XFails is a one-time cost. I'd be happy enough with it, I suspect.


rsc commented Jul 9, 2018

XFail sounds interesting in its own right, but it still doesn't handle Skip-for-flaky. Let's put this on hold and circle back to package testing at some point later, during the Go 2 library phase.


13rac1 commented Aug 26, 2018

I had the same feature request and decided to implement it as a learning exercise and potential contribution. It does not require a new flag. It includes tests, but needs additional documentation: 13rac1@a9a08b59

Passing Test:

func TestFail(t *testing.T) {
	t.ExpectFail()
	t.Fail()
}

func TestFailNow(t *testing.T) {
	t.ExpectFail()
	t.FailNow()
}

func TestPanic(t *testing.T) {
	t.ExpectFail()
	panic("test")
}

Passing Results:

=== RUN   TestFail
--- PASS: TestFail (0.00s)
    testing.go:828: expected failure; test failed
=== RUN   TestFailNow
--- PASS: TestFailNow (0.00s)
    testing.go:828: expected failure; test failed
=== RUN   TestPanic
--- PASS: TestPanic (0.00s)
    testing.go:817: panic: test
    testing.go:828: expected failure; test failed
PASS

Failing Test:

func TestPass(t *testing.T) {
	t.ExpectFail()
	t.Log("pass")
}

Failing Results:

=== RUN   TestPass
--- FAIL: TestPass (0.00s)
    main_test.go:7: pass
    testing.go:831: expected failure; test did not fail
FAIL

edited SHA after rebase


bcmills commented Nov 4, 2019

Here's a neat example:

go/src/cmd/go/go_test.go

Lines 4692 to 4694 in a813d3c

if runtime.GOOS == "openbsd" {
	t.Skipf("test case does not work on %s, missing os.Executable", runtime.GOOS)
}

Added in CL 42533 (May 4, 2017), rendered obsolete in CL 46004 (June 16, 2017), and skipping the test needlessly on an entire OS ever since.

@stdedos

stdedos commented Apr 11, 2022

While this code may not satisfy the original purpose stated here, it's still very valuable for everyone coming from pytest testing methodologies.

tl;dr: you can merge tests as they come, and if your CI policy is "everything passes or go home", that's not the end of the world; nor does the code rot away, inactive, in some personal repo.

Since the change is not utterly disruptive, it could very well land in any minor version before v2.

@marco-m

marco-m commented Jun 5, 2022

@rsc @bcmills @ianlancetaylor

XFail sounds interesting in its own right, but it still doesn't handle Skip-for-flaky. Let's put this on hold and circle back to package testing at some point later, during the Go 2 library phase.

The decision to put this proposal on hold is from July 2018; would it be possible to reconsider it now, since in my (limited) understanding Go 2 is still undecided?

My two cents: even a simple XFail that doesn't cover the skip-for-flaky case is still very useful in its own right, as the examples from @bcmills show.

Another advantage of a simple XFail is that it is "always on": no need to run the tests with a special flag "just in case something has been fixed and we should remove the Skip". As soon as a test marked XFail returns successfully, it will be reported as unexpected success.

A last advantage of a simple XFail is that it is immediately understandable by anybody seeing it for the first time.

@treuherz

Having something like XFail would also be helpful when building, or building on top of, testing helpers and frameworks. For example, I'm developing a custom matcher for gomock, and I would like a test ensuring that certain inputs cause the test to fail. Currently there's no way I can see to do that.

The bare-bones approach of testing has made it easy for other projects to build tools and helpers for common cases on top of the library, and they typically take a very open approach to extensibility. Not being able to make assertions about how such tools will interact with testing itself makes developing or extending them much trickier.
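
A minimal sketch of the workaround available today for that use case (recordingTB and requirePositive are hypothetical names): pass the helper a recording testing.TB instead of the real *testing.T and assert on what it recorded.

package example

import "testing"

// recordingTB embeds testing.TB (to satisfy its unexported method) and
// records whether the helper under test reported a failure via Errorf.
type recordingTB struct {
	testing.TB
	failed bool
}

func (r *recordingTB) Errorf(format string, args ...interface{}) { r.failed = true }

// requirePositive stands in for a helper whose failure behavior we want to test.
func requirePositive(t testing.TB, n int) {
	if n <= 0 {
		t.Errorf("want positive, got %d", n)
	}
}

func TestHelperFailsOnBadInput(t *testing.T) {
	rec := &recordingTB{TB: t}
	requirePositive(rec, -1)
	if !rec.failed {
		t.Error("expected helper to report a failure for negative input")
	}
}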


AlekSi commented Apr 17, 2024

See also #39903
