proposal: testing: way to list tests and benchmarks without running them #17209

Closed
nemith opened this issue Sep 23, 2016 · 19 comments

@nemith (Contributor) commented Sep 23, 2016

Proposed feature enhancement for discussion:

The proposal is to add two new flags: 'test.list' and 'test.listbench'. When the test binary is executed with one of these flags, it lists all tests (including examples with output set) or all benchmarks.

The names would be printed to stdout, one per line, and the binary would then exit with exit code 0. No tests would be run and no other work would be done. It would be nice to include some filtering, but it's not necessary for my use case. This would probably be easiest to add at the top of M.Run().

Use case: we have a custom test runner that unifies test output across all languages, and part of the procedure is test discovery. Right now there isn't a great way to get a list of the tests included in a test binary.

Not included in this feature enhancement are any modifications to the 'go test' tool to pass the setting through.
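
For illustration only (the binary and test names here are hypothetical, and the flag spelling is just the proposed one), the intended behaviour would be roughly:

    $ ./pkg.test -test.list
    TestFoo
    TestBar
    ExampleBaz
    $ echo $?
    0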

@bradfitz (Contributor) commented:

I would like this too for Go's distributed build system. It would make its test sharding more efficient.

@bradfitz changed the title to "proposal: testing: way to list tests and benchmarks without running them" Sep 23, 2016
@minux (Member) commented Sep 23, 2016 via email

@bradfitz (Contributor) commented:

@minux, I'm not sure I see the difference.

Why is that more useful? It sounds like you're just proposing a different flag name.

@minux (Member) commented Sep 23, 2016 via email

@bradfitz (Contributor) commented:

I see. That's fine. I like the proposed behavior, at least, even if I'm not crazy about the name "dry run". But I suppose dryrun is consistent with a bunch of other tools.

@nemith (Contributor, Author) commented Sep 23, 2016

I agree that being able to do filtering would be nice, and I agree that dryrun seems a bit off, but as long as the functionality is there I am OK with it.

If it is implemented with the filtering, I guess the best places to implement it would be RunTests, RunExamples, and runBenchmarkInternal then?

@minux (Member) commented Sep 24, 2016

One complication is subtests and subbenchmarks.

In general, we can't know whether a test has any subtests without running the test. The same is true for subbenchmarks, although at least it should be faster to probe for subbenchmarks with b.N = 1.

That is, to query all the tests, we basically have to run all the tests once. I think this drawback negates all the benefit of querying for a list of tests in the binary (because we can always parse the test output if we are allowed to run the tests).

One compromise solution is this: -test.dryrun outputs only the list of top-level tests and benchmarks that would be run if -test.dryrun were not given; subtests and subbenchmarks of the included tests are not listed.

Any opinions on this?

(Update:)
We could also add that if the -test.run or -test.bench pattern uses the subtest/subbenchmark matching format (i.e. it includes '/'), then matched subtests and subbenchmarks are included as well.
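
As an aside, here is a minimal illustrative sketch (not from the proposal) of why subtest names can't be enumerated without running the parent test: the names only come into existence while the test executes.

    package example

    import (
        "fmt"
        "testing"
    )

    // Subtest names are built at run time, so a listing flag that does not
    // execute TestMatrix cannot know that size=1, size=2, and size=4 exist.
    func TestMatrix(t *testing.T) {
        for _, n := range []int{1, 2, 4} {
            t.Run(fmt.Sprintf("size=%d", n), func(t *testing.T) {
                if n <= 0 {
                    t.Fatalf("invalid size %d", n)
                }
            })
        }
    }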

@adg (Contributor) commented Sep 26, 2016

cc @mpvl

Agree that this can't work with subtests. Otherwise I think it makes sense.

@mpvl (Contributor) commented Sep 27, 2016

Indeed, it won't work with subtests and only partially with benchmarks. I thus think the "-dryrun" flag is a bit misleading. You can already achieve the same thing by using "-benchtime=1ns", which also more accurately indicates what is going on in reality. BTW, we actually considered for a moment having the trial run use N=0 instead of N=1, which would have made this more feasible, but it caused too many incompatibilities.

Having a feature that just lists top-level tests, benchmarks, and examples without running them might still make sense, but adding -dryrun to display matched subtests or sub-benchmarks doesn't make sense to me, especially as it wouldn't let you do anything more than what you can do today.

I could imagine doing something like only "probing" tests/benchmarks when static analysis shows that they call Run. That would go a long way toward providing the desired functionality, but I'm not sure we want to go there.

@minux (Member) commented Sep 27, 2016 via email

@mpvl (Contributor) commented Sep 27, 2016

Maybe I misunderstood: to just show the top-level tests/benchmarks, yes, that makes sense, but not to list sub-benchmarks. I find it misleading to start running tests/benchmarks as soon as the pattern includes a '/', for example. It is not a dry run anymore at that point, and you can simulate the same thing by using -benchtime=1ns.

@nemith (Contributor, Author) commented Sep 28, 2016

@minux Well, that still runs the tests, which is not what's wanted. A test that takes more than a trivial amount of time will block:

    ./test_bin -test.v -test.bench=. -test.benchtime=1ns
    === RUN   TestSleep
    --- PASS: TestSleep (30.00s)
    PASS

For my use case no filtering is needed, and if filtering is required I think grep or other Unix utilities would be just fine. Like you said, filtering for subtests would really just show the top-level match anyway. I think a simple loop over m.tests, m.benchmarks, and m.examples in testing.M is all that is really needed.

Here is all I was originally imagining in m.Run():

    if *listTests {
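        // Print each test name on its own line; nothing is executed.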
        for _, test := range m.tests {
            fmt.Println(test.Name)
        }
        for _, example := range m.examples {
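            // Only examples that declare expected Output are run as tests, so list only those.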
            if example.Output != "" {
                fmt.Println(example.Name)
            }
        }
        os.Exit(0)
    }

    if *listBench {
        for _, bench := range m.benchmarks {
            fmt.Println(bench.Name)
        }
        os.Exit(0)
    }

I am not sure anything more than this is really necessary.

There are examples in other languages / test packages:

GoogleTest: https://github.com/google/googletest/blob/master/googletest/docs/AdvancedGuide.md#listing-test-names
Boost.Test: http://www.boost.org/doc/libs/1_61_0/libs/test/doc/html/boost_test/utf_reference/rt_param_reference/list_content.html

This would just be used by our internal test runner, which is only passed the compiled binary, to get a list of tests to run.

@minux (Member) commented Sep 28, 2016 via email

@nemith (Contributor, Author) commented Sep 28, 2016

Except my test runner doesn't actually interact with source code. It interacts with the test binary.

@minux (Member) commented Sep 28, 2016 via email

@nemith (Contributor, Author) commented Sep 28, 2016

I can't get Examples that have output (i.e. ones that will be run like a test), but it is better than nothing, I guess.

@quentinmit modified the milestone: Proposal Oct 4, 2016
@rsc (Contributor) commented Nov 7, 2016

Subtests make this impossible to do at subtest granularity, but we could still have -list=pattern list all the tests - Test*, Benchmark*, and Example* - that match. That seems fine.
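
As a rough sketch of that interface (the package contents and names below are hypothetical), the pattern would select among top-level names much like -run does, printing matches without running them:

    $ go test -list '.*'
    TestFoo
    BenchmarkBar
    ExampleBaz

    $ go test -list 'Benchmark'
    BenchmarkBar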

@rsc modified the milestones: Go1.9, Proposal Nov 7, 2016
@nemith (Contributor, Author) commented Apr 20, 2017

@gopherbot commented:

CL https://golang.org/cl/41195 mentions this issue.
