
testing: exclude statically unreachable code in test coverage #31280

Open
santhosh-tekuri opened this issue Apr 5, 2019 · 50 comments
Labels: FeatureRequest, NeedsDecision (feedback is required from experts, contributors, and/or the community before a change can be made)
Milestone: Unplanned

Comments

@santhosh-tekuri
Contributor

What version of Go are you using (go version)?

$ go version
go version go1.12.1 darwin/amd64

Does this issue reproduce with the latest release?

yes

What operating system and processor architecture are you using (go env)?

go env Output
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/santhosh/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/santhosh/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/v5/44ncgqws3dg3pmjnktpt_l2w0000gq/T/go-build063045597=/tmp/go-build -gno-record-gcc-switches -fno-common"

I have code like this in my project:

const trace = false

func doSomething() {
    if trace {
        fmt.Println("some information")
    }
}

Because trace is false, the coverage report marks the tracing code as not covered.
Can such code be shown as "not tracked" instead?

I enable trace via build tags when we are debugging some issue in our code.
I could enable trace during coverage generation, but the tracing output is large and it takes significant time.

I have seen that the types package in the standard library also uses if trace,
so maybe this can be taken as an enhancement.

@bcmills
Contributor

bcmills commented Apr 10, 2019

Flagging statically-unreachable code seems like a useful signal to me, and whether the compiler is able to determine statically that it is unreachable is an implementation detail.

100% code coverage is usually not an appropriate goal for Go projects. Can you give some more detail as to why this particular behavior is important?

@bcmills bcmills added the WaitingForInfo Issue is not actionable because of missing required information, which needs to be provided. label Apr 10, 2019
@santhosh-tekuri
Contributor Author

I am not aiming for 100% coverage. The code coverage numbers do not reflect the expected ones: the generated HTML report shows all the tracing code as red, so when reading reports I have to mentally discard all those tracing lines highlighted in red.

Currently my workaround is to enable the trace variable via build tags but not generate any trace output during tests. This keeps code coverage tracking those lines.

@bcmills bcmills added NeedsDecision Feedback is required from experts, contributors, and/or the community before a change can be made. and removed WaitingForInfo Issue is not actionable because of missing required information, which needs to be provided. labels Apr 13, 2019
@bcmills bcmills added this to the Unplanned milestone Apr 13, 2019
@sam-github

I was looking for a similar feature, not so much to reach 100% coverage, but because when reviewing coverage to determine whether uncovered lines should be covered, it's useful to have an annotation that says "not reachable, expected to be red". I guess I could literally put such a comment in my code, but I think it'd be pretty nice to get a report that said "92% covered, 6% uncovered, and 2% marked unreachable".

I think of uncovered code a bit like lint output or compiler warnings. Developers shouldn't have to re-evaluate, every time they check out a project and run lint, whether the output is relevant. If it's been decided that it is irrelevant, it's handy to suppress it with in-line markup, and then nobody has to see the output again or go back to check whether it's valid.

@kstenerud

kstenerud commented May 19, 2020

I'm not sure I understand the rationale behind "100% code coverage is usually not an appropriate goal for Go projects". What causes a project written in Go to need different coverage metrics than the same project written in another language such as Python or Java? Those languages support annotations to mark uncoverable regions (like untestable error handling), such that you can tell at a glance that 100% of your coverable code is in fact covered.

Uncovered code is code that has no evidence whatsoever that it does what it's supposed to under any circumstances, nor is it robust against regressions.

@bcmills
Contributor

bcmills commented May 20, 2020

@kstenerud, a project written in Go doesn't need different coverage metrics than the same project written in Java. (100% coverage is not usually an appropriate goal for Java projects either.)

However, a Go project usually does not require the same coverage as a project written in Python, unless the Python project is making extensive use of analyzers such as PyChecker or mypy. That is because you can rely more on the Go compiler itself to flag common issues on rare paths (such as misspelled variable names in panic statements). In that sense, the type-checker itself is a form of “test coverage”, and necessarily already covers 100% of lines — so if you're comparing to a Python project without an additional analyzer, the comparable Go coverage is effectively already 100%.

@jcollum

jcollum commented Jun 1, 2020

(100% coverage is not usually an appropriate goal for Java projects either.)

100% code coverage is typically not worth the time investment required, regardless of language. I agree that higher code coverage is a good goal in non-typed languages.

All that being said, I think it would be helpful to mark some lines as ignored for code coverage purposes. It can be very difficult from a unit test perspective to get the code into some states.

This ticket is over a year old and still open. That's a shame.

@kstenerud

kstenerud commented Jun 4, 2020

@bcmills Static type checking is not code testing. They are completely different things. If tests and coverage were only for type checking, then there'd be no need for unit tests in Go at all.

The point of 100% test coverage with "unreachable" annotation support is so that you can run your coverage tool and see at a glance (or with CI rules) that 100% of the code that you WANT to have covered has in fact been covered. As it is now, you have to manually step through the entire report, checking every red line to make sure that it's a code path you actually don't intend to test, ON EVERY CHECKIN (and doing this automatically in your CI is impossible). This leads to human error, and libraries that aren't properly tested and will instead blow up in the wild on some edge case because getBaseFactor() deep in your library should have returned 5113 instead of 5118, but someone forgot to write a test covering it, and it got missed in the 10,000 line coverage report (that nobody reads anymore because they don't want to spend 20 minutes scanning it on every checkin).

@bcmills
Contributor

bcmills commented Jun 4, 2020

@kstenerud

Static type checking is not code testing. They are completely different things.

They are different points in a continuum of “tools that verify assertions about program behavior”.

A type-checker verifies assertions inline in the program source about program deviations from the type system. Note that the same program may compile with type declarations of different strengths: for example, a program that uses direction-annotated channels will also compile, but with less type information, if the direction annotations are all removed.

A unit-test or regression-test verifies assertions written in the test function about program deviations from the behavior expected by the test.

You can fairly easily trade off between the two: for example, you could use the type interface{} everywhere and type-assert variables at all points of use, then write unit-tests to ensure that the assertions are correct.

So what about 100% coverage? 100% test coverage asserts a very trivial property: namely, “there exists some condition under which the line does not panic without a corresponding recover”. That is a very, very weak property — on par with “the line does not contain a type error”, and certainly not enough to be confident that the code is actually adequately-tested. So either way you end up needing to think about which scenarios you've tested — not just which lines.

Some further reading:

@bcmills
Contributor

bcmills commented Jun 4, 2020

@santhosh-tekuri, I've been thinking about your use-case some more.

I think the const-guarded lines really should be marked as uncovered. If you have untested lines of code that are unreachable, but that could be made reachable by changing some constant (which you do intend to change), then that code really is untested. When you change that constant and re-run the code, you might get arbitrary panics from that portion of the code, and it might not do at all what you expected it to do.

So for that kind of use-case, perhaps it would be better to use a build constraint to adjust the constant. Then you can run the tests once with and once without the matching tag, and merge the resulting coverage to find out what code is actually covered by the net effect of running the test both ways.
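
For illustration, a minimal sketch of that build-constraint approach, assuming a placeholder package name mypkg and a build tag named trace (both names are made up for this example):

// trace_off.go
//go:build !trace

package mypkg

const trace = false

// trace_on.go
//go:build trace

package mypkg

const trace = true

Running the tests once per configuration produces two profiles:

go test -coverprofile=cover_notrace.out ./...
go test -tags trace -coverprofile=cover_trace.out ./...

The profiles then still have to be merged, for example with a third-party tool such as gocovmerge, until merging is supported natively.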

There are still a few cases that are genuinely unreachable, such as lines that panic due to the violation of some invariant that should be guaranteed to always hold, but even those can often be tested by intentionally violating the invariant in a program run as a subprocess.

(Merging coverage profiles from multiple processes is #28235.)
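
As a sketch of the subprocess approach mentioned above: the test re-runs its own test binary with an environment variable set, so the invariant-violating path can panic without taking down the main test process. Here violateInvariant is a placeholder for whatever deliberately breaks the invariant:

import (
    "os"
    "os/exec"
    "testing"
)

func TestInvariantViolationPanics(t *testing.T) {
    if os.Getenv("RUN_CRASHER") == "1" {
        violateInvariant() // placeholder: run the code with the invariant deliberately broken
        return
    }
    // Re-run only this test in a child process with the crasher flag set.
    cmd := exec.Command(os.Args[0], "-test.run=TestInvariantViolationPanics")
    cmd.Env = append(os.Environ(), "RUN_CRASHER=1")
    err := cmd.Run()
    if exitErr, ok := err.(*exec.ExitError); ok && !exitErr.Success() {
        return // the child process panicked, as expected
    }
    t.Fatalf("expected the child process to exit with a panic, got %v", err)
}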

@bcmills
Contributor

bcmills commented Jun 4, 2020

(For genuinely-unreachable code that diagnoses invariant violations, see also #30582.)

@kstenerud

@bcmills please don't patronize me; I've been writing unit tests for 20 years. Any system can be cheated. You can write good tests, and you can write bad tests, and no policy or system is going to force you. All you have at the end of the day is your own discipline, and the tools you use to help you maintain that discipline.

This isn't about some magical "100% green" badge of approval. It's about catching human error via good hygiene. If you keep a "100% coverage" policy in your CI system, the system will automatically notify you when a code change is probably not covered by new unit tests exercising the reason for that change. THAT's why Python does it, and THAT's why Java does it. The type system doesn't even enter into it.

Of course 100% coverage can be a weak property if that's how you treat it. But as part of a disciplined team it's a godsend. Your tools warn you of potential problems, and of course how you react to those warnings is your own business. You CAN ignore that the canary has died, or maybe nail it to its perch and say everything's fine, but some of us would rather take more proactive measures when we see warning signs, or make an informed decision that it's a false alarm (add a "don't report test coverage for this" annotation). All we want are the tools so that we CAN practice good hygiene in go, like being notified whenever we might have forgotten to test some parts of the code changes we'd planned to.

@eli-darkly

Besides agreeing with everything that @kstenerud just said, I'd like to emphasize that the following statement by @bcmills is in my opinion a very inaccurate description of what test coverage can tell us:

100% test coverage asserts a very trivial property: namely, “there exists some condition under which the line does not panic without a corresponding recover”. That is a very, very weak property — on par with “the line does not contain a type error”, and certainly not enough to be confident that the code is actually adequately-tested.

The point of covering a code path is not just to prove that it doesn't panic. It's also to prove that the code path was executed, and therefore if there is a logic mistake within that path, there is a chance that tests will notice it. If a code path has no coverage, you may wrongly think you are testing it when you're not.

It's not hard to think of situations where what appeared to be a successful test actually just means that you have not accounted for every relevant permutation of preconditions. We've probably all done this quite a few times. Take this deliberately silly example:

func F(n int, b bool) int {
    if n == 5 && b {
        return 0
    }
    return n + 1
}

If none of our tests happen to invoke F with the parameters (5, true), then we can't say whether every related piece of our code was written with a correct understanding of how F behaves— so something might break under those conditions. Of course, it's a basic principle of test design that if you know there is a specific conditional behavior like this, then you should make sure your tests cover all relevant permutations. But it is very easy to forget that. And even if you don't forget, it's also easy to write a test that at the time you wrote it exercised all the code paths... but then due to a logic change somewhere else, those preconditions no longer result in calling F(5, true).
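
For example, a table-driven test along these lines passes and looks reasonably thorough, yet it never exercises that branch (the cases are made up for illustration):

func TestF(t *testing.T) {
    cases := []struct {
        n    int
        b    bool
        want int
    }{
        {0, false, 1},
        {5, false, 6},
        {4, true, 5},
        // no {5, true, 0} case, so the "return 0" branch is never executed
    }
    for _, c := range cases {
        if got := F(c.n, c.b); got != c.want {
            t.Errorf("F(%d, %v) = %d, want %d", c.n, c.b, got, c.want)
        }
    }
}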

So it is very handy to be able to look at a coverage report and see that the return 0 block is never being called in tests. That has nothing to do with thinking it might panic.

No one here has claimed that perfect coverage means your tests are perfect. The point is the converse: if there's a gap in coverage that you did not anticipate, then you know your tests are not perfect. And therefore, it's desirable to be able to exclude gaps that you're already aware of and have determined are either not significant or impossible to cover in tests, so that any new gaps stand out.

@imsodin

imsodin commented Jun 6, 2020

I agree with the utility, and ultimately it'd be nice to have something like this in Go itself; however, does that have to be the first step? The impression I get isn't that Go has excess developer resources, so an external tool seems more appropriate. If that gets lots of attention and usage, I'd assume the chances will also be higher of getting changes about this issue into Go itself. I am currently only aware of https://github.com/dave/courtney, which already does e.g. marking code as not tested; maybe there are other tools that could be used and extended.

@kstenerud

kstenerud commented Jun 6, 2020

Ah perfect! I'll use courtney then. TBH I'm rather surprised that Google hasn't put some effort into this kind of code coverage tool. A tool that alerts you when your code changes probably haven't been adequately tested will reduce your fail rate in production, and that always makes the bean counters happy.

@krader1961

krader1961 commented Aug 9, 2020

FWIW, I agree there should be a mechanism for explicitly marking some statements as never reachable when evaluating test coverage. In any non-trivial program there will always be some code paths that will only ever be executed in what are effectively impossible scenarios. Yet you don’t want to continue as if everything was okay if the impossible happens. So you attempt to handle the impossible case. Injecting a mock to exercise those scenarios can be difficult to justify. Note that such cases should be a negligible percentage, well under 0.001%, of all testable statements. This isn't, in my opinion, about gaming the numbers. It's about making it easy for a project to annotate the code so it is clear what code falls into the "we don't test this because it can't happen in practice, but theoretically could" bucket while making the coverage data more useful.

Update since I wrote the above on Aug 8:

I was looking at the Elvish shell test coverage and noticed one file had exactly one line that was not covered: https://codecov.io/gh/elves/elvish/src/master/pkg/wcwidth/wcwidth.go. Specifically the panic in this block:

if w < 0 {
    panic("negative width")
}

It's essentially an assertion that validates the caller of the function is not violating its contract. There is no meaningful way, or reason to, test that. Having a way to mark that block as excluded from coverage reports would be useful.

@darioblanco

darioblanco commented Aug 21, 2021

I agree with everything @krader1961 said. I follow a unit-test approach in my TypeScript projects where I aim for 100% test coverage, as it makes it easy to require a team to write unit tests for any code they develop. If part of the code makes no sense to unit test, it can be excluded with annotations; those annotations are checked during code review, and it is easy to audit in the codebase why some parts of the application are excluded. This approach looks rock solid to me because there is no discussion about what the code coverage in a project should be: you put 100% in the CI as a threshold (everything else will be rejected) and any case that should be excluded is documented within the code.

When no such tool is available, that practice can't be enforced, and even with the most disciplined team some tests will be missed and the quality of the application will suffer, because we are humans and we constantly need tools to aid our work. It would be grand to have that option within Go itself, but I will give courtney a try in my Go conversion. I really, really miss this feature: I want robots (CI) to enforce test coverage, not humans.

PS: it seems that courtney is a bit outdated and does not even support Go modules; it would be nice to know if somebody has found a better alternative.

@thierry-f-78

+1 for a system for marking code untestable

The main reason is:

  • an external function could fail if its inputs are wrong
  • the inputs are validated, and the function never fails, so the panic is not coverable
  • could I remove the panic? No: someone could break the input validation or something similar, and I want a panic there

That is why I prefer to understand the uncoverable code. When I write tests, I use the HTML display to see the untested code. If this HTML contains a lot of red code it is hard to read: I can't quickly scroll looking for red lines, because many of the red lines are noise.

I propose this kind of result with all metrics:

coverage: 98% of statements, 1% untestable so 99% of total.

and a yellow color in the generated HTML for code marked untestable.

@a4z

a4z commented Jan 3, 2022

A Google search brought me here, because I would like to have some parts of the code ignored (not flagged in the report, so I can quickly see what is missing without reading everything).

@kstenerud, a project written in Go doesn't need different coverage metrics than the same project written in Java. (100% coverage is not usually an appropriate goal for Java projects either.)

If you do not need it, fine, but I find it strange to force your personal preference on all users, since there are obviously quite a few who would like the possibility to exclude some lines from coverage.

Whether you exclude it from coverage or just from the report is an implementation detail I do not care about, but it would be nice to have usable HTML reports that show exactly which parts of the code (paths) need more testing.

So what is the status of this issue: ignored, rejected, or will there maybe some day be something that gives us more useful HTML reports than the noisy ones produced now?

@Goldziher

Goldziher commented Feb 27, 2022

+1 for this feature. It's a standard feature in most other languages' testing frameworks. Adding a comment to ignore a statement doesn't seem that hard and would be quite useful.

Consider what this package is doing: https://github.com/dave/courtney

It should really be part of the builtin testing lib IMHO.

@aexvir

aexvir commented May 11, 2022

Also chiming in here to express my support for this feature.
We usually all agree that 100% test coverage doesn't make sense, because we know that not all of the code is really worth testing, but there is no numeric way of knowing how much of the code we do want tested is actually covered, and we can't enforce any check based on that.

Having a native way to explicitly specify which code paths aren't worth testing, and having those excluded from the coverage, is a win-win here: at any point it is clear what does and does not need to be tested, and what the progress toward that state is.

@earthboundkid
Contributor

This is pretty obvious, but just to be clear, the ideal way for this to work would be a magic comment:

if x < 0 {
   //go:cover ignore
   panic("unreachable")
}

And then panic("unreachable") would be colored yellow in the coverage HTML.

Can someone tag this as a proposal?

@ianlancetaylor
Contributor

@carlmjohnson This issue has a fair amount of discussion and commentary. If you want to make that specific proposal, I suggest doing it in a new issue. Thanks.

@eli-darkly

eli-darkly commented Jun 9, 2022

@carlmjohnson, @ianlancetaylor: Many comments in this thread have already suggested adding some kind of magic comment ("its handy to suppress it with in-line markup", "annotations to mark uncoverable regions", etc.). While the original description of the issue was about having the coverage report exclude statically-determined unreachable code, the bulk of the discussion since then has been about whether it's desirable to have any way to exclude any code at all, preferably by adding some kind of decoration to that code— and the responses of Go maintainers so far have been that it isn't. So I would not expect filing a second issue on that subject to produce a different response.

@eli-darkly

And, as others immediately pointed out on the newly filed issue, we already have #34639 and #12504, which similarly got very dismissive responses— so much so that I'm unclear on why this issue here hasn't already been closed as "won't fix". Conversely, I'm unclear on why there was so little positive response from other users on those issues (or on the latest one) when comments on this thread in favor of the idea have gotten up to 29 thumbs. Since no Go maintainer has replied here in nearly two years, I can only assume that they are either no longer seeing comments in this thread or are so unconcerned about it that they don't even see the point in closing it.

@spenserblack

spenserblack commented Jun 10, 2022

Conversely, I'm unclear on why there was so little positive response from other users on those issues (or on the latest one) when comments on this thread in favor of the idea have gotten up to 29 thumbs.

I'm going to guess that this issue received many more 👍s because its title is much easier to find via a search (when I do a search like this, this issue is one of the top results). Assuming I guessed right, I think this issue is much more representative of the community's interest in this feature than those other two. Of course, it's biased toward people who want this feature; users who don't like the option to ignore code for coverage, for whatever reason, are much less likely to find this issue and give it a 👎.

As to why this is still open (again, only my assumption): robpike said in #12504 (comment)

I dislike tools forcing people to annotate their code to silence them. That is, I dislike these annotations - they're not about the code, only about the tools that analyze the code.

So perhaps the team is still open to some method of allowing code to be ignored, as long as it doesn't require an annotation? I've seen suggestions for an Unreachable function similar to Rust's unreachable!() (technically that's a macro). If that's implemented, maybe the team would find it reasonable that the coverage tool could automatically ignore blocks that call that function, which would get this issue closer to being resolved (not completely resolved as I don't think Unreachable alone is enough for the issue creator's request).

Edit: Just want to clarify that, in the mention of ignoring Unreachable in the previous paragraph, I was only offering an example of a way to ignore code in coverage reports that doesn't involve annotations and therefore might be more acceptable. My intent wasn't to make a proposal.

@ianlancetaylor
Contributor

@eli-darkly I was replying specifically to @carlmjohnson 's suggestion that we turn this issue into a proposal. Anybody can file a proposal, and all proposals will be reviewed by the proposal committee. I was saying that this issue has enough discussion and history that I don't think that we should turn this issue into a proposal. If someone wants to make a specific proposal, they should file a new issue. I did not mean to suggest that that proposal issue would receive a different response. I only meant to explain the best way to turn this issue into a proposal, since that was what @carlmjohnson was discussing.

@ianlancetaylor
Contributor

Personally, I, like many others, do not think that 100% code coverage is a useful goal. I definitely do not think that Go code should be contorted or commented to achieve that goal.

That said, I personally have no objection to the suggestion in the initial comment on this issue, to ignore unreachable code when determining coverage percentages. As the coverage tools are currently being rewritten (#51430) perhaps that will be more feasible.

@Goldziher

Goldziher commented Jun 11, 2022

Personally, I, like many others, do not think that 100% code coverage is a useful goal.

This isn't about percentages, it's about having a clear picture. But even if this were purely about numbers, there are also many others who think differently. Is there really a compelling reason to force your opinion on them? As you write, "personally": fine, it's your personal determination. Different people, teams, and organizations have different opinions, approaches, and methods. The point is for the tools to enable, not disable.

@earthboundkid
Contributor

The point of having a coverage tool is to guide test writing. If you run the tool and see that such and such file has 97% coverage, you have to click into the file to actually see, oh actually everything is covered, there’s just a single unreachable line. That’s fine for when you’re working, but if you leave something and come back or if someone else wants to add tests, they all have to manually “filter out” the spurious line each time.

@ianlancetaylor
Contributor

@Goldziher My opinion is that it would be bad for the overall Go ecosystem if Go code starts to have magic comments sprinkled around to support project-specific coverage goals. If those goals can be achieved without modifying Go code, then I'm fine with that.

@carlmjohnson I think it could be useful to have tools that report changes in coverage percentages. If the coverage percentage doesn't change, then all is (normally) well.

@Goldziher

Goldziher commented Jun 12, 2022

@carlmjohnson, @iamoryanmoshe

As I said, different organizations and teams have different approaches and ideas.

You're arguing for enforcing a narrow restriction on users because of your point of view and assumptions about usage.

As for the comments being "magic comments" - with all due respect that's hyperbolic.

Coverage comments are ordinary tool directives available in practically all major languages or testing frameworks, or at least the vast majority of them; that is a fact.

For very smart people, you seem to have very little trust in the competence and intelligence of the go user community. In my personal experience, though, Go developers are usually more experienced and capable than the average.

@breml
Contributor

breml commented Jun 12, 2022

@ianlancetaylor In regards to "... Go code starts to have magic comments ...", I feel like this ship has already sailed. The Go team itself introduced some of these comments (e.g. //go:generate) and the community has adopted the idea, e.g. with //nolint, see golangci-lint - Nolint Directive. As of today, according to GitHub, golangci-lint is used by 3.9k packages (which most certainly only includes open source packages). From my experience, especially with closed-source company code, golangci-lint is in widespread use, and with it magic comments to handle false positives (or accepted exceptions) are common.

Edit: Another example is // Code generated ... DO NOT EDIT..

@ianlancetaylor
Contributor

What can I say? I'm personally also opposed to "nolint" comments.

//go:generate isn't great but at least it's not scattered through otherwise ordinary code.

@Goldziher The comments are "magic" because they have semantic meaning, unlike ordinary comments. It's not a reflection of whether they are widely used.

For very smart people, you seem to have very little trust in the competence and intelligence of the go user community.

I'm not sure where this criticism is coming from, but I don't think it's accurate. I think it's poor programming practice to scatter magic comments through code. They obscure the intent of the code. They also easily become inaccurate. And in the case of coverage they are being added in pursuit of a goal that I don't think is appropriate. This doesn't have anything to do with the competence and intelligence of the Go user community. Go has always been an opinionated language.

@sam-github

Having the ability to use nolint or nocoverage inline comments doesn't force anyone to use them; it would still be the author's choice, so it should not impact people who don't use the feature.

I don't think it's fair to describe this as a feature "searching for 100% coverage". It's almost the opposite of that... I know that some areas of code are not profitably worth adding test coverage for, and what I want to do is tell my tooling (and my collaborators) to stop telling me that I didn't cover them... but I do want the tooling to tell me when I add new uncovered code, or unintentionally stop covering code that used to be tested. A pure x% doesn't tell me that; even if I tell the coverage tool to flag when coverage goes down, it won't be able to tell me when coverage changes unintentionally while maintaining or even improving the %.

The above is the information I (and probably many others) want; the inline comments are just the only mechanism I'm aware of that would achieve it.

I also don't love inline comments as a mechanism, and I strive to have as few nolint comments in code as possible. I'd prefer to have the annotation be part of the tool config.

Unfortunately, to my knowledge, there is no common way to refer to a line of code in a file that remains accurate as the file changes (file+lineno obviously doesn't work). There are probably some researchy ways possible. git can tell when code moves, so it's not a totally intractable problem in theory, but sometimes the simple pragmatic way is best. Of course, just not having the feature at all is simpler :-), but I'm hoping eventually to be able to communicate to my collaborators (and tooling) which lines are intentionally uncovered. Somehow.

@spenserblack

spenserblack commented Jun 15, 2022

What can I say? I'm personally also opposed to "nolint" comments.

I think it's poor programming practice to scatter magic comments through code. They obscure the intent of the code. They also easily become inaccurate. And in the case of coverage they are being added in pursuit of a goal that I don't think is appropriate. This doesn't have anything to do with the competence and intelligence of the Go user community. Go has always been an opinionated language.

I think it's pretty clear, but just to be absolutely sure: is the opposition to magic comments in general, or magic comments that are specifically for coverage? Because I think it's possible to have magic comments that are useful both as compiler hints for optimization and that also hint to a coverage tool to ignore some lines.

@eli-darkly

eli-darkly commented Jun 15, 2022

@ianlancetaylor I'm not sure arguments of this kind can ever reach a consensus, so I don't want to belabor this, but I do want to respond to one of your points that I think is the clearest case of people talking past each other:

And in the case of coverage they are being added in pursuit of a goal that I don't think is appropriate

Variations of this opinion have been stated by Go maintainers several times, but I think that they all rely on a mistaken idea of what the "goal" is from the point of view of people who want such a feature. For instance, "100% code coverage is usually not an appropriate goal for Go projects", or "100% test coverage asserts a very trivial property: namely, there exists some condition under which the line does not panic...". And it's common in all such arguments to see related statements like "just because you have 100% coverage doesn't prove that your code is correct."

I think those are all straw men, and I and other commenters on this thread have put a good deal of effort into explaining why. Literally no one is saying that 100% code coverage is a worthy goal for its own sake, or that it proves everything is correct. It is simply a very useful way of verifying that you haven't accidentally added a logic branch that your unit tests are not covering at all, when you thought they were. I provided an example of that kind of scenario in this comment above.

Now, not everyone approaches development in that way. But it is an extremely common practice that very many developers find useful, and it is not in any way language-specific. Go is opinionated about many things, but they are generally things that are related to characteristics of the language. "I would like my builds to catch me if I accidentally forget to write any tests that exercise a particular computation in my code" is not such a thing, and to say that it's not "appropriate" for developers to have such a desire is the kind of thing an individual developer may be opinionated about, but that we generally do not want language tools to be. There are developers who don't believe you should write comments in your code at all (for reasons like "the things you say in your comments might become untrue later") but no one would seriously suggest to remove comments from the language for that reason.

If Go did not have a code coverage reporting feature at all, then this would be moot. But it does, and such a feature is considerably less useful when there is no way to make it selective.

@ianlancetaylor
Contributor

@spenserblack I dislike magic comments in general. I particularly dislike magic comments that appear inside function bodies.

@eli-darkly Thanks. I think I do understand the intent. At least, what you wrote doesn't change my mind. But I think we've covered the point sufficiently.

On this topic, please see @rsc's comments on #53271:

@dan-lepage

Go arguably needs this sort of comment MORE than most languages - the downside of requiring that every possible error be properly handled is that you get a lot of error-handling blocks for errors that aren't actually possible. I've seen lots of code following patterns like this:

func someFunc(input Input) error { // Input is a placeholder type for this sketch
    thing := getValidThing(input)
    result, err := doSomethingWith(thing)
    if err != nil {
        return err
    }
    // ... do something with result ...
    return nil
}

where the only way doSomethingWith will ever return an error is if you give it an invalid Thing. There is no way to get full test coverage on this function, because the error condition can't happen (and also there's no reasonable way for the compiler to detect that this is unreachable code).

My editor highlights my scrollbar in red wherever there's code that isn't covered, which is fantastically useful in most languages for making it easy to find any cases I forgot to test, but very frustrating in Go because there are frequent red markings that are just noise imposed by this sort of error handling. As someone mentioned earlier, it's also really frustrating when you have files where you really do have 100% coverage apart from some cases like this - it'd be nice to know that at a glance.

Also, leaving this feature out doesn't mean that there won't be "magic comments" [1] in a lot of Go code - it just means that any project devs who want accurate coverage numbers will end up inventing their own or using some third-party checker like courtney, so we get magic comments anyway, except they're not standardized and they integrate poorly with other tools.

Or, even worse, you'll get novice programmers changing the example above to result, _ := doSomethingWith(thing), and thinking this is better because the red mark in their editor went away; I don't think this is something the language tools should encourage.
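
For concreteness, the third-party checker route mentioned above (e.g. courtney) uses a // notest comment, roughly like this; the exact scoping rules are the tool's own, so treat this as illustrative only:

result, err := doSomethingWith(thing)
if err != nil {
    // notest
    return err
}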

Footnotes

  1. I'm not sure I'd actually call // notest a magic comment. When I think of magic comments I think of things like # frozen_string_literal: true in Ruby or # -*- coding: utf-8 -*- in Python, both of which actually change the behavior of the language itself, whereas // notest or # pragma: no cover aren't language features, just annotations for utilities.

@rkollar
Contributor

rkollar commented Jul 27, 2022

@dan-lepage I don't think that's a good example. Error handling in Go is for cases when it is possible for the code to fail. And when a failure is possible, it makes sense to have a test for it.
In this case, I would argue that result, _ := doSomethingWith(thing) is actually the correct approach if you're 100% sure it cannot fail. Simply put, if the code can fail, you should find a way to test it. If it can't fail, you shouldn't be doing error checking in the first place.

If you want to be extra paranoid, you can use panic(err) instead of return err. So that you can catch any related programming error during development. Then it would make sense to not include that in coverage.

And even in the case where you're not absolutely sure whether something can fail, and have error handling just to be safe (rather than a panic that would kill your program), it wouldn't make sense to exclude that from coverage. You would be ignoring something that you're not 100% sure can't happen. You simply don't have it covered, and hiding it wouldn't do you any good.

@earthboundkid
Contributor

This is another example of why a runtime.Unreachable() that was excluded from coverage but panics on access would be nice.
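
A hypothetical sketch of what such a helper could look like; nothing like it exists in the standard library today, and the package and function names are invented purely for illustration:

// Package unreach is a hypothetical helper package.
package unreach

// Unreachable panics to flag that code assumed to be dead was actually executed.
// A coverage tool could recognize calls to it and exclude the enclosing block
// from the coverage denominator instead of reporting it as uncovered.
func Unreachable(msg string) {
    panic("unreachable code executed: " + msg)
}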

@krader1961

If you want to be extra paranoid, you can use panic(err) instead of return err. So that you can catch any related programming error during development. Then it would make sense to not include that in coverage.

That's the situation I had in mind when I made my comment two years ago. There are a few places in the Elvish shell I've been contributing to that are "can't happen unless a logic bug exists" situations, such as switch statements with a default: panic() to help ensure that if an impossible condition occurs the program doesn't blithely continue on and report an error message that isn't actionable by the user. This is about ignoring blocks that aren't covered by unit tests because those blocks don't represent normal error cases (which should have test coverage).

@josharian
Contributor

Maybe the specific line of code panic("unreachable") (spelled exactly like that) could play the role of a magic comment. All statements in a basic block containing that statement would be ignored.

It is clear, unambiguous, doesn't litter the code unnecessarily, doesn't add API, and fails loudly if the implicit assumption of untestability is violated.
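
As a sketch of how that would read in practice (the color type and its constants are made up for illustration):

type color int

const (
    red color = iota
    green
    blue
)

func name(c color) string {
    switch c {
    case red:
        return "red"
    case green:
        return "green"
    case blue:
        return "blue"
    default:
        panic("unreachable") // under this suggestion, the whole default block would be excluded from coverage
    }
}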

@dan-lepage

@rkollar:

In this case, I would argue that result, _ := doSomethingWith(thing) is actually the correct approach if you're 100% sure it cannot fail.

I'm 100% sure that it cannot fail as long as nobody changes the doSomethingWith function in a way that breaks it. If I just discard the error, though, then someone in the future could break doSomethingWith and someFunc would now be invisibly swallowing the error.

But at the same time I want to exclude the error handling from coverage because as long as doSomethingWith is correctly implemented it is completely impossible for me to write a test case that executes that path, and indeed if a test ever does cover that path it is a sign that something is broken.

@josharian:

Maybe the specific line of code panic("unreachable") (spelled exactly like that) could play the role of a magic comment.

The downside to this is that if someone does break the code, the only message they get is "Unreachable" instead of actually getting to see the error that broke things.

Panicking also feels more fragile to me - if someone breaks doSomethingWith later on in a really subtle way and the error makes it into production before anyone notices, I'd much rather learn about it from a user who saw something odd in their logs than from a user reporting a total crash. I suppose that's just an argument for better fault barriers, though.

But I also think that looking at the specific "unreachable code" use case and finding ways around it is sidestepping a more general point, that the ability to indicate "I have chosen deliberately not to test this block of code" is a useful feature in a variety of cases.

Here's another sadly common use case: you have just inherited a big block of poorly-designed code that's hard to test - maybe there are conditionals that will only be met if certain operations take hours, or blocks that are hard-coded to use production resources, etc. etc. And this library is already being used in production (presumably due to mistakes that other people made) and you have been tasked with adding tests for as much of it as possible. Obviously the ideal would be that nobody had made those mistakes in the first place, and the next best thing would be to redesign the library to be properly testable and properly tested (and update all the existing code that's using the current poorly-written API), but that might be weeks of effort when what you really need, right now, are some tests for as much as you can test. Then it's invaluable to be able to go through the library writing tests and confirm that every part of it is either at least minimally tested or explicitly marked as "this is not worth the effort it would take to test it right now".

I get that there are a number of strong arguments of the form "If you're writing good Go code you shouldn't need this feature", but I can't possibly be the only person who's ever had to work with existing poorly-written code [1].

Perhaps this is drifting away a bit from the original issue of specifically excluding unreachable code, and maybe I should be lobbying for this in a separate issue, but I definitely believe there's a strong case for being able to mark code as deliberately untested, regardless of whether it's because the code is theoretically impossible to reach or because you have other reasons for not wanting to test that code right now.

Footnotes

  1. To be clear, I'm not trying to blame all these problems on novices or junior devs - sometimes the code is poorly written because I wrote it and managed to screw it up. That doesn't change the fact that I might not have the time to fix it right away.

@FloatingSunfish

FloatingSunfish commented Aug 16, 2022

This feature needs to be added to Go so its official code coverage tool will become more robust.

As other commenters here have already pointed out, having the flexibility to exclude code that doesn't need to be tested from code coverage is very useful because it reduces noise when reading code coverage reports.

This isn't about "chasing 100% code coverage." It's about explicitly declaring what code you don't want to cover with tests, which could be for any number of reasons (e.g. it's not practical to test, it's something you never expect to happen, it doesn't contain any meaningful logic, etc.).
This helps you and your team focus on code that does need to be tested.

I hope the Go Team changes their minds about this because it's a very practical feature to have.

@a4z

a4z commented Aug 16, 2022

Not too long ago, and somewhat motivated by this topic, I wrote this blog post: The 100% test coverage topic. It gives a little more context on why it's a good idea to have an exclude possibility.

On the other hand, since I found courtney, which works reasonably well and has some additional features, I am fine. Still, having all of that out of the box would of course be awesome.

@acidjazz

acidjazz commented Oct 1, 2022

+1 for flagging unreachable code - I agree that it should be up to you and your team to decide what 100% means.

@adbenitez

I am using https://github.com/dave/courtney just because of this issue, mainly because I have a lot of:

err := somelib.callTo3rdPartyFunc()
if err != nil {
   return err
}

It is hard to make the call to somelib.callTo3rdPartyFunc() fail, and I don't need to test the third-party function; from the point of view of my code, there is no need to force an error just to cover the execution of return err statements.

If the Go tooling ignored coverage for such trivial return err statements, that would already be an improvement, without needing "magic comments".

@santhosh-tekuri santhosh-tekuri changed the title testing: exclude unreachable code in test coverage testing: exclude statically unreachable code in test coverage Apr 8, 2023