proposal: cmd/go: go test should always fuzz a bit even without -fuzz #50969
Comments
Sorry, but I don't want to have to wait and use resources for something I didn't ask for. If -fuzz should be on by default, we should add an environment variable that enables this, something like GOTESTFLAGS.
I don't want `go test` to fuzz by default.
It's worth noting that fuzz tests already run as part of a plain `go test`: without the `-fuzz` flag, each entry in the seed corpus is executed once as a regular test case.
On one hand, I think this would be a good idea for the sake of good defaults, and for fuzzing to be a better drop-in replacement for `testing/quick`.
Do you know how long this would be? testing/quick seems to default to 100 checks, so we'd have to somehow convert that to a reasonable time unit.
I think this is a good idea. If I've got a fuzz test, I think it's reasonable to run it for at least a few milliseconds rather than just once.
I'm happy with 100 iterations if X milliseconds is deemed too long or too difficult to estimate.
Just thinking out loud: the cost of a fuzz function could vary wildly. I have one that takes on the order of tens of milliseconds per input, so 100 iterations would easily get you to a couple of seconds of wall time. I admit that I don't know how typical that is.
IMO running a small number of deterministic tests fits in well with existing unit-testing expectations.

An aside about how to determine whether a fuzz test does a "good" job at fuzzing: this is covered well by fuzzing for a short duration (10-60s) and characterizing its behavior. If it is gaining no coverage, maybe it needs to be rewritten? If it is saturating coverage already, maybe it is not worth running? (If it is crashing, it either needs to be rewritten or bugs need to be fixed.) Does each iteration take too long (>100ms)? Does it stay in your memory budget? You can even automate the decision for whether this is a "healthy" fuzzer as a [roughly] pre/post-submit-time CI check.
I don't think anybody questions that fuzzing is useful. In other words, if you agree that it is useful, the disagreement is only about whether `go test` should run it by default.
I expect to run tests without fuzz testing a lot more often than fuzz tests. I like saving electricity on my old refurbished computers, as I talked about before on this issue tracker. Fuzz testing is something I plan on doing on "finished" software only, while normal go testing I do all the time during development.
@beoran if that is the case in general, I imagine you could turn it off explicitly, much the same way `go test` runs vet checks by default but lets you disable them with `-vet=off`.
I'd rather not have another thing to turn off, really. And go vet is rather predictable in resource usage, while fuzz testing can be less so. By the way, should it not also be -fuzz=off for consistency, or has that ship sailed already?
I think this level of unpredictability is not in line with what people expect between runs of `go test`. OTOH I would also encourage folks to rerun their fuzz tests as code changes; I would be very happy to have fuzzing run continuously in CI, for example.

My current opinion is that turning fuzzing on should be "opt in" instead of "opt out".
This is misleading.
I don't understand why you're happy to have fuzzing in CI, where there's a risk of losing the reproducer, but not during manual `go test` runs.
I think where we are disagreeing is what the defaults for `go test` should be. Running for a fixed time duration on randomly generated inputs could make a test change whether it passes from one run to the next due to differences in timing, machine, load, or seed choice. Repeated executions would almost certainly find more crashes than a single fixed sequence of testing inputs, but it would do so non-deterministically.
If a deterministic test finds a low-priority bug, that part of the test can normally be skipped to avoid masking new failures with known-but-unimportant existing failure modes. However, there isn't a straightforward way to tell the fuzzer to skip a known-bad codepath: restructuring a fuzz test to avoid a bad tree of inputs can sometimes be even more work than fixing that codepath!

The difficulty of skipping known-bad inputs is a problem that fuzzing shares with other nondeterministic tests, but with one important caveat: fuzz tests also have a deterministic component. The behaviors for the inputs added by calls to `F.Add`, and for those checked in to the seed corpus, are reproducible.

So, to me, it is important that the "run deterministic fuzzer inputs" step be separate from the "run nondeterministic fuzz inputs" step, because that allows the existing inputs to continue to function as regression tests even in the presence of other known bugs.
I'm confused about the proposal. How much additional time should every invocation of 'go test' take? |
@bcmills Being able to skip tests is a valid concern I had not really considered. I still think there is value in each fuzz test being run deterministically >0 times by default on `go test`.
This proposal has been added to the active column of the proposals project |
It does seem like we could at least invoke the fuzz function on a small, fixed number of deterministically generated inputs.
Even if this does uncover something "unexpected", I think that is totally fine. The input is deterministic, so this is effectively the same as a unit test and should be easy to reproduce. And if it does need to be skipped, the fuzz function could skip on this (known) input.
@bcmills For other contexts that use essentially only
How much longer would the typical test run on file save and in CI take?
I like this idea of fuzzing a bit with deterministic input. If you make the count 100, it matches the `testing/quick` default.
Given the objections to unexpected non-deterministic input, I now believe changing my proposal to do 100 fuzz calls with deterministic input is the best option. |
Fuzzing is not terribly cheap. Doing it always is almost certainly not a good idea. |
Based on the discussion above, this proposal seems like a likely decline. |
Isn't doing just a few rounds of fuzzing cheap? In general a single fuzzing iteration tends to be quite cheap; otherwise there's not much point in fuzzing, because there wouldn't be much hope of getting good code coverage. Most of the functionality that I've seen fuzzed runs on the order of a millisecond or so per iteration at most. So I'd support running a few rounds of fuzzing (maybe even just one) just to make sure the code works in principle, much as we do with examples.
Even if this proposal is eventually declined, please expand this comment to make the expense clearer. What do you mean by not terribly cheap? Compared to what? And is it relevant in the presence of test result caching? |
Fuzzing requires creating a subprocess, setting up shared memory, feeding data, and just tons more machinery than running a simple test binary. And then there are the fuzzing functions themselves, which may also not be very cheap. |
No change in consensus, so declined. |
It's mentioned in the documentation that `go test` without the `-fuzz` flag only tests the seed corpus for Fuzz* functions. I'd like to convert tests that use `testing/quick.Check` to Fuzz* functions, but I'm too lazy to come up with a seed corpus for otherwise succeeding tests. I propose that `go test` always fuzz for a bit, even without the `-fuzz` flag, similar to using `testing/quick`.