runtime: fix testing.AllocsPerRun #8734
AllocsPerRun is *not* useful as we use it. The net/http package also amortizes allocations. We can't fix all potential amortizations and never add new ones. The number of allocations per 1000 iterations would be more stable and indicative. Then one can assert that the number of allocations is between, e.g., 7001 and 8000.
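For illustration, a sketch of what such a range-based check could look like in a test. doWork and the 1000..1100 window are made up here; the 7001..8000 above is just another example of such bounds.

```go
package allocs

import (
	"runtime"
	"testing"
)

var sink []byte

// doWork is a hypothetical stand-in for the code being measured;
// it performs one heap allocation per call.
func doWork() {
	sink = make([]byte, 64)
}

// Sketch of the range-based check suggested above: count the allocations
// across 1000 iterations via MemStats.Mallocs and assert that the total
// falls inside a permissible window chosen for this particular code.
func TestAllocsInRange(t *testing.T) {
	const iters = 1000
	var before, after runtime.MemStats
	runtime.GC()
	runtime.ReadMemStats(&before)
	for i := 0; i < iters; i++ {
		doWork()
	}
	runtime.ReadMemStats(&after)
	total := after.Mallocs - before.Mallocs
	if total < 1000 || total > 1100 {
		t.Errorf("allocations per %d iterations = %d, want 1000..1100", iters, total)
	}
}
```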
This is difficult because when we free an object, we don't know how many sub-objects were in it. Also, if we increment MemStats.Mallocs[TinySizeClass].Allocs, then that will lead to an inconsistency between MemStats.Alloc and the sum of MemStats.Mallocs[i].Allocs*MemStats.Mallocs[i].Size. From the runtime's point of view these are not allocations in any way, shape or form.
It doesn't matter what the runtime thinks. It matters what users think. If they call new(T) and it doesn't go on the stack, that should add exactly 1.0 to AllocsPerRun in package testing. You are writing "Mallocs[" but you mean "BySize[". The runtime needs to track two more numbers: total number of tiny blocks allocated, and total number of tiny allocations from those blocks. Then it can either expose them as new fields in MemStats or it can adjust the overall MemStats.Mallocs (here I mean Mallocs, not BySize) by adding TinyAllocs - TinyBlocks before returning the stats.
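A rough sketch of the adjustment being proposed (tinyBlocks and tinyAllocs are the two hypothetical counters described above, not existing runtime fields, and the types here are simplified stand-ins for the real MemStats plumbing):

```go
package main

import "fmt"

// memStats is a simplified stand-in for runtime.MemStats.
type memStats struct {
	Mallocs uint64
}

var (
	mallocs    uint64 // raw mallocgc calls
	tinyBlocks uint64 // tiny blocks handed out by the tiny allocator (hypothetical counter)
	tinyAllocs uint64 // individual allocations served from those blocks (hypothetical counter)
)

// readMemStats adjusts Mallocs so that each tiny allocation counts as one
// allocation, rather than each shared tiny block counting as one.
func readMemStats(m *memStats) {
	m.Mallocs = mallocs + tinyAllocs - tinyBlocks
}

func main() {
	// Example: 3 mallocgc calls, one of which was a tiny block that served
	// 4 separate tiny allocations. Users should see 2 + 4 = 6 allocations.
	mallocs, tinyBlocks, tinyAllocs = 3, 1, 4
	var m memStats
	readMemStats(&m)
	fmt.Println(m.Mallocs) // 6
}
```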
OK, I see what you mean. But then we lie about the number of live objects in the heap (Mallocs-Frees). It will trigger any leak-detection monitoring out there; basically, the number of objects in the heap grows without bound. I am still not convinced that we need to fix anything here. MemStats is not about the user program as written, it's about what happens in the runtime (e.g. we do not count a new(T) that is demoted to the stack). For the runtime these allocations are not allocations. Just as the number of allocations can fluctuate after recompilation due to changes in escaping, it can change from iteration to iteration if the execution environment manages to avoid some allocations on some iterations. We have the same effect due to defer caching, we have the same effect in chan operations and mutexes due to sudog caching, and we have the same effect in any package that uses caching.
> We need to do something to make testing.AllocsPerRun report the truth (the number of calls to mallocgc) again.

Calls to mallocgc are not part of any public contract. And AllocsPerRun never reported even that. Zero-sized allocations have never been reported (while most C mallocs would consider that an allocation). Sometimes it counted mallocgc twice (when settype inside wanted to allocate memory). Other things like deferproc can be counted as well. But if/when we move some defers to the stack, this implementation detail will change again. Things can become more complicated when we have a bump-the-pointer allocator with an inlined fast path. It does not seem to me that there is a strong enough notion of a "user allocation" that we can preserve over time. The runtime reports what *it* thinks is an allocation. If we want to check allocations as a QoI measure, we can check that the number of allocations per N iterations fits into a permissible range.
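As a toy illustration of the zero-size point (not from the issue; the sink variable is only there to force the value to escape): a zero-sized allocation is served from the runtime's shared zero-size base and has never been counted, so AllocsPerRun reports 0 for it.

```go
package main

import (
	"fmt"
	"testing"
)

// sink forces the allocation in the closure to escape to the heap.
var sink *struct{}

func main() {
	// Allocating a zero-sized value is never reported as an allocation,
	// even though a C-style malloc would typically count it.
	n := testing.AllocsPerRun(100, func() {
		sink = new(struct{})
	})
	fmt.Println(n) // 0
}
```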
Dmitry, clearly you have a different definition of the word "Allocs" than I think was intended when we named the function "testing.AllocsPerRun". Think of it instead as "testing.DoesDoingThisContributeToTheGarbageCollectorRunningAndHowMuch". But that's an inconvenient name, so it's called "AllocsPerRun". Users don't care how memory was allocated. testing.AllocsPerRun is 99% of the time about locking in heap allocation behavior. Whether that's heap-in-slab or heap-in-tiny is irrelevant. If incrementing a counter is expensive, it only has to be done while testing.AllocsPerRun is active (i.e. only in tests).
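A sketch of that typical locking-in test (item and record are invented for illustration): the 8-byte, pointer-free object goes through the tiny allocator, which is exactly the case this issue is about, yet the test still expects each call to cost exactly one allocation.

```go
package allocs

import "testing"

// item is small (8 bytes) and pointer-free, so its allocation is served
// by the tiny allocator.
type item struct{ n int }

var last *item

// record performs one heap allocation per call; the value escapes via the
// package variable.
func record(n int) {
	last = &item{n: n}
}

// Typical "lock in heap allocation behavior" test: it should not matter
// whether the object gets its own block or is packed into a shared tiny
// block; calling record must count as exactly one allocation.
func TestRecordAllocs(t *testing.T) {
	allocs := testing.AllocsPerRun(100, func() { record(42) })
	if allocs != 1 {
		t.Errorf("allocations per run = %v, want 1", allocs)
	}
}
```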
This bug is not about redefining AllocsPerRun. It is about FIXING it. We run the thing 100 times or something like that and then divide by 100. All the things that happen significantly less than once per run get rounded away. What's left is the consistent allocations. Except that the tiny-alloc code path is breaking that. The code path needs to be changed to keep AllocsPerRun working. Dmitriy, are you willing to fix this? If not, let me know and I will reassign the bug and fix it myself.
> We run the thing 100 times or something like that and then divide by 100. All the things that happen significantly less than once per run get rounded away.

When you divide by 100 and round down, you eliminate not just everything that happens significantly less than once per run; you eliminate everything that did not happen on every single run. If that was the intention, then we need to add 0.9 before rounding down. By the way, the rounding logic was introduced to mask another implementation detail: episodic allocations in settype. Settype allocations were indeed rare, and they are gone now. I can't fix this issue, because I don't understand what you want to measure. Feel free to reassign it to yourself.
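For reference, the averaging step being discussed is roughly the following (a simplified sketch, not the actual testing.AllocsPerRun source): the total allocation count over the runs is divided by the number of runs using integer division, so anything that does not allocate on essentially every run is truncated away; adding 0.9 to the average before truncating would instead keep anything that allocates on at least about one run in ten.

```go
package allocs

import "runtime"

// allocsPerRun is a simplified sketch of the averaging under discussion.
func allocsPerRun(runs int, f func()) float64 {
	defer runtime.GOMAXPROCS(runtime.GOMAXPROCS(1))

	f() // warm up, so one-time setup allocations are not counted

	var before, after runtime.MemStats
	runtime.ReadMemStats(&before)
	for i := 0; i < runs; i++ {
		f()
	}
	runtime.ReadMemStats(&after)

	mallocs := after.Mallocs - before.Mallocs
	// Integer division rounds down, dropping anything that did not
	// allocate on (nearly) every run.
	return float64(mallocs / uint64(runs))
}
```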
CL https://golang.org/cl/143150043 mentions this issue.
This issue was closed by revision e19d8a4. Status changed to Fixed.
wheatman pushed a commit to wheatman/go-akaros that referenced this issue on Jun 25, 2018: Fixes golang#8734. LGTM=r, bradfitz, dvyukov R=bradfitz, r, dvyukov CC=golang-codereviews, iant, khr https://golang.org/cl/143150043