runtime/pprof: weird allocation counts after reading a file of a certain minimum size #11343
Marking this Go1.5 so we at least understand what's happening.
The heap profile reports a snapshot as of the most recent garbage collection. It omits more recent allocations, because otherwise the recent garbage in the profile can drown out the older live data. What's going on in your program is that the size of the read affects whether a garbage collection happens after the multiwriters are created. If so, they appear in the profile; if not, they don't.

If you want to see a snapshot of live data as of the writing of the profile, you need to call runtime.GC before writing out the profile. If you only care about allocation counts and not what is live, then you can also use GOGC=off. When the GC is off, the profile reports all allocations (since there's no "last GC").

So this is working as intended, if possibly mysteriously. I will update the runtime/pprof doc comment to make this clear.
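As an illustration (not part of the original comment), a minimal sketch of forcing a collection just before writing the profile; the file path, helper name, and error handling are illustrative only:

```go
package main

import (
	"os"
	"runtime"
	"runtime/pprof"
)

// writeProfile is a hypothetical helper: force a garbage collection so the
// heap profile reflects live data as of right now, not as of the previous GC,
// then write the profile to path.
func writeProfile(path string) error {
	runtime.GC()

	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	// WriteHeapProfile is shorthand for pprof.Lookup("heap").WriteTo(f, 0).
	return pprof.WriteHeapProfile(f)
}
```

Alternatively, as noted above, running the program with GOGC=off disables collection entirely, so the profile then reports all allocations rather than a last-GC snapshot.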
Thanks so much for the explanation. Sticking something similar in the docs sounds good to me.
Oh, to clarify: when you say "see a snapshot of live data", do you mean to say only objects that are still live? Or can I call runtime.GC just before writing the file to see historical allocation counts, even for objects that have long since been collected?
CL https://golang.org/cl/12556 mentions this issue.
@jacobsa Each profile line gives n: m [N: M] for a particular allocation stack, where n: m is currently allocated (live) objects, and N: M is all objects. pprof --inuse_xxx uses n: m and pprof --alloc_xxx uses N: M. What I mean by "a snapshot of live data" is really "a snapshot of right now", because those are updated only at each GC.
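For illustration (not part of the thread), the same four numbers are also available programmatically via runtime.MemProfile; this rough sketch shows how the record fields map onto the n: m [N: M] columns of the text-format profile:

```go
package main

import (
	"fmt"
	"runtime"
)

// dumpMemProfile prints, for each allocation site, the live ("in use") and
// cumulative counts corresponding to the n: m [N: M] columns.
func dumpMemProfile() {
	// First call with nil to learn how many records exist, then allocate a
	// slice with some headroom and call again.
	n, _ := runtime.MemProfile(nil, true)
	records := make([]runtime.MemProfileRecord, n+50)
	n, ok := runtime.MemProfile(records, true)
	if !ok {
		return // profile grew between calls; a real program would retry
	}
	for _, r := range records[:n] {
		// InUse* = Alloc* - Free*: what pprof --inuse_objects/--inuse_space use.
		// Alloc* alone is cumulative: what pprof --alloc_objects/--alloc_space use.
		fmt.Printf("%d: %d [%d: %d]\n",
			r.InUseObjects(), r.InUseBytes(), r.AllocObjects, r.AllocBytes)
	}
}

func main() {
	dumpMemProfile()
}
```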
Got it, thanks for confirming. |
Here is a gist containing a program that does the following:

1. Sets runtime.MemProfileRate to 1.
2. Reads a flag-controlled number of bytes from /dev/zero, using ioutil.ReadAll and io.LimitReader.
3. Calls io.MultiWriter 1024 times in a loop, encouraging it to make allocations.
4. Writes a memory profile to /tmp/mem.pprof.

(A sketch of a program along these lines follows below.)
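The gist itself isn't reproduced here; under the assumptions in the list above, a minimal program in this shape might look like the following. The flag name, default value, use of ioutil.Discard as the MultiWriter argument, and the lack of a runtime.GC call are all guesses, not the original code:

```go
package main

import (
	"flag"
	"io"
	"io/ioutil"
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

// read_bytes is a hypothetical flag name; the issue mentions a flag that
// controls how many bytes are read before io.MultiWriter is called.
var readBytes = flag.Int64("read_bytes", 2096128, "Bytes to read from /dev/zero.")

func main() {
	flag.Parse()

	// Sample every allocation.
	runtime.MemProfileRate = 1

	// Read the requested number of bytes from /dev/zero.
	f, err := os.Open("/dev/zero")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := ioutil.ReadAll(io.LimitReader(f, *readBytes)); err != nil {
		log.Fatal(err)
	}
	f.Close()

	// Call io.MultiWriter repeatedly to force allocations.
	for i := 0; i < 1024; i++ {
		io.MultiWriter(ioutil.Discard)
	}

	// Write the heap profile.
	out, err := os.Create("/tmp/mem.pprof")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := pprof.WriteHeapProfile(out); err != nil {
		log.Fatal(err)
	}
}
```

Because nothing forces a collection before the profile is written, whether the MultiWriter allocations show up depends on whether a GC happened after them, which is consistent with the behavior explained earlier on this page.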
When I run the program with the flag that tells it to read 2096128 bytes before calling io.MultiWriter, I see the expected number of calls in the profile:

However, with any value above this I get unreliable results (these change from run to run, but are never the full count and are often zero):
This shows up as nonsense callgraphs in pprof, containing lots of runtime frames and the wrong counts for the functions I'm interested in. Compare this one to this one. Each is from an invocation of pprof that looks like:
I have been unable to simplify the test case further. In particular, merely allocating a slice of bytes of the given size isn't good enough; it seems I have to open and read a file. I've also seen the problem when not setting runtime.MemProfileRate to a non-default value, but it is harder to reproduce.

go version: