runtime: profile goroutine creation #16894
There's already a goroutine profile; search for "goroutine" in https://golang.org/pkg/runtime/pprof/. I think that covers this.
That goroutine profile shows the current stack of each goroutine and only a single address for the location where the goroutine was created, not the full call stack that created it. If this existed today, it would likely be called "goroutinecreate".
Like in #16379, I'm a bit worried about monitoring slowly taking up all the CPU we have available. The nice thing about the current goroutine profile is that it only costs something when you ask for it. It doesn't slow down the program always. I suppose we could sample the goroutine creations, but that might be too confusing.
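The sampling idea can be approximated in user code today by routing goroutine creation through a wrapper. A sketch, where the `spawn` helper, the `sampleRate` constant, and the global table are all hypothetical (the runtime has no such hook):

```go
package main

import (
	"fmt"
	"math/rand"
	"runtime"
	"sync"
)

const sampleRate = 100 // record roughly 1% of creations

var (
	mu     sync.Mutex
	stacks = make(map[string]int) // creation stack -> sampled count
)

// spawn starts f on a new goroutine; with probability 1/sampleRate it
// first records the creating goroutine's stack, which is the information
// the proposed "goroutinecreate" profile would capture in the runtime.
func spawn(f func()) {
	if rand.Intn(sampleRate) == 0 {
		buf := make([]byte, 4096)
		n := runtime.Stack(buf, false) // stack of the creator, not of f
		mu.Lock()
		stacks[string(buf[:n])]++
		mu.Unlock()
	}
	go f()
}

func main() {
	done := make(chan struct{})
	for i := 0; i < 1000; i++ {
		spawn(func() { done <- struct{}{} })
	}
	for i := 0; i < 1000; i++ {
		<-done
	}
	mu.Lock()
	fmt.Println("sampled creation sites:", len(stacks))
	mu.Unlock()
}
```

With a 1% rate the per-creation cost is a single random draw on the fast path, which illustrates why sampling keeps the overhead low.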
/cc @RLH @aclements
@rsc Similar to memory profiling, I think having the user pick a profile rate (for both goroutine creation and file descriptor profiling) makes a lot of sense. It could be 0 by default, although I imagine even a 1% sample rate will be useful. Similar to memory leaks, if there are tens of thousands of goroutines leaking, having 1% of the creations would be enough to find the source.
+1
@mandarjog, did you try looking at the CPU profile or the memory profile in graphical form (the 'web' command)? I would expect goroutine creation to appear in both of those. |
@rsc Following is the graphical view. Should I be looking at something else? |
@mandarjog, sorry, I guess the profiles have changed since I last saw them. You're right that that's not helpful. @aclements, we've had problems trying to do a backtrace off the system stack onto the main stack for CPU profiling, but maybe we can do it for memory allocation profiling, which doesn't happen at random program counters? |
Most unbounded Go memory leaks I've encountered have been due to goroutine leaks. When diagnosing these leaks, it would be helpful to know what function call stack led to the creation of the goroutines so we can link the goroutines back to the non-stdlib code that created them—why the goroutine exists, in addition to its start PC. This is the same vein in which the current heap profiles include the stack that allocated the memory rather than just the PC and type of the allocation.
Related proposal for file descriptor profiling: #16379
A popular way to leak goroutines is by making http requests and not closing the response body. That problem could be targeted more specifically with a good static analysis tool, similar to vet's new -lostcancel check. Both approaches would be helpful.