
proposal: runtime/metrics: provide histogram of goroutines' on-CPU time #63341

Open
rhysh opened this issue Oct 2, 2023 · 4 comments

rhysh commented Oct 2, 2023

The /sched/latencies:seconds metric is a sampled view into how long goroutines need to wait after becoming runnable before they receive on-CPU time. I propose adding another metric, tentatively named /sched/running:seconds, showing how long goroutines spend in the running state before either cooperatively yielding or being successfully preempted. It would be sampled at the same rate (an undocumented 1 in 8) as the "latencies" metric.

On the short end: Scheduling a goroutine onto a thread consumes resources, and applications whose goroutines do very little of their own work each time they're scheduled aren't able to amortize that overhead very well. Putting that information in the same context as the metric for scheduling latency will help paint a more complete picture of the application's use of the scheduler (though of course not as complete as the runtime/trace package provides). It will allow easier comparison against general recommendations for sizing the units of work in concurrent programs (such as "split tasks into about 100µs") in a way that's accessible both at runtime and to test code.

On the long end: Goroutines that stay in the running state for more than 2 scheduler quanta (20ms), when async preemption is enabled, indicate that part of the app (or its use of the runtime) operates with preemption disabled for too long. This can make it easier to notice and track down problems like #31222 where a long non-preemptible unit of work ends up delaying the entire app by delaying one of the GC's stop-the-worlds. (The /gc/pauses:seconds metric shows the non-preemptible regions, but only when they coincide with a GC phase transition. On top of that, maybe we should have a finer-grained measure of how long preemption remains disabled; 20ms is a very long time already.) It can also indicate when the Go scheduler assigned a goroutine to a software thread, but the OS didn't schedule that software thread onto a hardware thread.
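A minimal sketch of what inspecting both ends could look like, assuming the proposed /sched/running:seconds metric existed (the name, the ~100µs guidance, and the 20ms threshold are taken from this proposal, not from the current runtime):

```go
// Sketch: summarize the short and long ends of the proposed
// /sched/running:seconds histogram. The metric does not exist today, so
// metrics.Read reports KindBad for it and this program exits early.
package main

import (
	"fmt"
	"runtime/metrics"
)

func main() {
	s := []metrics.Sample{{Name: "/sched/running:seconds"}}
	metrics.Read(s)
	if s[0].Value.Kind() != metrics.KindFloat64Histogram {
		fmt.Println("metric not available in this runtime")
		return
	}
	h := s[0].Value.Float64Histogram()

	var total, under100us, over20ms uint64
	for i, c := range h.Counts {
		total += c
		if h.Buckets[i+1] <= 100e-6 { // whole bucket below the ~100µs guidance
			under100us += c
		}
		if h.Buckets[i] >= 20e-3 { // whole bucket beyond 2 scheduler quanta
			over20ms += c
		}
	}
	if total == 0 {
		return
	}
	fmt.Printf("%.1f%% of sampled runs were shorter than ~100µs\n",
		100*float64(under100us)/float64(total))
	fmt.Printf("%d sampled runs exceeded ~20ms (possible non-preemptible regions)\n", over20ms)
}
```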

Throughout: Being able to calculate a continuously-updating (sampled) sum of on-CPU time allows comparisons against other perspectives such as RUSAGE_SELF or CLOCK_PROCESS_CPUTIME_ID. This lets the application notice that it's exceeding the OS-provided resources.
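A sketch of that comparison, again assuming the proposed metric existed; the 8x factor reflects the (undocumented) 1-in-8 sampling rate mentioned above, the bucket-midpoint sum is only an approximation, and getrusage is Unix-only:

```go
// Sketch: approximate total on-CPU time from the proposed (hypothetical)
// /sched/running:seconds histogram and compare it against the OS view.
package main

import (
	"fmt"
	"math"
	"runtime/metrics"
	"syscall"
)

func main() {
	s := []metrics.Sample{{Name: "/sched/running:seconds"}}
	metrics.Read(s)
	if s[0].Value.Kind() != metrics.KindFloat64Histogram {
		fmt.Println("metric not available in this runtime")
		return
	}
	h := s[0].Value.Float64Histogram()

	var goSeconds float64
	for i, c := range h.Counts {
		lo, hi := h.Buckets[i], h.Buckets[i+1]
		if math.IsInf(lo, -1) {
			lo = 0
		}
		if math.IsInf(hi, 1) {
			hi = lo // open-ended bucket: fall back to its lower bound
		}
		goSeconds += float64(c) * (lo + hi) / 2
	}
	goSeconds *= 8 // compensate for the 1-in-8 sampling

	var ru syscall.Rusage
	if err := syscall.Getrusage(syscall.RUSAGE_SELF, &ru); err != nil {
		fmt.Println("getrusage:", err)
		return
	}
	osSeconds := float64(ru.Utime.Sec) + float64(ru.Utime.Usec)/1e6 +
		float64(ru.Stime.Sec) + float64(ru.Stime.Usec)/1e6

	fmt.Printf("sampled on-CPU ≈ %.3fs, OS-reported CPU = %.3fs\n", goSeconds, osSeconds)
}
```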

Existing:

/sched/latencies:seconds
	Distribution of the time goroutines have spent in the scheduler
	in a runnable state before actually running. Bucket counts
	increase monotonically.

Proposed addition:

/sched/running:seconds
	Distribution of the time goroutines have spent continuously in
	the running state before yielding. Bucket counts increase
	monotonically.
@rhysh rhysh added the Proposal label Oct 2, 2023
@gopherbot gopherbot added this to the Proposal milestone Oct 2, 2023
@ianlancetaylor

CC @golang/runtime

@prattmic prattmic added the compiler/runtime Issues related to the Go compiler and/or runtime. label Oct 3, 2023

prattmic commented Oct 3, 2023

This is closely related to the accepted proposal #15490. The difference here is that this is a histogram of run time rather than just a count.

cc @mknyszek

rhysh commented Oct 4, 2023

That proposal includes /sched/goroutines/running:goroutines, a count of how many goroutines are running at this instant, rather than a cumulative histogram of how long they remained in that state. They're related, but I don't think we'd be able to use the value of one to derive the other.

aktau commented Jan 23, 2024

> Throughout: Being able to calculate a continuously-updating (sampled) sum of on-CPU time allows comparisons against other perspectives such as RUSAGE_SELF or CLOCK_PROCESS_CPUTIME_ID. This lets the application notice that it's exceeding the OS-provided resources.

This would indeed be interesting.

But, histograms have a downside: values are only added to the bucket count when the event has passed. In this case, the duration for which a specific goroutine was RUNNING can only be determined after it has transitioned to another state.

In edge cases, goroutines may rarely (or never) transition to other states, or the scheduler itself may be having hiccups; in either situation the time already spent RUNNING would not show up in the histogram.

> This is closely related to the accepted proposal #15490. The difference here is that this is a histogram of run time rather than just a count.

As @rhysh highlights, the instantaneous count of RUNNING/RUNNABLE/... goroutines would be interesting from a debugging point of view, but (I suspect) less useful from a runtime tuning/controller perspective than a sum of the time spent in each state. In general, cumulative values are better for this: one can compute one's own deltas over an arbitrary polling period without losing data. A general strategy to make something cumulative is to add a time component, which naturally leads to something like /sched/running/total:seconds. Contrasting:

  • /sched/latencies:seconds: Histogram of time spent in RUNNABLE state.
  • /sched/latencies/total:seconds: Sum of time spent RUNNABLE by all goroutines (up until metric collection time). NOTE: this value would differ from summing the buckets*counts of /sched/latencies:seconds, not only due to the imprecision of the bucket boundaries, but also because samples are only added to /sched/latencies:seconds when goroutines switch state, whereas time is accumulated up to collection time in /sched/latencies/total:seconds.
  • /sched/running:seconds: Histogram of time spent in RUNNING state.
  • /sched/running/total:seconds: Sum of time spent RUNNING by all goroutines (up until metric collection time). NOTE: this value would differ from summing the buckets*counts of /sched/running:seconds, for the same reasons as above.
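To illustrate why the cumulative form is convenient, here is a minimal sketch assuming the hypothetical /sched/running/total:seconds existed as a KindFloat64 metric (it does not today); a poller can derive deltas over whatever period it likes without losing data between reads:

```go
// Sketch: derive per-interval RUNNING time from the hypothetical cumulative
// /sched/running/total:seconds metric (not in the runtime today).
package main

import (
	"fmt"
	"runtime/metrics"
	"time"
)

func main() {
	s := []metrics.Sample{{Name: "/sched/running/total:seconds"}}
	var prev float64
	for range time.Tick(10 * time.Second) {
		metrics.Read(s)
		if s[0].Value.Kind() != metrics.KindFloat64 {
			fmt.Println("metric not available in this runtime")
			return
		}
		cur := s[0].Value.Float64()
		fmt.Printf("goroutines spent %.3fs RUNNING over the last 10s\n", cur-prev)
		prev = cur
	}
}
```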

Does this make sense? What do others think? Should this go in a separate proposal?

Implementation-wise, this could be either a single atomic value (incremented when goroutines switch state), to which the current time-spent-in-RUNNABLE of all RUNNABLE goroutines is added at collection time, or a per-thread counter that can be updated without atomics and is summed when read. I would prefer the latter, so as not to introduce more possible contention points in the runtime. Metric reads are far less frequent than writes, and could take the hit.
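As a rough user-space analogy of the per-thread-counter option (not runtime code; the real implementation could avoid atomics on the write side by only reading at a safe point, whereas this sketch keeps atomics so it stays race-free as ordinary Go):

```go
// Sketch: each P increments only its own slot when a goroutine leaves the
// RUNNING state, keeping writes uncontended; a metrics read pays the cost
// of summing all slots. A real implementation would also pad the slots to
// avoid false sharing.
package sched

import (
	"sync/atomic"
	"time"
)

type runningTotal struct {
	perP []atomic.Int64 // nanoseconds of RUNNING time, one slot per P
}

func newRunningTotal(numP int) *runningTotal {
	return &runningTotal{perP: make([]atomic.Int64, numP)}
}

// add is called by P number p on a goroutine state transition.
func (r *runningTotal) add(p int, d time.Duration) {
	r.perP[p].Add(int64(d))
}

// total is called rarely (e.g. from a metrics read) and sums all slots.
func (r *runningTotal) total() time.Duration {
	var sum int64
	for i := range r.perP {
		sum += r.perP[i].Load()
	}
	return time.Duration(sum)
}
```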

References:

  • /sched/running/total:seconds is not the same as the /sched/goroutine/running:nanoseconds proposed by @irfansharif in https://go.dev/cl/387874. The latter would only return RUNNING time for the current (querying) goroutine, while the former returns the sum over all goroutines.
  • The values returned by /sched/running/total:seconds would be similar to those returned by CumulativeNormalizedRunnableGoroutines mentioned in proposal: runtime: add per-goroutine CPU stats #41554 (comment). But, the /sched/running/total:seconds proposal seems better as it does not have a discretization step in-between (losing data).
