runtime: non-cooperative goroutine preemption #24543

Closed
aclements opened this issue Mar 26, 2018 · 122 comments
@aclements
Member

aclements commented Mar 26, 2018

I propose that we solve #10958 (preemption of tight loops) using non-cooperative preemption techniques. I have a detailed design proposal, which I will post shortly. This issue will track this specific implementation approach, as opposed to the general problem.

Edit: Design doc

Go currently uses compiler-inserted cooperative preemption points in function prologues. The majority of the time, this is good enough to allow Go developers to ignore preemption and focus on writing clear parallel code, but it has sharp edges that we've seen degrade the developer experience time and time again. When it goes wrong, it goes spectacularly wrong, leading to mysterious system-wide latency issues (#17831, #19241) and sometimes complete freezes (#543, #12553, #13546, #14561, #15442, #17174, #20793, #21053). And because this is a language implementation issue that exists outside of Go's language semantics, these failures are surprising and very difficult to debug.
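
To make the failure mode concrete, here is a minimal sketch (my own illustration, not code from this issue): a loop that makes no function calls contains no prologue preemption check, so a stop-the-world request can end up waiting for the entire loop to finish (prior to Go 1.14, or with async preemption disabled via GODEBUG=asyncpreemptoff=1).

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

var sink int

// spin contains no calls, so the compiler inserts no preemption check
// anywhere inside its loop.
func spin(n int) int {
	s := 0
	for i := 0; i < n; i++ {
		s += i
	}
	return s
}

func main() {
	go func() { sink = spin(2_000_000_000) }()
	time.Sleep(10 * time.Millisecond) // let the tight loop get going

	start := time.Now()
	runtime.GC() // needs a stop-the-world; without async preemption it waits for spin
	fmt.Println("GC had to wait", time.Since(start))
}
```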

@dr2chase has put significant effort into prototyping cooperative preemption points in loops, which is one way to solve this problem. However, even sophisticated approaches to this led to unacceptable slow-downs in tight loops (where slow-downs are generally least acceptable).

I propose that the Go implementation switch to non-cooperative preemption using stack and register maps at (essentially) every instruction. This would allow goroutines to be preempted without explicit preemption checks. This approach will solve the problem of delayed preemption with zero run-time overhead and have side benefits for debugger function calls (#21678).

I've already prototyped significant components of this solution, including constructing register maps and recording stack and register maps at every instruction, and so far the results are quite promising.

/cc @drchase @RLH @randall77 @minux

@aclements aclements added this to the Go1.12 milestone Mar 26, 2018
@aclements aclements self-assigned this Mar 26, 2018
@gopherbot

Change https://golang.org/cl/102600 mentions this issue: design: add 24543-non-cooperative-preemption

@gopherbot

Change https://golang.org/cl/102603 mentions this issue: cmd/compile: detect simple inductive facts in prove

@gopherbot

Change https://golang.org/cl/102604 mentions this issue: cmd/compile: don't produce a past-the-end pointer in range loops

@aclements
Member Author

Forwarding some questions from @hyangah on the CL:

Is code in cgo (or outside Go) considered a non-safe point?

All of cgo is currently considered a safe-point (one of the reasons it's relatively expensive to enter and exit cgo) and this won't change.

Or will the runtime be careful not to send signals to threads that may be in cgo land?

I don't think the runtime can avoid sending signals to threads that may be in cgo without expensive synchronization on common paths, but I don't think it matters. When it enters the runtime signal handler it can recognize that it was in cgo and do the appropriate thing (which will probably be to just ignore it, or maybe queue up an action like stack scanning).

Should users or cgo code avoid using the signal?

It should be okay if cgo code uses the signal, as long as it's correctly chained. I'm hoping to use POSIX real-time signals on systems where they're available, so the runtime will attempt to find one that's unused (which is usually all of them anyway), though that isn't an option on Darwin.

And a question from @randall77 (which I answered on the CL, but should have answered here):

Will we stop using the current preemption technique (the dummy large stack bound) altogether, or will the non-coop preemption just be a backstop?

There's really no cost to the current technique and we'll continue to rely on it in the runtime for the foreseeable future, so my current plan is to leave it in. However, we could be much more aggressive about removing stack bounds checks (for example if we can prove that a whole call tree will fit in the nosplit zone).

@TocarIP
Contributor

TocarIP commented Mar 27, 2018

So it is still possible to make a goroutine non-preemptible with something like:
sha256.Sum(make([]byte, 1000000000))
where the inner loop is written in asm?

@aclements
Member Author

Yes, that would still make a goroutine non-preemptible. However, with some extra annotations in the assembly to indicate registers containing pointers, it will become preemptible without any extra work or run-time overhead to reach an explicit safe-point. In the case of sha256.Sum, these annotations would probably be trivial, since it will never construct a pointer that isn't shadowed by the arguments (so it can claim there are no pointers in registers).

I'll add a paragraph to the design doc about this.
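
To put rough numbers on the scenario above, a small sketch of my own (sizes are illustrative; the standard-library function is sha256.Sum256): the goroutine spends the entire call inside the assembly inner loop, and without the register annotations described above there is no point at which it can be preempted.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"time"
)

func main() {
	buf := make([]byte, 1<<28) // 256 MiB, enough to make one hash call visibly long
	start := time.Now()
	sum := sha256.Sum256(buf) // the inner loop is assembly on most platforms
	fmt.Printf("one call ran for %v (digest %x...)\n", time.Since(start), sum[:4])
}
```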

@komuw
Contributor

komuw commented Mar 28, 2018

will the design doc be posted here?

@aclements
Member Author

aclements commented Mar 28, 2018 via email

gopherbot pushed a commit to golang/proposal that referenced this issue Mar 28, 2018
For golang/go#24543.

Change-Id: Iba313a963aafcd93521bb9e006cb32d1f242301b
Reviewed-on: https://go-review.googlesource.com/102600
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
@aclements
Member Author

The doc is now submitted: Proposal: Non-cooperative goroutine preemption

@mtstickney

Disclaimer: I'm not a platform expert, or an expert on language implementations, or involved with Go aside from having written a few toy programs in it. That said:

There's a (potentially) fatal flaw here: GetThreadContext doesn't actually work on Windows (see here for details). There are several lisp implementations that have exhibited crashes on that platform because they tried to use GetThreadContext/SetThreadContext to implement preemptive signals on Windows.

As some old notes for SBCL point out, Windows has no working version of preemptive signals without loading a kernel driver, which is generally prohibitive for applications.

@JamesBielby

I think the example code to avoid creating a past-the-end pointer has a problem if the slice has a capacity of 0. You need to declare _p after the first if statement.
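
A minimal sketch of the loop shape under discussion (my own reconstruction, not the CL's code; the name _p is just illustrative): the element pointer is derived only after the emptiness check and never advances past the last element, which is exactly the guard needed for a zero-length (e.g. zero-capacity) slice.

```go
package main

import "unsafe"

func sum(s []int) int {
	total := 0
	if len(s) == 0 {
		return total // guard: for an empty slice (e.g. capacity 0), &s[0] would panic
	}
	_p := &s[0] // declared only after the emptiness check
	for i := 0; ; {
		total += *_p
		i++
		if i >= len(s) {
			break // never advance _p past the final element
		}
		_p = (*int)(unsafe.Add(unsafe.Pointer(_p), unsafe.Sizeof(s[0])))
	}
	return total
}

func main() {
	println(sum([]int{1, 2, 3})) // 6
	println(sum(nil))            // 0
}
```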

@creker

creker commented Mar 31, 2018

@mtstickney it looks like that's true, but we can look at how other implementations handle the same problem. CoreCLR documents the same issue: they need to preempt threads for GC and describe the same bugs with bogus thread contexts. They also explain how they solve it, without ditching SuspendThread altogether, by using redirection.

I'm not an expert in this kind of stuff so I'm sorry if this has nothing to do with solving the problem here.

@mtstickney

@creker Nor me, so we're in the same boat there. I hadn't seen the CoreCLR reference before, but that's the same idea as the lisp approach: SuspendThread, retrieve the current register set with GetThreadContext, change IP to point to the signal code to be run, ResumeThread, then when the handler is finished restore the original registers with SetThreadContext.

The trick is capturing the original register set: you can either do it with an OS primitive (GetThreadContext, which is buggy), or roll your own code for it. If you do the latter, you're at risk for getting a bogus set of registers because your register-collecting code is in user-mode, and could be preempted by a kernel APC.

It looks like on some Windows versions, some of the time, you can detect and avoid the race conditions with GetThreadContext (see this post, particularly the comments concerning CONTEXT_EXCEPTION_REQUEST). The CoreCLR code seems to make some attempts to work around the race condition, although I don't know if it's suitable here.

@aclements
Member Author

Thanks for the pointers about GetThreadContext! That's really interesting and good to know, but I think it's actually not a problem.

For GC preemption, we can always resume the same goroutine on the same thread after preemption, so there's no need to call SetThreadContext to hijack the thread. We just need to observe its state, not run something else on that thread. Furthermore, my understanding is that GetThreadContext doesn't reliably return all registers if the thread is in a syscall, but in this case there won't be any live pointers in registers anyway (any pointer arguments to the syscall are shadowed on the Go wrapper's stack). Hence, we only need to retrieve the PC and SP in this case. Even this may not matter, since we currently treat a syscall as a giant GC safe-point, so we already save the information we need on the way in to the syscall.

For scheduler preemption, things are a bit more complicated, but I think still okay. In this case we would need to call SetThreadContext to hijack the thread, but we would only do this to threads at Go safe-points, meaning we'd never preempt something in a syscall. Today, if a goroutine has been in a syscall for too long, we don't hijack the thread; we simply flag that it should block upon returning from the syscall and schedule the next goroutine on a different thread (creating a new one or taking one from the pool). We would keep using that mechanism for rescheduling goroutines that are in system calls.

@gopherbot

Change https://golang.org/cl/108497 mentions this issue: cmd/compile: teach Haspointer about TSSA and TTUPLE

@gopherbot

Change https://golang.org/cl/108496 mentions this issue: cmd/compile: don't lower OpConvert

@gopherbot

Change https://golang.org/cl/108498 mentions this issue: cmd/compile: don't compact liveness maps in place

gopherbot pushed a commit that referenced this issue Apr 20, 2018
Currently, each architecture lowers OpConvert to an arch-specific
OpXXXconvert. This is silly because OpConvert means the same thing on
all architectures and is logically a no-op that exists only to keep
track of conversions to and from unsafe.Pointer. Furthermore, lowering
it makes it harder to recognize in other analyses, particularly
liveness analysis.

This CL eliminates the lowering of OpConvert, leaving it as the
generic op until code generation time.

The main complexity here is that we still need to register-allocate
OpConvert operations. Currently, each arch's lowered OpConvert
specifies all GP registers in its register mask. Ideally, OpConvert
wouldn't affect value homing at all, and we could just copy the home
of OpConvert's source, but this can potentially home an OpConvert in a
LocalSlot, which neither regalloc nor stackalloc expect. Rather than
try to disentangle this assumption from regalloc and stackalloc, we
continue to register-allocate OpConvert, but teach regalloc that
OpConvert can be allocated to any allocatable GP register.

For #24543.

Change-Id: I795a6aee5fd94d4444a7bafac3838a400c9f7bb6
Reviewed-on: https://go-review.googlesource.com/108496
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
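
For context, a small example of my own (not from the CL) showing the kind of source-level conversion OpConvert stands for: the unsafe.Pointer/uintptr round trip generates no machine code, but the compiler must keep treating the value as a pointer into x across it, which is what OpConvert records for liveness.

```go
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	x := [4]int{10, 20, 30, 40}
	p := unsafe.Pointer(&x[0])

	// The unsafe.Pointer/uintptr conversions below are logical no-ops at the
	// machine level, but the compiler tracks them (as OpConvert) so liveness
	// analysis, and now the register maps, still see a pointer into x here.
	q := (*int)(unsafe.Pointer(uintptr(p) + 2*unsafe.Sizeof(x[0])))
	fmt.Println(*q) // 30
}
```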
gopherbot pushed a commit that referenced this issue Apr 20, 2018
These will appear when tracking live pointers in registers, so we need
to know whether they have pointers.

For #24543.

Change-Id: I2edccee39ca989473db4b3e7875ff166808ac141
Reviewed-on: https://go-review.googlesource.com/108497
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: David Chase <drchase@google.com>
gopherbot pushed a commit that referenced this issue Apr 23, 2018
Currently Liveness.compact rewrites the Liveness.livevars slice in
place. However, we're about to add register maps, which we'll want to
track in livevars, but compact independently from the stack maps.
Hence, this CL modifies Liveness.compact to consume Liveness.livevars
and produce a new slice of deduplicated stack maps. This is somewhat
clearer anyway because it avoids potential confusion over how
Liveness.livevars is indexed.

Passes toolstash -cmp.

For #24543.

Change-Id: I7093fbc71143f8a29e677aa30c96e501f953ca2b
Reviewed-on: https://go-review.googlesource.com/108498
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
@gopherbot

Change https://golang.org/cl/109351 mentions this issue: cmd/compile: dense numbering for GP registers

@gopherbot

Change https://golang.org/cl/109353 mentions this issue: cmd/compile, cmd/internal/obj: record register maps in binary

@gopherbot

Change https://golang.org/cl/213837 mentions this issue: runtime: protect against external code calling ExitProcess

gopherbot pushed a commit that referenced this issue Jan 9, 2020
On Windows, we implement asynchronous preemption using SuspendThread
to suspend other threads in our process. However, SuspendThread is
itself actually asynchronous (it enqueues a kernel "asynchronous
procedure call" and returns). Unfortunately, Windows' ExitProcess API
kills all threads except the calling one and then runs APCs. As a
result, if SuspendThread and ExitProcess are called simultaneously,
the exiting thread can be suspended and the suspending thread can be
exited, leaving behind a ghost process consisting of a single thread
that's suspended.

We've already protected against the runtime's own calls to
ExitProcess, but if Go code calls external code, there's nothing
stopping that code from calling ExitProcess. For example, in #35775,
our own call to racefini leads to C code calling ExitProcess and
occasionally causing a deadlock.

This CL fixes this by introducing synchronization between calling
external code on Windows and preemption. It adds an atomic field to
the M that participates in a simple CAS-based synchronization protocol
to prevent suspending a thread running external code. We use this to
protect cgocall (which is used for both cgo calls and system calls on
Windows) and racefini.

Tested by running the flag package's TestParse test compiled in race
mode in a loop. Before this change, this would reliably deadlock after
a few minutes.

Fixes #35775.
Updates #10958, #24543.

Change-Id: I50d847abcdc2688b4f71eee6a75eca0f2fee892c
Reviewed-on: https://go-review.googlesource.com/c/go/+/213837
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: David Chase <drchase@google.com>
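
A minimal sketch (my own, using made-up names rather than the runtime's actual fields) of the kind of CAS-based protocol the commit describes: a thread marks itself as running external code before calling out, and the preempter only suspends a thread it can atomically claim while it is idle.

```go
package main

import "sync/atomic"

// States for the hypothetical per-thread preemption lock.
const (
	idle       int32 = iota // safe to suspend
	inExternal              // running external (cgo/syscall) code; must not be suspended
	suspending              // the preempter has claimed the thread
)

type thread struct{ preemptLock int32 }

// enterExternal runs just before calling out to external code. If the CAS
// fails, a suspend is in progress and the caller would have to wait (omitted).
func (t *thread) enterExternal() bool {
	return atomic.CompareAndSwapInt32(&t.preemptLock, idle, inExternal)
}

// exitExternal runs when the external call returns.
func (t *thread) exitExternal() {
	atomic.StoreInt32(&t.preemptLock, idle)
}

// trySuspend is used by the preempting thread: it only succeeds if the target
// is idle, so a thread inside external code (which might call ExitProcess)
// is never suspended.
func (t *thread) trySuspend() bool {
	return atomic.CompareAndSwapInt32(&t.preemptLock, idle, suspending)
}

func main() {
	var t thread
	if t.enterExternal() {
		// ... external code would run here ...
		if t.trySuspend() {
			panic("unreachable: cannot suspend while in external code")
		}
		t.exitExternal()
	}
	println("suspend correctly refused while in external code")
}
```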
@networkimprov

List of todo items posted in #36365

@ianlancetaylor
Contributor

Is there a reason to leave this issue open, given the existence of #36365?

@aclements
Member Author

Nope! Closing.

@szmcdull

szmcdull commented Jul 3, 2020

Is there any way to disable preemption altogether? In most situations I want a simple single-threaded asynchronous model similar to node.js, where locks are mostly not needed.

Now even if I specify runtime.GOMAXPROCS(1), I have to protect against things like "panic: concurrent map iteration and map write".

@networkimprov

@szmcdull have you tried this runtime switch? Note that Go had preemption before 1.14...

$ GODEBUG=asyncpreemptoff=1 ./your_app arguments ...

@szmcdull

szmcdull commented Jul 3, 2020

@szmcdull have you tried this runtime switch? Note that Go had preemption before 1.14...

$ GODEBUG=asyncpreemptoff=1 ./your_app arguments ...

Yes, I tried. But I still got "panic: concurrent map iteration and map write".

@networkimprov

I think you were just lucky that you didn't see that before 1.14 :-)

Try https://golang.org/pkg/sync/#Map

For further Q's, I refer you to golang-nuts. You'll get more & faster responses there, generally.
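
For completeness, a short illustration of the sync.Map suggestion (my own example): iterating a plain map while another goroutine writes to it can panic with "concurrent map iteration and map write" regardless of GOMAXPROCS or the preemption setting, whereas sync.Map supports this pattern.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var m sync.Map
	var wg sync.WaitGroup

	wg.Add(1)
	go func() { // concurrent writer
		defer wg.Done()
		for i := 0; i < 1000; i++ {
			m.Store(i, i*i)
		}
	}()

	// Iterating while the writer runs is safe with sync.Map; with a plain map
	// this is exactly what triggers the concurrent-iteration panic.
	m.Range(func(k, v any) bool { return true })

	wg.Wait()
	n := 0
	m.Range(func(k, v any) bool { n++; return true })
	fmt.Println("entries:", n)
}
```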

fengguang pushed a commit to 0day-ci/linux that referenced this issue Feb 20, 2021
sargun added a commit to Netflix-Skunkworks/linux that referenced this issue Mar 3, 2021
The user notifier feature allows for filtering of seccomp notifications in
userspace. While the user notifier is handling the syscall, the notifying
process can be preempted, thus ending the notification. This has become a
growing problem, as Golang has adopted signal-based async preemption[1]. With
this, it will preempt every 10ms, thus leaving the supervisor less than
10ms to respond to a given notification. If the syscall requires I/O (mount,
connect) on behalf of the process, it can easily take 10ms.

This allows the supervisor to set a flag that moves the process into a
state where it is only killable by terminating signals as opposed to all
signals. The process can still be terminated before the supervisor receives
the notification.

Signed-off-by: Sargun Dhillon <sargun@sargun.me>

[1]: golang/go#24543
fengguang pushed a commit to 0day-ci/linux that referenced this issue Mar 18, 2021
fengguang pushed a commit to 0day-ci/linux that referenced this issue Apr 26, 2021
fengguang pushed a commit to 0day-ci/linux that referenced this issue Apr 30, 2021
@golang golang locked and limited conversation to collaborators Jul 3, 2021