
runtime: GC should wake up idle Ps #14179

Open
aclements opened this issue Feb 1, 2016 · 8 comments
Labels
compiler/runtime: Issues related to the Go compiler and/or runtime.
NeedsFix: The path to resolution is known, but the work has not been done.
Performance
Milestone
Backlog

Comments

@aclements
Member

Currently the GC doesn't always wake up idle Ps, and hence may not take full advantage of idle marking during the concurrent mark phase. This can happen during mark 2 because mark 1 completion preempts all workers; if the Ps running those workers have nothing else to do, they will simply park, and there's no mechanism to wake them up after we allow mark workers to start again. It's possible this can happen during mark 1 as well, though since we allow mark workers to run before starting the world for mark 1, it may be that all of the Ps start running.

/cc @RLH
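
A minimal, self-contained toy sketch of the failure mode described above (illustrative code only, not the runtime scheduler; pool, worker, and enqueue are made-up names): workers park when they find nothing to do, and the enqueue path never wakes them, so work added later sits in the queue while the workers stay parked.

	package main

	import (
		"fmt"
		"sync"
		"time"
	)

	// pool is a toy stand-in for the scheduler: queue holds pending work and
	// parked holds wake channels for workers that found nothing to do.
	type pool struct {
		mu     sync.Mutex
		queue  []int
		parked []chan struct{}
	}

	func (p *pool) worker(id int) {
		for {
			p.mu.Lock()
			if len(p.queue) == 0 {
				// No work: park, like an idle P parking after its mark
				// worker is preempted at mark completion.
				wake := make(chan struct{})
				p.parked = append(p.parked, wake)
				p.mu.Unlock()
				<-wake
				continue
			}
			item := p.queue[0]
			p.queue = p.queue[1:]
			p.mu.Unlock()
			fmt.Printf("worker %d processed %d\n", id, item)
		}
	}

	// enqueue adds work but, like the bug described above, never wakes a
	// parked worker. The missing step here is the analogue of wakep.
	func (p *pool) enqueue(v int) {
		p.mu.Lock()
		p.queue = append(p.queue, v)
		p.mu.Unlock()
	}

	func main() {
		p := &pool{}
		for i := 0; i < 2; i++ {
			go p.worker(i)
		}
		time.Sleep(10 * time.Millisecond) // let both workers find nothing and park
		for i := 0; i < 5; i++ {
			p.enqueue(i)
		}
		time.Sleep(100 * time.Millisecond)
		p.mu.Lock()
		fmt.Println("items still queued while workers are parked:", len(p.queue))
		p.mu.Unlock()
	}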

@aclements aclements self-assigned this Feb 1, 2016
@aclements aclements added this to the Go1.7 milestone Feb 1, 2016
@rsc
Contributor

rsc commented May 18, 2016

ping @aclements for 1.7 vs 1.8 triage

@aclements
Member Author

Not a serious issue, so moving to 1.8.

@aclements aclements modified the milestones: Go1.8, Go1.7, Go1.8Early May 18, 2016
@aclements
Member Author

This is in fact an issue during mark 1, but for a different reason: I believe we do always start the idle workers for mark 1, but it looks like they can sometimes run out of work very early and exit, causing the Ps to go idle again. We don't wake those up once more work is enqueued, so they stay idle.
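
A hedged sketch of the fix direction this implies (again toy code, not the runtime; workQueue and its methods are made-up names): the enqueue path signals a parked worker, the analogue of enlisting an idle P via wakep when more mark work appears, so a worker that drained the queue early does not stay idle.

	package main

	import (
		"fmt"
		"sync"
		"time"
	)

	// workQueue is a toy work queue whose workers park on a condition
	// variable instead of exiting when the queue drains.
	type workQueue struct {
		mu    sync.Mutex
		cond  *sync.Cond
		items []string
	}

	func newWorkQueue() *workQueue {
		q := &workQueue{}
		q.cond = sync.NewCond(&q.mu)
		return q
	}

	// enqueue adds an item and wakes one parked worker, which is the step
	// the comment above says is missing once idle workers have gone idle.
	func (q *workQueue) enqueue(item string) {
		q.mu.Lock()
		q.items = append(q.items, item)
		q.cond.Signal() // without this, a parked worker stays idle forever
		q.mu.Unlock()
	}

	func (q *workQueue) worker(id int, done chan<- struct{}) {
		q.mu.Lock()
		for {
			for len(q.items) == 0 {
				q.cond.Wait() // park until enqueue signals new work
			}
			item := q.items[0]
			q.items = q.items[1:]
			q.mu.Unlock()
			fmt.Printf("worker %d: %s\n", id, item)
			if item == "stop" {
				done <- struct{}{}
				return
			}
			q.mu.Lock()
		}
	}

	func main() {
		q := newWorkQueue()
		done := make(chan struct{})
		go q.worker(1, done)
		time.Sleep(10 * time.Millisecond) // let the worker drain nothing and park
		q.enqueue("scan object A")
		q.enqueue("scan object B")
		q.enqueue("stop")
		<-done
	}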

@quentinmit
Contributor

@aclements Are we still on track to do this for Go1.8?

@quentinmit quentinmit added the NeedsFix label Sep 28, 2016
@rsc rsc modified the milestones: Go1.8, Go1.8Early Oct 20, 2016
@gopherbot

CL https://golang.org/cl/32434 mentions this issue.

@rsc rsc reopened this Feb 15, 2017
@aclements aclements modified the milestones: Go1.9Early, Go1.8 Feb 15, 2017
gopherbot pushed a commit that referenced this issue Feb 15, 2017
We have seen one instance of a production job suddenly spinning to
100% CPU and becoming unresponsive. In that one instance, a SIGQUIT
was sent after 328 minutes of spinning, and the stacks showed a single
goroutine in "IO wait (scan)" state.

Looking for things that might get stuck if a goroutine got stuck in
scanning a stack, we found that injectglist does:

	lock(&sched.lock)
	var n int
	for n = 0; glist != nil; n++ {
		gp := glist
		glist = gp.schedlink.ptr()
		casgstatus(gp, _Gwaiting, _Grunnable)
		globrunqput(gp)
	}
	unlock(&sched.lock)

and that casgstatus spins on gp.atomicstatus until the _Gscan bit goes
away. Essentially, this code locks sched.lock and then while holding
sched.lock, waits to lock gp.atomicstatus.

The code that is doing the scan is:

	if castogscanstatus(gp, s, s|_Gscan) {
		if !gp.gcscandone {
			scanstack(gp, gcw)
			gp.gcscandone = true
		}
		restartg(gp)
		break loop
	}

More analysis showed that scanstack can, in a rare case, end up
calling back into code that acquires sched.lock. For example:

	runtime.scanstack at proc.go:866
	calls runtime.gentraceback at mgcmark.go:842
	calls runtime.scanstack$1 at traceback.go:378
	calls runtime.scanframeworker at mgcmark.go:819
	calls runtime.scanblock at mgcmark.go:904
	calls runtime.greyobject at mgcmark.go:1221
	calls (*runtime.gcWork).put at mgcmark.go:1412
	calls (*runtime.gcControllerState).enlistWorker at mgcwork.go:127
	calls runtime.wakep at mgc.go:632
	calls runtime.startm at proc.go:1779
	acquires runtime.sched.lock at proc.go:1675

This path was found with an automated deadlock-detecting tool.
There are many such paths but they all go through enlistWorker -> wakep.

The evidence strongly suggests that one of these paths is what caused
the deadlock we observed. We're running those jobs with
GOTRACEBACK=crash now to try to get more information if it happens
again.

Further refinement and analysis shows that if we drop the wakep call
from enlistWorker, the remaining few deadlock cycles found by the tool
are all false positives caused by not understanding the effect of calls
to func variables.

The enlistWorker -> wakep call was intended only as a performance
optimization; it rarely executes, and if it does execute at just the
wrong time it can (and plausibly did) cause the deadlock we saw.

Comment it out, to avoid the potential deadlock.

Fixes #19112.
Unfixes #14179.

Change-Id: I6f7e10b890b991c11e79fab7aeefaf70b5d5a07b
Reviewed-on: https://go-review.googlesource.com/37093
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
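
A minimal, self-contained sketch of the lock-ordering cycle described in that commit message (toy code, not the runtime; schedLock and scanBit are stand-ins for sched.lock and the _Gscan bit): one goroutine holds the scheduler lock while spin-waiting for a status bit to clear, while the goroutine that must clear the bit first needs that same lock.

	package main

	import (
		"sync"
		"sync/atomic"
		"time"
	)

	var (
		schedLock sync.Mutex  // stand-in for sched.lock
		scanBit   atomic.Bool // stand-in for the _Gscan bit in gp.atomicstatus
	)

	// injectLike mirrors the injectglist path: it holds schedLock while
	// spinning until scanBit clears, the way casgstatus spins on _Gscan.
	func injectLike() {
		schedLock.Lock()
		defer schedLock.Unlock()
		for scanBit.Load() {
			time.Sleep(time.Microsecond)
		}
	}

	// scanLike mirrors the stack-scan path: it sets scanBit and then, like
	// enlistWorker -> wakep -> startm, needs schedLock before it ever clears
	// the bit. If injectLike already holds schedLock, neither side can make
	// progress.
	func scanLike() {
		scanBit.Store(true)
		schedLock.Lock()
		schedLock.Unlock()
		scanBit.Store(false)
	}

	func main() {
		go scanLike()
		go injectLike()
		// In the bad interleaving (scanBit set, then injectLike takes
		// schedLock), both goroutines are stuck until the program exits here.
		time.Sleep(time.Second)
	}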
@gopherbot

CL https://golang.org/cl/37022 mentions this issue.

@rsc
Contributor

rsc commented Feb 16, 2017

For the record, unfixing this in Go 1.8 slowed the garbage benchmark by about 4% and increased STW time by about 3%. It does seem worthwhile to fix again for Go 1.9.

@rsc
Contributor

rsc commented Feb 16, 2017

FWIW, the previous claim was based on 40+ runs of each benchmark, but a second round of benchmark runs is not showing any difference. So there is possibly no win at all to fixing this. Easy to run the experiment; hard to interpret the results.

gopherbot pushed a commit that referenced this issue Feb 16, 2017
…to avoid possible deadlock

albertjin pushed a commit to albertjin/golang-go that referenced this issue Feb 17, 2017
…to avoid possible deadlock

@bradfitz bradfitz modified the milestones: Go1.9Maybe, Go1.9Early May 3, 2017
@bradfitz bradfitz modified the milestones: Go1.10, Go1.9Maybe Jul 14, 2017
@rsc rsc modified the milestones: Go1.10, Go1.11 Nov 22, 2017
@bradfitz bradfitz modified the milestones: Go1.11, Go1.12 Jun 20, 2018
@aclements aclements modified the milestones: Go1.12, Go1.13 Dec 18, 2018
@aclements aclements modified the milestones: Go1.13, Go1.14 Jun 25, 2019
@rsc rsc modified the milestones: Go1.14, Backlog Oct 9, 2019
@gopherbot gopherbot added the compiler/runtime label Jul 7, 2022
Projects
Status: Triage Backlog

5 participants