runtime: support larger _MaxGomaxprocs #15131
Comments
I am going to propose the following patch:
|
While we're at it, can we do 8192? The SGI UV 3000 supports 8,192 threads with Intel CPUs.
Btw, this would increase the size of two tables which are in all binaries. Currently:
At 1<<12, it would go up 16x, to 160 KB. That's quite an increase. 1<<13 would be 320 KB. That's getting kinda large. @crawshaw has been trying to make binaries smaller. We might have to switch to allocating those tables dynamically.
Those tables are zero-initialized, so they won't take space in binaries. However, I do share the concern about increasing the memory footprint of all Go programs to benefit a few rare systems. We need to allocate those structures dynamically according to runtime.NumCPU().
Ah, indeed. And confirmed.
I agree there is no need to make 99.999% of systems suffer for a few big-iron systems if the change increases the memory footprint. go1.4 seems to work fine on the SGI UV 2000 I have access to. allp looks like a static array, so dynamically sizing it as a slice might still have some performance implications. I quickly tested the change and go tip works now, but I got one error running the tests that I guess is just due to the server using an older SUSE Linux Enterprise/GCC version, and not related.
There is another issue here, which is that Go programs crash on startup on systems where ncpu() returns > _MaxGomaxprocs. That seems like it should be an easy fix for 1.6.1, by just clamping the value?
Right. I understand that, if the machine has more than _MaxGomaxprocs CPUs, the application should still be able to run, just without exploiting all the CPUs beyond _MaxGomaxprocs. As of today, the application crashes.
So that all Go processes do not die on startup on a system with >256 CPUs. I tested this by hacking osinit to set ncpu to 1000.

Updates #15131

Change-Id: I52e061a0de97be41d684dd8b748fa9087d6f1aef
Reviewed-on: https://go-review.googlesource.com/21599
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
I'd like to propose the fix I just landed for 1.6.x. Should I file a separate issue just for the clamping fix and leave this one for how to handle a larger ncpu more gracefully?
CL https://golang.org/cl/21630 mentions this issue.
@mwhudson, Go 1.6.x releases are generally only for things with no workaround. There is a workaround for this, though, right? The user can set GOMAXPROCS=N explicitly in their environment.
I ... guess so? It seems pretty hostile to a random user who just wants to run docker, though. (I suspect I'll pull this into the Go in 16.04 anyway, so I don't really care, but...)
That's one way to look at it: there's nothing Docker can do to work around it to give their users a working binary. So maybe it is fair game. It definitely seems low risk.
This will be a problem for the SPARC port too; some systems have up to 512 cores / 4,096 strands.
Any feedback on this problem?
There are still around two weeks left before the Go 1.7 freeze; is it possible that we come up with a dynamic _MaxGomaxprocs solution? Increasing it statically will increase the memory footprint of all Go programs while benefiting only a tiny number of systems/programs (i.e. Docker could build with a custom runtime with an increased _MaxGomaxprocs).
CL https://golang.org/cl/22206 mentions this issue.
So that all Go processes do not die on startup on a system with >256 CPUs. I tested this by hacking osinit to set ncpu to 1000.

Updates #15131
Fixes #15160

Change-Id: I52e061a0de97be41d684dd8b748fa9087d6f1aef
Reviewed-on: https://go-review.googlesource.com/21599
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-on: https://go-review.googlesource.com/22206
CL https://golang.org/cl/45575 mentions this issue.
CL https://golang.org/cl/45572 mentions this issue.
CL https://golang.org/cl/45573 mentions this issue.
CL https://golang.org/cl/45574 mentions this issue.
CL https://golang.org/cl/45673 mentions this issue.
Currently MaxGomaxprocs is 256. The previous CL saved enough per-P static space that we can quadruple MaxGomaxprocs (and hence the static size of allp) and still come out ahead. This is safe for Go 1.9. In Go 1.10 we'll eliminate the hard-coded limit entirely.

Updates #15131.

Change-Id: I919ea821c1ce64c27812541dccd7cd7db4122d16
Reviewed-on: https://go-review.googlesource.com/45673
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
There are currently two arrays indexed by P ID: allp and pdesc. Consolidate these by moving the pdesc fields into type p so they can be indexed off allp along with all other per-P state.

For #15131.

Change-Id: Ib6c4e6e7612281a1171ba4a0d62e52fd59e960b4
Reviewed-on: https://go-review.googlesource.com/45572
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Increased the limit to 1024 for Go 1.9 (without increasing the memory footprint). The remaining queued CLs will eliminate the limit for Go 1.10.
This makes it possible to eliminate the hard cap on GOMAXPROCS.

Updates #15131.

Change-Id: I4c422b340791621584c118a6be1b38e8a44f8b70
Reviewed-on: https://go-review.googlesource.com/45573
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
allp now has length gomaxprocs, which means none of allp[i] are nil or in state _Pdead. This lets us replace several different styles of loops over allp with normal range loops.

for i := 0; i < gomaxprocs; i++ { ... } loops can simply range over allp. Likewise, range loops over allp[:gomaxprocs] can just range over allp.

Loops that check for p == nil || p.state == _Pdead don't need the check any more. Loops that check only for p == nil don't have to check this *if* dead Ps don't affect them; I checked that all such loops are, in fact, unaffected by dead Ps. One loop was potentially affected, which this fixes by zeroing p.gcAssistTime in procresize.

Updates #15131.

Change-Id: Ifa1c2a86ed59892eca0610360a75bb613bc6dcee
Reviewed-on: https://go-review.googlesource.com/45575
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
1. What version of Go are you using (go version)?
1.6 and upstream release

2. What operating system and processor architecture are you using (go env)?
Ubuntu 16.04 Final Beta

3. What did you do?
Ran a docker command on my environment.

4. What did you expect to see?
Docker running without a stack trace.

5. What did you see instead?
An error and unexpected exit:

procresize: invalid arg