runtime: panic before malloc heap initialized on amd64/linux with no swap #5236
Comments
Leaving for Dmitry to prioritize for Go1.1 or not. Owner changed to @dvyukov. Status changed to Accepted.
The fact is that a few people have already reported that Go 1.1 does not work for them because of the VM limitations. The MAP_NORESERVE change is easy to implement, at least for linux (as far as I understand, that covers most hostings). The problem is that with MAP_NORESERVE, programs will crash later, on the first memory access to the affected pages; I don't know how cryptic that will look to users. It should also be possible to allocate the heap lazily, but that is a more pervasive change. Marking this as Go1.1 so we do not forget about it. Labels changed: added go1.1, removed priority-triage.
Here is the changelist: https://golang.org/cl/8530043 Can you test whether it fixes the issue and behaves correctly if OOM happens at run time?
Comment 7 by daniel@heroku.com:

Per directive at https://groups.google.com/d/msg/golang-nuts/IaR7TaUqz7M/43oP4uZqYVAJ

In abstract: some people care about VM commit charge in Go programs.

On Tue, Apr 9, 2013 at 10:19 AM, Keith Rarick <kr@xph.us> wrote:
> On Wed, Apr 10, 2013 at 1:59 AM, Dmitry Vyukov <dvyukov@google.com> wrote:
>> Sorry, I do not understand why you care about virtual memory and how
>> it is related to ENOMEM in your case.
>
> I'll try to restate as concretely as I can, but I may be misunderstanding
> part of our situation. Dan, please correct me if I'm wrong.
>
> Postgres consists of several processes that handle ENOMEM and
> continue to operate normally (instead of, say, crashing), and they can
> make good use of as much physical memory as is available. At the
> same time, we run a few Go processes that require very little actual
> memory but take a lot of virtual memory. Since overcommit is off, the
> unused virtual memory of the Go processes consumes and wastes
> physical memory. We'd prefer to make that physical memory available
> to postgres.
Comment 9 by daniel@heroku.com: For the purposes of small agents, a *non*-adaptive solution is perfectly all right. There's a class of user (myself included) for whom such programs are better off dead than large (as is, I restart the process frequently to prevent VM bloat).
I took a look at https://golang.org/cl/8530043. Has anyone tried it to see if it solves the problem? The change is actually fairly contained, which is not to say without its own worries.
In the mailing list some people said they think it would not solve the problem for them (something about overcommit level 2 disabling it, which I do not understand). I am pretty sure it solves the problem in the normal case of turning off the swap file. I am not sure how to test this change reliably (the case where we actually get SIGSEGV on first access to the heap). I think that, in the long term, we must be more careful with VM allocation and allocate everything lazily.
See also issue #5049.
Issue #5402 has been merged into this issue.
For Go 1.1, for 64-bit Windows, I think we should just reduce the arena size for now. Having several Go services running on Windows, I will probably do that manually anyway so I don't reserve memory that is needed elsewhere. Alex, this would put memory usage back at the Go 1.0 level, correct? I don't know of anyone running a super-enterprise Windows setup that requires more than 64 GB of memory anyway. See: http://msdn.microsoft.com/en-us/library/windows/desktop/aa366778(v=vs.85).aspx
Re #22, revert https://golang.org/cl/6826088/diff/11001/src/pkg/runtime/malloc.goc#newcode347 and the memory usage will be back. i'm not sure if we should reduce the maximum heap size on windows, but i do think it's a serious problem for windows land (esp. when one develops client programs in Go).
Re #23: Right, I'm aware the change is a single const value and is easy to make (I compile Go from source anyway), but I don't see a benefit in Windows having that large an arena to begin with, as Windows maxes out at 4 TB anyway. And it evidently has significant cost to use the current mechanism on that OS.
sorry, #23 is wrong; you just need to reduce the const MHeapMap_Bits in: https://golang.org/cl/6826088/diff/11001/src/pkg/runtime/malloc.h#newcode120 i will send a CL ASAP to reduce the arena size to 32 GB on windows.
Comment 27 by daniel@heroku.com: I poked around at the MAP_NORESERVE patch above, and I don't think MAP_NORESERVE will do the trick as far as overcommit-off systems go, per https://www.kernel.org/doc/Documentation/vm/overcommit-accounting (Gotchas section). There is an old communication from Alan Cox as to why this is the case: http://lkml.indiana.edu/hypermail/linux/kernel/0508.0/1769.html, and I see no evidence to suggest that things have changed much since then based on a quick skim through Linux's commits.
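For anyone reproducing this, the kernel policy the comment refers to is visible from procfs. A quick check (standard Linux paths, nothing project-specific):

```shell
# Overcommit policy: 0 = heuristic, 1 = always allow, 2 = never overcommit.
# MAP_NORESERVE is ignored for accounting purposes in mode 2, which is the
# gotcha referenced above.
cat /proc/sys/vm/overcommit_memory
# In mode 2 the limit is swap + overcommit_ratio% of RAM; compare the
# current commit charge (Committed_AS) against the limit (CommitLimit):
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
```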
Re #27, yes, i also raised that in the mailing list discussion some time ago. we just need to lazily map the mheap metadata to fix the problem on linux (that is, reserve like we did for the heap, and then remap on SIGSEGV). however, for windows, we actually need to be able to support a non-contiguous heap so that we don't need to reserve that much VM at program start. so arguably this issue and issue #5402 are not the same.
This issue was updated by revision b3b1efd.

Update issue #5402
This CL reduces gofmt's committed memory from 545864 KiB to 139568 KiB.
Note: Go 1.0.3 uses about 70 MiB.
R=golang-dev, r, iant, nightlyone
CC=golang-dev
https://golang.org/cl/9245043
I suspect this is fixed by https://golang.org/cl/9791044/ Can somebody with the appropriate machine retest this issue?
This issue was closed.
Reported by Dean.Sinaean.