runtime: Odd crash with go1.5rc1 #12176
Comments
What processor? What OS?
At least on GNU/Linux the "fatal error: runtime: out of memory" message can only occur when mmap returns ENOMEM.
24-core Linux box. CentOS, kernel 2.6.32-431.29.2.el6.x86_64. From /proc/cpuinfo: model name : Intel(R) Xeon(R) CPU L5640 @ 2.27GHz
I have the output from [...], and the output from [...] (trimmed to remove everything other than the process).
I should also point out that this ran rock solid for months on 1.4.2.
Are you sure that you're not running out of processes/threads (both are accounted the same way on Linux)? In Go 1.5, GOMAXPROCS has changed from 1 to 24 on your Linux box. Could this lead to many additional threads being created when you perform certain kinds of operations, especially blocking operations other than sleeping or socket I/O? Do you do any file I/O? What's the output of [...]?
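A minimal sketch of one way to watch for that (not from the original report; the 10-second interval is just illustrative): the runtime's threadcreate profile counts every OS thread the process has created, so a steady climb alongside blocking calls would point at thread exhaustion rather than real memory pressure.

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/pprof"
	"time"
)

func main() {
	// pprof's "threadcreate" profile counts OS threads the runtime has created;
	// on Linux these count against the same limit as processes (ulimit -u).
	threadProfile := pprof.Lookup("threadcreate")
	for {
		fmt.Printf("goroutines=%d os-threads-created=%d GOMAXPROCS=%d\n",
			runtime.NumGoroutine(), threadProfile.Count(), runtime.GOMAXPROCS(0))
		time.Sleep(10 * time.Second)
	}
}
```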
I'm going to see if this still fails with 1.5. I have reason to believe that after ~10 days of traffic, one of the metrics gets enough leaves that the radix tree supporting prefix queries is determined to be "full" and freed, leaving a very large amount of work for the garbage collector to do. If this is related to 3ae1704, then this patch was not present in rc1 but was present in 1.5 final, so the bug should not be occurring. I will build the server with 1.5 final on Monday and report back in a few weeks.
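As a hypothetical illustration of that hypothesis (this is not the carbonmem code, just a sketch), dropping the only reference to a large pointer-heavy tree makes the entire structure garbage at once, so the next GC cycle has to scan and free all of it:

```go
package main

import (
	"fmt"
	"runtime"
)

// node is a toy stand-in for a radix-tree node: small, but full of pointers.
type node struct {
	children map[byte]*node
}

// build creates a tree with roughly 5^depth nodes.
func build(depth int) *node {
	n := &node{children: map[byte]*node{}}
	if depth > 0 {
		for b := byte('a'); b <= 'e'; b++ {
			n.children[b] = build(depth - 1)
		}
	}
	return n
}

func main() {
	var ms runtime.MemStats

	tree := build(8) // several hundred thousand pointer-heavy nodes
	runtime.ReadMemStats(&ms)
	fmt.Printf("heap with tree:  %d MB\n", ms.HeapAlloc>>20)

	tree = nil // the "full" tree is discarded; all of it becomes garbage at once
	_ = tree
	runtime.GC()
	runtime.ReadMemStats(&ms)
	fmt.Printf("heap after free: %d MB, last GC pause %d ns\n",
		ms.HeapAlloc>>20, ms.PauseNs[(ms.NumGC+255)%256])
}
```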
Crashed again with 1.5, after a little more than two weeks. Currently my assumption is that it might be an issue with one of the queries we process, which (for whatever reason) nobody had tried while running 1.4.2.
Closing this for now, as I'm pretty sure this is a dup of #12233. For a currently running server: [...]
I have an in-memory time-series data store (https://github.com/dgryski/carbonmem).
I had a crash with Go 1.5beta2, but due to a lack of logging I lost the reason. I upgraded to 1.5beta3 and had another panic, this time with [...]
I upgraded to rc1, and today had another crash, this time with [...]
It's highly unlikely this box is actually running out of memory. It has 384G of RAM, and monitoring the actual memory usage on the box doesn't show any sort of leak or spike before the crash.
The only connection I can see between the crashes is that they've each happened approximately 10 days apart.
I understand this is not a particularly useful bug report. I'll try to get some more information from the process (repeated dumps of /debug/vars, etc.) and maybe we can track it down to something in our environment rather than a bug in the Go runtime.
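A minimal sketch of such a periodic /debug/vars dump (assuming the server imports expvar and serves HTTP on localhost:8080 — the address is an illustrative placeholder, not from the report):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	for {
		// expvar registers /debug/vars on the default mux; the response is a
		// JSON blob that includes runtime.MemStats.
		resp, err := http.Get("http://localhost:8080/debug/vars")
		if err != nil {
			fmt.Fprintln(os.Stderr, "dump failed:", err)
		} else {
			fmt.Printf("--- %s ---\n", time.Now().Format(time.RFC3339))
			io.Copy(os.Stdout, resp.Body)
			resp.Body.Close()
		}
		time.Sleep(time.Minute)
	}
}
```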