memory leak involving finalizers #631

Closed
hoisie opened this issue Mar 1, 2010 · 17 comments

@hoisie
Contributor

hoisie commented Mar 1, 2010

What steps will reproduce the problem?
1. Run the http crawler and server from here: http://pastie.org/847755
2. Monitor the process with top

What is the expected output? What do you see instead?
I expect the memory usage to stabilize. Instead it keeps growing. 

What is your $GOOS?  $GOARCH?
linux/amd64

Which revision are you using?  (hg identify)
release 02/27

Please provide any additional information below.
This seems to be a regression. If I use Go from 02/04, the memory usage 
stabilizes.
@hoisie
Contributor Author

hoisie commented Mar 1, 2010

Comment 1:

Just ran a bisect, and got this:
The first bad revision is:
changeset:   4816:b45e8d6389ae
user:        Russ Cox <rsc@golang.org>
date:        Mon Feb 08 21:41:54 2010 -0800
summary:     runtime: allow arbitrary return type in SetFinalizer.

@hoisie
Contributor Author

hoisie commented Mar 1, 2010

Comment 2:

Work-around for this issue (hg clpatch 223086):
http://golang.org/cl/223086
It disables the finalizer functionality, which might cause problems if you don't
clean up channels or files properly.
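
For example, with the finalizer safety net gone, files have to be closed explicitly; a minimal illustration (using the current os.Open signature; the file name is arbitrary):

package main

import (
	"log"
	"os"
)

func main() {
	f, err := os.Open("data.txt") // arbitrary example file
	if err != nil {
		log.Fatal(err)
	}
	// With the finalizer disabled, nothing will close this file for us,
	// so the descriptor must be released explicitly.
	defer f.Close()
}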

@adg
Contributor

adg commented Mar 2, 2010

Comment 3:

Owner changed to r...@golang.org.

@rsc
Contributor

rsc commented Mar 3, 2010

Comment 4:

This is a fundamental consequence of finalizers:
if all the allocated objects have finalizers, the gc
will never converge.  (This falls under "finalizers do nasty things".)
The only real solution is to run garbage collection more often.
For the specific case of files we could disable the finalizer
once the file is closed, but doing that doesn't seem to
solve your leak.

Status changed to LongTerm.

@hoisie
Contributor Author

hoisie commented Mar 3, 2010

Comment 5:

I tried testing whether channels have the same leak, but they don't:
http://pastie.org/852058
This means that the leak isn't in the implementation of the finalizer table itself,
which is what I originally thought. My guess is that there's a bug in the gc while
collecting files. What's the main difference in how the gc handles files vs. channels?
Also, what's a good way to trace the lifecycle of a file object?

@rsc
Contributor

rsc commented Mar 3, 2010

Comment 6:

The easiest way to check for file leaks is to use lsof.
Disable the finalizer and see if files are actually getting closed.
If not, then we need to find that leak.

@hoisie
Contributor Author

hoisie commented Mar 3, 2010

Comment 7:

I don't think this is a file leak. Both the client and the server in the sample
explicitly close their files, and the lsof output stabilizes for both the client and
server (around 30), whether or not the finalizers are enabled.
My guess is that the gc isn't freeing file objects properly.

@rsc
Contributor

rsc commented Mar 3, 2010

Comment 8:

Please try this.  Add the finalizer back in package os,
and then in the *File Close method, add
runtime.SetFinalizer(file, nil)
and see if the memory usage stabilizes.
If (a) finalizers are the problem and (b) files are being
closed properly, then this should disable the finalizer
on an explicit close, which should resolve the problem.
I tried this and it didn't work for me, but it is possible
that I did not run the test correctly.  Please let me know
if it works for you.
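
A minimal sketch of that experiment, using a stand-in Resource type rather than the real os.File internals (the type and names here are illustrative only):

package main

import (
	"fmt"
	"runtime"
)

// Resource stands in for something like *os.File: a finalizer is registered
// when it is created and cleared again on an explicit Close.
type Resource struct {
	closed bool
}

func NewResource() *Resource {
	r := &Resource{}
	runtime.SetFinalizer(r, func(r *Resource) { r.Close() })
	return r
}

func (r *Resource) Close() {
	if r.closed {
		return
	}
	r.closed = true
	// Clearing the finalizer on explicit close means the GC no longer has
	// to keep the object alive for an extra cycle just to finalize it.
	runtime.SetFinalizer(r, nil)
}

func main() {
	r := NewResource()
	r.Close() // explicitly closed: the finalizer is removed
	runtime.GC()
	fmt.Println("closed:", r.closed)
}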

@hoisie
Contributor Author

hoisie commented Mar 3, 2010

Comment 9:

The memory leak still exists whether or not the runtime.SetFinalizer call in os/file.go
is there. However, the memory leak goes away if I remove the call to:
pkg/runtime/chan.c:     //addfinalizer(c, destroychan, 0);
So this leak seems to be related to channels?

@rsc
Contributor

rsc commented Mar 3, 2010

Comment 10:

Does your program actually use channels?
It didn't look like it did.

@hoisie
Contributor Author

hoisie commented Mar 3, 2010

Comment 11:

No, but it seems that newFD in net/sock.go creates two channels, and this is called by
both accept() and socket() in pkg/net.

@rsc
Contributor

rsc commented Mar 3, 2010

Comment 12:

Okay, that makes sense.  Now we know where the finalization-ready
objects are coming from, so it's definitely "memory leak involving finalizers"
as the subject now says.
What happens is that the finalizers keep objects alive for an 
extra garbage collection cycle, and then that trips up the 
garbage collector's calculation of how often to garbage collect.
If the collector runs and ends up with N objects still in use,
it arranges to collect once N more have been allocated (total 2N),
in the hope that in a steady state the collection will cut the
number of objects from 2N back to N and repeat.  Let's assume
that N is the steady state number of live objects.
So suppose that there are 2N objects allocated, N of them are dead but
have finalizers.  They get added to the finalizer queue (not collected)
so that at the end of the collection, there are still 2N objects,
so the garbage collector schedules the next collection at 4N.
The finalizers run and the program continues, and at the next collection
there are still N live objects, N dead (finalized) objects,
and 2N dead but not yet finalized objects.  So the collection 
ends with 3N objects and schedules the next for 6N.
The one at 6N will find N live, 2N dead, and 3N finalizer-ready, 
so it will have 4N live objects and schedule for 8N, which will
in turn schedule for 10N, and so on.
This assumes that the objects are all finalizer-ready (channels)
but even if there's just a small fraction of finalizer-ready objects
you get the same compounding.
I haven't figured out how to reconcile this kind of finalizer behavior
with the garbage collector schedule.  The workaround for now is to
run the garbage collection more frequently by calling runtime.GC.
That's obviously unsatisfactory, but I don't have a better solution yet.
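
A rough sketch of that workaround: run a background goroutine that forces collections at a fixed interval (the 10-second interval here is an arbitrary choice, not something prescribed in this thread):

package main

import (
	"runtime"
	"time"
)

// forceGC triggers a collection every interval so that finalizer-queued
// objects are reclaimed before the heap-growth schedule can compound.
func forceGC(interval time.Duration) {
	for {
		time.Sleep(interval)
		runtime.GC()
	}
}

func main() {
	go forceGC(10 * time.Second) // interval is arbitrary; tune per workload
	// ... the rest of the program (e.g. the crawler/server from the report) ...
	select {}
}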

@gopherbot

Comment 13 by beoran:

I think that the core problem is that finalizers revive a reference to their object.
This is not necessary. In Ruby, the finalizer is called /after/ the object has been
collected, and it is called without being passed any object references.
You may think: how is this useful? Well, you can make the cleanup function a closure
that refers to the individual object's members that need cleanup. For example, let's
say we have a file-like struct:

package fakefile

type File struct {
	handle int
}

func closeFile(handle int) {
	os.CloseHandle(handle) // hypothetical OS-level close by handle
}

func OpenFile(name string) *File {
	file := new(File)
	file.handle = os.OpenFile(name)
	toclose := file.handle
	cleanup := func() {
		closeFile(toclose)
	}
	// proposed semantics: the finalizer takes no object argument
	runtime.SetFinalizer(file, cleanup)
	return file
}

...
f := fakefile.OpenFile("foo")

Now the finalizer can do its work without needing any references to the file object
itself. This avoids tons of problems, including the one found here, and allows the
object to be collected before its finalizer is run.
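
For contrast, Go's actual runtime.SetFinalizer passes the object itself to the finalizer function, which is exactly why the collector has to keep the object reachable until the finalizer has run; a minimal example of the real API (whether the print happens before exit depends on finalizer scheduling):

package main

import (
	"fmt"
	"runtime"
	"time"
)

type File struct {
	handle int
}

func main() {
	f := &File{handle: 3}
	// The finalizer receives *File, so the GC must keep the object alive
	// for an extra cycle, until this function has been called.
	runtime.SetFinalizer(f, func(f *File) {
		fmt.Println("finalizing handle", f.handle)
	})
	f = nil
	runtime.GC()
	time.Sleep(100 * time.Millisecond) // give the finalizer goroutine a chance to run
}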

@rsc
Contributor

rsc commented Mar 25, 2010

Comment 14:

Issue #691 has been merged into this issue.

@rsc
Contributor

rsc commented Apr 7, 2010

Comment 15:

I think this is fixed now.
Recent changes to the garbage collector cause it to
increase finalization and gc rate if finalizers are
being run, so that it can keep up.

@rsc
Contributor

rsc commented Apr 7, 2010

Comment 16:

Status changed to Fixed.

@hoisie
Contributor Author

hoisie commented Apr 7, 2010

Comment 17:

Awesome, just re-ran my test and both the client and server have stable memory usage.
I'm looking forward to the next release; I'll finally be able to update Go on my
production servers :)

@golang golang locked and limited conversation to collaborators Jun 24, 2016
@rsc rsc removed their assignment Jun 22, 2022
This issue was closed.