
net/http: memory leak #5314

Closed

gpaul opened this issue Apr 18, 2013 · 18 comments

Comments

@gpaul
Contributor

gpaul commented Apr 18, 2013

I'm seeing really weird GC behaviour when using the net/http package.

If I call a function that creates a bunch of garbage slices inside a ServeMux.HandleFunc handler, the issue rears its head. If I then also don't close req.Body, the issue gets much worse.

If I call the same function on its own, no memory is leaked. If I call the same function in a goroutine and wait for it to finish, no memory is leaked either.

It seems like I'm doing something weird with net/http, or there's a bug lurking here. I'm reporting this because I'm out of ideas.

go version:
go version devel +58f8a30f5b78 Tue Apr 16 14:20:06 2013 -0700 linux/amd64

uname -a:
Linux me 3.3.8-1.fc16.x86_64 #1 SMP Mon Jun 4 20:49:02 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

Code to reproduce is here:
http://play.golang.org/p/ALlr3_MzSX

$ GOGCTRACE=1 go run ram_issue.go | grep gc
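
For reference, a minimal sketch of the shape of the reproduction (the actual code is in the playground link above; the handler body, allocation sizes, and use of httptest are illustrative assumptions):

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/http/httptest" // assumption: the original used net.Listen + http.Serve instead
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Create a bunch of garbage slices that should be collectable
		// as soon as the handler returns.
		for i := 0; i < 100; i++ {
			_ = make([]byte, 64*1024)
		}
		fmt.Fprintln(w, "ok")
	})

	srv := httptest.NewServer(mux)
	defer srv.Close()

	for i := 0; i < 1000; i++ {
		resp, err := http.Get(srv.URL)
		if err != nil {
			panic(err)
		}
		ioutil.ReadAll(resp.Body)
		resp.Body.Close() // the report notes the problem gets worse if this is skipped
	}
}
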
@gpaul
Contributor Author

gpaul commented Apr 18, 2013

Comment 1:

This slightly modified version allocates the listener outside the memoryleak() function and still produces the leak:
http://play.golang.org/p/rvgNusTfxy
This one additionally moves the initialization of the server outside the function, resulting in no leak:
http://play.golang.org/p/wcuuRQA1B3

@gpaul
Contributor Author

gpaul commented Apr 18, 2013

Comment 2:

I managed to remove most of the code. It seems like allocating the listener and calling http.Serve inside the loop leads to a memory leak:
http://play.golang.org/p/srnTF9CPz9
whereas allocating them once leads to stable memory usage:
http://play.golang.org/p/uaSVkgQ2bs
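
A rough sketch of the two shapes being compared (illustrative only; the real code is in the playground links above, and the handler and request helper here are assumptions):

package main

import (
	"io"
	"io/ioutil"
	"net"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	for i := 0; i < 100; i++ {
		_ = make([]byte, 64*1024) // throwaway garbage
	}
	io.WriteString(w, "ok")
}

func hitOnce(addr string) {
	resp, err := http.Get("http://" + addr + "/")
	if err != nil {
		panic(err)
	}
	ioutil.ReadAll(resp.Body)
	resp.Body.Close()
}

// leakyShape re-creates the listener and server on every iteration,
// roughly mirroring http://play.golang.org/p/srnTF9CPz9.
func leakyShape(iterations int) {
	for i := 0; i < iterations; i++ {
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			panic(err)
		}
		go http.Serve(ln, http.HandlerFunc(handler))
		hitOnce(ln.Addr().String())
		ln.Close()
	}
}

// stableShape sets up the listener and server once, roughly mirroring
// http://play.golang.org/p/uaSVkgQ2bs.
func stableShape(iterations int) {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	go http.Serve(ln, http.HandlerFunc(handler))
	for i := 0; i < iterations; i++ {
		hitOnce(ln.Addr().String())
	}
}

func main() {
	leakyShape(100)
	stableShape(100)
}
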

@gpaul
Contributor Author

gpaul commented Apr 19, 2013

Comment 3:

Here's a memory profile showing that almost all of the heap was allocated inside Handle(), which has no arguments and no return values, so to my mind everything it allocates should be easily GC'd as soon as it finishes executing.
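
For reference, a profile like this can be collected and rendered roughly as follows (assuming the test binary and profile file names used in the later comments; the exact pprof invocation may vary by Go version):

$ go test -c -o ram.test
$ ./ram.test -test.v -test.memprofile=mem.out
$ go tool pprof --svg ram.test mem.out > mem.svg
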

@gpaul
Contributor Author

gpaul commented Apr 19, 2013

Comment 4:

Attached

Attachments:

  1. mem.svg (23757 bytes)

@davecheney
Contributor

Comment 5:

In cases like this Rémy usually asks folks to turn off inlining and optimisations (-N -l) and see if that fixes the problem; could you try that, please?

@gpaul
Contributor Author

gpaul commented Apr 19, 2013

Comment 6:

Here you go. On the contrary, it severely aggravates the problem.
$ go test -gcflags="-N -l" -c -i ram && GOGCTRACE=1 ./ram.test -test.v -test.memprofile=mem.out
gc1(2): 0+0+0 ms, 0 -> 0 MB 37 -> 39 (40-1) objects, 1(3) handoff, 3(10) steal, 16/14/4
yields
=== RUN TestBadRam-5
gc2(2): 0+0+0 ms, 0 -> 0 MB 486 -> 447 (517-70) objects, 7(387) handoff, 1(4) steal,
91/25/0 yields
gc3(3): 0+0+0 ms, 64 -> 64 MB 641 -> 592 (715-123) objects, 2(6) handoff, 5(34) steal,
55/29/9 yields
gc4(4): 0+0+0 ms, 72 -> 64 MB 66200 -> 624 (66325-65701) objects, 4(114) handoff,
11(135) steal, 84/37/1 yields
gc5(5): 0+0+0 ms, 129 -> 129 MB 8435 -> 8360 (74140-65780) objects, 81(6342) handoff,
20(378) steal, 388/163/15 yields
gc6(5): 0+0+0 ms, 135 -> 128 MB 66297 -> 735 (132079-131344) objects, 2(82) handoff,
9(99) steal, 107/48/3 yields
gc7(5): 0+0+0 ms, 200 -> 129 MB 66470 -> 843 (197819-196976) objects, 7(166) handoff,
7(127) steal, 148/78/7 yields
gc8(5): 0+0+0 ms, 200 -> 193 MB 66582 -> 951 (263563-262612) objects, 2(79) handoff,
15(219) steal, 120/74/7 yields
gc9(5): 0+0+0 ms, 264 -> 257 MB 66687 -> 1056 (329304-328248) objects, 7(72) handoff,
7(105) steal, 151/98/6 yields
gc10(5): 0+0+0 ms, 328 -> 321 MB 66792 -> 1161 (395045-393884) objects, 3(27) handoff,
12(236) steal, 124/86/7 yields
gc11(5): 0+0+0 ms, 392 -> 385 MB 66896 -> 1271 (460785-459514) objects, 1(4) handoff,
14(213) steal, 100/80/8 yields
gc12(5): 0+0+0 ms, 456 -> 449 MB 67006 -> 1370 (526525-525155) objects, 1(4) handoff,
10(173) steal, 100/80/8 yields
gc13(5): 0+0+0 ms, 520 -> 513 MB 67106 -> 1481 (592268-590787) objects, 3(17) handoff,
11(173) steal, 128/65/5 yields
gc14(5): 0+0+0 ms, 584 -> 577 MB 67214 -> 1578 (658006-656428) objects, 2(35) handoff,
16(211) steal, 110/79/5 yields
gc15(5): 0+0+0 ms, 648 -> 641 MB 67313 -> 1687 (723746-722059) objects, 3(9) handoff,
12(210) steal, 105/73/7 yields
gc16(5): 0+0+0 ms, 712 -> 705 MB 67423 -> 1788 (789487-787699) objects, 1(4) handoff,
11(209) steal, 115/68/5 yields
gc17(5): 0+0+0 ms, 776 -> 769 MB 67520 -> 1889 (855224-853335) objects, 2(8) handoff,
7(182) steal, 110/76/7 yields
gc18(5): 0+0+0 ms, 840 -> 833 MB 67625 -> 1994 (920965-918971) objects, 2(8) handoff,
14(243) steal, 115/72/7 yields
gc19(5): 0+0+0 ms, 904 -> 897 MB 67725 -> 2094 (986703-984609) objects, 4(12) handoff,
16(213) steal, 121/82/7 yields
gc20(5): 0+0+0 ms, 968 -> 961 MB 67828 -> 2197 (1052442-1050245) objects, 1(4) handoff,
16(263) steal, 100/80/7 yields

@gpaul
Contributor Author

gpaul commented Apr 19, 2013

Comment 7:

With inlining and compiler optimisations enabled, the leak is much slower. Memory seems stable for about 45 GCs, then it jumps up, seems stable again, and keeps going that way.

Attachments:

  1. gc.txt (33563 bytes)

@gpaul
Contributor Author

gpaul commented Apr 19, 2013

Comment 8:

pmap output:

Attachments:

  1. pmap.txt (2165 bytes)

@gpaul
Contributor Author

gpaul commented Apr 19, 2013

Comment 9:

After another minute, pmap:

Attachments:

  1. pmap.txt (2165 bytes)

@gpaul
Contributor Author

gpaul commented Apr 19, 2013

Comment 10:

Running that test long enough leads to 'too many open file descriptors' errors and lots of goroutines like this one:
goroutine 9 [IO wait]:
net.runtime_pollWait(0x7ffa4bcc4dc0, 0x72, 0x0)
        /build/go/go/src/pkg/runtime/znetpoll_linux_amd64.c:118 +0x82
net.(*pollDesc).WaitRead(0xc2000d01a0, 0xb, 0xc2000815a0)
        /build/go/go/src/pkg/net/fd_poll_runtime.go:75 +0x31
net.(*netFD).Read(0xc2000d0120, 0xc2000fa000, 0x1000, 0x1000, 0x0, ...)
        /build/go/go/src/pkg/net/fd_unix.go:195 +0x2b3
net.(*conn).Read(0xc2000002c8, 0xc2000fa000, 0x1000, 0x1000, 0x0, ...)
        /build/go/go/src/pkg/net/net.go:123 +0xc3
net/http.(*liveSwitchReader).Read(0xc2000d01d8, 0xc2000fa000, 0x1000, 0x1000, 0x2, ...)
        /build/go/go/src/pkg/net/http/server.go:205 +0x91
io.(*LimitedReader).Read(0xc2000cb640, 0xc2000fa000, 0x1000, 0x1000, 0x101000100a8, ...)
        /build/go/go/src/pkg/io/io.go:385 +0xc0
net/http.(*switchReader).Read(0xc2000ab760, 0xc2000fa000, 0x1000, 0x1000, 0x0, ...)
        /build/go/go/src/pkg/net/http/chunked.go:0 +0x62
bufio.(*Reader).fill(0xc2000d9480)
        /build/go/go/src/pkg/bufio/bufio.go:79 +0x10c
bufio.(*Reader).ReadSlice(0xc2000d9480, 0x40b10a, 0x0, 0x0, 0x0, ...)
        /build/go/go/src/pkg/bufio/bufio.go:262 +0x202
bufio.(*Reader).ReadLine(0xc2000d9480, 0x0, 0x0, 0x0, 0x7ffa5e6a3c00, ...)
        /build/go/go/src/pkg/bufio/bufio.go:293 +0x61
net/textproto.(*Reader).readLineSlice(0xc200081f90, 0xe004c4a9b, 0x41f98f, 0xc2000bb820,
0x645920, ...)
        /build/go/go/src/pkg/net/textproto/reader.go:55 +0x51
net/textproto.(*Reader).ReadLine(0xc200081f90, 0xc2000bb820, 0x1000, 0x74, 0x0, ...)
        /build/go/go/src/pkg/net/textproto/reader.go:36 +0x25
net/http.ReadRequest(0xc2000d9480, 0xc2000bb820, 0x0, 0x0)
        /build/go/go/src/pkg/net/http/request.go:510 +0x86
net/http.(*conn).readRequest(0xc2000d01b0, 0x0, 0x0, 0x0)
        /build/go/go/src/pkg/net/http/server.go:547 +0x1bc
net/http.(*conn).serve(0xc2000d01b0)
        /build/go/go/src/pkg/net/http/server.go:1052 +0x398
created by net/http.(*Server).Serve
        /build/go/go/src/pkg/net/http/server.go:1564 +0x266
Shouldn't we close these connections when a server's listener is closed?

@gpaul
Contributor Author

gpaul commented Apr 19, 2013

Comment 11:

Latest incarnation of the code:
http://play.golang.org/p/r3KsVZKrQb

@gpaul
Contributor Author

gpaul commented Apr 19, 2013

Comment 12:

Ignore my question about listener closing -> conn closing.
Anyway, the fd leak is solved by creating the request with req := http.NewRequest and setting req.Close = true, which makes sense.
What doesn't make sense is that this also fixes the memory leak!
See the new memoryLeak() function:
http://play.golang.org/p/H4uBcouqYr
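
A minimal sketch of that client-side change (the URL and helper name here are illustrative, not the actual memoryLeak() code):

package main

import (
	"io/ioutil"
	"net/http"
)

// fetchOnce builds the request explicitly and sets Close = true so the
// connection is not kept alive for reuse, instead of calling http.Get.
func fetchOnce(url string) error {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return err
	}
	req.Close = true
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = ioutil.ReadAll(resp.Body)
	return err
}

func main() {
	// Assumes a test server is listening locally; the address is illustrative.
	if err := fetchOnce("http://127.0.0.1:8080/"); err != nil {
		println(err.Error())
	}
}
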

@gpaul
Contributor Author

gpaul commented Apr 19, 2013

Comment 13:

In summary: if I use http.Get, memory is leaked, and introducing a 2-second sleep in the loop does not prevent the leak.
If I instead create an http.Request, set its Close field to true, and perform a Client.Do(req), there is no leak.

@gpaul
Contributor Author

gpaul commented Apr 19, 2013

Comment 14:

Hmm - this seems to have to do with bufio.NewReader. This is not a net/http bug; its only crime seems to be that it wraps a net.Conn in a bufio.Reader.
Now that I've realized this, I've opened a new issue: issue #5318
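
For context, a rough sketch of the net-only shape this points at (the actual reproduction lives in issue #5318; the protocol and sizes here are assumptions):

package main

import (
	"bufio"
	"net"
)

// serve wraps each accepted net.Conn in a bufio.Reader, mirroring what
// net/http does when reading requests.
func serve(ln net.Listener) {
	for {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		go func(c net.Conn) {
			defer c.Close()
			r := bufio.NewReader(c)
			r.ReadString('\n')
		}(conn)
	}
}

func main() {
	for i := 0; i < 1000; i++ {
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			panic(err)
		}
		go serve(ln)

		c, err := net.Dial("tcp", ln.Addr().String())
		if err != nil {
			panic(err)
		}
		c.Write([]byte("hello\n"))
		c.Close()
		ln.Close()
	}
}
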

@davecheney
Contributor

Comment 15:

Excellent debugging. Would you like me to close this issue as a duplicate of issue #5318 so we have a link between the two?

Status changed to WaitingForReply.

@DanielMorsing
Contributor

Comment 16:

I have my doubts about blaming bufio.
Setting request.Close to true is the same as disabling keep-alives. I think the HTTP client's connection cache is holding on to connections when it shouldn't, because the server and client are running in the same process.
Could you try looking at the open file descriptors and disabling keep-alives through http.Transport?
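
A minimal sketch of that experiment (the Transport/Client setup is standard; the target URL is illustrative):

package main

import (
	"io/ioutil"
	"net/http"
)

func main() {
	// Route requests through a Transport with keep-alives disabled, which
	// should behave like setting req.Close = true on every request.
	tr := &http.Transport{DisableKeepAlives: true}
	client := &http.Client{Transport: tr}

	resp, err := client.Get("http://127.0.0.1:8080/") // illustrative address
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	ioutil.ReadAll(resp.Body)
}
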

@gpaul
Contributor Author

gpaul commented Apr 19, 2013

Comment 17:

Daniel: I have my doubts about bufio too, but it definitely isn't net/http-specific: I've reproduced it using only the net package. As such, I suggest we continue debugging in issue #5318, which removes net/http from the picture.
Dave: please do.

@davecheney
Contributor

Comment 18:

Merging with issue #5318 for followup.

Status changed to Duplicate.

Merged into issue #5318.

@golang locked and limited conversation to collaborators Jun 24, 2016
This issue was closed.