net/http: memory leak #5314
This slightly modified version allocates the listener outside the memoryleak() function and still produces the leak: http://play.golang.org/p/rvgNusTfxy This one additionally moves the initialization of the server outside the function, resulting in no leak: http://play.golang.org/p/wcuuRQA1B3
I managed to remove most of the code. It seems that allocating the listener and calling http.Serve inside the loop leads to a memory leak (http://play.golang.org/p/srnTF9CPz9), whereas allocating them once leads to stable memory usage (http://play.golang.org/p/uaSVkgQ2bs).
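For readers who can't load the playground links, here is a minimal sketch (not the original snippets) of the pattern being compared: the leaking variant allocates the listener and starts http.Serve inside the loop, the stable variant does both once, outside. The address, handler, and function names are illustrative.

```go
package main

import (
	"io/ioutil"
	"net"
	"net/http"
)

// leakyIteration mimics the leaking variant: a fresh listener and a fresh
// http.Serve goroutine are created on every loop iteration.
func leakyIteration() {
	ln, err := net.Listen("tcp", "127.0.0.1:8080") // illustrative address
	if err != nil {
		panic(err)
	}
	go http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	}))

	resp, err := http.Get("http://127.0.0.1:8080/")
	if err == nil {
		ioutil.ReadAll(resp.Body)
		resp.Body.Close()
	}
	ln.Close() // listener is torn down, yet memory keeps growing across iterations
}

func main() {
	// Leaking pattern: listener + http.Serve allocated inside the loop.
	// The stable variant (second playground link) moves net.Listen and
	// http.Serve above the loop and only issues the http.Get inside it.
	for {
		leakyIteration()
	}
}
```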
Attached. Attachments:
Here you go. On the contrary, it severely aggravates the problem.

$ go test -gcflags="-N -l" -c -i ram && GOGCTRACE=1 ./ram.test -test.v -test.memprofile=mem.out
gc1(2): 0+0+0 ms, 0 -> 0 MB 37 -> 39 (40-1) objects, 1(3) handoff, 3(10) steal, 16/14/4 yields
=== RUN TestBadRam-5
gc2(2): 0+0+0 ms, 0 -> 0 MB 486 -> 447 (517-70) objects, 7(387) handoff, 1(4) steal, 91/25/0 yields
gc3(3): 0+0+0 ms, 64 -> 64 MB 641 -> 592 (715-123) objects, 2(6) handoff, 5(34) steal, 55/29/9 yields
gc4(4): 0+0+0 ms, 72 -> 64 MB 66200 -> 624 (66325-65701) objects, 4(114) handoff, 11(135) steal, 84/37/1 yields
gc5(5): 0+0+0 ms, 129 -> 129 MB 8435 -> 8360 (74140-65780) objects, 81(6342) handoff, 20(378) steal, 388/163/15 yields
gc6(5): 0+0+0 ms, 135 -> 128 MB 66297 -> 735 (132079-131344) objects, 2(82) handoff, 9(99) steal, 107/48/3 yields
gc7(5): 0+0+0 ms, 200 -> 129 MB 66470 -> 843 (197819-196976) objects, 7(166) handoff, 7(127) steal, 148/78/7 yields
gc8(5): 0+0+0 ms, 200 -> 193 MB 66582 -> 951 (263563-262612) objects, 2(79) handoff, 15(219) steal, 120/74/7 yields
gc9(5): 0+0+0 ms, 264 -> 257 MB 66687 -> 1056 (329304-328248) objects, 7(72) handoff, 7(105) steal, 151/98/6 yields
gc10(5): 0+0+0 ms, 328 -> 321 MB 66792 -> 1161 (395045-393884) objects, 3(27) handoff, 12(236) steal, 124/86/7 yields
gc11(5): 0+0+0 ms, 392 -> 385 MB 66896 -> 1271 (460785-459514) objects, 1(4) handoff, 14(213) steal, 100/80/8 yields
gc12(5): 0+0+0 ms, 456 -> 449 MB 67006 -> 1370 (526525-525155) objects, 1(4) handoff, 10(173) steal, 100/80/8 yields
gc13(5): 0+0+0 ms, 520 -> 513 MB 67106 -> 1481 (592268-590787) objects, 3(17) handoff, 11(173) steal, 128/65/5 yields
gc14(5): 0+0+0 ms, 584 -> 577 MB 67214 -> 1578 (658006-656428) objects, 2(35) handoff, 16(211) steal, 110/79/5 yields
gc15(5): 0+0+0 ms, 648 -> 641 MB 67313 -> 1687 (723746-722059) objects, 3(9) handoff, 12(210) steal, 105/73/7 yields
gc16(5): 0+0+0 ms, 712 -> 705 MB 67423 -> 1788 (789487-787699) objects, 1(4) handoff, 11(209) steal, 115/68/5 yields
gc17(5): 0+0+0 ms, 776 -> 769 MB 67520 -> 1889 (855224-853335) objects, 2(8) handoff, 7(182) steal, 110/76/7 yields
gc18(5): 0+0+0 ms, 840 -> 833 MB 67625 -> 1994 (920965-918971) objects, 2(8) handoff, 14(243) steal, 115/72/7 yields
gc19(5): 0+0+0 ms, 904 -> 897 MB 67725 -> 2094 (986703-984609) objects, 4(12) handoff, 16(213) steal, 121/82/7 yields
gc20(5): 0+0+0 ms, 968 -> 961 MB 67828 -> 2197 (1052442-1050245) objects, 1(4) handoff, 16(263) steal, 100/80/7 yields
With inlining and compiler optimisations enabled, the leak grows much more slowly. Memory seems stable for about 45 GCs, then jumps up, seems stable again, and keeps going that way. Attachments:
pmap output: Attachments:
After another minute, pmap: Attachments:
Running that test long enough leads to 'too many open file descriptors' and lots of these:

goroutine 9 [IO wait]:
net.runtime_pollWait(0x7ffa4bcc4dc0, 0x72, 0x0)
    /build/go/go/src/pkg/runtime/znetpoll_linux_amd64.c:118 +0x82
net.(*pollDesc).WaitRead(0xc2000d01a0, 0xb, 0xc2000815a0)
    /build/go/go/src/pkg/net/fd_poll_runtime.go:75 +0x31
net.(*netFD).Read(0xc2000d0120, 0xc2000fa000, 0x1000, 0x1000, 0x0, ...)
    /build/go/go/src/pkg/net/fd_unix.go:195 +0x2b3
net.(*conn).Read(0xc2000002c8, 0xc2000fa000, 0x1000, 0x1000, 0x0, ...)
    /build/go/go/src/pkg/net/net.go:123 +0xc3
net/http.(*liveSwitchReader).Read(0xc2000d01d8, 0xc2000fa000, 0x1000, 0x1000, 0x2, ...)
    /build/go/go/src/pkg/net/http/server.go:205 +0x91
io.(*LimitedReader).Read(0xc2000cb640, 0xc2000fa000, 0x1000, 0x1000, 0x101000100a8, ...)
    /build/go/go/src/pkg/io/io.go:385 +0xc0
net/http.(*switchReader).Read(0xc2000ab760, 0xc2000fa000, 0x1000, 0x1000, 0x0, ...)
    /build/go/go/src/pkg/net/http/chunked.go:0 +0x62
bufio.(*Reader).fill(0xc2000d9480)
    /build/go/go/src/pkg/bufio/bufio.go:79 +0x10c
bufio.(*Reader).ReadSlice(0xc2000d9480, 0x40b10a, 0x0, 0x0, 0x0, ...)
    /build/go/go/src/pkg/bufio/bufio.go:262 +0x202
bufio.(*Reader).ReadLine(0xc2000d9480, 0x0, 0x0, 0x0, 0x7ffa5e6a3c00, ...)
    /build/go/go/src/pkg/bufio/bufio.go:293 +0x61
net/textproto.(*Reader).readLineSlice(0xc200081f90, 0xe004c4a9b, 0x41f98f, 0xc2000bb820, 0x645920, ...)
    /build/go/go/src/pkg/net/textproto/reader.go:55 +0x51
net/textproto.(*Reader).ReadLine(0xc200081f90, 0xc2000bb820, 0x1000, 0x74, 0x0, ...)
    /build/go/go/src/pkg/net/textproto/reader.go:36 +0x25
net/http.ReadRequest(0xc2000d9480, 0xc2000bb820, 0x0, 0x0)
    /build/go/go/src/pkg/net/http/request.go:510 +0x86
net/http.(*conn).readRequest(0xc2000d01b0, 0x0, 0x0, 0x0)
    /build/go/go/src/pkg/net/http/server.go:547 +0x1bc
net/http.(*conn).serve(0xc2000d01b0)
    /build/go/go/src/pkg/net/http/server.go:1052 +0x398
created by net/http.(*Server).Serve
    /build/go/go/src/pkg/net/http/server.go:1564 +0x266

Shouldn't we close these when a server's listener is closed?
Latest incarnation of the code: http://play.golang.org/p/r3KsVZKrQb
Ignore the previous point about listener closing -> conn closing. Anyway, the fd leak is solved by building the request with req := http.NewRequest and setting req.Close = true, which makes sense. What doesn't make sense is that the memory leak is then also fixed! See the new memoryLeak() function: http://play.golang.org/p/H4uBcouqYr
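For reference, a sketch of that workaround (the URL and error handling are illustrative, not the playground code):

```go
package main

import (
	"io/ioutil"
	"log"
	"net/http"
)

// fetchOnce builds the request explicitly and sets Close = true so the
// transport closes the underlying connection after the response is read,
// instead of returning it to the keep-alive pool.
func fetchOnce(url string) error {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return err
	}
	req.Close = true

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = ioutil.ReadAll(resp.Body)
	return err
}

func main() {
	if err := fetchOnce("http://127.0.0.1:8080/"); err != nil { // illustrative URL
		log.Fatal(err)
	}
}
```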
I have my doubts about blaming bufio. Setting request.Close to true is effectively the same as disabling keep-alives. I think the HTTP client's connection cache is holding on to connections when it shouldn't, possibly because the server and client run in the same process. Could you try looking at open file descriptors and disabling keep-alives through http.Transport?
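A sketch of that suggestion: a client whose Transport has keep-alives disabled, plus a Linux-only file-descriptor count between requests. The /proc/self/fd check, the address, and the request count are illustrative, not the reporter's test code.

```go
package main

import (
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// Disable the connection cache entirely at the transport level.
	client := &http.Client{
		Transport: &http.Transport{DisableKeepAlives: true},
	}

	for i := 0; i < 10; i++ {
		resp, err := client.Get("http://127.0.0.1:8080/") // illustrative address
		if err != nil {
			log.Fatal(err)
		}
		ioutil.ReadAll(resp.Body)
		resp.Body.Close()

		// Linux-only: count open file descriptors to see whether they pile up.
		fds, err := ioutil.ReadDir("/proc/self/fd")
		if err == nil {
			log.Printf("open fds after request %d: %d", i, len(fds))
		}
	}
}
```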
Merging with issue #5318 for follow-up. Status changed to Duplicate; merged into issue #5318.
This issue was closed.