
http server and ioutil.readfile leaks memory #41818

Closed
blinkinglight opened this issue Oct 6, 2020 · 16 comments
Labels
FrozenDueToAge, WaitingForInfo (Issue is not actionable because of missing required information, which needs to be provided.)

Comments

@blinkinglight

What version of Go are you using (go version)?

$ go version go1.15.2

Does this issue reproduce with the latest release?

yes

What operating system and processor architecture are you using (go env)?

go env Output
$ go env
GOARCH="amd64"
GOOS="linux"

What did you do?

dd if=/dev/zero of=file.ts bs=4M count=1
ab -n 1000 -c 1000 http://localhost:10001/

package main

import (
	"io/ioutil"
	"log"
	"net/http"
)

func main() {

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		b, err := ioutil.ReadFile("file.ts")
		if err != nil {
			log.Printf("%v", err)
		}
		w.Write(b)

	})
	http.ListenAndServe(":10001", nil)

}

What did you expect to see?

Max 100 MB memory usage and no errors.

What did you see instead?

3.3 GB memory usage and no errors. debug.FreeOSMemory and runtime.GC don't help.

@davecheney
Contributor

Can you please update your example with correct error handling; i.e. don't log the error and then continue to use b, which is in an unknown state. Then confirm whether the issue still occurs. Thank you.

@davecheney added the WaitingForInfo (Issue is not actionable because of missing required information, which needs to be provided.) label on Oct 6, 2020
@blinkinglight
Author

package main

import (
	"io/ioutil"
	"log"
	"net/http"
)

func main() {

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		b, err := ioutil.ReadFile("file.ts")
		if err != nil {
			log.Printf("%v", err)
			w.WriteHeader(500)
			return
		}
		_, err = w.Write(b)
		if err != nil {
			log.Printf("%v", err)
		}

	})
	log.Fatal(http.ListenAndServe(":10001", nil))
}

The issue remains; no errors.

@davecheney
Contributor

Thank you for updating your example. Can you please run your test with the environment variable GODEBUG=gctrace=1 set and paste the output? This will print a debug line every time the garbage collector runs, which will confirm how much memory the Go runtime thinks it is using. https://golang.org/pkg/runtime/

@davecheney
Contributor

@blinkinglight could you also explain why you feel the server should use 100 MB of memory? Based on the inputs you described, a 4 MB file and 1000 concurrent requests, 4 MB * 1000 = 4 GB, which is much closer to the 3.3 GB you reported.

@blinkinglight
Author

@blinkinglight could you also explain why you feel the server should use 100 MB of memory? Based on the inputs you described, a 4 MB file and 1000 concurrent requests, 4 MB * 1000 = 4 GB, which is much closer to the 3.3 GB you reported.

The problem is that when the requests finish, the process doesn't release memory, and after a second benchmark run the process grows to 6 GB of memory.

@blinkinglight
Author

Thank you for updating your example. Can you please run your test with the environment variable GODEBUG=gctrace=1 set and paste the output? This will print a debug line every time the garbage collector runs, which will confirm how much memory the Go runtime thinks it is using. https://golang.org/pkg/runtime/

$ GODEBUG=gctrace=1 go run main.go
gc 1 @0.003s 2%: 0.025+0.43+0.018 ms clock, 0.20+0.28/0.33/0+0.14 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 2 @0.005s 4%: 0.041+0.26+0.026 ms clock, 0.33+0.58/0.30/0+0.21 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 3 @0.008s 5%: 0.039+0.23+0.026 ms clock, 0.31+0.64/0.32/0+0.21 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 4 @0.011s 5%: 0.058+0.28+0.012 ms clock, 0.46+0.32/0.29/0.092+0.10 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 5 @0.021s 3%: 0.021+0.28+0.013 ms clock, 0.17+0.14/0.34/0.52+0.10 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 6 @0.030s 2%: 0.017+0.28+0.004 ms clock, 0.13+0.085/0.36/0.65+0.034 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 7 @0.039s 2%: 0.047+0.30+0.011 ms clock, 0.37+0.15/0.39/0.92+0.093 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 8 @0.048s 2%: 0.019+0.38+0.011 ms clock, 0.15+0.29/0.57/1.0+0.091 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 9 @0.056s 2%: 0.024+0.54+0.013 ms clock, 0.19+0.080/0.83/1.4+0.10 ms cpu, 4->4->1 MB, 5 MB goal, 8 P
gc 10 @0.064s 2%: 0.039+0.67+0.11 ms clock, 0.31+1.1/1.1/0.79+0.93 ms cpu, 4->4->1 MB, 5 MB goal, 8 P
gc 11 @0.072s 2%: 0.022+0.85+0.012 ms clock, 0.18+0.67/1.1/1.5+0.10 ms cpu, 4->4->1 MB, 5 MB goal, 8 P
gc 12 @0.074s 2%: 0.040+0.56+0.004 ms clock, 0.32+0.43/0.94/1.5+0.032 ms cpu, 4->4->1 MB, 5 MB goal, 8 P
gc 13 @0.079s 2%: 0.022+0.68+0.021 ms clock, 0.18+0.21/0.94/1.8+0.17 ms cpu, 4->4->1 MB, 5 MB goal, 8 P
gc 14 @0.082s 3%: 0.079+0.80+0.050 ms clock, 0.63+0.37/1.2/1.3+0.40 ms cpu, 4->4->2 MB, 5 MB goal, 8 P
gc 15 @0.084s 3%: 0.027+1.5+0.011 ms clock, 0.21+0.005/1.2/1.5+0.093 ms cpu, 4->6->3 MB, 5 MB goal, 8 P
gc 16 @0.087s 3%: 0.079+1.3+0.088 ms clock, 0.63+0.42/1.5/1.0+0.71 ms cpu, 6->8->3 MB, 7 MB goal, 8 P
gc 17 @0.090s 3%: 0.070+0.92+0.050 ms clock, 0.56+0.24/1.3/1.4+0.40 ms cpu, 5->6->2 MB, 7 MB goal, 8 P
gc 18 @0.093s 3%: 0.009+0.80+0.011 ms clock, 0.076+0.35/0.98/2.4+0.090 ms cpu, 4->4->1 MB, 5 MB goal, 8 P
# command-line-arguments
gc 1 @0.008s 1%: 0.032+0.83+0.010 ms clock, 0.25+0.081/1.1/1.0+0.083 ms cpu, 4->4->2 MB, 5 MB goal, 8 P
gc 2 @0.013s 2%: 0.007+1.3+0.011 ms clock, 0.059+0/1.5/2.9+0.090 ms cpu, 6->6->5 MB, 7 MB goal, 8 P
# command-line-arguments
gc 1 @0.001s 6%: 0.008+0.57+0.002 ms clock, 0.069+0.083/0.72/0.77+0.023 ms cpu, 4->5->5 MB, 5 MB goal, 8 P
gc 2 @0.007s 4%: 0.016+1.5+0.003 ms clock, 0.12+0/2.3/1.2+0.031 ms cpu, 9->10->7 MB, 10 MB goal, 8 P
gc 3 @0.014s 4%: 0.015+1.9+0.010 ms clock, 0.12+0.094/2.3/0.12+0.082 ms cpu, 13->14->8 MB, 15 MB goal, 8 P
gc 4 @0.025s 4%: 0.021+3.1+0.010 ms clock, 0.17+0/4.4/1.9+0.082 ms cpu, 16->18->13 MB, 17 MB goal, 8 P
gc 5 @0.073s 2%: 0.024+2.7+0.009 ms clock, 0.19+0.18/1.9/3.6+0.077 ms cpu, 25->26->15 MB, 27 MB goal, 8 P
gc 6 @0.113s 1%: 0.031+2.7+0.010 ms clock, 0.25+0/4.4/5.5+0.084 ms cpu, 28->30->21 MB, 30 MB goal, 8 P
gc 1 @1.672s 0%: 0.017+0.35+0.006 ms clock, 0.13+0/0.32/0.40+0.055 ms cpu, 4->4->4 MB, 5 MB goal, 8 P
gc 2 @1.714s 0%: 0.14+0.31+0.008 ms clock, 1.1+0.83/0.38/0+0.071 ms cpu, 25->29->29 MB, 30 MB goal, 8 P
gc 3 @1.715s 0%: 3.5+7.7+0.039 ms clock, 28+0.69/0.59/0+0.31 ms cpu, 81->97->93 MB, 82 MB goal, 8 P
gc 4 @1.755s 0%: 1.2+12+0.040 ms clock, 9.6+0.84/2.0/0+0.32 ms cpu, 149->178->149 MB, 186 MB goal, 8 P
gc 5 @1.819s 0%: 1.6+44+1.9 ms clock, 13+1.1/3.0/0+15 ms cpu, 302->330->173 MB, 303 MB goal, 8 P
gc 6 @1.911s 0%: 3.0+11+0.019 ms clock, 24+0.20/5.4/0+0.15 ms cpu, 286->290->97 MB, 347 MB goal, 8 P
gc 7 @1.956s 1%: 2.4+30+0.007 ms clock, 19+1.4/33/0+0.059 ms cpu, 207->227->142 MB, 208 MB goal, 8 P
gc 8 @2.033s 1%: 3.3+24+0.017 ms clock, 27+0.88/2.3/0+0.13 ms cpu, 279->303->170 MB, 284 MB goal, 8 P
gc 9 @2.114s 1%: 3.8+25+0.069 ms clock, 31+2.1/20/0+0.55 ms cpu, 323->355->286 MB, 340 MB goal, 8 P
gc 10 @2.199s 1%: 1.5+24+1.2 ms clock, 12+29/4.1/0+10 ms cpu, 480->556->555 MB, 572 MB goal, 8 P
gc 11 @2.308s 1%: 5.6+25+0.042 ms clock, 45+0.46/9.0/0+0.33 ms cpu, 926->990->989 MB, 1111 MB goal, 8 P
gc 12 @2.510s 2%: 1.9+33+0.025 ms clock, 15+3.7/30/0.24+0.20 ms cpu, 1766->1870->1870 MB, 1979 MB goal, 8 P
GC forced
gc 19 @120.673s 0%: 0.054+1.7+0.009 ms clock, 0.43+0/3.2/7.8+0.075 ms cpu, 2->2->1 MB, 4 MB goal, 8 P
GC forced
gc 13 @125.597s 0%: 0.069+1.4+0.011 ms clock, 0.55+0/2.6/6.8+0.089 ms cpu, 3354->3354->9 MB, 3740 MB goal, 8 P

@davecheney
Contributor

Sadly the output is messed up because you used go run; sorry, I should have mentioned that.

Can you please do this again using go build && ./binary?

@blinkinglight
Author

$ GODEBUG=gctrace=1 ./error 
gc 1 @1.843s 0%: 0.016+0.55+0.006 ms clock, 0.13+0/0.57/0.76+0.053 ms cpu, 4->4->4 MB, 5 MB goal, 8 P
gc 2 @1.878s 0%: 0.027+1.0+0.070 ms clock, 0.22+1.5/1.5/0+0.56 ms cpu, 8->8->8 MB, 9 MB goal, 8 P
gc 3 @1.894s 0%: 0.29+3.3+0.020 ms clock, 2.3+3.6/5.7/0+0.16 ms cpu, 19->19->14 MB, 20 MB goal, 8 P
gc 4 @1.909s 0%: 1.4+29+1.9 ms clock, 11+5.1/3.2/0+15 ms cpu, 1226->1230->1221 MB, 1227 MB goal, 8 P
gc 5 @2.733s 0%: 3.4+103+0.030 ms clock, 27+7.5/47/0+0.24 ms cpu, 2389->2513->1971 MB, 2443 MB goal, 8 P

@davecheney
Contributor

davecheney commented Oct 7, 2020

Thank you. It looks like servicing 1000 concurrent 4 MB responses uses memory in excess of 2 GB. If you leave the process alone for a few minutes, the scavenger should run.

When the Go scavenger finds free pages of memory, it calls the madvise(2) syscall with the argument MADV_FREE, which informs the operating system that, if needed, it may take the page without swapping its contents to disk. If the operating system does need the page, it can take it, and this will be reflected in the RSS value reported by tools like top. However, madvise(2) cannot make the operating system release the page; it can only suggest that, if needed, the operating system may take the page and later return it empty. Operating systems can and will ignore this suggestion unless they are under memory pressure from other processes.

Furthermore, Linux's MADV_FREE is known not to modify any RSS counters until pages are actually purged due to memory pressure. As an alternative, madvise(MADV_DONTNEED) does update RSS counters, but it is more expensive and makes the next page fault on those pages more expensive. We fall back to that if MADV_FREE is not available, or if GODEBUG=madvdontneed=1 is set. I would suggest trying that out to see if the RSS counters look more reasonable.

Lastly, please remember that RSS is not a measure of how much memory a program is using; it is the resident set size, the current amount of memory in core for the process. Many things influence the value reported as RSS, including the availability of swap and how much of the process may be swapped out.

The amount of memory that the runtime has released to the operating system is recorded in the HeapReleased field of https://godoc.org/runtime#MemStats.
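
For anyone comparing numbers, here is a minimal sketch (not part of the original report; the 5-second polling interval is arbitrary) that prints the relevant runtime.MemStats fields so they can be compared against the RSS shown by top:

package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	for {
		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		// HeapReleased is memory already returned to the OS;
		// HeapIdle minus HeapReleased is memory the runtime holds but is not using.
		fmt.Printf("HeapSys=%d MB HeapIdle=%d MB HeapReleased=%d MB\n",
			m.HeapSys>>20, m.HeapIdle>>20, m.HeapReleased>>20)
		time.Sleep(5 * time.Second)
	}
}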

@blinkinglight
Author

GODEBUG=madvdontneed=1 with debug.FreeOSMemory solved the problem; after a few seconds the process stays at 120 MB of RAM.
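
A minimal sketch of that combination (my reconstruction, not the reporter's actual code; the background goroutine and the 30-second interval are assumptions):

package main

import (
	"io/ioutil"
	"log"
	"net/http"
	"runtime/debug"
	"time"
)

func main() {
	// Periodically hand freed heap pages back to the OS. With
	// GODEBUG=madvdontneed=1 the release shows up in RSS immediately.
	go func() {
		for range time.Tick(30 * time.Second) {
			debug.FreeOSMemory()
		}
	}()

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		b, err := ioutil.ReadFile("file.ts")
		if err != nil {
			log.Printf("%v", err)
			w.WriteHeader(500)
			return
		}
		if _, err := w.Write(b); err != nil {
			log.Printf("%v", err)
		}
	})
	log.Fatal(http.ListenAndServe(":10001", nil))
}

Run with the environment variable set, e.g. GODEBUG=madvdontneed=1 ./server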

@davecheney
Contributor

Thank you. Would you close this issue if you are satisfied?

@blinkinglight
Author

Just one more question: can I set that GODEBUG variable in code, not from the environment?

@davecheney
Contributor

Can I set that GODEBUG variable in code, not from the environment?

No, it is not accessible from the runtime package. If you wish to request such a feature, please open a new issue.

@blinkinglight
Author

OK, thank you.

@networkimprov

That's #40870, which is about to be declined.

But you can write a program or shell script that sets the env var and then starts your app.
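
For example, a wrapper program in Go that does this (the ./myapp path is a placeholder for the real binary):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Start the real application with GODEBUG set, forwarding our
	// arguments, standard streams, and the existing environment.
	cmd := exec.Command("./myapp", os.Args[1:]...)
	cmd.Env = append(os.Environ(), "GODEBUG=madvdontneed=1")
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}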

@gopherbot

Change https://golang.org/cl/267100 mentions this issue: runtime: default to MADV_DONTNEED on Linux

gopherbot pushed a commit that referenced this issue Nov 2, 2020
In Go 1.12, we changed the runtime to use MADV_FREE when available on
Linux (falling back to MADV_DONTNEED) in CL 135395 to address issue
 #23687. While MADV_FREE is somewhat faster than MADV_DONTNEED, it
doesn't affect many of the statistics that MADV_DONTNEED does until
the memory is actually reclaimed under OS memory pressure. This
generally leads to poor user experience, like confusing stats in top
and other monitoring tools; and bad integration with management
systems that respond to memory usage.

We've seen numerous issues about this user experience, including
 #41818, #39295, #37585, #33376, and #30904, many questions on Go
mailing lists, and requests for mechanisms to change this behavior at
run-time, such as #40870. There are also issues that may be a result
of this, but root-causing it can be difficult, such as #41444 and
 #39174. And there's some evidence it may even be incompatible with
Android's process management in #37569.

This CL changes the default to prefer MADV_DONTNEED over MADV_FREE, to
favor user-friendliness and minimal surprise over performance. I think
it's become clear that Linux's implementation of MADV_FREE ultimately
doesn't meet our needs. We've also made many improvements to the
scavenger since Go 1.12. In particular, it is now far more prompt and
it is self-paced, so it will simply trickle memory back to the system
a little more slowly with this change. This can still be overridden by
setting GODEBUG=madvdontneed=0.

Fixes #42330 (meta-issue).

Fixes #41818, #39295, #37585, #33376, #30904 (many of which were
already closed as "working as intended").

Change-Id: Ib6aa7f2dc8419b32516cc5a5fc402faf576c92e4
Reviewed-on: https://go-review.googlesource.com/c/go/+/267100
Trust: Austin Clements <austin@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
@golang locked and limited conversation to collaborators on Nov 2, 2021