x/net/http2: receive buffer memory usage increases by 400x with HTTP2 #15930
Here is a minimal program to reproduce the issue. It downloads a 719 MB file from Google Cloud Storage with a loop that reads 1 MB every 5 ms (i.e. throughput should be up to 200 MB/s, likely bottlenecked on the network connection). This approximates what gcsfuse is doing, since fuse reads 128 KiB with high latency between calls. The following runs are on a GCE VM in asia-east1-a running CentOS 7. I use the
With HTTP/2:
Without HTTP/2:
In other words, the HTTP/2 run used 71 times as much memory, and generated a great deal of garbage while doing so. The difference in throughput was well within the run-to-run variation I see on this VM. In contrast, I've never seen a run without HTTP/2 that used more than 20 MB of memory.
In other words, I'm now quite confident that this is a regression in the http package's management of memory for buffering. 900 MiB is enough to buffer more than five seconds of data at that receive rate, which is longer than the entire program took to run.
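For reference, here is a minimal sketch of the kind of read loop described above (the URL, object size, and memory reporting are placeholders, not the author's exact program): it pulls 1 MB at a time with a 5 ms pause between reads, then prints how much heap the runtime ended up holding.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"runtime"
	"time"
)

func main() {
	// Placeholder URL; the original program fetched a ~719 MB object
	// from Google Cloud Storage.
	resp, err := http.Get("https://example.com/large-object")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	buf := make([]byte, 1<<20) // read 1 MB per iteration
	for {
		_, err := io.ReadFull(resp.Body, buf)
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		time.Sleep(5 * time.Millisecond) // simulate a slow consumer (like fuse reads)
	}

	// Report how much heap the runtime is holding after the download.
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	fmt.Printf("HeapSys = %d MiB\n", ms.HeapSys>>20)
}
```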
Due to high memory usage. See golang/go#15930.
Due to golang/go#15930. Fixes #173.
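The workaround referenced in those commits was to stop using HTTP/2 for these requests. A sketch of how a client can opt out, relying on the documented net/http behavior that a non-nil, empty TLSNextProto map disables the automatic HTTP/2 upgrade (running the binary with GODEBUG=http2client=0 is the other common switch); the URL is a placeholder:

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

// newHTTP1Client returns a client that will not negotiate HTTP/2: per the
// net/http documentation, setting Transport.TLSNextProto to a non-nil,
// empty map disables the automatic HTTP/2 upgrade.
func newHTTP1Client() *http.Client {
	tr := &http.Transport{
		TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
	}
	return &http.Client{Transport: tr}
}

func main() {
	client := newHTTP1Client()
	resp, err := client.Get("https://example.com/") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println(resp.Status)
}
```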
I think the issue is that the flow control doesn't take into account the bytes sitting unread in cs.bufPipe. This change brings the Max RSS of @jacobsa's example program down to ~40 MB.

    --- a/http2/transport.go
    +++ b/http2/transport.go
    @@ -1458,7 +1458,7 @@ func (b transportResponseBody) Read(p []byte) (n int, err error) {
     		connAdd = transportDefaultConnFlow - v
     		cc.inflow.add(connAdd)
     	}
    -	if err == nil { // No need to refresh if the stream is over or failed.
    +	if err == nil && cs.bufPipe.b.Len() < transportDefaultStreamFlow {
     		if v := cs.inflow.available(); v < transportDefaultStreamFlow-transportDefaultStreamMinRefresh {
     			streamAdd = transportDefaultStreamFlow - v
     			cs.inflow.add(streamAdd)

I'm not sure if this is the correct fix. We'll need to write a good test to be sure.
Note this comment:

    // transportDefaultStreamFlow is how many stream-level flow
    // control tokens we announce to the peer, and how many bytes
    // we buffer per stream.
    transportDefaultStreamFlow = 4 << 20

It's not actually true with the current behavior. A change like the one above is required to make the comment accurate. A separate discussion is the size of that buffer; it should probably be smaller.
CL https://golang.org/cl/23812 mentions this issue.
@adg Thanks!
@adg this fix introduces a race between the Len() call on the buffer and the Write() call from another goroutine.
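The race being described is an unlocked read of the pipe's buffer length from the caller's goroutine while the connection read loop is writing into the same buffer. A minimal sketch of the problem and of the usual fix, using a hypothetical pipe type rather than the real http2 internals (the fix being a Len method that takes the same mutex the writer holds):

```go
package main

import (
	"bytes"
	"sync"
)

// pipe is a hypothetical stand-in for a buffer that one goroutine writes to
// while another reads from it.
type pipe struct {
	mu sync.Mutex
	b  bytes.Buffer // written to by the connection read loop
}

// Write is called from the read-loop goroutine as data arrives.
func (p *pipe) Write(d []byte) (int, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	return p.b.Write(d)
}

// buffered reaches into the underlying buffer without the lock; this is the
// racy pattern: the length may be read while Write is mutating the buffer.
func buffered(p *pipe) int { return p.b.Len() }

// Len takes the same lock as Write before looking at the length; this is the
// usual fix for the race.
func (p *pipe) Len() int {
	p.mu.Lock()
	defer p.mu.Unlock()
	return p.b.Len()
}

func main() {
	p := &pipe{}
	go p.Write([]byte("data from the read loop")) // concurrent writer
	_ = buffered(p)                               // unsynchronized: racy if it overlaps the Write
	_ = p.Len()                                   // synchronized: safe
}
```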
Fixes golang/go#15930

Change-Id: Ib5d2f57361d52364edb29df25ec9a498c3088781
Reviewed-on: https://go-review.googlesource.com/23812
Reviewed-by: Aaron Jacobs <jacobsa@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
In GoogleCloudPlatform/gcsfuse#173 I discovered that building with Go 1.6.2 increases memory usage by a factor of 400 or so over Go 1.6, apparently due to HTTP/2 being used by default (406752b). This is on linux/amd64.

I think this is on behalf of the receive buffer allocated in http2clientConnReadLoop.handleResponse. Elsewhere in that file I see an attempt to use fixed-size buffers for outgoing requests (correct?), but this buffer appears to grow without bound, regardless of whether the caller consumes the response body or not. On the other hand, there seems to be connection-wide flow control updated in http2clientConnReadLoop.processData, so I'm not sure that this is actually a problem.

My question: is it a known issue that memory usage is much, much higher when making requests with large responses over HTTP/2? Is this RAM "well spent" in the sense that it's buying me more throughput?
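For anyone reproducing the comparison, here is a small sketch (placeholder URL) of how to confirm which protocol a given response actually used: http.Response records the negotiated version, and the client side of HTTP/2 can be switched off for an A/B run with GODEBUG=http2client=0.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("https://example.com/") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// "HTTP/2.0" when the transport negotiated HTTP/2, "HTTP/1.1" otherwise.
	fmt.Println(resp.Proto, resp.ProtoMajor, resp.ProtoMinor)
}
```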