net/http: don't block RoundTrip when the Transport hits MaxConcurrentStreams #27044

Closed
as opened this issue Aug 16, 2018 · 25 comments
Labels
FrozenDueToAge, NeedsFix (the path to resolution is known, but the work has not been done), release-blocker
Comments

@as
Contributor

as commented Aug 16, 2018

The CL that disables connection pooling for HTTP/2 creates a significant discontinuity in throughput when the server specifies a small maximum number of concurrent streams.

https://go-review.googlesource.com/c/net/+/53250

HTTP/2 support is automatically enabled in Go under conditions not always specified by the developer. For example, configuration files often alternate between http and https endpoints. When using an http endpoint, Go will use HTTP/1, whereas https endpoints use HTTP/2.

The HTTP/1 default transport will create as many connections as needed in the background. The HTTP2 default transport does not (although it used to).

As a result, HTTP/1 endpoints get artificially high throughput when compared to HTTP/2 endpoints, which block waiting for more streams to become available instead of creating a new connection. For example, AWS ALBs limit the maximum number of streams to 128.

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html

The HTTP/2 client blocks once it hits 128 streams and waits for more to become available; the HTTP/1 client does not. The HTTP/1 client is orders of magnitude faster as a result. This effect is annoying and creates a leaky abstraction in the net/http package.
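
A minimal sketch of the effect described here (not from the original report; the endpoint URL and concurrency value are placeholders). With one shared client against an HTTP/2 server advertising SETTINGS_MAX_CONCURRENT_STREAMS=128, at most 128 of these requests are in flight at a time and the rest block inside RoundTrip; against an HTTP/1 server the default transport simply dials more connections:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
	"time"
)

func main() {
	const concurrency = 512            // more than the 128-stream limit described above
	endpoint := "https://example.com/" // placeholder for an HTTPS (HTTP/2) endpoint

	client := &http.Client{Timeout: 30 * time.Second}

	start := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < concurrency; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := client.Get(endpoint)
			if err != nil {
				return
			}
			io.Copy(io.Discard, resp.Body) // drain so the stream/connection can be reused
			resp.Body.Close()
		}()
	}
	wg.Wait()
	fmt.Printf("%d requests completed in %v\n", concurrency, time.Since(start))
}
```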

The consequence of this is that importers of the net/http package now have to:

1.) Distinguish between HTTP and HTTPS endpoints
2.) Write a custom connection pool for the transport when HTTP2 is enabled

I think the previous pooling functionality should be restored.

@dgryski
Contributor

dgryski commented Aug 16, 2018

/cc @bradfitz @tombergan

@meirf
Contributor

meirf commented Aug 17, 2018

If we revert to the old behavior, one lowish-priority thing we could do is have MaxConnsPerHost apply to the total number of h2 connections, rather than only to h2 dials as it does now. That way there would be an opt-in limit.

@theckman
Contributor

There looks to be a bit of overlap with #17776. Just wanted to make sure the two issues get linked together.

@twmb
Contributor

twmb commented Aug 21, 2018

FWIW, this is one of the reasons I disable http2 with TLSNextProto in a project.
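
For reference, a minimal sketch of that workaround, based on the documented behavior of net/http: setting Transport.TLSNextProto to a non-nil, empty map disables the automatic HTTP/2 upgrade. The helper name and endpoint are illustrative:

```go
package main

import (
	"crypto/tls"
	"net/http"
)

// newHTTP1OnlyClient returns a client with HTTP/2 disabled: a non-nil but
// empty TLSNextProto map tells net/http not to install its HTTP/2 upgrade,
// so even https:// requests stay on HTTP/1.1 and get HTTP/1-style pooling.
func newHTTP1OnlyClient() *http.Client {
	tr := &http.Transport{
		TLSNextProto: make(map[string]func(authority string, c *tls.Conn) http.RoundTripper),
	}
	return &http.Client{Transport: tr}
}

func main() {
	client := newHTTP1OnlyClient()
	resp, err := client.Get("https://example.com/") // placeholder endpoint
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}
```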

@agnivade added the NeedsDecision label (feedback is required from experts, contributors, and/or the community before a change can be made) Aug 22, 2018
@agnivade added this to the Go1.12 milestone Aug 22, 2018
@itsjamie
Contributor

If a user has it configured to allow an infinite number of connections per host, then I believe the H2 implementation should create a new connection when it has reached the server-advertised maximum.

However, there is some overhead in creating a new connection; perhaps a deadline on the RoundTrip block could try to keep requests within the server-configured maxStreams before offloading a request to a new connection?
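
This is not the transport-level deadline proposed above, but as a rough illustration of the idea with existing APIs, a caller can already bound how long a request may wait by putting a deadline on the request context; the function name, URL, and timeout below are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// getWithDeadline bounds how long a single request may wait, including any
// time RoundTrip spends blocked waiting for a free HTTP/2 stream.
func getWithDeadline(client *http.Client, url string, d time.Duration) (int, error) {
	ctx, cancel := context.WithTimeout(context.Background(), d)
	defer cancel()

	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return 0, err
	}
	req = req.WithContext(ctx)

	resp, err := client.Do(req)
	if err != nil {
		return 0, err // includes context.DeadlineExceeded if we never got a stream
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body)
	return resp.StatusCode, nil
}

func main() {
	status, err := getWithDeadline(http.DefaultClient, "https://example.com/", 2*time.Second)
	fmt.Println(status, err)
}
```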

@smira

smira commented Aug 24, 2018

We're facing a problem with this change as well: we're sending HTTP requests and responses with huge bodies, and performance isn't optimal because all of them are serialized over a single TCP connection. We expect to have around 10 clients against a single server endpoint, so with the current approach that means about 10 TCP connections overall (1 connection per client-server pair). Limiting MaxConcurrentStreams on the server side leads to client requests being blocked while waiting for the connection.

Right now, in our test environment, the client is a load test with N goroutines sending HTTP requests concurrently. All of these requests are serialized and queued over a single connection, which leads to an actual bandwidth drop once concurrency is increased past a certain point.

@smira

smira commented Aug 24, 2018

To add some numbers: with the unpatched HTTP/2 library we get around 1.5 Gbit/s (up+down); with the same settings and parts of the patch removed to allow more connections to be opened, we get 7 Gbit/s (up+down) over several connections.

The number of concurrent streams is limited on the server side to 30. This might not be the optimal number and we'll keep testing, but definitely +1 for reverting this change or making it configurable.

@liggitt
Contributor

liggitt commented Aug 30, 2018

It seems like the client is assuming that all future connections to a given host will hit the same backend that told the client how many concurrent streams it could send over the first connection... that assumption doesn't hold for load-balanced services where the same host can be serviced by multiple backends.

A limit for max connections per host makes sense, but MaxConcurrentStreams isn't a great stand-in for it.

@twmb
Contributor

twmb commented Aug 30, 2018

Nothing prevents a load balancer from replying with a high max concurrent streams. The problem I see is h2 implementations using the recommended minimum SETTINGS_MAX_CONCURRENT_STREAMS for no reason. I would guess this is because implementors simply choose the only value mentioned. I've also not yet seen an implementation that re-negotiates this value upwards.

Go's current h2 bundle uses a value of 1000 for client connections by default, but only 250 for server connections. Further, the server stream comment refers to the Google Front End using a default value of 100—and I would think that Google servers can handle more than 100 concurrent requests, regardless of whether they are from one host, or, say, a proxy server.
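
For the server side of this trade-off, here is a sketch (values and cert paths illustrative) of how a Go server can advertise a higher SETTINGS_MAX_CONCURRENT_STREAMS by configuring x/net/http2 explicitly instead of relying on the bundled default:

```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	srv := &http.Server{Addr: ":8443", Handler: http.DefaultServeMux}

	// Advertise a higher SETTINGS_MAX_CONCURRENT_STREAMS than the bundled
	// default; 1000 here is purely illustrative.
	if err := http2.ConfigureServer(srv, &http2.Server{MaxConcurrentStreams: 1000}); err != nil {
		log.Fatal(err)
	}

	// cert.pem and key.pem are placeholder paths.
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}
```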

@Gobd

Gobd commented Sep 3, 2018

We're seeing significantly reduced throughput after this change using http://google.golang.org/api/bigquery/v2 to stream data into BigQuery.

@rsc
Contributor

rsc commented Sep 26, 2018

Can someone say what the HTTP/2 spec says about this setting? (Brad says it doesn't say.)
Failing that, can someone say what the big browsers do with this setting?

@rs
Contributor

rs commented Sep 26, 2018

I'm not sure what browsers do is the most relevant for Go. I would assume Go is more often used to build proxies or backend API clients than user-facing HTTP clients. Having the HTTP library choose to block a request, with no good way for the caller to control or avoid it, is a big no-no for any low-latency project IMHO.

@twmb
Contributor

twmb commented Sep 26, 2018

Section 9.1:

Clients SHOULD NOT open more than one HTTP/2 connection to a given host and port pair, where the host is derived from a URI, a selected alternative service [ALT-SVC], or a configured proxy.
...
A client MAY open multiple connections to the same IP address and TCP port using different Server Name Indication [TLS-EXT] values or to provide different TLS client certificates but SHOULD avoid creating multiple connections with the same configuration.

Echoing @rs, it seems that a lot of the HTTP2 considerations are for browsers, and this max concurrent streams setting unnecessarily limits proxies, especially so when the proxy is talking to a backend that unnecessarily limits the max stream count (e.g. BigQuery replies with a limit of 100).

@smira

smira commented Sep 26, 2018

@rsc I think the Go HTTP/2 client library should at least be configurable to choose one of the two behaviors: block until streams are available, or open new connections. It seems that the behavior prior to the change (no blocking) might be the better default.

@rs
Contributor

rs commented Sep 26, 2018

We also need to be able to monitor in-flight requests at the connection-pool level so we can anticipate the need to open new connections. Here is an old proposal on that: HTTP/2 Custom Connection Pool.
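
The pool itself isn't exposed today, but as a rough stand-in for that visibility, net/http/httptrace can report per request whether a connection was reused and how long it took to obtain one. A sketch (function name and URL illustrative):

```go
package main

import (
	"io"
	"log"
	"net/http"
	"net/http/httptrace"
	"time"
)

// tracedGet logs, for one request, whether the transport reused an existing
// connection and how long it took to hand one out.
func tracedGet(client *http.Client, url string) error {
	var waitStart time.Time
	trace := &httptrace.ClientTrace{
		GetConn: func(hostPort string) { waitStart = time.Now() },
		GotConn: func(info httptrace.GotConnInfo) {
			log.Printf("conn to %s: reused=%v idle=%v wait=%v",
				info.Conn.RemoteAddr(), info.Reused, info.WasIdle, time.Since(waitStart))
		},
	}

	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = io.Copy(io.Discard, resp.Body)
	return err
}

func main() {
	if err := tracedGet(http.DefaultClient, "https://example.com/"); err != nil { // placeholder URL
		log.Fatal(err)
	}
}
```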

@theckman
Contributor

theckman commented Sep 26, 2018

I'd like to second the sentiment that using browsers as our only guidance here doesn't feel like the best path, simply because of the different use cases between a user browsing a blog and a service that's built to multiplex lots of requests to backend systems. I think the browser functionality should be considered as part of the decision, but we should also take a look at the HTTP/2 implementations in other languages too.

With gRPC using HTTP/2, and its usage in the industry growing, polyglot interoperability is becoming more prevalent. I think we should make sure Go is going to play nicely in those ecosystems, as well as be a viable option over other languages. It'd be unfortunate for it to have some sort of red mark like this that would prevent people from adopting Go.

@tonyghita

tonyghita commented Oct 18, 2018

Hey all,

I had an application communicating with an AWS ALB that got bitten by this issue this week.
The result was many goroutines stuck contending on the http2ClientConn.awaitOpenSlotForRequest lock.

It seems like this is something that should be configured by http.Transport.MaxConnsPerHost:

// MaxConnsPerHost optionally limits the total number of
// connections per host, including connections in the dialing,
// active, and idle states. On limit violation, dials will block.
//
// Zero means no limit.
//
// For HTTP/2, this currently only controls the number of new
// connections being created at a time, instead of the total
// number. In practice, hosts using HTTP/2 only have about one
// idle connection, though.
MaxConnsPerHost int

Is that the case? Or have I misunderstood the configuration option?

Given my current understanding, it feels like the immediate path forward is to disable HTTP/2. Is there a better alternative?
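
For reference, this is how that knob is set. Per the doc comment quoted above, at the time of this thread it only limited concurrent HTTP/2 dials rather than the total number of HTTP/2 connections, so it would not have bounded connections here. The limit value and helper name are illustrative:

```go
package main

import "net/http"

// newLimitedClient sets the knob quoted above; the limit of 8 is illustrative
// and zero means no limit. (A bare Transport like this drops DefaultTransport's
// proxy and dialer settings, which is fine for a sketch.)
func newLimitedClient() *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			MaxConnsPerHost: 8,
		},
	}
}

func main() {
	client := newLimitedClient()
	resp, err := client.Get("https://example.com/") // placeholder endpoint
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}
```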

@bradfitz self-assigned this Nov 14, 2018
@bradfitz
Contributor

I'm leaning towards reverting this behavior for Go 1.12 and making it more opt-in somehow.

@as
Contributor Author

as commented Nov 21, 2018

@bradfitz Thank you for your attention to this issue. I see that two things are being addressed in the last comment.

A. The decision to revert the behavior
B. The opt-in feature (enabling the behavior)

Do we currently know if both A and B are targets (for the Go 1.12 milestone or otherwise)? I want to ensure I deliver the most-accurate information to my team regarding this issue.

@rsc
Contributor

rsc commented Nov 28, 2018

Sounds like the decision is to revert this behavior for Go 1.12.
Maybe new API in Go 1.13.

@rsc added the NeedsFix (the path to resolution is known, but the work has not been done) and release-blocker labels Nov 28, 2018
@gopherbot removed the NeedsDecision (feedback is required from experts, contributors, and/or the community before a change can be made) label Nov 28, 2018
@gopherbot

Change https://golang.org/cl/151857 mentions this issue: http2: revert Transport's strict interpretation of MAX_CONCURRENT_STREAMS

gopherbot pushed a commit to golang/net that referenced this issue Dec 1, 2018
http2: revert Transport's strict interpretation of MAX_CONCURRENT_STREAMS

And add the http2.Transport.StrictMaxConcurrentStreams bool knob to
opt in to the behavior being reverted.

In CL 53250 for golang/go#13774 (for Go 1.10) we changed the HTTP/2
Transport's policy such that a server's advertisement of a
MAX_CONCURRENT_STREAMS value meant that it was a maximum for the
entire process, instead of just a single connection.

We thought that was a reasonable interpretation of the spec and
provided nice safety against slamming a server from a bunch of
goroutines doing concurrent requests, but it's been largely
unpopular (see golang/go#27044). It's also different behavior from
HTTP/1 and because you're usually not sure which protocol version
you're going to get, you need to limit your outbound HTTP requests
anyway in case you're hitting an HTTP/1 server.

And nowadays we have the Go 1.11 Transport.MaxConnsPerHost knob too
(CL 71272 for golang/go#13957). It doesn't yet work for HTTP/2, but it
will in either Go 1.12 or Go 1.13 (golang/go#27753)

After this is bundled into net/http, the default HTTP client will
have this knob set false, restoring the old Go 1.9 behavior where new
TCP connections are created as necessary. Users wanting the strict
behavior can import golang.org/x/net/http2 themselves and make a
Transport with StrictMaxConcurrentStreams set to true. Or they can set
Transport.MaxConnsPerHost, once that works for HTTP/2.

Updates golang/go#27044 (fixes after bundle into std)

Change-Id: I4efdad7698feaf674ee8e01032d2dfa5c2f8a3a8
Reviewed-on: https://go-review.googlesource.com/c/151857
Reviewed-by: Andrew Bonventre <andybons@golang.org>
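
A sketch of the opt-in described in this commit message: import golang.org/x/net/http2 directly and construct a Transport with StrictMaxConcurrentStreams set. The helper name and endpoint are illustrative, and the x/net/http2 Transport handles only https:// URLs by default:

```go
package main

import (
	"net/http"

	"golang.org/x/net/http2"
)

// newStrictH2Client opts back in to the strict behavior after the revert:
// requests beyond the server's advertised MAX_CONCURRENT_STREAMS block in
// RoundTrip instead of opening additional connections.
func newStrictH2Client() *http.Client {
	return &http.Client{
		Transport: &http2.Transport{
			StrictMaxConcurrentStreams: true,
		},
	}
}

func main() {
	client := newStrictH2Client()
	resp, err := client.Get("https://example.com/") // placeholder; http2.Transport requires https
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}
```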
@gopherbot

Change https://golang.org/cl/152080 mentions this issue: net/http: update bundled x/net/http2

@DmitriyMV

DmitriyMV commented Dec 3, 2018

@bradfitz is it really closed, or did gopherbot close it incorrectly? (I'm assuming it does so by detecting the words fixed/fixes in the line that contains the issue number.)

@as
Contributor Author

as commented Dec 3, 2018

@DmitriyMV The link it provided did not take me directly to the CL.

Here it is: https://go-review.googlesource.com/c/net/+/151857/

To my knowledge, in the Go project issues are typically closed by the authors/contributors after the code is in place, rather than by the original issue reporter after verifying the fix.

@bradfitz
Contributor

bradfitz commented Dec 3, 2018

@DmitriyMV, this is correctly closed.

froodian pushed a commit to Appboy/net that referenced this issue Jan 7, 2019
froodian added a commit to Appboy/net that referenced this issue Jan 9, 2019
@golang locked and limited conversation to collaborators Dec 3, 2019