net/http: setting a timeout on http requests that use TLS can result in excessive requests to server #59017

Closed

mpoindexter opened this issue Mar 14, 2023 · 9 comments

Assignees: neild
Labels: FixPending (Issues that have a fix which has not yet been reviewed or submitted), help wanted, NeedsFix (The path to resolution is known, but the work has not been done), Performance
Milestone: Go1.23

Comments

mpoindexter commented Mar 14, 2023

What version of Go are you using (go version)?

go version go1.20.2 darwin/amd64

Does this issue reproduce with the latest release?

Yes

What operating system and processor architecture are you using (go env)?

GOARCH="amd64"
GOOS="darwin"

What did you do?

When running a large number of concurrent requests to Amazon S3 with a request timeout set (either on the http.Client instance or via a context set directly on the request), we noticed an unexpectedly large number of long-running requests. When we removed the timeout, the number of long-running requests dropped. The long-running requests were not directly caused by the timeout being hit: all requests completed in under the timeout.

We created a standalone program to reproduce the problem and added logging via httptrace. In the httptrace output we observed a large number of requests for which the error context canceled was reported to the TLSHandshakeDone callback in our trace. These cancellations did not show up as failed requests from the http client.
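Roughly, the trace hook looked like this (a simplified sketch, not the exact reproduction program; withHandshakeTrace is just an illustrative helper name):

import (
	"crypto/tls"
	"log"
	"net/http"
	"net/http/httptrace"
)

// withHandshakeTrace attaches a ClientTrace that logs TLS handshake failures.
func withHandshakeTrace(req *http.Request) *http.Request {
	trace := &httptrace.ClientTrace{
		TLSHandshakeDone: func(_ tls.ConnectionState, err error) {
			if err != nil {
				// With a request timeout set, this fires with
				// "context canceled" for dials that lose the race,
				// even though the request itself succeeds.
				log.Printf("TLS handshake aborted: %v", err)
			}
		},
	}
	return req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
}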

Digging into the http Transport code, it appears that when a connection is not immediately available in the connection pool, the transport starts a race between waiting for a connection to be returned to the pool and dialing a new connection. In our case, the "connection returned to the pool" side was generally winning the race. The behavior on the losing side of the race differed depending on whether the request had a timeout. Without a timeout, the losing dial continued through the TLS handshake and was then placed into the pool as a valid connection. With a timeout, the losing dial was aborted mid-TLS-handshake, because the request context was canceled when the request completed on the connection returned to the pool.

The net result of this behavior was that whenever a request legitimately required a new connection to be established, it was often queued up (probably at the server end) behind a large number of TLS handshakes that would be canceled in flight. This manifested as excessive time to complete the request and noticeably lower throughput.

What did you expect to see?

Client does not produce a large volume of aborted TLS handshakes to server.

What did you see instead?

Slowness caused by excessive aborted TLS handshakes to the server.

I created a gist that reproduces this at https://gist.github.com/mpoindexter/f6bc9dac16290343efba17129c49f9d5. If you uncomment the timeout on line 56 you can see the throughput of the test drop and stall periodically; if the timeout remains commented out, throughput stays steady.

To test the analysis above, I implemented a custom DialTLSContext function that did the TLS dial using a new context if a deadline was set on the original context. This resolved the problem.

import (
	"context"
	"crypto/tls"
	"net"
	"net/http"
	"time"
)

// ApplyFix installs a DialTLSContext that performs the TLS dial on a context
// detached from the request context's cancelation, so an in-flight handshake
// is not aborted when the request completes on a pooled connection.
func ApplyFix(transport *http.Transport) {
	dialer := &tls.Dialer{
		Config: transport.TLSClientConfig,
	}

	transport.DialTLSContext = func(ctx context.Context, network, addr string) (net.Conn, error) {
		deadline, hasDeadline := ctx.Deadline()
		var dialCtx context.Context
		if hasDeadline {
			timeout := time.Until(deadline)
			if timeout > 0 {
				// Dial on a fresh context with the same time budget, so
				// canceling the request does not cancel the handshake.
				var cancel context.CancelFunc
				dialCtx, cancel = context.WithTimeout(context.Background(), timeout)
				defer cancel()
			} else {
				dialCtx = ctx
			}
		} else {
			dialCtx = ctx
		}

		return dialer.DialContext(dialCtx, network, addr)
	}
}
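For reference, this is roughly how the workaround is wired into a client (a sketch; the URL and timeout value are placeholders, and the full reproduction is in the gist above):

transport := http.DefaultTransport.(*http.Transport).Clone()
ApplyFix(transport)

client := &http.Client{
	Transport: transport,
	Timeout:   10 * time.Second, // the per-request timeout that triggers the behavior
}

resp, err := client.Get("https://example.com/")
if err != nil {
	log.Fatal(err)
}
resp.Body.Close()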
seankhliao added the Performance and NeedsInvestigation (Someone must examine and confirm this is a valid issue and not a duplicate of an existing one) labels on Mar 14, 2023
seankhliao (Member) commented:

cc @neild

A comment from @mark4z was marked as spam.

mark4z commented Apr 4, 2023

Same issue here when using nginx as the server.

neild (Contributor) commented Apr 6, 2023

To avoid problems exactly like this, we shouldn't be using an individual request's context to dial a connection that may be reused for many requests.

I suspect that someone is relying on passing values down to the dial via the context, however, so we likely need to decouple the request context's values from its cancelation. (Conveniently, #40221 has been implemented now, so this is feasible.)

I'm not certain what the correct cancelation behavior is. Perhaps cancel a dial once there are no requests blocked on it? There might still be some access patterns where we repeatedly cancel an in-progress dial, but this would at least avoid the scenario where we cancel a dial call that a request was hoping to make use of.

neild self-assigned this on Apr 6, 2023
seankhliao added the help wanted and NeedsFix (The path to resolution is known, but the work has not been done) labels and removed the NeedsInvestigation label on Apr 6, 2023
A comment from @AlexanderYastrebov was marked as off-topic.

jarreds commented Jan 29, 2024

Using the ApplyFix workaround outlined by @mpoindexter, we saw a significant drop in the number of aborted TLS handshakes. The red line marks the deploy of that fix:

[image: graph of aborted TLS handshakes over time; red line marks the deploy]

Those graphed metrics are directly out of net/http/httptrace.

One thing that's not clear to me at the moment is whether those dialed connections are pooled as idle connections, because metrics on the connection pool are not readily accessible.
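One way to observe this indirectly is the GotConn hook in httptrace, which reports whether a given request was served from the idle pool (a sketch; variable names are illustrative):

trace := &httptrace.ClientTrace{
	GotConn: func(info httptrace.GotConnInfo) {
		// Reused/WasIdle report whether the connection came from the pool;
		// IdleTime is how long it sat idle before being handed out.
		log.Printf("gotConn reused=%v wasIdle=%v idleTime=%v",
			info.Reused, info.WasIdle, info.IdleTime)
	},
}
req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))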

AlexanderYastrebov (Contributor) commented Mar 22, 2024

Regarding neild's comment, "I suspect that someone is relying on passing values down to the dial via the context":

The transport itself uses it for client trace (but it could be propagated from transportRequest if necessary)

go/src/net/http/transport.go, lines 1629 to 1639 at 27f41bb:

func (t *Transport) dialConn(ctx context.Context, cm connectMethod) (pconn *persistConn, err error) {
	pconn = &persistConn{
		t:             t,
		cacheKey:      cm.key(),
		reqch:         make(chan requestAndChan, 1),
		writech:       make(chan writeRequest, 1),
		closech:       make(chan struct{}),
		writeErrCh:    make(chan error, 1),
		writeLoopDone: make(chan struct{}),
	}
	trace := httptrace.ContextClientTrace(ctx)

At first glance this also means the "ApplyFix" workaround would break client tracing, but it does not: the workaround only swaps the context passed to the TLS dial, after dialConn has already obtained the client trace from the request context.

neild (Contributor) commented Apr 1, 2024

Using context.WithoutCancel, it's fairly straightforward to change the transport dial to inherit the values from the request context, but not the cancelation. This seems like a decent balance between backwards compatibility (we still pass through values from the request context) and reasonable behavior (we don't cancel a dial just because a cached connection became available).
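For illustration only (this is not the actual CL), context.WithoutCancel, available since Go 1.21, has exactly these semantics: values flow through, cancelation and deadline do not.

package main

import (
	"context"
	"fmt"
	"time"
)

type traceKey struct{} // stand-in for e.g. the httptrace ClientTrace key

func main() {
	reqCtx, cancel := context.WithTimeout(
		context.WithValue(context.Background(), traceKey{}, "trace"),
		time.Second,
	)
	dialCtx := context.WithoutCancel(reqCtx)

	cancel() // the request finishes or is canceled

	fmt.Println(dialCtx.Err())             // <nil>: the dial context is not canceled
	fmt.Println(dialCtx.Value(traceKey{})) // "trace": values still propagate
}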

I have one question which I'm not sure on the answer to: How, if at all, should we bound the lifetime of dials? If a user creates a Transport, makes a request, and cancels the request, should the dial call continue indefinitely? Or should there be some way of cleaning up stale in-progress dials?

gopherbot commented:

Change https://go.dev/cl/576555 mentions this issue: net/http: don't cancel Dials when requests are canceled

dmitshur added the FixPending (Issues that have a fix which has not yet been reviewed or submitted) label on Apr 17, 2024
dmitshur added this to the Go1.23 milestone on Apr 17, 2024