x/net/http2: server should use weights and dependencies when scheduling frame writes #16168

Closed
tombergan opened this issue Jun 23, 2016 · 13 comments
Labels: FrozenDueToAge, NeedsFix (the path to resolution is known, but the work has not been done)
Milestone: Go1.8

@tombergan (Contributor):

As of version 51, Chrome has started using HTTP/2 dependencies to express an ordering for its requests. I'm not sure about other browsers, but I expect they will start using dependencies soon if they aren't already. I have direct evidence that this improves pageload performance. Unfortunately my example is not easy to make public, but the gist is:

  1. Chrome downloads HTML for a page and makes a bunch of H2 requests for subresources (CSS, JS, etc).
  2. Chrome needs to evaluate these in a specific order, so it forms a linear sequence of dependencies that are communicated directly to the server. The server is expected to deliver subresources in this order.
  3. Eventually, Chrome runs some JS code that imports two other CSS files. These CSS files are needed immediately to start rendering the page -- they are more important than any requests currently in flight. Chrome expresses this importance using stream dependencies.
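
For concreteness, here is a rough sketch of the PRIORITY mechanics involved, using x/net/http2's Framer. The stream IDs and weights are invented for illustration; this is not what Chrome actually sends on the wire:

```go
// Sketch: a client expressing a linear dependency chain with PRIORITY
// frames. Stream IDs and weights are invented for illustration.
package main

import (
	"net"

	"golang.org/x/net/http2"
)

func expressChain(conn net.Conn) error {
	fr := http2.NewFramer(conn, conn)
	// Build the chain 1 <- 3 <- 5: stream 3 depends on stream 1, and
	// stream 5 depends on stream 3. A server that honors dependencies
	// should deliver stream 1's data first, then 3's, then 5's.
	if err := fr.WritePriority(3, http2.PriorityParam{
		StreamDep: 1,
		Weight:    255, // zero-indexed per the spec; 255 means an effective weight of 256
	}); err != nil {
		return err
	}
	return fr.WritePriority(5, http2.PriorityParam{
		StreamDep: 3,
		Weight:    255,
	})
}
```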

Go's HTTP/2 server currently does not respect stream dependencies. In fact, frame writes are scheduled in a mostly random order, even though the priority tree is being maintained:
https://github.com/golang/net/blob/master/http2/server.go#L457
https://github.com/golang/net/blob/master/http2/writesched.go#L133
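
(Concretely, the "mostly random" order falls out of pulling the next frame by ranging over a map of per-stream queues, and Go deliberately randomizes map iteration order. A simplified sketch of that behavior, not the actual writesched.go code:)

```go
// Sketch of why frame scheduling is "mostly random": picking the next
// frame by ranging over a map of per-stream queues. Go randomizes map
// iteration order, so the stream chosen varies from run to run.
// Illustrative only; not the actual writesched.go code.
package sketch

type frame struct{} // stand-in for a queued frame write

func takeNext(queues map[uint32][]frame) (frame, bool) {
	for id, q := range queues { // iteration order is randomized
		if len(q) > 0 {
			f := q[0]
			queues[id] = q[1:]
			return f, true
		}
	}
	return frame{}, false
}
```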

This causes problems in both step 2 (where resources are delivered in a random rather than sequential order) and step 3 (where the immediacy of the new requests is not respected).

At first glance, fixing this seems easy. The difficulty comes from the following scenario: Suppose the server needs to schedule two streams, A and B, where A is the stream parent of B. Suppose the server has data for B right now. Should it eagerly write the available data for B, or should it wait for data on A? This is complicated by the fact that the kernel's TCP send buffer is usually larger than the TCP congestion window, meaning the server can buffer multiple RTTs' worth of data for B, then slowly flush that buffer even after data for A has become available. This problem has been seen in practice. There's a long (but interesting) discussion here:
https://insouciant.org/tech/prioritization-only-works-when-theres-pending-data-to-prioritize/
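
One mitigation discussed in that post is to cap how much unsent data the kernel will buffer, e.g. TCP_NOTSENT_LOWAT on Linux, so that most queued bytes stay in userspace where they can still be reprioritized. A hedged sketch using today's net and golang.org/x/sys/unix APIs (Linux-only; whether this is the right knob for the Go server is exactly the open question):

```go
// Sketch: limit the kernel's unsent-data buffer so prioritization
// decisions made in userspace aren't undone by a deep TCP send buffer.
// Linux-only; assumes golang.org/x/sys/unix.
package main

import (
	"net"

	"golang.org/x/sys/unix"
)

func limitUnsentBuffer(c *net.TCPConn, lowat int) error {
	raw, err := c.SyscallConn()
	if err != nil {
		return err
	}
	var sockErr error
	err = raw.Control(func(fd uintptr) {
		// TCP_NOTSENT_LOWAT: don't report the socket writable until
		// unsent data drops below lowat bytes, keeping most queued
		// data in userspace where it can still be reprioritized.
		sockErr = unix.SetsockoptInt(int(fd), unix.IPPROTO_TCP, unix.TCP_NOTSENT_LOWAT, lowat)
	})
	if err != nil {
		return err
	}
	return sockErr
}
```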

I'm going to do some reading about what other servers do in this scenario, and experiment with a prototype implementation, then report back. We may want to do a simple implementation first that ignores the above problem, then come up with something more complex later. I need this feature for some experiments that I'm doing, so I'll end up implementing a fix of some sort unless someone else has code ready-to-go. Feel free to assign this bug to me.

Note: A related problem is that Go's H2 client currently has no API for expressing stream dependencies to the server; however, this bug is entirely about server-side scheduling.

/cc @bradfitz

bradfitz added this to the Go1.8 milestone (Jun 23, 2016)
@bradfitz (Contributor):

Did you like my TODO in the linked writesched.go file? :)

// TODO: find the best queue

@tombergan (Contributor, Author):

I did see that :)

Did you intend for those to actually be queues? AFAICT, the only way for a per-stream queue to have more than one queued frame is if you write a response headers frame that doesn't have any headers. This seems pretty rare. In all other cases, ResponseWriter.Write waits for the current frame to be serialized onto the wire before writing the next frame.

@bradfitz (Contributor):

When an http.Handler exits without having called Flush, aren't there a HEADERS frame and zero or more DATA frames in the queue?

I admit I wrote the write scheduler before I used it, but I thought the queues were used.

@tombergan (Contributor, Author):

IIUC, frames don't get added to the queue until writeFrameFromHandler is called. When this happens for a DATA frame, the handler always waits:
https://github.com/golang/net/blob/master/http2/server.go#L773

When this happens for a HEADERS frame, the handler waits if the headers are not empty:
https://github.com/golang/net/blob/master/http2/server.go#L1718

There's no wait for 100-continue frames, but those are rare as well:
https://github.com/golang/net/blob/master/http2/server.go#L1741
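
(So the handoff between handler and serve loop is effectively synchronous for anything that matters. A simplified sketch of its shape; the names are illustrative, not the package's actual unexported identifiers:)

```go
// Simplified sketch of the handler-to-serve-loop handoff described
// above. Names are illustrative, not the actual unexported ones.
package sketch

type frameWriteReq struct {
	frame interface{} // the frame to be written
	done  chan error  // receives once the frame is on the wire
}

// Called from a handler goroutine. For DATA frames (and non-empty
// HEADERS frames) the handler blocks on done, which is why a
// per-stream queue almost never holds more than one frame.
func writeFrameFromHandler(serveLoop chan<- frameWriteReq, req frameWriteReq) error {
	serveLoop <- req  // hand the frame to the serve loop
	return <-req.done // wait until it has been serialized
}
```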

@bradfitz (Contributor):

Is it a problem that it's a queue? Seems like a distraction from the main issue. The queues are still being used, even if not much (currently?).

@tombergan (Contributor, Author):

It's not a problem. I was just wondering whether you had a larger plan for buffering data frames that was never implemented.

@bradfitz (Contributor):

Yeah, I think there's more to be done there. The http.Handler goroutine could track its flow control state more and return from ResponseWriter.Write more quickly. It's pretty conservative at the moment. Very little work has gone into performance. I think I'll be looking at that during Go 1.8.

@tombergan (Contributor, Author) commented Jun 30, 2016:

Possibly relevant for the following reason: I have a prototype running where the scheduler respects priorities. However, there's a race. Suppose we have a priority tree A -> B -> C. A should be served first. But A's frames won't be scheduled until A makes a Write call. The handlers all run concurrently, so it's possible that B or C will happen to make a Write call first, in which case their frames get written first because there's nothing (yet) queued for A. The scheduler doesn't know if it should wait for A (how long might it have to wait?), so it eagerly writes what it has. I have cases where this is a significant performance degradation relative to optimal delivery (10% worse).

One way to make the race less likely is to buffer multiple frames per stream. You still get the race on the first Write call, but at least after that call, A has a chance to generate more frames while its first frame is being written. (I haven't actually evaluated this idea.)

Simon Pelchat and I are also kicking around the following idea: suppose we knew the connection's bandwidth. (We can estimate this from RTT and CWND from the kernel's tcp_info.) We could use this to pace the scheduler -- if the frame size is X, then we write one frame every X/bw seconds. As long as A can make Write calls faster than X/bw, it should always win the race. If it can't, we don't want to wait because we'd be leaving bandwidth on the floor. (Lots of potential problems with this idea, obviously.)
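
A hedged sketch of that bandwidth estimate on Linux via TCP_INFO (field names per golang.org/x/sys/unix; whether the estimate is stable enough to pace a scheduler is untested here):

```go
// Sketch: estimate bandwidth from the kernel's tcp_info and derive a
// pacing interval for frames of frameSize bytes. Linux-only; assumes
// golang.org/x/sys/unix. Untested as a scheduler pacing source.
package sketch

import (
	"time"

	"golang.org/x/sys/unix"
)

func pacingInterval(fd int, frameSize int) (time.Duration, error) {
	ti, err := unix.GetsockoptTCPInfo(fd, unix.IPPROTO_TCP, unix.TCP_INFO)
	if err != nil {
		return 0, err
	}
	rtt := time.Duration(ti.Rtt) * time.Microsecond // smoothed RTT
	if rtt <= 0 || ti.Snd_cwnd == 0 {
		return 0, nil // no estimate yet; don't pace
	}
	// Rough bandwidth: one congestion window (cwnd * mss bytes) per RTT.
	bw := float64(ti.Snd_cwnd) * float64(ti.Snd_mss) / rtt.Seconds()
	// Write one frame every frameSize/bw seconds, per the idea above.
	return time.Duration(float64(frameSize) / bw * float64(time.Second)), nil
}
```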

@bradfitz (Contributor):

> The scheduler doesn't know if it should wait for A (how long might it have to wait?), so it eagerly writes what it has. I have cases where this is a significant performance degradation relative to optimal delivery (10% worse).

I think that's fine. I don't think the server should be sending nothing while waiting for something better when it already has something to send.

@tombergan (Contributor, Author):

The point was that sometimes you get better performance by waiting, especially when you only need to wait a few milliseconds at most. (How often does this happen? I don't know yet. But it does happen sometimes. In any case, we should do the simple thing first.)

@bradfitz (Contributor):

> The point was that sometimes you get better performance by waiting,

Better? You never said by which metric. Certainly not bandwidth. I could believe you increased the time-to-first-render for Chrome or something, but you almost certainly delayed the total load time.

I'll entertain the idea of sleeping and more complexity once we do better than random.

@tombergan (Contributor, Author) commented Jul 1, 2016:

Sorry, I meant better for SpeedIndex in the browser. (And if you're only waiting a few ms, it's unlikely you'll do worse on bandwidth, unless your connection is really fast.)

@gopherbot:

CL https://golang.org/cl/25366 mentions this issue.

quentinmit added the NeedsFix label (Oct 5, 2016)
golang locked and limited conversation to collaborators (Oct 24, 2017)
c3mb0 pushed a commit to c3mb0/net that referenced this issue (Apr 2, 2018):
This adds an interface to support pluggable schedulers. The interface
is defined in writesched.go. The only type needed by this interface is
FrameWriteRequest, which describes a request to write a frame (this used
to be called frameWriteMsg). The scheduler can be configured with a new
field in http2.Server. Two schedulers are implemented:

1) A random scheduler that is essentially equivalent to the existing
   scheduler. This is currently the default scheduler if none is
   configured. The implementation is in writesched_random.go.

2) A scheduler that uses H2 weights and priorities. The H2 priority tree
   is maintained as a tree of priorityNodes. The next frame is chosen by
   walking this tree in topological order, where sibling nodes are ordered
   by their bandwidth usage relative to their H2 weight. Two optional
   features are added to improve performance -- these are configured with
   PriorityWriteSchedulerConfig.

Fixes golang/go#16168

Change-Id: I97ec93e5c58c2efec35455ba2f3c31e849f706af
Reviewed-on: https://go-review.googlesource.com/25366
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
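
For reference, opting into the new scheduler with the API this CL added looks roughly like the following. Per the commit message above, the random scheduler remains the default when NewWriteScheduler is nil; cert.pem and key.pem below are placeholder paths:

```go
// Sketch: enabling the priority write scheduler added by this CL.
package main

import (
	"log"
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	srv := &http.Server{Addr: ":8443", Handler: http.DefaultServeMux}
	if err := http2.ConfigureServer(srv, &http2.Server{
		// Use H2 weights and dependencies instead of the default
		// random scheduler. nil selects the default config.
		NewWriteScheduler: func() http2.WriteScheduler {
			return http2.NewPriorityWriteScheduler(nil)
		},
	}); err != nil {
		log.Fatal(err)
	}
	// cert.pem/key.pem are placeholders for real TLS credentials.
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}
```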