
net: add mechanism to wait for readability on a TCPConn #15735

Open
bradfitz opened this issue May 18, 2016 · 124 comments
Labels
NeedsInvestigation Someone must examine and confirm this is a valid issue and not a duplicate of an existing one.
Milestone

Comments

@bradfitz
Contributor

bradfitz commented May 18, 2016

EDIT: this proposal has shifted. See #15735 (comment) below.

Old:

The net/http package needs a way to wait for readability on a TCPConn without actually reading from it. (See #15224)

http://golang.org/cl/22031 added such a mechanism, making Read(0 bytes) do a wait for readability, followed by returning (0, nil). But maybe that is strange. Windows already works like that, though. (See new tests in that CL)
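
For illustration, under the cl/22031 semantics described above (this is a sketch of the proposed behavior, not the current net.Conn contract), waiting for readability from the caller's side would look like:

func waitReadable(c *net.TCPConn) error {
    // A zero-length Read blocks until c is readable, then returns (0, nil);
    // an error means the connection failed or was closed while idle.
    var zero [0]byte
    _, err := c.Read(zero[:])
    return err
}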

Reconsider this for Go 1.8.

Maybe we could add a new method to TCPConn instead, like WaitRead.

@bradfitz bradfitz added this to the Go1.8 milestone May 18, 2016
@bradfitz bradfitz self-assigned this May 18, 2016
@bradfitz
Contributor Author

/cc @ianlancetaylor @rsc

@gopherbot

CL https://golang.org/cl/23227 mentions this issue.

gopherbot pushed a commit that referenced this issue May 19, 2016
Updates #15735

Change-Id: I42ab2345443bbaeaf935d683460fc2c941b7679c
Reviewed-on: https://go-review.googlesource.com/23227
Reviewed-by: Ian Lance Taylor <iant@golang.org>
gopherbot pushed a commit that referenced this issue May 19, 2016
Updates #15735.
Fixes #15741.

Change-Id: Ic4ad7e948e8c3ab5feffef89d7a37417f82722a1
Reviewed-on: https://go-review.googlesource.com/23199
Run-TryBot: Mikio Hara <mikioh.mikioh@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
@RalphCorderoy

read(2) with a count of zero may be used to detect errors; the Linux man page confirms this, as does POSIX's read(3p). Mentioning it in case it influences the plan to have Read(0 bytes) bypass syscall.Read.

@quentinmit quentinmit added the NeedsDecision Feedback is required from experts, contributors, and/or the community before a change can be made. label Oct 7, 2016
@bradfitz
Contributor Author

I found a way to do without this in net/http, so punting to Go 1.9.

@bradfitz bradfitz modified the milestones: Go1.9, Go1.8 Oct 21, 2016
@bradfitz
Contributor Author

Actually, the more I think about this, I don't even want my idle HTTP/RPC goroutines to stick around blocked in a read call. In addition to the array memory backing the slice given to Read, the goroutine itself is ~4KB of wasted memory.

What I'd really like is a way to register a func() to run when my *net.TCPConn is readable (when a Read call wouldn't block). By analogy, I want the time.AfterFunc efficiency of running a func in a goroutine later, rather than running a goroutine just to block in a time.Sleep.

My new proposal is more like:

package net

// OnReadable runs f in a new goroutine when c is readable;
// that is, when a call to c.Read will not block.
func (c *TCPConn) OnReadable(f func()) {
   // ...
}

Yes, maybe this is getting dangerously into event-based programming land.

Or maybe just the name ("OnWhatever") is offensive. Maybe there's something better.

I would use this in http, http2, and grpc.
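
For illustration, a minimal sketch of how an idle HTTP/RPC connection might be parked with the proposed OnReadable (not an existing API; serveNextRequest is a placeholder):

func parkIdle(c *net.TCPConn, serveNextRequest func(*net.TCPConn)) {
    c.OnReadable(func() {
        // No goroutine or read buffer is held while c sits idle; this func
        // runs in a new goroutine only once c becomes readable.
        serveNextRequest(c)
    })
}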

/cc @ianlancetaylor @rsc

@ianlancetaylor
Contributor

Sounds like you are getting close to #15021.

I'm worried that the existence of such a method will encourage people to start writing their code as callbacks rather than as straightforward goroutines.

@bradfitz
Contributor Author

Yeah. I'm conflicted. I see the benefits and the opportunity for overuse.

@dvyukov
Member

dvyukov commented Jan 6, 2017

If we do OnReadable(f func()), won't we need to fork half of the standard library for the async style? The compress, io, tls, etc. readers all assume blocking style and require a blocked goroutine.
I don't see any way to push data asynchronously into e.g. gzip.Reader. Does this mean that I have to choose between no blocked goroutine + my own gzip implementation, and a blocked goroutine + the standard library?

@dvyukov
Member

dvyukov commented Jan 6, 2017

Re 0-sized reads.
This should work with level-triggered notifications, but netpoll uses epoll in edge-triggered mode (and kqueue, IIRC). I am not sure whether cl/22031 works in more complex cases: waiting on already-ready IO, a double wait, waiting without completely draining the read buffer first, etc.

@bradfitz
Contributor Author

bradfitz commented Jan 6, 2017

@dvyukov, no, we would only use OnReadable in very high-level places, like the http1 and http2 servers where we know the conn is expected to be idle for long periods of time. The rest of the code underneath would remain in the blocking style.

@dvyukov
Member

dvyukov commented Jan 6, 2017

This looks like a half-measure. An HTTP connection can halt in the middle of a request...

@bradfitz
Contributor Author

bradfitz commented Jan 6, 2017

@dvyukov, but not commonly. This would be an optimization for the common case.

@dvyukov
Member

dvyukov commented Jan 7, 2017

An alternative interface would be to register a channel that receives readiness notifications. The other camp wants this for packet-processing servers, where starting a goroutine for every packet would be too expensive. However, if in the end you want a goroutine anyway, the channel introduces unnecessary overhead.
A channel also has a problem with overflow handling (netpoll can't block on send; on the other hand, it is not OK to lose notifications).
For completeness, this API should also handle writes.
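
For illustration only, a minimal sketch of the channel-registration idea, assuming a hypothetical RegisterReadable method (no such API exists; the overflow problem above is only side-stepped here by giving the channel one slot per connection):

func pump(conns []*net.TCPConn, handle func(*net.TCPConn)) {
    ready := make(chan *net.TCPConn, len(conns)) // buffered so the poller never blocks on send
    for _, c := range conns {
        c.RegisterReadable(ready) // hypothetical: the runtime sends c when it becomes readable
    }
    for c := range ready {
        handle(c) // a Read on c should not block right now
    }
}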

@DemiMarie

We need to make sure that this works with Windows IOCP as well.

@rsc
Contributor

rsc commented Jan 10, 2017

Not obvious to me why the API has to handle writes. The thing about reads is that until the data is ready for reading, you can use the memory for other work. If you're waiting to write data, that memory is not reusable (otherwise you'd lose the data you are waiting to write).

@dvyukov
Member

dvyukov commented Jan 11, 2017

@rsc If we do just 0-sized reads, then write support is not necessary. However, if we do Brad's "My new proposal is more like": func (c *TCPConn) OnReadable(f func()), then this equally applies to writes as well -- to avoid 2 blocked goroutines per connection.

@noblehng

noblehng commented Feb 21, 2017

If memory usage is the concern, would it be possible to make a long-parked G use less memory instead of changing the programming style? One main selling point of Go, to me, is high-efficiency network servers without resorting to callbacks.

Something like shrinking the stack, or moving the stack to the heap via the GC using some heuristics; memory-wise that would be little different from spinning up a new goroutine in a callback, and scheduling-wise a callback is not much different from goready(). Also, I assume the liveness change in Go 1.8 could help here too.

For the backing array, if it is a preallocated buffer, then a callback doesn't make much difference compared to Read(); maybe it would make some difference if it were allocated per callback from a pool.

Edit:
Actually, we could keep a GC deadline or gopark time in runtime.pollDesc, so we could get a list of long-parked Gs from the poller and then the GC could kick in, but more work is still needed to avoid races and make it fast.

@noblehng

noblehng commented Feb 22, 2017

How about an epoll-like interface for net.Listener:

type PollableListener interface {
   net.Listener
   // Poll blocks until at least one connection is ready for read or write.
   // reads and writes are special net.Conns that will not block on EAGAIN.
   Poll() (reads []net.Conn, writes []net.Conn)
}

Then the caller of Poll() can use a small number of goroutines to poll for readiness and handle the reads and writes. This should also work well for packet-processing servers.

Note that this only needs to be implemented in the runtime for those Listeners that are multiplexed in the kernel, like net.TCPListener. Other protocols that multiplex in userspace and aren't attached to the runtime poller directly, like a UDP listener or streams multiplexed over a single TCP connection, can be implemented outside the runtime. For example, for multiplexing over a TCP connection, we could implement the epoll-like behavior by reading from/writing to buffers and then polling them, or by registering callbacks on buffer-size changes.
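
As a rough sketch of how a caller might drive the interface above (pollLoop and handle are illustrative names, and the exact non-blocking contract of the returned Conns is an assumption, since the proposal doesn't pin it down):

func pollLoop(pl PollableListener, handle func(net.Conn, []byte)) {
    buf := make([]byte, 64<<10)
    for {
        reads, _ := pl.Poll() // blocks until at least one connection is ready
        for _, c := range reads {
            n, err := c.Read(buf) // assumed not to block, per the interface comment
            if err != nil {
                c.Close()
                continue
            }
            handle(c, buf[:n])
        }
    }
}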

Edit:
To implement this, we could let users of the runtime poller, like sockets and os.File, provide a callback function when opening the poller for an fd, to be notified of I/O readiness. The callback would look like:

type IOReadyNotify func(mode int32)

We would store this in runtime.pollDesc, and then runtime.netpollready() would also call this callback, if non-nil, in addition to handing out the pending goroutine(s).

@aajtodd

aajtodd commented Feb 27, 2017

I'm fairly new to Go, but the callback interface is a little grating given the blocking API exposed everywhere else. Why not expose a public API to the netpoll interfaces?

Go provides no standard public-facing event loop (correct me if I'm wrong, please). I need to wait for readability on external FFI socket(s) (obtained through cgo). It would be nice to register FFI sockets with the existing netpoll abstraction rather than having to wrap epoll/IOCP/select. Also, I'm guessing that wrapping (e.g.) epoll from the sys package does not integrate with the scheduler, which would also be a bummer.

@mjgarton
Contributor

For a number of my use cases, something like this:

package net

// Readable returns a channel which can be read from whenever a call to c.Read
// would not block.
func (c *TCPConn) Readable() <-chan struct{} {
        // ...
}

... would be nice because I can select on it. I have no idea whether it's practical to implement this, though.

Another alternative (for some of my cases at least) might be somehow enabling reads to be canceled by using a context.
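
For instance (a sketch only; Readable is the proposed method above, not an existing API), a read that can be abandoned via a context might look like:

func readWithCancel(ctx context.Context, c *net.TCPConn, buf []byte) (int, error) {
    select {
    case <-c.Readable(): // proposed method; does not exist today
        return c.Read(buf) // should not block now that c is readable
    case <-ctx.Done():
        return 0, ctx.Err() // give up without ever blocking in Read
    }
}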

@d2gr

d2gr commented Sep 13, 2022

...although you could have a connection collection primitive that would handle this internally and work in a channel-like manner. So handling a single connection could work as a blocking Read() and a non-blocking CanRead(); neither would break the existing API, and this new collection would be a new primitive extending the net package which would allow adding and removing connections and blocking, with a timeout, until one of the connections is readable, in which case it would return it for reading. ...just thinking out loud here.

Too complex. You don't want some things explicitly blocking and others implicitly blocking, even if the doc makes the distinction. IMO what people want is two different approaches: a more proactor-style one that lets a single goroutine handle multiple connections, and another (which already exists) that handles one connection per goroutine (go net). But the problems presented above might be a blocker.

@ivanjaros

Either way, I just wanted to keep the conversation going. I'd like to write a chat server in the near future and this type of functionality will be a must, so I will have to use one of the libraries I listed above, but I would prefer this to be in the stdlib.

Also, I bet a proxy like Traefik would get a nice performance gain from this as well. I mean, gnet matches the performance of the fastest code out there in C/C++/Rust/..., so the performance is there in Go. It just needs to be unlocked.

@lesismal

lesismal commented Sep 14, 2022

I mean, gnet matches the performance of the fastest code out there in C/C++/Rust/..., so the performance is there in Go

@ivanjaros
That's not right.
For HTTP, gnet uses a simple parser that does not implement the full HTTP protocol, so its benchmark code costs much less CPU than a full-featured HTTP server. Here is the benchmark code: parser, response encoder
It's not only gnet that benchmarks like this; many frameworks enter github.com/TechEmpower/FrameworkBenchmarks this way. Those are not real performance reports, and they mislead many people about different frameworks' performance.

As we know, Go can't match the performance of C/C++/Rust in most scenarios, but it can get close in some scenarios such as IO, and most importantly, goroutines and channels make code easier to write.

@d2gr

d2gr commented Sep 14, 2022

Also, I bet a proxy like Traefik would get a nice performance gain from this as well. I mean, gnet matches the performance of the fastest code out there in C/C++/Rust/..., so the performance is there in Go. It just needs to be unlocked.

@ivanjaros
Yeah, but the problem with frameworks like gnet or evio is that they are not usable in production. There might be cases where you can use them, even with codecs, but they are too complicated to be used in serious environments. A proactor-pattern library would be more suitable (like boost::asio).

Standard Go is quite performant; just take a look at fasthttp, which is built using standard Go. The only problem with Go is the number of sync mechanisms that it requires. That's why I said that a proactor pattern might be faster for TLS & HTTP/2, or any protocol that uses streams instead of one single stateless connection.

@d2gr

d2gr commented Sep 14, 2022

As we know, Go can't match the performance of C/C++/Rust in most scenarios, but it can get close in some scenarios such as IO, and most importantly, goroutines and channels make code easier to write.

@lesismal
I mean... Go is quite fast. I benchmarked fasthttp vs boost::beast quite a lot (on AWS, 2x c5n.4xlarge, client and server in a placement group): Go was able to handle 200K QPS below 5ms (100th percentile), while boost::beast was able to do 200K sometimes below 4.8ms and other times below 5.5ms. So there's a lot of variance in boost::beast, whilst in Go I got the same result in all the benchmarks.
Now, the only problem with Go is that the more connections there are, the less performant the I/O becomes. That's a limitation you should be aware of while building a system.

So it depends on how your Go program is structured: whether it has locks, whether it uses channels, etc. In my benchmarks I had no locks, just plain HTTP (no TLS) with some caching system in both fasthttp & boost::beast.

I wouldn't say that Go is less performant than C++ and Rust. It also depends on the library that you use. boost::beast is OK and it does scale well, but it is not as latency-sensitive as you might think (truth is, you can easily plug Solarflare's Onload into boost::asio and get a performance improvement), and Rust is not that different from Go: it also uses coroutines and also needs to lock in some scenarios, but it has the advantage of not having a GC ("advantage").

Benchmarks are mostly a lie. People prepare their programs for the benchmark in question (as in gnet's case). Production-ready environments don't need to lie in their benchmarks (fasthttp).

@lesismal

@d2gr
I think the proactor pattern is not the key point; the key points are the number of goroutines, and blocking vs. non-blocking.
I've experimented a lot with a non-blocking HTTP server, and I think we can't get both high performance and a large number of online connections together in Go right now:

  1. For std-based frameworks that use net.Conn (the blocking IO interface), including fasthttp, the hardware cost grows fast as the number of connections increases, because they all use at least one goroutine per connection. It's hard to reduce the cost of GC, memory, and scheduling.
  2. For non-blocking frameworks, whether reactor or proactor, we need to handle IO and logic in different goroutine pools. That causes more heap escapes and requires a more complex async parser, and we cannot optimize the buffers the way fasthttp does.

@d2gr

d2gr commented Sep 14, 2022

@lesismal
I mention proactor because it's the easiest way to handle async code (unlike async/await or reactor). I agree that we need less heap and more stack-based structures. I don't know what you mean by an async parser.

@ivanjaros

ivanjaros commented Sep 14, 2022

That's not right. For HTTP, gnet uses a simple parser that does not implement the full HTTP protocol, so its benchmark code costs much less CPU than a full-featured HTTP server.

You are mixing apples and oranges here. Nobody is talking about HTTP servers here; gnet is merely a networking framework (like ALL the projects mentioned before). What you build on top of it is up to you.

@lesismal

@d2gr
We need to buffer partial-packet bytes because we can't use ReadFull. The parser and buffer-handling logic are much more complex than in net.Conn-based frameworks.

@ivanjaros Please refer to the reasons I've mentioned here and in previous comments.

Here are some benchmark reports; you can run the tests in your own environment:
lesismal/go-net-benchmark#1
I get the same level of performance from nbio and gnet, but nbio supports a lot more than gnet. I have tried hard to optimize performance and reduce cost, but I can only achieve a balance between performance and cost reduction; I can't get both together.
That's not only about HTTP!
For simple IO logic we do get good performance, but in a production environment, with complex logic, the reasons I mentioned drag the performance down.
You can try it; I would be glad to see whether you can find some way to improve it further.

@bcmills
Contributor

bcmills commented Sep 14, 2022

@lesismal, @ivanjaros, @d2gr: it isn't clear to me how the above discussion relates to the feature proposed in this issue. For off-topic performance discussions, please start a thread on the golang-dev mailing list or a similar venue outside of the issue tracker.

@d2gr

d2gr commented Sep 14, 2022

@bcmills Sorry for spamming a bit, but my comments were related to the issue, at least this one: #15735 (comment)

@lesismal

@bcmills
Sorry about that.
But I think CanRead is related to non-blocking interfaces and performance. If the new feature doesn't consider these points, it will not be useful and should not be added.
If the new feature provides only a CanRead while Read/Write remain blocking, there will be problems like @ivanjaros's for-loop or gobwas/ws.

My previous related comments:

My opinion is that if Readable is provided, non-blocking Read/Write interfaces should be provided together with it; otherwise Readable will not be useful.

If conn.CanRead() is a blocking interface, one connection's blocking makes all the other connections wait a long time in the loop, even if they are already readable. If conn.CanRead() is a non-blocking interface, that for loop will spin at 100% CPU.
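
To make the for-loop problem concrete, a sketch with a hypothetical per-connection CanRead (no such method exists; conns and handleRead are placeholders):

for {
    for _, c := range conns {
        // If CanRead blocks, one stalled connection delays every other one;
        // if it returns immediately, this loop busy-spins at 100% CPU
        // whenever nothing is readable.
        if c.CanRead() {
            handleRead(c)
        }
    }
}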

@lesismal

lesismal commented Sep 15, 2022

@bcmills
Actually, a CanRead interface is just the same thing that 1m-go-websockets and gobwas/ws did. That's the smallest addition for event-driven use of the std TCPConn, but as we've discussed, it leads to this problem:
gobwas/ws#143
That's the same problem as the for-loop blocking we discussed in previous comments.
To solve this problem in gobwas/ws, we still need to serve each connection with at least one goroutine, which comes back to the current std solution but is more complex. Then there's no benefit; it seems and performs even worse.
So, if the Read/Write interfaces remain blocking, I would prefer that the maintainers keep things unchanged rather than add a new CanRead interface.

Also, that's why I hope that if you add CanRead, you add non-blocking Read/Write interfaces together with it. And one more thing: the non-blocking interfaces and a separate goroutine pool make performance worse than the current std when there are not a lot of connections, and they would require a lot of changes to the current TCPConn; I think that should also be considered before this proposal is accepted.
But all right, I will stop discussing performance and just focus on CanRead, blocking or non-blocking.

@AnimusPEXUS

Just add these functions to net.Conn:

ReadAvailable() bool
WriteAvailable() bool
ReadNonBlocking(b []byte) (n int, err error)
WriteNonBlocking(b []byte) (n int, err error)

@ianlancetaylor
Contributor

@AnimusPEXUS Thanks, but that doesn't address this issue. This issue is about putting a goroutine to sleep until there is something to read.

@AnimusPEXUS

@ianlancetaylor Determining "if there's something to read" is a system-specific question. For *NIX it's the select function. The most you can do for *nix is invoke a callback or push a signal after select returns, and if we want select to be non-blocking, then we have to put the socket into a non-blocking state. For non-socket/non-Unix cases there are only two options: an endless loop constantly checking readiness, or, again, a callback if the peer can somehow separately indicate "I have something to read".

@DemiMarie

I don’t think Go as it stands today is the best choice for this kind of code.

Optimized Rust and C++ servers use a combination of compiler-generated state machines (async/await and co_await-based coroutines, respectively) and manual memory management to achieve very high performance. Go does not support either of these features, so achieving similar performance in Go would require essentially writing C with Go syntax.

@ianlancetaylor
Contributor

@AnimusPEXUS There is a reason that this issue is still open.

@ianlancetaylor ianlancetaylor added NeedsInvestigation Someone must examine and confirm this is a valid issue and not a duplicate of an existing one. and removed Thinking v2 A language change or incompatible library change NeedsDecision Feedback is required from experts, contributors, and/or the community before a change can be made. labels Aug 23, 2023