net: SetReadBuffer makes i/o very slow on OS X Mavericks #6930
Comments
Mavericks seems to have made a couple of minor changes to the buffer management/reconfiguration code in its network subsystem. I'd suggest adjusting "var bufSize = 32768" to fit your needs and seeing what happens. My guess is that on Mavericks we have fine-grained control over the buffers inside the kernel, which was hidden up through Mountain Lion. Labels changed: added os-macosx.
That was the number I used in https://github.com/apcera/gnatsd, so it was representative of a real-world performance bug.
Just skimmed the source tree of FreeBSD and found that at some point in FreeBSD 10-CURRENT/9-STABLE, mbuf autotuning was added to the kernel; see kern/uipc_socket.c and kern/uipc_sockbuf.c. After those patches, we can observe the behavior you pointed out on FreeBSD 9-STABLE and 10 and on OS X Mavericks (likely because Mavericks uses parts of FreeBSD 10 as its BSD subsystem), like the following:

Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address    Foreign Address  (state)
tcp4       0  65380  127.0.0.1.30213  127.0.0.1.58842  ESTABLISHED
tcp46      0      0  *.58842          *.*              LISTEN

That is, the Recv-Q is completely squeezed by specifying the SO_RCVBUF option. IIRC, FreeBSD 9.2 and prior, and OS X Mountain Lion and prior, do extend the Recv-Q even if we specify a very small SO_RCVBUF. We also know that the behavior of the SO_RCVBUF/SO_SNDBUF options is really platform- and protocol-dependent. It seems there's nothing we can do for now (short of adding platform/protocol-independent socket buffer management to the net/runtime/syscall packages). WDYT? Labels changed: added os-freebsd.
And there's a significant difference between Darwin (OS X Mavericks) and the latest FreeBSD kernels. The attached code uses a wildcard address on both the passive and active TCP endpoints. Please replace them with a specific address such as 127.0.0.1 and see what happens; that would probably be a workaround. A dumb hypothesis: Mavericks implements some fancy networking feature that tries to find the best IP transport-layer path on the fly, working together with IP routing. Conventionally a packet to the wildcard address 0.0.0.0 is routed to localhost, and that might incur some cost on Mavericks. Status changed to WaitingForReply.
Thanks for your cooperation. Yup, that would be the most stable workaround for now. I'll try to run dtrace on every TCP state-change and I/O probe point on Mavericks later, probably over the new year holidays. For what it's worth, I didn't see "test never completed" on the latest FreeBSD kernels, sigh. Labels changed: removed os-freebsd. Status changed to LongTerm.
Really sounds like a Mavericks bug. |
FWIW here is output from the attached benchmark using go1.5.1 on OS X 10.10.5 (Yosemite):
I guess the interesting question is whether you can reproduce this in C or Python or something like that using a plain setsockopt(SO_RCVBUF) call. Given that that's literally all SetReadBuffer does, it seems like it must be the kernel mishandling that. Unless our constants are wrong, and what we think is SO_RCVBUF is really something like "set maximum bytes/second to read". Still seems like a kernel bug. I guess we could make SetReadBuffer a no-op on OS X if we convinced ourselves of that.
Perhaps this is working as expected now? The benchmark completes, whereas it did not in the original report. The attached benchmark sets the buffer to 32k; changing it to something larger (like 1M) yields results more in line with the default case:
OK, great, that does look fixed. |
Original report by derek.collison.