net/http: benchmarks fail on sierra with 'too many open files' #18753
What's your ulimit?
Yeah:

```
$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 709
virtual memory          (kbytes, -v) unlimited
```
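(For reference, the same limit can be read from inside a Go program; a minimal sketch using the standard syscall package, Unix-only and not part of the original report:)

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// RLIMIT_NOFILE is the per-process cap on open file descriptors,
	// the same number ulimit -n reports (256 by default on macOS).
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		panic(err)
	}
	fmt.Printf("open files: soft=%d hard=%d\n", rl.Cur, rl.Max)
}
```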
@bradfitz I'm facing the same issue. What's the fix for this? Can the open connections be throttled or queued, or is it worth moving to a different programming language for high-load web servers? Within an Alpine Docker container I get the same limit (which is odd, because it matches the host macOS limit even though Docker runs its own VM, Moby).
@alexellis, it's not clear you're facing the same problem. What did you do?
If your OS configuration only permits N file descriptors, the programming language won't matter at all: you only get N regardless. It's possible Docker or Moby is the one hitting the limit and passing it back to you.
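(As an aside on the throttling question above: one common server-side way to bound in-flight connections, whatever the descriptor limit, is LimitListener from golang.org/x/net/netutil. A minimal sketch, not code from this thread; the port and the cap of 128 are arbitrary illustrative values:)

```go
package main

import (
	"log"
	"net"
	"net/http"

	"golang.org/x/net/netutil"
)

func main() {
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	// Accept at most 128 simultaneous connections, chosen to stay
	// well under ulimit -n; excess clients queue in the kernel.
	log.Fatal(http.Serve(netutil.LimitListener(ln, 128), nil))
}
```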
I could be wrong, but it does give the same error as listed. When I generate a high load on my Go web server, which uses net/http, I see hundreds of connections open. The example code is at https://github.com/alexellis/faas/blob/labels_metrics/watchdog/main.go. As far as I can see, I am closing everything that's being opened. What I meant by a different language / HTTP server implementation was one that would control the in-flight connections. The connections I'm seeing have, I think, already been served but appear to still hold something open.
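(For what it's worth, a generic sketch of the usual client-side pattern for releasing net/http connections promptly; the fetch function is illustrative only, not code from the linked watchdog:)

```go
package main

import (
	"io"
	"io/ioutil"
	"net/http"
)

func fetch(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	// An unclosed body keeps the underlying connection, and its
	// file descriptor, alive.
	defer resp.Body.Close()
	// Draining the body lets the keep-alive connection be reused
	// instead of a new one being dialed for each request.
	_, err = io.Copy(ioutil.Discard, resp.Body)
	return err
}

func main() {
	if err := fetch("http://example.com/"); err != nil {
		panic(err)
	}
}
```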
@alexellis, can you file a separate bug? I'd like to keep this one about @josharian's very specific bug report. Your case seems like it would involve enough debugging that I suspect it's going to drown out the original intent of this bug.
Just to add another data point: I am not an OS X user, but I have a contractor trying to run my code, which fails with "too many open files" during an accept.
I retract the above; it was caused by the fsnotify package creating lots of file descriptors, with the per-process limit being 256 and a low system-wide limit too. Apologies for the noise.
I've seen this too recently (same output as josharian above). It errors on all the BenchmarkClientServerParallel tests, most often on BenchmarkClientServerParallel64. You can raise the file limit with ulimit (for the current session), but then I see a different error:

Running the tests with b.SetParallelism(1) in benchmarkClientServerParallel seems to fix it, so presumably it's some problem with running these tests in parallel.
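(For context, a minimal sketch of the knob being referenced; the package and benchmark names here are hypothetical, not the actual net/http benchmark:)

```go
package bench

import "testing"

func BenchmarkExample(b *testing.B) {
	// RunParallel uses p*GOMAXPROCS goroutines, where p is set by
	// SetParallelism (default 1); the failing net/http benchmarks
	// raise p as high as 64, multiplying the concurrent connections.
	b.SetParallelism(1)
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			// one HTTP round trip per iteration in the real benchmark
		}
	})
}
```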
I see the same issue on CentOS; the ulimit for open files is 10,000. Now v.requestSomeHttpServer is not able to serve, because multiple requests are hitting it at once and it throws "too many open files".
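(On the server side, descriptors are often pinned by idle or slow clients rather than by active requests. A generic sketch of bounding connection lifetimes with http.Server timeouts; this is an assumption about the cause, not a confirmed fix for the report above, and the timeout values are illustrative only:)

```go
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{
		Addr: ":8080",
		// Bound each phase of a connection so slow or stuck clients
		// cannot hold descriptors forever.
		ReadTimeout:  5 * time.Second,
		WriteTimeout: 10 * time.Second,
		IdleTimeout:  60 * time.Second,
	}
	log.Fatal(srv.ListenAndServe())
}
```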
I looked into this a few weeks back and it does appear that something weird is going on. There were many more connections than I expected from reading the benchmark. But this needn't block Go 1.10, since benchmarks don't run by default, so punting. |
My web crawler, which used to run smoothly, now crashes with a "too many open files" error after upgrading to 1.10. The OS is macOS 10.13.6.
@carusyte I would encourage you to file a new bug with instructions for how to reproduce this error. Thank you. |
I am still seeing this error. Has there been any progress on this issue in the past year? Are there any practical workarounds?
This issue is specifically about "too many open files", not "connection reset by peer". I believe you already commented on #20960 about this. But if you feel your issue is separate, please feel free to file a new issue. |
I'm also getting this error, even with the following limits on concurrency:

```go
const SAMPLE_SIZE = 128
const CONCURRENCY = 64

// BenchmarkResult, needsMore, and request are defined elsewhere in my program.
func benchmark(urls []string) []BenchmarkResult {
	channel := make(chan BenchmarkResult)
	results := make([]BenchmarkResult, 0)
	urlCount := len(urls)
	for needsMore(results) {
		actives := 0
		pending := SAMPLE_SIZE
		for pending > 0 {
			for actives < CONCURRENCY && actives < pending {
				// request() sends a result on the channel after the
				// HTTP request returns.
				go request(urls[rand.Intn(urlCount)], channel)
				actives += 1
			}
			result := <-channel
			results = append(results, result)
			actives -= 1
			pending -= 1
		}
		/*
			Irrelevant math for benchmarking purposes here...
		*/
	}
	return results
}
```

According to my code, there should never be more than 64 requests open at a time, so the ulimit of 1024 should be fine.
Sample run excerpt:
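(One possible explanation, offered as an assumption rather than a diagnosis: even with only 64 requests in flight, the default http.Transport keeps finished keep-alive connections open in an idle pool, so open descriptors can exceed the number of active requests. A minimal sketch of bounding that pool; the benchClient name is hypothetical:)

```go
package main

import "net/http"

// benchClient is illustrative only: its transport cannot accumulate
// more idle keep-alive connections than the benchmark's concurrency.
var benchClient = &http.Client{
	Transport: &http.Transport{
		MaxIdleConns:        64, // cap across all hosts (default 100)
		MaxIdleConnsPerHost: 64, // default is only 2, which forces churn
		// DisableKeepAlives: true would instead close every connection
		// once its response body has been read and closed.
	},
}

func main() { _ = benchClient }
```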