
net/http: server Accept error: accept tcp ... too many open files #3167

Closed
gopherbot opened this issue Mar 1, 2012 · 5 comments

@gopherbot

by pcrosby:

Running on the weekly from 2/22/2012, I'm getting the following errors from standard net/http servers:

2012/03/01 18:27:09 http: Accept error: accept tcp [::]:80: too many open files; retrying in 5ms
2012/03/01 18:27:09 http: Accept error: accept tcp [::]:80: too many open files; retrying in 10ms
2012/03/01 18:27:09 http: Accept error: accept tcp [::]:80: too many open files; retrying in 20ms
2012/03/01 18:27:09 http: Accept error: accept tcp [::]:80: too many open files; retrying in 40ms
2012/03/01 18:27:09 http: Accept error: accept tcp [::]:80: too many open files; retrying in 80ms
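For reference, these log lines come from the accept loop in net/http's (*Server).Serve, which treats EMFILE as a temporary error and retries with exponential backoff (5ms, 10ms, 20ms, ... capped at 1s) instead of exiting. A minimal paraphrase of that loop, not the exact source:

```go
package main

import (
	"log"
	"net"
	"time"
)

// serveLoop paraphrases net/http's accept/retry behavior: on a temporary
// error such as "too many open files" it backs off and retries; on a
// permanent error it gives up.
func serveLoop(l net.Listener) error {
	var tempDelay time.Duration // how long to sleep on accept failure
	for {
		conn, err := l.Accept()
		if err != nil {
			if ne, ok := err.(net.Error); ok && ne.Temporary() {
				if tempDelay == 0 {
					tempDelay = 5 * time.Millisecond
				} else {
					tempDelay *= 2
				}
				if tempDelay > time.Second {
					tempDelay = time.Second
				}
				log.Printf("http: Accept error: %v; retrying in %v", err, tempDelay)
				time.Sleep(tempDelay)
				continue
			}
			return err
		}
		tempDelay = 0
		go conn.Close() // stand-in for dispatching a handler
	}
}

func main() {
	l, err := net.Listen("tcp", ":8080") // arbitrary port for the sketch
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(serveLoop(l))
}
```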

output of ulimit -a:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 70000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

And sysctl values:

fs.file-nr = 608    0   70000
fs.file-max = 70000

As root, number of open files (lsof | wc -l):  983

This happens consistently on a live site.  Whenever it happens, the number of open files is well below the maximum of 70000.
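A quick way to check the serving process itself rather than the whole system (lsof | wc -l counts lines across every process, not just file descriptors for this one) is to list /proc/self/fd from inside the process on Linux. A small sketch:

```go
package main

import (
	"fmt"
	"os"
)

// Count this process's open file descriptors on Linux by listing
// /proc/self/fd. A system-wide lsof total can look healthy even when
// one process is at its own per-process limit.
func main() {
	fds, err := os.ReadDir("/proc/self/fd")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("open fds: %d\n", len(fds))
}
```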

Which compiler are you using (5g, 6g, 8g, gccgo)?

8g

Which operating system are you using?

Linux 2.6.38-13-virtual #53-Ubuntu SMP Mon Nov 28 19:59:56 UTC 2011 i686 i686 i386
GNU/Linux
(32 bit)

Which revision are you using?  (hg identify)

96bd78e7d35e+ weekly/weekly.2012-02-22
@dsymonds
Contributor

dsymonds commented Mar 3, 2012

Comment 1:

Can you show your code? Most of the time, when something like this arises, it turns out that the program is forgetting to close something it is meant to be closing.

Labels changed: added priority-later, removed priority-triage.

Status changed to WaitingForReply.
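To illustrate the kind of leak dsymonds describes: the most common one in Go HTTP code is forgetting to close a response body, where each leaked body pins a socket (and its descriptor) until the process hits EMFILE. A sketch, with a placeholder URL:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// fetch shows the fix for the classic descriptor leak: always close
// resp.Body, and drain it so the underlying connection can be reused.
func fetch(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close() // without this, the connection (and fd) leaks

	n, err := io.Copy(io.Discard, resp.Body)
	if err != nil {
		return err
	}
	fmt.Printf("read %d bytes from %s\n", n, url)
	return nil
}

func main() {
	if err := fetch("http://example.com/"); err != nil { // hypothetical URL
		fmt.Println("fetch failed:", err)
	}
}
```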

@gopherbot
Author

Comment 2 by pcrosby:

I'll post a public version of the code soon.
But as I reported, the number of open files reported by the OS is never anywhere close
to the limit.

@remyoudompheng
Contributor

Comment 3:

Can you check the actual limits of your process in /proc/$PID/limits? They might be different from what you see in your shell.
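For completeness, the same numbers can be read from inside the Go process itself with syscall.Getrlimit (Unix-only); this corresponds to the "Max open files" row of /proc/$PID/limits:

```go
package main

import (
	"fmt"
	"syscall"
)

// Print the soft and hard RLIMIT_NOFILE values as this process sees
// them, i.e. the "Max open files" row of /proc/self/limits.
func main() {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		fmt.Println("getrlimit:", err)
		return
	}
	fmt.Printf("soft=%d hard=%d\n", rl.Cur, rl.Max)
}
```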

@gopherbot
Author

Comment 4 by pcrosby:

OK, that's the problem.  Thanks.  Sorry to waste your time:

Max open files            1024                 4096                 files

(I'm not sure why those are the limits, however, as /etc/security/limits.conf has this:

* soft nofile 70000
* hard nofile 100000
root soft nofile 70000
root hard nofile 100000

but that's not a Go issue...)

Thanks again.
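The usual explanation (an assumption here; the thread never confirms it) is that /etc/security/limits.conf is applied by pam_limits, so a daemon started outside a PAM login session never sees those values and keeps the default 1024/4096. One workaround is to raise the soft limit to the hard limit at process startup; Go 1.19+ does this automatically for RLIMIT_NOFILE, but at the time it had to be done by hand. A sketch:

```go
package main

import (
	"fmt"
	"syscall"
)

// Raise the soft RLIMIT_NOFILE to the hard limit. Raising the hard
// limit itself would require root/CAP_SYS_RESOURCE.
func main() {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		fmt.Println("getrlimit:", err)
		return
	}
	rl.Cur = rl.Max
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		fmt.Println("setrlimit:", err)
		return
	}
	fmt.Printf("soft limit raised to %d\n", rl.Cur)
}
```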

@rsc
Contributor

rsc commented Mar 5, 2012

Comment 5:

Status changed to Retracted.

golang locked and limited conversation to collaborators Jun 24, 2016

This issue was closed.