cmd/gofmt: parallel processing causing "too many open files" errors #51164
Maybe related to https://go-review.googlesource.com/c/go/+/317975/
CL 317975 uses file size as a weight to control parallelism.
Change https://go.dev/cl/385515 mentions this issue.
@dim13 out of curiosity, how did you end up with a file descriptor limit of just 256? The Linux kernel has had a default of 1024 for a long time, and systemd also defaults to a soft limit of 1024. In both instances, the hard limit is significantly higher, so a user shouldn't need root to increase the soft limit a bit. I agree that the current gofmt logic in master may open thousands of files at once in some edge cases, so we should probably tweak that. I'm not sure that we should specifically aim to support open file limits as low as 256, though. Do we test the rest of Go with such limits, such as …
@mvdan macOS defaults to 256.
Ah, thank you @ericlagergren, I hadn't noticed the original user was on Mac. So then that answers my question; we have CI machines on Mac, so that must already be tested.
@mvdan 256 is the default value on macOS; for example, on OpenBSD the default is 512.
Turns out that the biggest cause here is that gofmt in master now leaks open files; I hadn't noticed that, as my limit is much higher. I'm now reviewing @mengzhuo's CL.
@mvdan a couple of things (I'm on mobile, so I can't comment on the CL):
Both of those avoid hardcoding 100 or 240 open files.
Both of those are indeed a possibility, but we are very late into the freeze and 1.18 is already significantly delayed, so I think we need to go with a simpler approach that's less likely to cause further delays.
Change https://go.dev/cl/385656 mentions this issue.
I've fixed this for 1.18 with a hard-coded limit of 200. If someone wants to follow up with a change to use getrlimit, …
I grabbed some benchmark numbers, as I was curious whether the 200 hard limit would cause a performance regression. As a reminder, this is 1.17 versus what we had in master before Bryan's fix:
And this is the effect that Bryan's fix had on master:
This is on a Ryzen 5850u with 8 physical cores, 32GiB of RAM, and a soft limit of 32k file descriptors, all on Linux 5.16.9. If a hard limit of 200 open files doesn't cause a slow-down on this machine, I imagine it will be plenty for practically all other machines. I'd even go as far as saying that using getrlimit shouldn't be worth the added complexity, unless we can prove that it actually helps in some cases - e.g. on machines with many more CPU cores.
I forgot to mention - those numbers were obtained via …
I could see it causing a slowdown when formatting a large repo hosted on a very slow disk (such as an NFS mount). That said, if the disk is slow then everything will be slow, so it's not clear to me why we would need to optimize …
@ericlagergren yep!
I see another problem: 200 may be fine for now, but you never know how many descriptors are already in use. If the benchmark doesn't fall into the abyss, I would prefer a more conservative value of maybe 128.
The limit is per-process - I'd be surprised if initializing a main Go program managed to open 56 file descriptors and keep them open. I can imagine a few, like stdin/stdout/stderr, but certainly nothing close to the 56 needed to get from 200 to 256.
I was under the wrong assumption that it was per session. But you're right, it's per process.
Thinking out loud: I wonder if #46279 would allow …
What version of Go are you using (go version)?

What did you do?
Execute gofmt -w . on a decent-sized project (1660 Go files in my case).

What did you expect to see?
No errors.

What did you see instead?
A long list of errors:
Maybe related to https://tip.golang.org/doc/go1.18#gofmt