x/tools/gopls: HUGE memory leak in gopls. Manual GC works. #72919
Comments
When gopls starts up, it type-checks and analyzes the workspace. This involves a lot of allocation, most of which becomes garbage once the analysis is complete; GC'ing collects that garbage. It used to be the case that all of this memory was held indefinitely, in which case your process would have had roughly 650MB of memory usage as its steady state. We have since done a lot of work so that this memory is no longer held and can be GC'ed, making the steady-state usage lower (it sounds like 150MB in your case). There are also some asynchronous processes in gopls that allocate in the background.
All of these will also produce some garbage that must be GC'ed. I'm not sure that it's typical for them to produce 1-2MB of allocations per second; that seems a bit high, but note that loading the debug page also causes allocations. If this is causing problems, we could grab a timed profile to see what's going on.

I'm not following why this is a memory leak. If a manual GC works, where is the leak? The garbage collector should eventually run (see https://tip.golang.org/doc/gc-guide#GOGC for a description of the scheduling). Are you actually experiencing OOMs? If so, what is your memory limit? Can you please confirm that the […]?

Also, for medium-sized repos, a 650MB high-water mark with a 200MB low-water mark is not abnormal. Syntax and type information consumes a significant amount of memory, many times the size of the source code. As indicated above, we've done a lot of work so that not all of this information needs to be in memory at once, but there is still a lot of allocation when this data is invalidated.
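As a point of reference on the GC scheduling mentioned above: any Go process can report how many automatic collections have run and when the next one is due via runtime.ReadMemStats. The sketch below is illustrative only (it is not gopls code), and the poll loop and interval are arbitrary:

```go
// Illustrative sketch: observing whether automatic GC is actually running
// and when the next collection is scheduled. Not gopls code.
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	for i := 0; i < 5; i++ {
		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		fmt.Printf("heap=%d MiB, completed GC cycles=%d, next GC at heap=%d MiB\n",
			m.HeapAlloc>>20, m.NumGC, m.NextGC>>20)
		time.Sleep(time.Second)
	}
}
```

If NumGC never increases while the heap keeps growing, automatic collection is not happening at all, which is what the rest of the thread ends up diagnosing.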
Maybe I was a bit unclear; I just used 650MB as an example, since I had just pkill'd it. It's not 600MB, it's 30GB... It does not GC at all, ever. It grows by about 1-2MB a second until it reaches the OOM point; just now it was at 16GB. I ran a manual GC and it went down to roughly 200MB. We're not talking about it jumping to 30GB right away; it keeps growing at an almost constant rate. Also, it doesn't matter whether anything is being done on the system; even with the process "sleeping", it still grows by the second.

Edit: Regarding OOM, yep, it goes on until either the OOM killer kicks in or, if you are using the computer, you notice when it completely freezes.
This sounds like it disables automatic GC, assuming […].
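For reference, GOGC=off in the environment, or equivalently a debug.SetGCPercent(-1) call inside the process, turns off automatic collection entirely while leaving manual collection via runtime.GC() working, which is exactly the symptom described in this issue. A minimal illustration (not gopls code):

```go
// Illustrative only: how automatic GC can be turned off while manual GC
// still works. Running a Go program with GOGC=off in the environment has
// the same effect as the SetGCPercent(-1) call below.
package main

import (
	"runtime"
	"runtime/debug"
)

var sink []byte // package-level sink so allocations stay on the heap

func main() {
	debug.SetGCPercent(-1) // disable automatic collection

	// Allocate lots of memory that immediately becomes garbage. With
	// automatic collection disabled, the heap only grows...
	for i := 0; i < 1000; i++ {
		sink = make([]byte, 1<<20)
	}

	// ...until a collection is requested explicitly, which does free it.
	runtime.GC()
}
```

Note that debug.SetGCPercent returns the previous setting, so the same call can also be used to check whether collection was already disabled.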
@prattmic thanks for spotting that! That's the problem. (Aside: perhaps we should include […].)
Closing as WAI (working as intended).
I think that would be nice, though I don't think it would have helped here, since […]. Since […].
gopls version
Build info
go env
What did you do?
It seems to make no difference what you do to trigger it; after startup, memory starts to creep up by about 1-2MB per second until it hits, or gets close to, the OOM limit.
Note:
Manually triggering a GC does seem to actually collect, but the memory keeps climbing back up until you manually GC again.
At a point where I had just restarted it and it was around 650MB, manually triggering a GC pushed it down to 166MB, after which it kept climbing again; once another manual GC was triggered, it went back down to exactly 166MB.
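For anyone reproducing this, forcing a collection from Go code and measuring its effect looks roughly like the sketch below. It is illustrative only (not gopls code), and the allocation pattern is just a stand-in for the post-startup garbage described earlier:

```go
// Illustrative sketch: force a collection and report how much heap it
// reclaimed. Not gopls code; the sizes are placeholders.
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

var sink [][]byte

// heapMiB reports the current live heap size in MiB.
func heapMiB() uint64 {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.HeapAlloc >> 20
}

func main() {
	// Simulate a burst of allocation whose results are then dropped,
	// similar to garbage left over once analysis finishes.
	for i := 0; i < 500; i++ {
		sink = append(sink, make([]byte, 1<<20))
	}
	sink = nil

	fmt.Println("before GC:", heapMiB(), "MiB")
	runtime.GC()         // collect unreachable memory
	debug.FreeOSMemory() // additionally return freed pages to the OS
	fmt.Println("after GC:", heapMiB(), "MiB")
}
```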
What did you see happen?
System etc
It starts climbing at 1-2MB/s right from the start; manually triggering a GC frees the memory, as you would expect to happen automatically.
Malloc calls are consistently a huge number higher than frees, for example:
With no manual GC:
Malloc calls 4,772,278
Frees 212,506
With a manual GC, same instance:
Malloc calls 5,251,943
Frees 4,491,204
The process is idle while this is happening; the logs show nothing unusual at all.
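For context on the two counters quoted above: in runtime.MemStats, Mallocs and Frees are cumulative counts of heap objects allocated and freed, so their difference is the number of live heap objects, and a large, steadily growing gap is what you would expect if collection is not running. A minimal way to read them from any Go process (illustrative only):

```go
// Illustrative: reading the same counters shown above from a Go process.
package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("mallocs=%d frees=%d live objects=%d\n",
		m.Mallocs, m.Frees, m.Mallocs-m.Frees)
}
```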
What did you expect to see?
No memleak lol
Editor and settings
The same thing occurs in VS Code and Cursor.
code:
cursor:
And through the CLI.
VS Code Go-related settings
Logs
Attachments:
allocs.out
https://cdn.hlmpn.dev/allocs.out
heap.out
https://cdn.hlmpn.dev/heap.out
Output from web interface > analyzer.runtimes
web_analyzer.runtimes.md
Output from web interface > status
web_status.md