findleyr opened this issue on Jun 6, 2023 · 5 comments

Labels: gopls (Issues related to the Go language server, gopls.) · NeedsFix (The path to resolution is known, but the work has not been done.) · Soon (This needs to be done soon. (regressions, serious bugs, outages)) · Tools (This label describes issues relating to any tools in the x/tools repository.)
gopherbot added the Tools (This label describes issues relating to any tools in the x/tools repository.) and gopls (Issues related to the Go language server, gopls.) labels on Jun 6, 2023
This is a really fascinating issue that has been distracting us from conference talk prep! Long story short, the analysis driver exhibits pathological memory allocation in some larger workspaces: its simple one-pass, top-down recursion decodes the same import/fact data over and over again. The solution is something conceptually equivalent to the "batching" done by the main type-checking loop, which uses a two-pass (bottom-up) approach. The two-pass approach allows a "batch" of type-checking operations to share the same graph of symbols, rather than each unit being a singleton batch, enabling reuse of already-decoded type export data. (For analysis, this would apply to decoded facts too.)
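To make the allocation pattern concrete, here is a toy sketch (hypothetical names and graph, not the real gopls code) contrasting the two strategies on a diamond-shaped import graph: in the one-pass top-down scheme, every unit decodes its import closure independently, so a shared dependency is decoded once per importer; with a cache shared across the batch, each package is decoded at most once.

```go
package main

import "fmt"

// A toy import graph: package -> direct imports. "std" is shared
// by both lib1 and lib2, forming a diamond.
var imports = map[string][]string{
	"app":  {"lib1", "lib2"},
	"lib1": {"std"},
	"lib2": {"std"},
	"std":  nil,
}

// decodeTopDown simulates the one-pass scheme: every path through the
// graph re-decodes the data, so shared deps are decoded repeatedly.
func decodeTopDown(pkg string, count map[string]int) {
	count[pkg]++ // simulate decoding export/fact data for pkg
	for _, imp := range imports[pkg] {
		decodeTopDown(imp, count)
	}
}

// decodeBatched shares a cache across the whole batch, so each
// package is decoded at most once.
func decodeBatched(pkg string, cache map[string]bool, count map[string]int) {
	if cache[pkg] {
		return
	}
	cache[pkg] = true
	count[pkg]++
	for _, imp := range imports[pkg] {
		decodeBatched(imp, cache, count)
	}
}

func main() {
	naive := map[string]int{}
	decodeTopDown("app", naive)
	shared := map[string]int{}
	decodeBatched("app", map[string]bool{}, shared)
	fmt.Println("naive decodes of std:", naive["std"])   // decoded once per importer
	fmt.Println("shared decodes of std:", shared["std"]) // decoded once total
}
```

In a real workspace the redundancy factor grows with graph depth and fan-in, which is why the effect only becomes pathological in larger repositories.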
One way to implement this would be to add batching to the analysis driver itself. Another would be to use the main type-checking loop (forEachPackage) directly, though at the cost of losing the pruning based on source+export+facts that the analysis driver already does. (To be clear, that pruning is a second-order benefit compared to the cost of not batching.) We quickly sketched the latter in the attached CL and found that it greatly improves analysis warm-up time. However, in our experimental haste we deleted the optimization that applies only a subset of fact-using analyzers to dependencies, and that optimization turns out to be surprisingly important.
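The "pruning based on source+export+facts" mentioned above can be pictured as content-addressed caching: a cached analysis result is reusable only while every input that could affect it is unchanged. The sketch below is a hypothetical illustration of that idea (the field names and key scheme are assumptions, not gopls internals).

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// unit gathers the inputs that can affect one package's analysis result.
type unit struct {
	source  string   // this package's source content
	exports []string // export-data digests of dependencies
	facts   []string // fact-data digests of dependencies
}

// key derives a cache key from all of a unit's inputs; any change to
// source, dependency export data, or dependency facts changes the key,
// invalidating the cached result.
func key(u unit) [32]byte {
	h := sha256.New()
	h.Write([]byte(u.source))
	for _, e := range u.exports {
		h.Write([]byte(e))
	}
	for _, f := range u.facts {
		h.Write([]byte(f))
	}
	var k [32]byte
	copy(k[:], h.Sum(nil))
	return k
}

func main() {
	a := unit{source: "package p", exports: []string{"e1"}, facts: []string{"f1"}}
	b := a
	fmt.Println("same inputs, same key:", key(a) == key(b))
	b.facts = []string{"f2"}
	fmt.Println("changed facts, same key:", key(a) == key(b))
}
```

The point of the trade-off in the text is that dropping this keying means re-running analyzers whose inputs have not changed, which is cheap relative to the redundant decoding that batching eliminates.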
There is clearly more work to be done here to achieve the performance goals we wanted for 0.12, but so far, other than this opt-out survey, we don't have any direct communication from users or issues filed to suggest that there's a wider problem. (It's not clear why the problem manifests so clearly in this hashicorp repo but not in k8s, which has very similar graph metrics: nodes, edges, median and p95 arity, etc. Perhaps this project has some unusually large types.Packages.)
findleyr added the NeedsFix (The path to resolution is known, but the work has not been done.) and Soon (This needs to be done soon. (regressions, serious bugs, outages)) labels and removed the NeedsInvestigation (Someone must examine and confirm this is a valid issue and not a duplicate of an existing one.) label on Jun 10, 2023
Discovered by way of a user survey, the gopls analysis driver in v0.12.0 shows very inconsistent performance.

Repro:
- ./internal/types: everything is great, and gopls uses much less memory than v0.11.0, as expected.
- ./internal/provider: everything goes boom; analysis uses ~50GB (and counting..?).

Given that gopls can type-check the repository expediently, this seems likely to be a bug in the new analysis driver.
CC @adonovan