go/types, types2: audit performance before the beta #49856
As a matter of procedure, we should do a final audit of type checker performance before the beta. We have good reason to believe that there have not been major regressions (via compiler profiling and ongoing usage), but so much has changed that it would be good to verify some baseline numbers.

Comments
CC @griesemer

Just FYI, we're aiming for beta 1 at the beginning of next week. @findleyr, do you feel that you can make good progress on this by then?

Yes, the release-blocking aspect of this issue is just to do a sanity check that we don't have major unexpected performance regressions (unlikely, given usage), and that there is no obvious low-hanging fruit. It felt like we needed an issue to track our due diligence.

Change https://golang.org/cl/369434 mentions this issue:
I have done sufficient profiling to convince myself that we have not obviously regressed from 1.17, looking at CPU usage and allocations while type checking various packages. However, along the way I found a surprising non-linearity that dominates type checking for large packages like runtime.
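For context on how such a measurement can be taken, here is a minimal sketch (not from this issue) of CPU-profiling a type-checking pass using only standard-library APIs. The directory `./pkg`, the profile file name, and the use of `pkg.Name` as the package path are placeholders:

```go
// checkprof type-checks the Go files under ./pkg and writes a CPU profile
// that can be inspected with `go tool pprof`.
package main

import (
	"go/ast"
	"go/importer"
	"go/parser"
	"go/token"
	"go/types"
	"log"
	"os"
	"runtime/pprof"
)

func main() {
	out, err := os.Create("check.cpu.pprof")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	fset := token.NewFileSet()
	pkgs, err := parser.ParseDir(fset, "./pkg", nil, 0) // parse with function bodies
	if err != nil {
		log.Fatal(err)
	}

	if err := pprof.StartCPUProfile(out); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	conf := types.Config{Importer: importer.Default()}
	for _, pkg := range pkgs {
		var files []*ast.File
		for _, f := range pkg.Files {
			files = append(files, f)
		}
		// A nil *types.Info mimics the "noinfo" benchmark variant: the
		// checker does all the work but records no type information.
		if _, err := conf.Check(pkg.Name, fset, files, nil); err != nil {
			log.Println(err)
		}
	}
}
```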
Our calculation of initOrder builds the dependency graph and then removes function nodes approximately at random. While profiling, I noticed that this latter step introduces a superlinear algorithm into our type checking pass, which can dominate type checking for large packages such as runtime.

It is hard to analyze this rigorously, but to give an idea of how such a non-linearity could arise, suppose the following assumptions hold:

- Every function makes D calls at random to other functions in the package, for some fixed constant D.
- The number of functions is proportional to N, the size of the package.

Under these simplified assumptions, the cost of removing an arbitrary function F is P*D, where P is the expected number of functions calling F. P has a Poisson distribution with mean D.

Now consider the fact that when removing a function F in position i, we recursively pay the cost of copying F's predecessors and successors for each node in the remaining unremoved subgraph of functions containing F. With our assumptions, the size of this subgraph is proportional to (N-i), the number of remaining functions to remove. Therefore, the total cost of removing functions is proportional to

    P*D*Σ_{i=1..N} (N-i)

which is proportional to N². However, if we remove functions in ascending order of cost, we can partition by the number of predecessors, and the total cost of removing functions is proportional to

    N*D*Σ_x PMF(x)

where PMF is the probability mass function of P. In other words, cost is proportional to N.

Assuming the above analysis is correct, it is still the case that the initial assumptions are naive: many large packages are more accurately characterized as combinations of many smaller packages. Nevertheless, it is intuitively clear that removing expensive nodes last should be cheaper. Therefore, we sort by cost first before removing nodes in dependencyGraph. We also move deletes to the outer loop, to avoid redundant deletes. By inspection, this avoids a bug where n may not have been removed from its successors if n had no predecessors.

```
name                                 old time/op    new time/op    delta
Check/runtime/funcbodies/noinfo-8     568ms ±25%      82ms ± 1%   -85.53%  (p=0.000 n=8+10)

name                                 old lines/s    new lines/s    delta
Check/runtime/funcbodies/noinfo-8     93.1k ±56%     705.1k ± 1%  +657.63%  (p=0.000 n=10+10)
```

Updates #49856

Change-Id: Id2e70d67401af19205e1e0b9947baa16dd6506f0
Reviewed-on: https://go-review.googlesource.com/c/go/+/369434
Trust: Robert Findley <rfindley@google.com>
Run-TryBot: Robert Findley <rfindley@google.com>
Reviewed-by: Robert Griesemer <gri@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
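To make the graph surgery concrete, here is a minimal Go sketch of the strategy the commit message describes: splice function nodes out of the dependency graph cheapest-first, and hoist the deletes out of the inner loops. The names (`node`, `cost`, `removeFuncs`) are illustrative, not the actual `dependencyGraph` identifiers in go/types:

```go
// Package initorder sketches cheapest-first removal of function nodes from
// an initialization-dependency graph.
package initorder

import "sort"

// node is a graph node with explicit predecessor and successor edge sets.
type node struct {
	pred map[*node]bool // nodes that depend on this one
	succ map[*node]bool // nodes this one depends on
}

// cost approximates the work to splice n out of the graph: every
// predecessor must be connected to every successor.
func cost(n *node) int { return len(n.pred) * len(n.succ) }

// removeFuncs splices the given function nodes out of the graph, connecting
// each node's predecessors directly to its successors. Removing cheap nodes
// first keeps high-degree nodes from having their edge sets copied over and
// over while many function nodes remain.
func removeFuncs(funcs []*node) {
	sort.Slice(funcs, func(i, j int) bool { return cost(funcs[i]) < cost(funcs[j]) })
	for _, n := range funcs {
		for p := range n.pred {
			for s := range n.succ {
				if p != s && p != n && s != n {
					p.succ[s] = true
					s.pred[p] = true
				}
			}
		}
		// Delete n from its neighbors once, in the outer loop, so that a
		// node with no predecessors is still unlinked from its successors.
		for p := range n.pred {
			delete(p.succ, n)
		}
		for s := range n.succ {
			delete(s.pred, n)
		}
	}
}
```

Sorting once up front is deliberately cheap; per the analysis above, it is the order of removal, not an exact cost at every step, that avoids the quadratic behavior.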
Since the beta is out now and it sounds like you had a chance to audit the go/types performance, should we close this bug?

Thanks, yes I think we can close this now.