Consider a large type switch with constant (non-interface) cases, like:
package p
func f(e interface{}) int {
	switch e.(type) {
	case int:
		return 0
	case bool:
		return 1
	case byte:
		return 2
	case uint16:
		return 3
	case string:
		return 4
	}
	return 5
}
The generated code does a binary search using < over hashes for the type cases. However, the actual values, being hashes, are large, so the generated code contains large constants for doing the comparisons. Using 1.11 for the above code, here's an excerpt from the middle:
But since the constants are known to be hashes, we could do our binary search using bit tests instead: first split on whether hash&1 != 0, then on the second bit, and so on as needed. This should afford a shorter instruction encoding and perhaps even faster execution. (This might apply to only some architectures; investigation required.) It would be a good idea to confirm at compile time that each bit test splits the inputs (almost) in half, moving on to higher bits if the candidate bit doesn't split well.
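To illustrate, here is a sketch of the proposed lowering at the source level. The hash constants below are hypothetical (real runtime type hashes are different); the point is that each level of the tree needs only a test against a single-bit mask rather than a comparison against a full-width constant:

```go
package main

import "fmt"

// Hypothetical type hashes, chosen so the low bits split the five
// cases roughly in half at each level. Real runtime hashes differ.
const (
	hInt    uint32 = 0b000
	hBool   uint32 = 0b001
	hByte   uint32 = 0b010
	hUint16 uint32 = 0b011
	hString uint32 = 0b100
)

// dispatch mirrors the bit-test tree: one single-bit test per level
// instead of a <-comparison against a large hash constant.
func dispatch(h uint32) int {
	if h&1 == 0 { // bit 0 clear: {int, byte, string}
		if h&2 == 0 { // bit 1 clear: {int, string}
			if h&4 == 0 {
				return 0 // int
			}
			return 4 // string
		}
		return 2 // byte
	}
	if h&2 == 0 { // bit 0 set, bit 1 clear: {bool}
		return 1 // bool
	}
	return 3 // uint16
}

func main() {
	fmt.Println(dispatch(hString)) // 4
}
```

On amd64, each `h&mask == 0` check can compile to a TEST (or BT) with a small immediate, versus a CMP against a 32- or 64-bit hash constant for the `<`-based search.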
The compiler could calculate the entropy of each possible split produced by a bit test (on some architectures we might use operations such as shifts to make more-than-2-way splits) and choose the split that yields the greatest information gain, repeating until only the case items of a single switch branch remain. Such a decision-tree-building mechanism could later be enhanced with profile-guided optimization data and used for non-type switches as well.
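The bit-selection heuristic sketched above could look something like the following. This is a compile-time calculation over the case hashes (the hash values here are made up for illustration), choosing the bit whose test gives the most even, i.e. highest-entropy, split:

```go
package main

import (
	"fmt"
	"math"
)

// bestBit returns the bit position (in 0..nbits-1) whose test splits
// the given hashes most evenly. A perfect 50/50 split has entropy
// 1 bit; a split that puts everything on one side has entropy 0.
func bestBit(hashes []uint32, nbits int) int {
	best, bestEntropy := 0, -1.0
	for b := 0; b < nbits; b++ {
		ones := 0
		for _, h := range hashes {
			if h&(1<<uint(b)) != 0 {
				ones++
			}
		}
		p := float64(ones) / float64(len(hashes))
		var e float64
		if p > 0 && p < 1 {
			e = -p*math.Log2(p) - (1-p)*math.Log2(1-p)
		}
		if e > bestEntropy {
			best, bestEntropy = b, e
		}
	}
	return best
}

func main() {
	// Bit 0 splits these 4/1, bit 3 splits them 1/4,
	// but bit 1 splits them 2/3 -- the most even -- so bit 1 wins.
	hashes := []uint32{1, 3, 5, 7, 8}
	fmt.Println(bestBit(hashes, 4)) // 1
}
```

Recursing on the two subsets (and stopping once a subset has a single case) yields the full decision tree.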
@martisch it doesn’t work as nicely for all non-type switches, because non-type switches often have more natural structure, e.g. contiguous ranges of values that all map to the same case, and it is a bit harder to figure out how to take advantage of that while still making optimal decisions higher in the tree. The fact that type switches use a hash makes them easier and simpler and thus a good place to start.
@josharian Agreed that hashes should have a nicer distribution that can be leveraged better. Definitely a nice place to start. I was thinking this could later be tested and extended to, e.g., large enum-like switches, once there is infrastructure the compiler can use to decide whether to apply this to a wider range of applicable switches. However, enums are usually smaller, so the size difference of the comparisons would not seem to matter as much. A simple filtering heuristic could skip switches with ranges entirely, so no compilation work is wasted on them. But all this is future talk. Better to start simple and develop, test, and benchmark something that works nicely with type switches first, which is the scope of this issue.