cmd/compile: find a good way to eliminate zero/sign extensions on arm64 #42162
The main reason for the above-mentioned cases is that some unnecessary zero/sign extensions are not eliminated in the lower pass, which causes some rewrite rules to merge them into other ops such as bitfield ops; but these rules may break other optimizations such as `ORshiftLL`. To resolve this problem, we need to eliminate the zero/sign extensions in the lower pass, but I have not found a good way to do this optimization. Do you have any good suggestions? Thank you.
Order dependencies in rewrite rules are a bother, and my experience last year suggests that they have a disproportionate impact on arm64. (I hacked a `b.Values` shuffle into the rewrite rule inner loop and looked for differences in generated code.) The best fix available now that I know of is to change order-dependent rewrite rules to accept the later form, so that they trigger regardless of order. Sometimes that requires accepting both forms.
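A rough sketch of the experiment described above (an illustration, not the actual patch; `Func`, `Block`, and `Value` are assumed to be the types from `cmd/compile/internal/ssa`, and `math/rand` provides `Shuffle`):

```go
// shuffleValues permutes each block's value list before the rewrite
// loop visits it, so any rule whose firing depends on visit order
// shows up as a difference in the generated code.
func shuffleValues(f *Func, rnd *rand.Rand) {
	for _, b := range f.Blocks {
		rnd.Shuffle(len(b.Values), func(i, j int) {
			b.Values[i], b.Values[j] = b.Values[j], b.Values[i]
		})
	}
}
```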
I don't see why this needs to be a proposal (and I couldn't find what is proposed). Changed to a regular issue.
Agreed. It has caused issues/complexities, e.g. adding a new optimization caused another rule not to fire, so we had to add new rules to match more forms. The load-shift combining rules on ARM64 are an example. I think there were discussions about giving rules order/priority: some rules would only fire in an early stage, some only in a later stage, etc. I'm not sure whether it would make things better. Just bringing it up.
Change https://golang.org/cl/265038 mentions this issue:
For the case mentioned above, we can indeed prevent the optimization regression by reordering the optimization rules; I will submit a patch to do this. Thank you for the comments. In addition, I only know how to use the `toolstash-check -all` command to check whether the new change generates assembly code that is inconsistent with master, but I do not know how to check whether the performance is better or worse than master's. Any suggestions? Thank you.
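For what it's worth, one common approach (a suggestion here, not something stated in the thread) is to write a micro-benchmark around the affected pattern, run it with both toolchains, and compare the outputs with `benchstat`. A minimal sketch, with made-up names:

```go
package extbench

import "testing"

var sink uint64

//go:noinline // keep the pattern from being optimized away at the call site
func orShift(hi uint32, lo uint64) uint64 {
	return uint64(hi)<<18 | lo
}

func BenchmarkORshiftLL(b *testing.B) {
	var s uint64
	for i := 0; i < b.N; i++ {
		s = orShift(uint32(i), s)
	}
	sink = s
}
```

Running `go test -bench=. -count=10` with each toolchain and feeding the two result files to `benchstat` then shows whether the generated code got faster or slower.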
Change https://golang.org/cl/267598 mentions this issue:
This is a gentle ping. Recently, I noticed that there are some redundant zero/sign extension instructions following 32-bit ops. Check the following code:

```go
func unsignEXT(x, y uint32) uint64 {
	ret := uint64(x & y)
	return ret
}
```

codegen:

```
AND R1, R0, R1
MOVWU R1, R0
RET
```

we want:

```
ANDW R1, R0, R0
RET
```

The root cause is that we don't support all 32-bit opcodes on arm64. For example, we lower the 32-bit `AND` to the 64-bit `AND`, so a separate `MOVWU` is needed to zero-extend the result. The good thing is that it's really convenient for us to write the rules based on the 64-bit opcodes.

I have dug into this issue for a while, and two approaches come to my mind. (Either way we have to add `ANDW` to our backend, so let's assume we already support it.)

1. Lower the 32-bit ops to the new 32-bit opcodes directly. Since this breaks the original rules that match the 64-bit forms, many existing optimizations would have to be adapted.
2. Add rules that fold the zero/sign extension into the new 32-bit opcodes. This will not change the original rules, but we have to add such rules for the newly added 32-bit opcodes one by one, and the rules will be complicated.

I'm not sure which one is more acceptable or whether there are more elegant solutions, any suggestions?
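For what it's worth, the shape is not specific to `AND`. Assuming other 32-bit ops are lowered the same way (my example, not from the thread):

```go
// Like unsignEXT above: the 32-bit add is lowered to the 64-bit ADD,
// so widening the result currently costs an extra MOVWU, while an
// ADDW (which zeroes the upper 32 bits) would make it free.
func unsignADD(x, y uint32) uint64 {
	return uint64(x + y)
}
```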
Change https://go.dev/cl/427454 mentions this issue:
I'm hesitant to add lots of opcodes to support 32->64 upconversion. It just doesn't happen that often. It's no more expensive to operate on full-width registers, and as long as we support upconversion on load (sign/zero extending loads), there's no reason for someone to code in 32-bit values on 64-bit hardware. At least in places where people have at least a small interest in the performance of the code. I guess if you wanted to be performance-portable to 32-bit archs, but even then we have `int` for that.
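As a concrete illustration of the load point (my example, not from the comment): when the 32-bit value comes from memory, the extension is free, because the load instruction itself zero-extends:

```go
// *p compiles to MOVWU, a zero-extending 32-bit load, so widening to
// uint64 needs no separate extension instruction.
func loadAdd(p *uint32, y uint64) uint64 {
	return uint64(*p) + y
}
```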
Got it, I will keep the original rules. Thanks!
Recently, I have been working on the bitfield optimizations for arm64 and found that the existing bitfield optimizations can cause the compiler to generate bad code in some cases.
For example, in the following case, we have the bitfield rewrite rule

```
(SLLconst [sc] (MOVWUreg x)) && isARM64BFMask(sc, 1<<32-1, 0) => (UBFIZ [armBFAuxInt(sc, 32)] x)
```

which optimizes `uint64(hi) << 18` into `UBFIZ`, but we also have the rewrite rule

```
(OR x0 x1:(SLLconst [c] y)) && clobberIfDead(x1) => (ORshiftLL x0 y [c])
```

which optimizes it into `ORR` (shifted register). Obviously, the latter is better, but once the shift has been rewritten into `UBFIZ`, the `ORshiftLL` rule can no longer match.

e.g.
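The example itself was not preserved here; a minimal function with this shape might be (a reconstruction based on the `uint64(hi) << 18` pattern in the rules above):

```go
// The shift feeds an OR, so merging it into ORR (shifted register) is
// better than first turning it into a standalone UBFIZ.
func orShiftLL(hi uint32, lo uint64) uint64 {
	return uint64(hi)<<18 | lo
}
```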
We know that the following rewrite rules merge zero/sign extensions into bitfield ops. For the following case, they will eliminate a zero/sign extension instruction.

e.g.
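The rules and example were also elided here; a case of the kind described might be (my illustration, matching the `UBFIZ` rule quoted earlier):

```go
// Without the merging rules this needs MOVWU plus LSL; with them the
// zero extension and the shift combine into a single UBFIZ.
func shiftExt(x uint32) uint64 {
	return uint64(x) << 3
}
```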
But in the following case, comparing the assembly code with and without these rewrite rules, both versions have only one instruction, so there is no benefit from these changes. Without these rewrite rules, the reason the zero/sign extension is not generated is that codegen already does this optimization itself: if the value is a properly-typed load, it is already zero/sign-extended, so it is not extended again.

e.g.
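Again, the example was elided; a sketch of the kind of case meant (my reconstruction):

```go
// *p is loaded with MOVWU, which already zero-extends, so with or
// without the bitfield merging rules the shift costs one instruction
// (LSL or UBFIZ).
func loadShift(p *uint32) uint64 {
	return uint64(*p) << 3
}
```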