On architectures that allow unaligned loads, we should rewrite `b[0] == c1 && b[1] == c2` to `*(*uint16)(&b[0]) == (c1 << 8) + c2`. On all architectures, we should do this in cases where we know `b[0]` is appropriately aligned. The same applies to `uint32` and `uint64` loads (as appropriate).
See CL 26758 for more discussion; that CL will cause many of these to be generated.
We might also independently want to update the front end (near `OCMPSTR`) to generate the larger comparisons directly.
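To illustrate the intended equivalence, here is a minimal sketch in Go source form (the compiler would of course do this on SSA values, not library calls). The function names are hypothetical, and `binary.BigEndian.Uint16` stands in for the 16-bit load, matching the big-endian constant `(c1 << 8) + c2` in the rewrite above:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// matchBytes is the naive form: two single-byte comparisons.
func matchBytes(b []byte, c1, c2 byte) bool {
	return len(b) >= 2 && b[0] == c1 && b[1] == c2
}

// matchUint16 is the combined form: one 16-bit load compared
// against the merged constant (c1 << 8) | c2.
func matchUint16(b []byte, c1, c2 byte) bool {
	return len(b) >= 2 && binary.BigEndian.Uint16(b) == uint16(c1)<<8|uint16(c2)
}

func main() {
	b := []byte("hello")
	fmt.Println(matchBytes(b, 'h', 'e'), matchUint16(b, 'h', 'e'))
	fmt.Println(matchBytes(b, 'x', 'e'), matchUint16(b, 'x', 'e'))
}
```

On a little-endian target the compiler would instead fold the constant as `(c2 << 8) | c1` to match the byte order of the raw load; the big-endian helper above sidesteps that by normalizing the byte order itself.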