author     Roger Sayle <roger@nextmovesoftware.com>    2021-12-15 15:09:48 +0100
committer  Tom de Vries <tdevries@suse.de>             2022-01-04 12:28:02 +0100
commit     beed3f8f60492289ca6211d86c54a2254a642035 (patch)
tree       decc94b8e351664b22f98f7ccbc14ce8bf694851 /libgcc
parent     a54d11749f0ce98192cfe28e5ccc0633d4db3982 (diff)
nvptx: Transition nvptx backend to STORE_FLAG_VALUE = 1
This patch to the nvptx backend changes the backend's STORE_FLAG_VALUE
from -1 to 1, by using BImode predicates and selp instructions, instead
of set instructions (almost always followed by integer negation).

Historically, it was reasonable (though rare) for backends to use -1 for
representing true during the RTL passes.  However, with tree-ssa, GCC now
emits lots of code that reads and writes _Bool values, requiring
STORE_FLAG_VALUE=-1 targets to frequently convert 0/-1 pseudos to 0/1
pseudos using integer negation.  Unfortunately, this process prevents or
complicates many optimizations (negate isn't associative with logical
AND, OR and XOR, and interferes with range/vrp/nonzerobits bounds etc.).

The impact of this is that for a relatively simple logical expression
like "return (x==21) && (y==69);", the nvptx backend currently generates:

        mov.u32 %r26, %ar0;
        mov.u32 %r27, %ar1;
        set.u32.eq.u32 %r30, %r26, 21;
        neg.s32 %r31, %r30;
        mov.u32 %r29, %r31;
        set.u32.eq.u32 %r33, %r27, 69;
        neg.s32 %r34, %r33;
        mov.u32 %r32, %r34;
        cvt.u16.u8 %r39, %r29;
        mov.u16 %r36, %r39;
        cvt.u16.u8 %r39, %r32;
        mov.u16 %r37, %r39;
        and.b16 %r35, %r36, %r37;
        cvt.u32.u16 %r38, %r35;
        cvt.u32.u8 %value, %r38;

This patch tweaks nvptx to generate 0/1 values instead, requiring the
same number of instructions, using (BImode) predicate registers and selp
instructions, so as to now generate the almost identical:

        mov.u32 %r26, %ar0;
        mov.u32 %r27, %ar1;
        setp.eq.u32 %r31, %r26, 21;
        selp.u32 %r30, 1, 0, %r31;
        mov.u32 %r29, %r30;
        setp.eq.u32 %r34, %r27, 69;
        selp.u32 %r33, 1, 0, %r34;
        mov.u32 %r32, %r33;
        cvt.u16.u8 %r39, %r29;
        mov.u16 %r36, %r39;
        cvt.u16.u8 %r39, %r32;
        mov.u16 %r37, %r39;
        and.b16 %r35, %r36, %r37;
        cvt.u32.u16 %r38, %r35;
        cvt.u32.u8 %value, %r38;

The hidden benefit is that this sequence can (in theory) be optimized by
the RTL passes to eventually generate a much shorter sequence using an
and.pred instruction (just like Nvidia's nvcc compiler); an illustrative
sketch follows the ChangeLog below.

This patch has been tested on nvptx-none with a "make" and "make -k check"
(including newlib) hosted on x86_64-pc-linux-gnu with no new failures.

gcc/ChangeLog:

        * config/nvptx/nvptx.h (STORE_FLAG_VALUE): Change to 1.
        * config/nvptx/nvptx.md (movbi): Use P1 constraint for true.
        (setcc_from_bi): Remove SImode specific pattern.
        (setcc<mode>_from_bi): Provide more general HSDIM pattern.
        (extendbi<mode>2, zeroextendbi<mode>2): Provide instructions for
        sign- and zero-extending BImode predicates to integers.
        (setcc_int<mode>): Remove previous (-1-based) instructions.
        (cstorebi4): Remove BImode to SImode specific expander.
        (cstore<mode>4): Fix indentation.  Expand using setccsi_from_bi.
        (cstore<mode>4): For both integer and floating point modes.
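For illustration only, here is a hedged sketch of the shorter and.pred-based
sequence the RTL passes could in principle reach for the same
"(x==21) && (y==69)" test; it is not output produced by this patch, the
predicate register names %p1-%p3 are hypothetical, and the argument
registers reuse those from the examples above:

        setp.eq.u32 %p1, %r26, 21;      // x == 21 as a BImode predicate
        setp.eq.u32 %p2, %r27, 69;      // y == 69 as a BImode predicate
        and.pred    %p3, %p1, %p2;      // combine the predicates directly
        selp.u32    %value, 1, 0, %p3;  // materialize the 0/1 result once

With STORE_FLAG_VALUE = -1, the neg.s32 instructions sitting between the
comparisons and the logical AND are part of what makes this combination
harder for the RTL optimizers to discover.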
Diffstat (limited to 'libgcc')
0 files changed, 0 insertions, 0 deletions