path: root/gcc/tree-ssa-loop-split.c
author    Jakub Jelinek <jakub@redhat.com>    2020-07-15 11:26:22 +0200
committer Jakub Jelinek <jakub@redhat.com>    2020-07-15 11:26:22 +0200
commit    410675cb63466d8de9ad590521f0766b012d2475 (patch)
tree      7abbcf66e3de58b3bf89a06714571c0f37a4570f /gcc/tree-ssa-loop-split.c
parent    7a9fd18598e638b55c591624e753fb7a88abe1ab (diff)
builtins: Avoid useless char/short -> int promotions before atomics [PR96176]
As mentioned in the PR, we generate a useless movzbl insn before lock cmpxchg.
The problem is that the builtin for the char/short cases has its arguments
promoted to int, and combine gives up because the instructions have
MEM_VOLATILE_P arguments and recog in that case doesn't recognize anything
when volatile_ok is false, so nothing afterwards optimizes the
(reg:SI a) = (zero_extend:SI (reg:QI a)) ... (subreg:QI (reg:SI a) 0) ...

The following patch fixes it at expansion time: we already have a function
that is meant to undo the promotion, so this just adds the very common case
to it.

2020-07-15  Jakub Jelinek  <jakub@redhat.com>

        PR target/96176
        * builtins.c: Include gimple-ssa.h, tree-ssa-live.h and
        tree-outof-ssa.h.
        (expand_expr_force_mode): If exp is an SSA_NAME with a different
        mode from MODE and get_gimple_for_ssa_name returns a cast from
        MODE, use the cast's rhs.

        * gcc.target/i386/pr96176.c: New test.
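For reference, a minimal reproducer in the spirit of the PR (an illustrative
sketch only; the actual contents of the new gcc.target/i386/pr96176.c test may
differ): a compare-and-swap on an unsigned char, compiled at -O2 on x86_64.
Before the fix, the promotion of the char operand to int left a movzbl insn in
front of the lock cmpxchg; with the expand_expr_force_mode change, the
promotion is undone by looking at the SSA_NAME's defining cast (via
get_gimple_for_ssa_name) and expanding the cast's rhs directly, as described
in the ChangeLog entry above.

/* Illustrative reproducer (assumption: not necessarily the committed test).
   Compile with -O2 on x86_64 and inspect the assembly: before the fix, a
   movzbl of the char argument appeared before the lock cmpxchg.  */

unsigned char v;

unsigned char
cas (unsigned char expected, unsigned char desired)
{
  __atomic_compare_exchange_n (&v, &expected, desired, 0,
                               __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
  return expected;
}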
Diffstat (limited to 'gcc/tree-ssa-loop-split.c')
0 files changed, 0 insertions, 0 deletions