author | Richard Biener <rguenther@suse.de> | 2025-08-06 12:31:13 +0200
committer | Richard Biener <rguenth@gcc.gnu.org> | 2025-08-07 13:57:18 +0200
commit | 53f491ccd1e59fad77fb2cb30d1a58b9e5e5f63c (patch)
tree | 8ea90f0cd74585f7a0617f0fa6393e478d6603af /gcc/testsuite/gcc.dg
parent | eee51f9a4b6e584230f75e4616438bb5ad5935a9 (diff)
tree-optimization/121405 - missed VN with aggregate copy
The following handles value-numbering of a BIT_FIELD_REF of a
register that is defined by a load by looking up a subset load,
similarly to how we handle bit-and masked loads.  This allows the
testcase to be simplified across two FRE passes, the first of which
creates the BIT_FIELD_REF.
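At the source level the pattern the lookup targets is roughly the
sketch below (hypothetical function and variable names; the comments
describe the intended GIMPLE-level effect informally, not actual
compiler output):

/* Rough illustration of the pattern the new lookup targets; all names
   here are made up for illustration.  */
struct pair
{
  unsigned char raw[2];
};

int
load_then_extract (struct pair *p)
{
  /* Assume earlier FRE keeps the whole of *p in a single SSA register,
     say a_1, defined by an aggregate load.  */
  struct pair a = *p;

  /* Reading a.raw[0] out of that register then appears as something
     like BIT_FIELD_REF <a_1, 8, 0>.  The patched visit_nary_op looks
     up an 8-bit load from *p at bit offset 0 instead, similarly to the
     existing handling of bit-and masked loads, and so can give the
     extract the value number of a direct load of p->raw[0].  */
  return a.raw[0];
}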
PR tree-optimization/121405
* tree-ssa-sccvn.cc (visit_nary_op): Handle BIT_FIELD_REF
with reference def by looking up a combination of both.
* gcc.dg/tree-ssa/ssa-fre-107.c: New testcase.
* gcc.target/i386/pr90579.c: Adjust.
Diffstat (limited to 'gcc/testsuite/gcc.dg')
-rw-r--r-- | gcc/testsuite/gcc.dg/tree-ssa/ssa-fre-107.c | 29
1 file changed, 29 insertions, 0 deletions
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/ssa-fre-107.c b/gcc/testsuite/gcc.dg/tree-ssa/ssa-fre-107.c
new file mode 100644
index 0000000..f80baf3
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/ssa-fre-107.c
@@ -0,0 +1,29 @@
+/* { dg-do compile } */
+/* { dg-options "-O -fdump-tree-optimized" } */
+
+struct vec_char_16
+{
+  unsigned char raw[2];
+};
+
+static inline struct vec_char_16
+Dup128VecFromValues(unsigned char t0, unsigned char t1)
+{
+  struct vec_char_16 result;
+  result.raw[0] = t0;
+  result.raw[1] = t1;
+  return result;
+}
+
+int f(unsigned char t0, unsigned char t1)
+{
+  struct vec_char_16 a = Dup128VecFromValues(t0, t1);
+  struct vec_char_16 b;
+  __builtin_memcpy(&b, &a, sizeof(a));
+  return b.raw[0] + b.raw[1];
+}
+
+/* Ideally we'd optimize this at FRE1 time but we only replace
+   the loads from b.raw[] with BIT_FIELD_REFs which get optimized
+   only later in the next FRE.  */
+/* { dg-final { scan-tree-dump-not "MEM" "optimized" } } */
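If the transformation works as described, the scan-tree-dump-not "MEM"
check amounts to requiring that no memory access survives in f(); a
hand-written equivalent of the expected optimized code (a sketch, not
compiler output) is:

/* Expected end result for f() from the testcase above once both FRE
   passes have run: the aggregate copy and the loads from b.raw[] are
   gone and only the scalar arithmetic remains.  */
int
f_expected (unsigned char t0, unsigned char t1)
{
  return t0 + t1;
}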