author	Jakub Jelinek <jakub@redhat.com>	2020-02-13 10:04:11 +0100
committer	Jakub Jelinek <jakub@redhat.com>	2020-02-13 10:04:11 +0100
commit	8aba425f4ebc5e2c054776d3cdddf13f7c1918f8 (patch)
tree	b99f04913cac6d23c8d1bccea47a2a03a9d44c98 /gcc/fold-const.h
parent	8ea884b85e338d09b14e6a54043c53ae0c1b1fe9 (diff)
sccvn: Handle bitfields in vn_reference_lookup_3 [PR93582]
The following patch is a first step towards fixing PR93582.  vn_reference_lookup_3 currently punts on anything that isn't byte-aligned, so to look up a constant bitfield store one needs to use the exact same COMPONENT_REF, otherwise it isn't found.  This patch lifts that restriction when the bits to be loaded are covered by a single store of a constant (it keeps the restriction for the multiple-store case for now; that can be tweaked incrementally, but for bisection etc. it is worth doing one step at a time).

2020-02-13  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/93582
	* fold-const.h (shift_bytes_in_array_left,
	shift_bytes_in_array_right): Declare.
	* fold-const.c (shift_bytes_in_array_left,
	shift_bytes_in_array_right): New functions, moved from
	gimple-ssa-store-merging.c, no longer static.
	* gimple-ssa-store-merging.c (shift_bytes_in_array): Move to
	fold-const.c and rename to shift_bytes_in_array_left.
	(shift_bytes_in_array_right): Move to fold-const.c.
	(encode_tree_to_bitpos): Use shift_bytes_in_array_left
	instead of shift_bytes_in_array.
	(verify_shift_bytes_in_array): Rename to ...
	(verify_shift_bytes_in_array_left): ... this.  Use
	shift_bytes_in_array_left instead of shift_bytes_in_array.
	(store_merging_c_tests): Call verify_shift_bytes_in_array_left
	instead of verify_shift_bytes_in_array.
	* tree-ssa-sccvn.c (vn_reference_lookup_3): For
	native_encode_expr / native_interpret_expr where the store
	covers all needed bits, punt on PDP-endian, otherwise allow
	all involved offsets and sizes not to be byte-aligned.

	* gcc.dg/tree-ssa/pr93582-1.c: New test.
	* gcc.dg/tree-ssa/pr93582-2.c: New test.
	* gcc.dg/tree-ssa/pr93582-3.c: New test.
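The pattern this change lets SCCVN optimize can be sketched as follows. This is a hypothetical illustration in the spirit of the commit message, not the actual pr93582 testcase: a constant store to a bitfield, followed by a load of the same bits through a different reference (here via a struct copy), where neither access is byte-aligned.

```c
/* Illustrative sketch (assumed example, not gcc.dg/tree-ssa/pr93582-1.c):
   the bits loaded at the end are entirely covered by the single
   constant store to p->b, so after the patch the load can be value
   numbered to the constant even though the COMPONENT_REFs differ.  */
struct S { unsigned int a : 5, b : 5, c : 6; };

unsigned int
read_b (struct S *p)
{
  p->b = 21;            /* constant store to bits 5..9, not byte-aligned */
  struct S copy = *p;   /* the load below uses a different COMPONENT_REF */
  return copy.b;        /* coverable by the single store above */
}
```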
Diffstat (limited to 'gcc/fold-const.h')
-rw-r--r--	gcc/fold-const.h	4 ++++
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/gcc/fold-const.h b/gcc/fold-const.h
index 7ac792f..0f788a4 100644
--- a/gcc/fold-const.h
+++ b/gcc/fold-const.h
@@ -30,6 +30,10 @@ extern int native_encode_initializer (tree, unsigned char *, int,
int off = -1);
extern tree native_interpret_expr (tree, const unsigned char *, int);
extern bool can_native_interpret_type_p (tree);
+extern void shift_bytes_in_array_left (unsigned char *, unsigned int,
+ unsigned int);
+extern void shift_bytes_in_array_right (unsigned char *, unsigned int,
+ unsigned int);
/* Fold constants as much as possible in an expression.
Returns the simplified expression.
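The two helpers declared in the hunk above were moved from gimple-ssa-store-merging.c. A hedged sketch of the left-shift variant's semantics, assuming (as in the store-merging code it came from) that the shift amount is fewer than 8 bits and high bits of each byte carry into the following byte; this is an illustrative reimplementation, not the GCC source:

```c
/* Sketch of the assumed semantics of shift_bytes_in_array_left:
   shift the SZ-byte buffer PTR left by AMNT bits (AMNT < 8),
   carrying the top AMNT bits of each byte into the next byte.  */
static void
sketch_shift_bytes_left (unsigned char *ptr, unsigned int sz,
                         unsigned int amnt)
{
  if (amnt == 0)
    return;
  unsigned char carry_over = 0;
  unsigned char carry_mask = (unsigned char) (~0U << (8 - amnt));
  for (unsigned int i = 0; i < sz; i++)
    {
      unsigned char prev_carry = carry_over;
      /* Save the bits that will be shifted out of this byte ...  */
      carry_over = (unsigned char) ((ptr[i] & carry_mask) >> (8 - amnt));
      /* ... shift, then fold in the carry from the previous byte.  */
      ptr[i] = (unsigned char) ((ptr[i] << amnt) | prev_carry);
    }
}
```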