/* Operations with very long integers. -*- C++ -*-
Copyright (C) 2012-2024 Free Software Foundation, Inc.
This file is part of GCC.
GCC is free software; you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the
Free Software Foundation; either version 3, or (at your option) any
later version.
GCC is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
for more details.
You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3.  If not see
<http://www.gnu.org/licenses/>.  */
#ifndef WIDE_INT_H
#define WIDE_INT_H
/* wide-int.[cc|h] implements a class that efficiently performs
mathematical operations on finite precision integers. wide_ints
are designed to be transient - they are not for long term storage
of values. There is tight integration between wide_ints and the
other longer storage GCC representations (rtl and tree).
The actual precision of a wide_int depends on the flavor. There
are three predefined flavors:
1) wide_int (the default). This flavor does the math in the
precision of its input arguments. It is assumed (and checked)
that the precisions of the operands and results are consistent.
This is the most efficient flavor. It is not possible to examine
bits above the precision that has been specified. Because of
this, the default flavor has semantics that are simple to
understand and in general model the underlying hardware that the
compiler is targeted for.
This flavor must be used at the RTL level of gcc because there
is, in general, not enough information in the RTL representation
to extend a value beyond the precision specified in the mode.
This flavor should also be used at the TREE and GIMPLE levels of
the compiler except for the circumstances described in the
descriptions of the other two flavors.
The default wide_int representation does not contain any
information inherent about signedness of the represented value,
so it can be used to represent both signed and unsigned numbers.
For operations where the results depend on signedness (full width
multiply, division, shifts, comparisons, and operations that need
overflow detected), the signedness must be specified separately.
For precisions up to WIDE_INT_MAX_INL_PRECISION, it uses an inline
buffer in the type; for larger precisions up to WIDEST_INT_MAX_PRECISION
it uses a pointer to a heap-allocated buffer.
2) offset_int. This is a fixed-precision integer that can hold
any address offset, measured in either bits or bytes, with at
least one extra sign bit. At the moment the maximum address
size GCC supports is 64 bits. With 8-bit bytes and an extra
sign bit, offset_int therefore needs to have at least 68 bits
of precision. We round this up to 128 bits for efficiency.
Values of type T are converted to this precision by sign- or
zero-extending them based on the signedness of T.
The extra sign bit means that offset_int is effectively a signed
128-bit integer, i.e. it behaves like int128_t.
Since the values are logically signed, there is no need to
distinguish between signed and unsigned operations. Sign-sensitive
comparison operators <, <=, > and >= are therefore supported.
Shift operators << and >> are also supported, with >> being
an _arithmetic_ right shift.
[ Note that, even though offset_int is effectively int128_t,
it can still be useful to use unsigned comparisons like
wi::leu_p (a, b) as a more efficient short-hand for
"a >= 0 && a <= b". ]
3) widest_int. This representation is an approximation of
infinite precision math. However, it is not really infinite
precision math as in the GMP library. It is really finite
precision math where the precision is WIDEST_INT_MAX_PRECISION.
Like offset_int, widest_int is wider than all the values that
it needs to represent, so the integers are logically signed.
Sign-sensitive comparison operators <, <=, > and >= are supported,
as are << and >>.
There are several places in GCC where this should/must be used:
* Code that does induction variable optimizations. This code
works with induction variables of many different types at the
same time. Because of this, it ends up doing many different
calculations where the operands are not compatible types. The
widest_int makes this easy, because it provides a field where
nothing is lost when converting from any variable.
* There are a small number of passes that currently use the
widest_int that should use the default. These should be
changed.
There are surprising features of offset_int and widest_int
that the users should be careful about:
1) Shifts and rotations are just weird.  You have to specify a
precision in which the shift or rotate is to happen.  The bits
above this precision are zeroed.  While this is what you
want, it is clearly non-obvious.
2) Larger precision math sometimes does not produce the same
answer as would be expected for doing the math at the proper
precision. In particular, a multiply followed by a divide will
produce a different answer if the first product is larger than
what can be represented in the input precision.
The offset_int and the widest_int flavors are more expensive
than the default wide_int, so in addition to the caveats with these
two, the default is the preferred representation.
All three flavors of wide_int are represented as a vector of
HOST_WIDE_INTs. The default and widest_int vectors contain enough elements
to hold a value of MAX_BITSIZE_MODE_ANY_INT bits. offset_int contains only
enough elements to hold ADDR_MAX_PRECISION bits. The values are stored
in the vector with the least significant HOST_BITS_PER_WIDE_INT bits
in element 0.
The default wide_int contains three fields: the vector (VAL),
the precision and a length (LEN). The length is the number of HWIs
needed to represent the value. widest_int and offset_int have a
constant precision that cannot be changed, so they only store the
VAL and LEN fields.
Since most integers used in a compiler are small values, it is
generally profitable to use a representation of the value that is
as small as possible. LEN is used to indicate the number of
elements of the vector that are in use. The numbers are stored as
sign extended numbers as a means of compression. Leading
HOST_WIDE_INTs that contain strings of either -1 or 0 are removed
as long as they can be reconstructed from the top bit that is being
represented.
The precision and length of a wide_int are always greater than 0.
Any bits in a wide_int above the precision are sign-extended from the
most significant bit. For example, a 4-bit value 0x8 is represented as
VAL = { 0xf...fff8 }. However, as an optimization, we allow other integer
constants to be represented with undefined bits above the precision.
This allows INTEGER_CSTs to be pre-extended according to TYPE_SIGN,
so that the INTEGER_CST representation can be used both in TYPE_PRECISION
and in wider precisions.
There are constructors to create the various forms of wide_int from
trees, rtl and constants. For trees the options are:
tree t = ...;
wi::to_wide (t) // Treat T as a wide_int
wi::to_offset (t) // Treat T as an offset_int
wi::to_widest (t) // Treat T as a widest_int
All three are light-weight accessors that should have no overhead
in release builds. If it is useful for readability reasons to
store the result in a temporary variable, the preferred method is:
wi::tree_to_wide_ref twide = wi::to_wide (t);
wi::tree_to_offset_ref toffset = wi::to_offset (t);
wi::tree_to_widest_ref twidest = wi::to_widest (t);
To make an rtx into a wide_int, you have to pair it with a mode.
The canonical way to do this is with rtx_mode_t as in:
rtx r = ...
wide_int x = rtx_mode_t (r, mode);
Similarly, a wide_int can only be constructed from a host value if
the target precision is given explicitly, such as in:
wide_int x = wi::shwi (c, prec); // sign-extend C if necessary
wide_int y = wi::uhwi (c, prec); // zero-extend C if necessary
However, offset_int and widest_int have an inherent precision and so
can be initialized directly from a host value:
offset_int x = (int) c; // sign-extend C
widest_int x = (unsigned int) c; // zero-extend C
It is also possible to do arithmetic directly on rtx_mode_ts and
constants. For example:
wi::add (r1, r2); // add equal-sized rtx_mode_ts r1 and r2
wi::add (r1, 1); // add 1 to rtx_mode_t r1
wi::lshift (1, 100); // 1 << 100 as a widest_int
Many binary operations place restrictions on the combinations of inputs,
using the following rules:
- {rtx, wide_int} op {rtx, wide_int} -> wide_int
The inputs must be the same precision. The result is a wide_int
of the same precision
- {rtx, wide_int} op (un)signed HOST_WIDE_INT -> wide_int
(un)signed HOST_WIDE_INT op {rtx, wide_int} -> wide_int
The HOST_WIDE_INT is extended or truncated to the precision of
the other input. The result is a wide_int of the same precision
as that input.
- (un)signed HOST_WIDE_INT op (un)signed HOST_WIDE_INT -> widest_int
The inputs are extended to widest_int precision and produce a
widest_int result.
- offset_int op offset_int -> offset_int
offset_int op (un)signed HOST_WIDE_INT -> offset_int
(un)signed HOST_WIDE_INT op offset_int -> offset_int
- widest_int op widest_int -> widest_int
widest_int op (un)signed HOST_WIDE_INT -> widest_int
(un)signed HOST_WIDE_INT op widest_int -> widest_int
Other combinations like:
- widest_int op offset_int and
- wide_int op offset_int
are not allowed. The inputs should instead be extended or truncated
so that they match.
The inputs to comparison functions like wi::eq_p and wi::lts_p
follow the same compatibility rules, although their return types
are different. Unary functions on X produce the same result as
a binary operation X + X. Shift functions X op Y also produce
the same result as X + X; the precision of the shift amount Y
can be arbitrarily different from X. */
/* The MAX_BITSIZE_MODE_ANY_INT is automatically generated by a very
early examination of the target's mode file. The WIDE_INT_MAX_INL_ELTS
can accommodate at least 1 more bit so that unsigned numbers of that
mode can be represented as a signed value. Note that it is still
possible to create fixed_wide_ints that have precisions greater than
MAX_BITSIZE_MODE_ANY_INT. This can be useful when representing a
double-width multiplication result, for example. */
#define WIDE_INT_MAX_INL_ELTS \
((MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT) \
/ HOST_BITS_PER_WIDE_INT)
#define WIDE_INT_MAX_INL_PRECISION \
(WIDE_INT_MAX_INL_ELTS * HOST_BITS_PER_WIDE_INT)
/* Precision of wide_int and largest _BitInt precision + 1 we can
support. */
#define WIDE_INT_MAX_ELTS 1024
#define WIDE_INT_MAX_PRECISION (WIDE_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT)
/* Precision of widest_int. */
#define WIDEST_INT_MAX_ELTS 2048
#define WIDEST_INT_MAX_PRECISION (WIDEST_INT_MAX_ELTS * HOST_BITS_PER_WIDE_INT)
STATIC_ASSERT (WIDE_INT_MAX_INL_ELTS < WIDE_INT_MAX_ELTS);
/* This is the max size of any pointer on any machine. It does not
seem to be as easy to sniff this out of the machine description as
it is for MAX_BITSIZE_MODE_ANY_INT since targets may support
multiple address sizes and may have different address sizes for
different address spaces. However, currently the largest pointer
on any platform is 64 bits. When that changes, then it is likely
that a target hook should be defined so that targets can make this
value larger for those targets. */
#define ADDR_MAX_BITSIZE 64
/* This is the internal precision used when doing any address
arithmetic. The '4' is really 3 + 1. Three of the bits are for
the number of extra bits needed to do bit addresses and the other bit
is to allow everything to be signed without losing any precision.
Then everything is rounded up to the next HWI for efficiency. */
#define ADDR_MAX_PRECISION \
((ADDR_MAX_BITSIZE + 4 + HOST_BITS_PER_WIDE_INT - 1) \
& ~(HOST_BITS_PER_WIDE_INT - 1))
/* The number of HWIs needed to store an offset_int. */
#define OFFSET_INT_ELTS (ADDR_MAX_PRECISION / HOST_BITS_PER_WIDE_INT)
/* The max number of HWIs needed to store a wide_int of PRECISION. */
#define WIDE_INT_MAX_HWIS(PRECISION) \
((PRECISION + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT)
/* The type of result produced by a binary operation on types T1 and T2.
Defined purely for brevity. */
#define WI_BINARY_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::result_type
/* Likewise for binary operators, which excludes the case in which neither
T1 nor T2 is a wide-int-based type. */
#define WI_BINARY_OPERATOR_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::operator_result
/* The type of result produced by T1 << T2. Leads to substitution failure
if the operation isn't supported. Defined purely for brevity. */
#define WI_SIGNED_SHIFT_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::signed_shift_result_type
/* The type of result produced by a sign-agnostic binary predicate on
types T1 and T2. This is bool if wide-int operations make sense for
T1 and T2 and leads to substitution failure otherwise. */
#define WI_BINARY_PREDICATE_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::predicate_result
/* The type of result produced by a signed binary predicate on types T1 and T2.
This is bool if signed comparisons make sense for T1 and T2 and leads to
substitution failure otherwise. */
#define WI_SIGNED_BINARY_PREDICATE_RESULT(T1, T2) \
  typename wi::binary_traits <T1, T2>::signed_predicate_result
/* The type of result produced by a unary operation on type T. */
#define WI_UNARY_RESULT(T) \
  typename wi::binary_traits <T, T>::result_type
/* Define a variable RESULT to hold the result of a binary operation on
X and Y, which have types T1 and T2 respectively. Define VAL to
point to the blocks of RESULT. Once the user of the macro has
filled in VAL, it should call RESULT.set_len to set the number
of initialized blocks. */
#define WI_BINARY_RESULT_VAR(RESULT, VAL, T1, X, T2, Y) \
  WI_BINARY_RESULT (T1, T2) RESULT = \
    wi::int_traits <WI_BINARY_RESULT (T1, T2)>::get_binary_result (X, Y); \
  HOST_WIDE_INT *VAL = RESULT.write_val (0)
/* Similar for the result of a unary operation on X, which has type T. */
#define WI_UNARY_RESULT_VAR(RESULT, VAL, T, X) \
  WI_UNARY_RESULT (T) RESULT = \
    wi::int_traits <WI_UNARY_RESULT (T)>::get_binary_result (X, X); \
  HOST_WIDE_INT *VAL = RESULT.write_val (0)
template <typename T> class generic_wide_int;
template <int N> class fixed_wide_int_storage;
class wide_int_storage;
template <int N> class widest_int_storage;
/* An N-bit integer. Until we can use typedef templates, use this instead. */
#define FIXED_WIDE_INT(N) \
  generic_wide_int < fixed_wide_int_storage <N> >
typedef generic_wide_int <wide_int_storage> wide_int;
typedef FIXED_WIDE_INT (ADDR_MAX_PRECISION) offset_int;
typedef generic_wide_int <widest_int_storage <WIDE_INT_MAX_INL_PRECISION> > widest_int;
typedef generic_wide_int <widest_int_storage <WIDE_INT_MAX_INL_PRECISION * 2> > widest2_int;
/* wi::storage_ref can be a reference to a primitive type,
so this is the conservatively-correct setting. */
template <bool SE, bool HDP = true>
class wide_int_ref_storage;
typedef generic_wide_int <wide_int_ref_storage <false> > wide_int_ref;
/* This can be used instead of wide_int_ref if the referenced value is
known to have type T. It carries across properties of T's representation,
such as whether excess upper bits in a HWI are defined, and can therefore
help avoid redundant work.
The macro could be replaced with a template typedef, once we're able
to use those. */
#define WIDE_INT_REF_FOR(T) \
  generic_wide_int <wide_int_ref_storage <wi::int_traits <T>::is_sign_extended, \
					  wi::int_traits <T>::host_dependent_precision> >
namespace wi
{
/* Operations that calculate overflow do so even for
TYPE_OVERFLOW_WRAPS types. For example, adding 1 to +MAX_INT in
an unsigned int is 0 and does not overflow in C/C++, but wi::add
will set the overflow argument in case it's needed for further
analysis.
For operations that require overflow, these are the different
types of overflow. */
enum overflow_type {
OVF_NONE = 0,
OVF_UNDERFLOW = -1,
OVF_OVERFLOW = 1,
/* There was an overflow, but we are unsure whether it was an
overflow or an underflow. */
OVF_UNKNOWN = 2
};
/* Classifies an integer based on its precision. */
enum precision_type {
/* The integer has both a precision and defined signedness. This allows
the integer to be converted to any width, since we know whether to fill
any extra bits with zeros or signs. */
FLEXIBLE_PRECISION,
/* The integer has a variable precision but no defined signedness. */
VAR_PRECISION,
/* The integer has a constant precision (known at GCC compile time),
is signed and all elements are in inline buffer. */
INL_CONST_PRECISION,
/* Like INL_CONST_PRECISION, but elements can be heap allocated for
larger lengths. */
CONST_PRECISION
};
/* This class, which has no default implementation, is expected to
provide the following members:
static const enum precision_type precision_type;
Classifies the type of T.
static const unsigned int precision;
Only defined if precision_type == INL_CONST_PRECISION or
precision_type == CONST_PRECISION. Specifies the
precision of all integers of type T.
static const bool host_dependent_precision;
True if the precision of T depends (or can depend) on the host.
static unsigned int get_precision (const T &x)
Return the number of bits in X.
static wi::storage_ref *decompose (HOST_WIDE_INT *scratch,
unsigned int precision, const T &x)
Decompose X as a PRECISION-bit integer, returning the associated
wi::storage_ref. SCRATCH is available as scratch space if needed.
The routine should assert that PRECISION is acceptable. */
template <typename T> struct int_traits;
/* This class provides a single type, result_type, which specifies the
type of integer produced by a binary operation whose inputs have
types T1 and T2. The definition should be symmetric. */
template <typename T1, typename T2,
	  enum precision_type P1 = int_traits <T1>::precision_type,
	  enum precision_type P2 = int_traits <T2>::precision_type>
struct binary_traits;
/* Specify the result type for each supported combination of binary
inputs. Note that INL_CONST_PRECISION, CONST_PRECISION and
VAR_PRECISION cannot be mixed, in order to give stronger type
checking. When both inputs are INL_CONST_PRECISION or both are
CONST_PRECISION, they must have the same precision. */
template <typename T1, typename T2>
struct binary_traits <T1, T2, FLEXIBLE_PRECISION, FLEXIBLE_PRECISION>
{
  typedef widest_int result_type;
  /* Don't define operators for this combination.  */
};
template <typename T1, typename T2>
struct binary_traits <T1, T2, FLEXIBLE_PRECISION, VAR_PRECISION>
{
  typedef wide_int result_type;
  typedef result_type operator_result;
  typedef bool predicate_result;
};
template <typename T1, typename T2>
struct binary_traits <T1, T2, FLEXIBLE_PRECISION, INL_CONST_PRECISION>
{
  /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
     so as not to confuse gengtype.  */
  typedef generic_wide_int < fixed_wide_int_storage
			     <int_traits <T2>::precision> > result_type;
  typedef result_type operator_result;
  typedef bool predicate_result;
  typedef result_type signed_shift_result_type;
  typedef bool signed_predicate_result;
};
template <typename T1, typename T2>
struct binary_traits <T1, T2, FLEXIBLE_PRECISION, CONST_PRECISION>
{
  typedef generic_wide_int < widest_int_storage
			     <int_traits <T2>::precision> > result_type;
  typedef result_type operator_result;
  typedef bool predicate_result;
  typedef result_type signed_shift_result_type;
  typedef bool signed_predicate_result;
};
template <typename T1, typename T2>
struct binary_traits <T1, T2, VAR_PRECISION, FLEXIBLE_PRECISION>
{
  typedef wide_int result_type;
  typedef result_type operator_result;
  typedef bool predicate_result;
};
template <typename T1, typename T2>
struct binary_traits <T1, T2, INL_CONST_PRECISION, FLEXIBLE_PRECISION>
{
  /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
     so as not to confuse gengtype.  */
  typedef generic_wide_int < fixed_wide_int_storage
			     <int_traits <T1>::precision> > result_type;
  typedef result_type operator_result;
  typedef bool predicate_result;
  typedef result_type signed_shift_result_type;
  typedef bool signed_predicate_result;
};
template <typename T1, typename T2>
struct binary_traits <T1, T2, CONST_PRECISION, FLEXIBLE_PRECISION>
{
  typedef generic_wide_int < widest_int_storage
			     <int_traits <T1>::precision> > result_type;
  typedef result_type operator_result;
  typedef bool predicate_result;
  typedef result_type signed_shift_result_type;
  typedef bool signed_predicate_result;
};
template <typename T1, typename T2>
struct binary_traits <T1, T2, INL_CONST_PRECISION, INL_CONST_PRECISION>
{
  STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
  /* Spelled out explicitly (rather than through FIXED_WIDE_INT)
     so as not to confuse gengtype.  */
  typedef generic_wide_int < fixed_wide_int_storage
			     <int_traits <T1>::precision> > result_type;
  typedef result_type operator_result;
  typedef bool predicate_result;
  typedef result_type signed_shift_result_type;
  typedef bool signed_predicate_result;
};
template <typename T1, typename T2>
struct binary_traits <T1, T2, CONST_PRECISION, CONST_PRECISION>
{
  STATIC_ASSERT (int_traits <T1>::precision == int_traits <T2>::precision);
  typedef generic_wide_int < widest_int_storage
			     <int_traits <T1>::precision> > result_type;
  typedef result_type operator_result;
  typedef bool predicate_result;
  typedef result_type signed_shift_result_type;
  typedef bool signed_predicate_result;
};
template <typename T1, typename T2>
struct binary_traits <T1, T2, VAR_PRECISION, VAR_PRECISION>
{
  typedef wide_int result_type;
  typedef result_type operator_result;
  typedef bool predicate_result;
};
}
/* Public functions for querying and operating on integers. */
namespace wi
{
  template <typename T>
  unsigned int get_precision (const T &);
  template <typename T1, typename T2>
  unsigned int get_binary_precision (const T1 &, const T2 &);
  template <typename T1, typename T2>
  void copy (T1 &, const T2 &);
#define UNARY_PREDICATE \
  template <typename T> bool
#define UNARY_FUNCTION \
  template <typename T> WI_UNARY_RESULT (T)
#define BINARY_PREDICATE \
  template <typename T1, typename T2> bool
#define BINARY_FUNCTION \
  template <typename T1, typename T2> WI_BINARY_RESULT (T1, T2)
#define SHIFT_FUNCTION \
  template <typename T1, typename T2> WI_UNARY_RESULT (T1)
UNARY_PREDICATE fits_shwi_p (const T &);
UNARY_PREDICATE fits_uhwi_p (const T &);
UNARY_PREDICATE neg_p (const T &, signop = SIGNED);
  template <typename T>
HOST_WIDE_INT sign_mask (const T &);
BINARY_PREDICATE eq_p (const T1 &, const T2 &);
BINARY_PREDICATE ne_p (const T1 &, const T2 &);
BINARY_PREDICATE lt_p (const T1 &, const T2 &, signop);
BINARY_PREDICATE lts_p (const T1 &, const T2 &);
BINARY_PREDICATE ltu_p (const T1 &, const T2 &);
BINARY_PREDICATE le_p (const T1 &, const T2 &, signop);
BINARY_PREDICATE les_p (const T1 &, const T2 &);
BINARY_PREDICATE leu_p (const T1 &, const T2 &);
BINARY_PREDICATE gt_p (const T1 &, const T2 &, signop);
BINARY_PREDICATE gts_p (const T1 &, const T2 &);
BINARY_PREDICATE gtu_p (const T1 &, const T2 &);
BINARY_PREDICATE ge_p (const T1 &, const T2 &, signop);
BINARY_PREDICATE ges_p (const T1 &, const T2 &);
BINARY_PREDICATE geu_p (const T1 &, const T2 &);
  template <typename T1, typename T2>
  int cmp (const T1 &, const T2 &, signop);
  template <typename T1, typename T2>
  int cmps (const T1 &, const T2 &);
  template <typename T1, typename T2>
  int cmpu (const T1 &, const T2 &);
UNARY_FUNCTION bit_not (const T &);
UNARY_FUNCTION neg (const T &);
UNARY_FUNCTION neg (const T &, overflow_type *);
UNARY_FUNCTION abs (const T &);
UNARY_FUNCTION ext (const T &, unsigned int, signop);
UNARY_FUNCTION sext (const T &, unsigned int);
UNARY_FUNCTION zext (const T &, unsigned int);
UNARY_FUNCTION set_bit (const T &, unsigned int);
UNARY_FUNCTION bswap (const T &);
UNARY_FUNCTION bitreverse (const T &);
BINARY_FUNCTION min (const T1 &, const T2 &, signop);
BINARY_FUNCTION smin (const T1 &, const T2 &);
BINARY_FUNCTION umin (const T1 &, const T2 &);
BINARY_FUNCTION max (const T1 &, const T2 &, signop);
BINARY_FUNCTION smax (const T1 &, const T2 &);
BINARY_FUNCTION umax (const T1 &, const T2 &);
BINARY_FUNCTION bit_and (const T1 &, const T2 &);
BINARY_FUNCTION bit_and_not (const T1 &, const T2 &);
BINARY_FUNCTION bit_or (const T1 &, const T2 &);
BINARY_FUNCTION bit_or_not (const T1 &, const T2 &);
BINARY_FUNCTION bit_xor (const T1 &, const T2 &);
BINARY_FUNCTION add (const T1 &, const T2 &);
BINARY_FUNCTION add (const T1 &, const T2 &, signop, overflow_type *);
BINARY_FUNCTION sub (const T1 &, const T2 &);
BINARY_FUNCTION sub (const T1 &, const T2 &, signop, overflow_type *);
BINARY_FUNCTION mul (const T1 &, const T2 &);
BINARY_FUNCTION mul (const T1 &, const T2 &, signop, overflow_type *);
BINARY_FUNCTION smul (const T1 &, const T2 &, overflow_type *);
BINARY_FUNCTION umul (const T1 &, const T2 &, overflow_type *);
BINARY_FUNCTION mul_high (const T1 &, const T2 &, signop);
BINARY_FUNCTION div_trunc (const T1 &, const T2 &, signop,
overflow_type * = 0);
BINARY_FUNCTION sdiv_trunc (const T1 &, const T2 &);
BINARY_FUNCTION udiv_trunc (const T1 &, const T2 &);
BINARY_FUNCTION div_floor (const T1 &, const T2 &, signop,
overflow_type * = 0);
BINARY_FUNCTION udiv_floor (const T1 &, const T2 &);
BINARY_FUNCTION sdiv_floor (const T1 &, const T2 &);
BINARY_FUNCTION div_ceil (const T1 &, const T2 &, signop,
overflow_type * = 0);
BINARY_FUNCTION udiv_ceil (const T1 &, const T2 &);
BINARY_FUNCTION div_round (const T1 &, const T2 &, signop,
overflow_type * = 0);
BINARY_FUNCTION divmod_trunc (const T1 &, const T2 &, signop,
WI_BINARY_RESULT (T1, T2) *);
BINARY_FUNCTION gcd (const T1 &, const T2 &, signop = UNSIGNED);
BINARY_FUNCTION mod_trunc (const T1 &, const T2 &, signop,
overflow_type * = 0);
BINARY_FUNCTION smod_trunc (const T1 &, const T2 &);
BINARY_FUNCTION umod_trunc (const T1 &, const T2 &);
BINARY_FUNCTION mod_floor (const T1 &, const T2 &, signop,
overflow_type * = 0);
BINARY_FUNCTION umod_floor (const T1 &, const T2 &);
BINARY_FUNCTION mod_ceil (const T1 &, const T2 &, signop,
overflow_type * = 0);
BINARY_FUNCTION mod_round (const T1 &, const T2 &, signop,
overflow_type * = 0);
  template <typename T1, typename T2>
  bool multiple_of_p (const T1 &, const T2 &, signop);
  template <typename T1, typename T2>
  bool multiple_of_p (const T1 &, const T2 &, signop,
		      WI_BINARY_RESULT (T1, T2) *);
SHIFT_FUNCTION lshift (const T1 &, const T2 &);
SHIFT_FUNCTION lrshift (const T1 &, const T2 &);
SHIFT_FUNCTION arshift (const T1 &, const T2 &);
SHIFT_FUNCTION rshift (const T1 &, const T2 &, signop sgn);
SHIFT_FUNCTION lrotate (const T1 &, const T2 &, unsigned int = 0);
SHIFT_FUNCTION rrotate (const T1 &, const T2 &, unsigned int = 0);
#undef SHIFT_FUNCTION
#undef BINARY_PREDICATE
#undef BINARY_FUNCTION
#undef UNARY_PREDICATE
#undef UNARY_FUNCTION
bool only_sign_bit_p (const wide_int_ref &, unsigned int);
bool only_sign_bit_p (const wide_int_ref &);
int clz (const wide_int_ref &);
int clrsb (const wide_int_ref &);
int ctz (const wide_int_ref &);
int exact_log2 (const wide_int_ref &);
int floor_log2 (const wide_int_ref &);
int ffs (const wide_int_ref &);
int popcount (const wide_int_ref &);
int parity (const wide_int_ref &);
  template <typename T>
  unsigned HOST_WIDE_INT extract_uhwi (const T &, unsigned int, unsigned int);
  template <typename T>
  unsigned int min_precision (const T &, signop);
static inline void accumulate_overflow (overflow_type &, overflow_type);
}
namespace wi
{
/* Contains the components of a decomposed integer for easy, direct
access. */
class storage_ref
{
public:
storage_ref () {}
storage_ref (const HOST_WIDE_INT *, unsigned int, unsigned int);
const HOST_WIDE_INT *val;
unsigned int len;
unsigned int precision;
/* Provide enough trappings for this class to act as storage for
generic_wide_int. */
unsigned int get_len () const;
unsigned int get_precision () const;
const HOST_WIDE_INT *get_val () const;
};
}
inline::wi::storage_ref::storage_ref (const HOST_WIDE_INT *val_in,
unsigned int len_in,
unsigned int precision_in)
: val (val_in), len (len_in), precision (precision_in)
{
}
inline unsigned int
wi::storage_ref::get_len () const
{
return len;
}
inline unsigned int
wi::storage_ref::get_precision () const
{
return precision;
}
inline const HOST_WIDE_INT *
wi::storage_ref::get_val () const
{
return val;
}
/* This class defines an integer type using the storage provided by the
template argument. The storage class must provide the following
functions:
unsigned int get_precision () const
Return the number of bits in the integer.
HOST_WIDE_INT *get_val () const
Return a pointer to the array of blocks that encodes the integer.
unsigned int get_len () const
Return the number of blocks in get_val (). If this is smaller
than the number of blocks implied by get_precision (), the
remaining blocks are sign extensions of block get_len () - 1.
Although not required by generic_wide_int itself, writable storage
classes can also provide the following functions:
HOST_WIDE_INT *write_val (unsigned int)
Get a modifiable version of get_val (). The argument should be
upper estimation for LEN (ignored by all storages but
widest_int_storage).
unsigned int set_len (unsigned int len)
Set the value returned by get_len () to LEN. */
template <typename storage>
class GTY(()) generic_wide_int : public storage
{
public:
generic_wide_int ();
  template <typename T>
  generic_wide_int (const T &);
  template <typename T>
  generic_wide_int (const T &, unsigned int);
/* Conversions. */
HOST_WIDE_INT to_shwi (unsigned int) const;
HOST_WIDE_INT to_shwi () const;
unsigned HOST_WIDE_INT to_uhwi (unsigned int) const;
unsigned HOST_WIDE_INT to_uhwi () const;
HOST_WIDE_INT to_short_addr () const;
/* Public accessors for the interior of a wide int. */
HOST_WIDE_INT sign_mask () const;
HOST_WIDE_INT elt (unsigned int) const;
HOST_WIDE_INT sext_elt (unsigned int) const;
unsigned HOST_WIDE_INT ulow () const;
unsigned HOST_WIDE_INT uhigh () const;
HOST_WIDE_INT slow () const;
HOST_WIDE_INT shigh () const;
  template <typename T>
  generic_wide_int &operator = (const T &);
#define ASSIGNMENT_OPERATOR(OP, F) \
  template <typename T> \
  generic_wide_int &OP (const T &c) { return (*this = wi::F (*this, c)); }
/* Restrict these to cases where the shift operator is defined. */
#define SHIFT_ASSIGNMENT_OPERATOR(OP, OP2) \
  template <typename T> \
generic_wide_int &OP (const T &c) { return (*this = *this OP2 c); }
#define INCDEC_OPERATOR(OP, DELTA) \
generic_wide_int &OP () { *this += DELTA; return *this; }
ASSIGNMENT_OPERATOR (operator &=, bit_and)
ASSIGNMENT_OPERATOR (operator |=, bit_or)
ASSIGNMENT_OPERATOR (operator ^=, bit_xor)
ASSIGNMENT_OPERATOR (operator +=, add)
ASSIGNMENT_OPERATOR (operator -=, sub)
ASSIGNMENT_OPERATOR (operator *=, mul)
ASSIGNMENT_OPERATOR (operator <<=, lshift)
SHIFT_ASSIGNMENT_OPERATOR (operator >>=, >>)
INCDEC_OPERATOR (operator ++, 1)
INCDEC_OPERATOR (operator --, -1)
#undef SHIFT_ASSIGNMENT_OPERATOR
#undef ASSIGNMENT_OPERATOR
#undef INCDEC_OPERATOR
/* Debugging functions. */
void dump () const;
  static const bool is_sign_extended
    = wi::int_traits <generic_wide_int <storage> >::is_sign_extended;
  static const bool needs_write_val_arg
    = wi::int_traits <generic_wide_int <storage> >::needs_write_val_arg;
};
template <typename storage>
inline generic_wide_int <storage>::generic_wide_int () {}
template <typename storage>
template <typename T>
inline generic_wide_int <storage>::generic_wide_int (const T &x)
: storage (x)
{
}
template <typename storage>
template <typename T>
inline generic_wide_int <storage>::generic_wide_int (const T &x,
unsigned int precision)
: storage (x, precision)
{
}
/* Return THIS as a signed HOST_WIDE_INT, sign-extending from PRECISION.
If THIS does not fit in PRECISION, the information is lost. */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::to_shwi (unsigned int precision) const
{
if (precision < HOST_BITS_PER_WIDE_INT)
return sext_hwi (this->get_val ()[0], precision);
else
return this->get_val ()[0];
}
/* Return THIS as a signed HOST_WIDE_INT, in its natural precision. */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::to_shwi () const
{
if (is_sign_extended)
return this->get_val ()[0];
else
return to_shwi (this->get_precision ());
}
/* Return THIS as an unsigned HOST_WIDE_INT, zero-extending from
PRECISION. If THIS does not fit in PRECISION, the information
is lost. */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::to_uhwi (unsigned int precision) const
{
if (precision < HOST_BITS_PER_WIDE_INT)
return zext_hwi (this->get_val ()[0], precision);
else
return this->get_val ()[0];
}
/* Return THIS as an unsigned HOST_WIDE_INT, in its natural precision.  */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::to_uhwi () const
{
return to_uhwi (this->get_precision ());
}
/* TODO: The compiler is half converted from using HOST_WIDE_INT to
represent addresses to using offset_int to represent addresses.
We use to_short_addr at the interface from new code to old,
unconverted code. */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::to_short_addr () const
{
return this->get_val ()[0];
}
/* Return the implicit value of blocks above get_len (). */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::sign_mask () const
{
unsigned int len = this->get_len ();
gcc_assert (len > 0);
unsigned HOST_WIDE_INT high = this->get_val ()[len - 1];
if (!is_sign_extended)
{
unsigned int precision = this->get_precision ();
int excess = len * HOST_BITS_PER_WIDE_INT - precision;
if (excess > 0)
high <<= excess;
}
return (HOST_WIDE_INT) (high) < 0 ? -1 : 0;
}
/* Return the signed value of the least-significant explicitly-encoded
block. */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::slow () const
{
return this->get_val ()[0];
}
/* Return the signed value of the most-significant explicitly-encoded
block. */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::shigh () const
{
return this->get_val ()[this->get_len () - 1];
}
/* Return the unsigned value of the least-significant
explicitly-encoded block. */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::ulow () const
{
return this->get_val ()[0];
}
/* Return the unsigned value of the most-significant
explicitly-encoded block. */
template <typename storage>
inline unsigned HOST_WIDE_INT
generic_wide_int <storage>::uhigh () const
{
return this->get_val ()[this->get_len () - 1];
}
/* Return block I, which might be implicitly or explicitly encoded. */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::elt (unsigned int i) const
{
if (i >= this->get_len ())
return sign_mask ();
else
return this->get_val ()[i];
}
/* Like elt, but sign-extend beyond the upper bit, instead of returning
the raw encoding. */
template <typename storage>
inline HOST_WIDE_INT
generic_wide_int <storage>::sext_elt (unsigned int i) const
{
HOST_WIDE_INT elt_i = elt (i);
if (!is_sign_extended)
{
unsigned int precision = this->get_precision ();
unsigned int lsb = i * HOST_BITS_PER_WIDE_INT;
if (precision - lsb < HOST_BITS_PER_WIDE_INT)
elt_i = sext_hwi (elt_i, precision - lsb);
}
return elt_i;
}
template <typename storage>
template <typename T>
inline generic_wide_int <storage> &
generic_wide_int <storage>::operator = (const T &x)
{
storage::operator = (x);
return *this;
}
/* Dump the contents of the integer to stderr, for debugging. */
template <typename storage>
void
generic_wide_int <storage>::dump () const
{
unsigned int len = this->get_len ();
const HOST_WIDE_INT *val = this->get_val ();
unsigned int precision = this->get_precision ();
fprintf (stderr, "[");
if (len * HOST_BITS_PER_WIDE_INT < precision)
fprintf (stderr, "...,");
for (unsigned int i = 0; i < len - 1; ++i)
fprintf (stderr, HOST_WIDE_INT_PRINT_HEX ",", val[len - 1 - i]);
fprintf (stderr, HOST_WIDE_INT_PRINT_HEX "], precision = %d\n",
val[0], precision);
}
namespace wi
{
template <typename storage>
struct int_traits < generic_wide_int <storage> >
: public wi::int_traits <storage>
{
static unsigned int get_precision (const generic_wide_int <storage> &);
static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
const generic_wide_int <storage> &);
};
}
template <typename storage>
inline unsigned int
wi::int_traits < generic_wide_int <storage> >::
get_precision (const generic_wide_int <storage> &x)
{
return x.get_precision ();
}
template <typename storage>
inline wi::storage_ref
wi::int_traits < generic_wide_int <storage> >::
decompose (HOST_WIDE_INT *, unsigned int precision,
const generic_wide_int <storage> &x)
{
gcc_checking_assert (precision == x.get_precision ());
return wi::storage_ref (x.get_val (), x.get_len (), precision);
}
/* Provide the storage for a wide_int_ref. This acts like a read-only
wide_int, with the optimization that VAL is normally a pointer to
another integer's storage, so that no array copy is needed. */
template <bool SE, bool HDP = true>
class wide_int_ref_storage : public wi::storage_ref
{
private:
/* Scratch space that can be used when decomposing the original integer.
It must live as long as this object. */
HOST_WIDE_INT scratch[2];
public:
wide_int_ref_storage () {}
wide_int_ref_storage (const wi::storage_ref &);
template <typename T>
wide_int_ref_storage (const T &);
template <typename T>
wide_int_ref_storage (const T &, unsigned int);
};
/* Create a reference from an existing reference. */
template <bool SE, bool HDP>
inline wide_int_ref_storage <SE, HDP>::
wide_int_ref_storage (const wi::storage_ref &x)
: storage_ref (x)
{}
/* Create a reference to integer X in its natural precision. Note
that the natural precision is host-dependent for primitive
types. */
template <bool SE, bool HDP>
template <typename T>
inline wide_int_ref_storage <SE, HDP>::wide_int_ref_storage (const T &x)
: storage_ref (wi::int_traits <T>::decompose (scratch,
wi::get_precision (x), x))
{
}
/* Create a reference to integer X in precision PRECISION. */
template <bool SE, bool HDP>
template <typename T>
inline wide_int_ref_storage <SE, HDP>::
wide_int_ref_storage (const T &x, unsigned int precision)
: storage_ref (wi::int_traits <T>::decompose (scratch, precision, x))
{
}
namespace wi
{
template <bool SE, bool HDP>
struct int_traits <wide_int_ref_storage <SE, HDP> >
{
static const enum precision_type precision_type = VAR_PRECISION;
static const bool host_dependent_precision = HDP;
static const bool is_sign_extended = SE;
static const bool needs_write_val_arg = false;
};
}
namespace wi
{
unsigned int force_to_size (HOST_WIDE_INT *, const HOST_WIDE_INT *,
unsigned int, unsigned int, unsigned int,
signop sgn);
unsigned int from_array (HOST_WIDE_INT *, const HOST_WIDE_INT *,
unsigned int, unsigned int, bool = true);
}
/* The storage used by wide_int. */
class GTY(()) wide_int_storage
{
private:
union
{
HOST_WIDE_INT val[WIDE_INT_MAX_INL_ELTS];
HOST_WIDE_INT *valp;
} GTY((skip)) u;
unsigned int len;
unsigned int precision;
public:
wide_int_storage ();
template <typename T>
wide_int_storage (const T &);
wide_int_storage (const wide_int_storage &);
~wide_int_storage ();
/* The standard generic_wide_int storage methods. */
unsigned int get_precision () const;
const HOST_WIDE_INT *get_val () const;
unsigned int get_len () const;
HOST_WIDE_INT *write_val (unsigned int);
void set_len (unsigned int, bool = false);
wide_int_storage &operator = (const wide_int_storage &);
template <typename T>
wide_int_storage &operator = (const T &);
static wide_int from (const wide_int_ref &, unsigned int, signop);
static wide_int from_array (const HOST_WIDE_INT *, unsigned int,
unsigned int, bool = true);
static wide_int create (unsigned int);
};
namespace wi
{
template <>
struct int_traits <wide_int_storage>
{
static const enum precision_type precision_type = VAR_PRECISION;
/* Guaranteed by a static assert in the wide_int_storage constructor. */
static const bool host_dependent_precision = false;
static const bool is_sign_extended = true;
static const bool needs_write_val_arg = false;
template <typename T1, typename T2>
static wide_int get_binary_result (const T1 &, const T2 &);
template <typename T1, typename T2>
static unsigned int get_binary_precision (const T1 &, const T2 &);
};
}
inline wide_int_storage::wide_int_storage () : precision (0) {}
/* Initialize the storage from integer X, in its natural precision.
Note that we do not allow integers with host-dependent precision
to become wide_ints; wide_ints must always be logically independent
of the host. */
template <typename T>
inline wide_int_storage::wide_int_storage (const T &x)
{
STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::INL_CONST_PRECISION);
WIDE_INT_REF_FOR (T) xi (x);
precision = xi.precision;
if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
wi::copy (*this, xi);
}
inline wide_int_storage::wide_int_storage (const wide_int_storage &x)
{
memcpy (this, &x, sizeof (wide_int_storage));
if (UNLIKELY (x.precision > WIDE_INT_MAX_INL_PRECISION))
{
u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (precision, HOST_BITS_PER_WIDE_INT));
memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
}
}
inline wide_int_storage::~wide_int_storage ()
{
if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
XDELETEVEC (u.valp);
}
inline wide_int_storage&
wide_int_storage::operator = (const wide_int_storage &x)
{
if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
{
if (this == &x)
return *this;
XDELETEVEC (u.valp);
}
memcpy (this, &x, sizeof (wide_int_storage));
if (UNLIKELY (x.precision > WIDE_INT_MAX_INL_PRECISION))
{
u.valp = XNEWVEC (HOST_WIDE_INT, CEIL (x.precision, HOST_BITS_PER_WIDE_INT));
memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
}
return *this;
}
template <typename T>
inline wide_int_storage&
wide_int_storage::operator = (const T &x)
{
STATIC_ASSERT (!wi::int_traits<T>::host_dependent_precision);
STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::CONST_PRECISION);
STATIC_ASSERT (wi::int_traits<T>::precision_type != wi::INL_CONST_PRECISION);
WIDE_INT_REF_FOR (T) xi (x);
if (UNLIKELY (precision != xi.precision))
{
if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
XDELETEVEC (u.valp);
precision = xi.precision;
if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
u.valp = XNEWVEC (HOST_WIDE_INT,
CEIL (precision, HOST_BITS_PER_WIDE_INT));
}
wi::copy (*this, xi);
return *this;
}
inline unsigned int
wide_int_storage::get_precision () const
{
return precision;
}
inline const HOST_WIDE_INT *
wide_int_storage::get_val () const
{
return UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION) ? u.valp : u.val;
}
inline unsigned int
wide_int_storage::get_len () const
{
return len;
}
inline HOST_WIDE_INT *
wide_int_storage::write_val (unsigned int)
{
return UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION) ? u.valp : u.val;
}
inline void
wide_int_storage::set_len (unsigned int l, bool is_sign_extended)
{
len = l;
if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > precision)
{
HOST_WIDE_INT &v = write_val (len)[len - 1];
v = sext_hwi (v, precision % HOST_BITS_PER_WIDE_INT);
}
}
/* Treat X as having signedness SGN and convert it to a PRECISION-bit
number. */
inline wide_int
wide_int_storage::from (const wide_int_ref &x, unsigned int precision,
signop sgn)
{
wide_int result = wide_int::create (precision);
result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.len,
x.precision, precision, sgn));
return result;
}
/* Create a wide_int from the explicit block encoding given by VAL and
LEN. PRECISION is the precision of the integer. NEED_CANON_P is
true if the encoding may have redundant trailing blocks. */
inline wide_int
wide_int_storage::from_array (const HOST_WIDE_INT *val, unsigned int len,
unsigned int precision, bool need_canon_p)
{
wide_int result = wide_int::create (precision);
result.set_len (wi::from_array (result.write_val (len), val, len, precision,
need_canon_p));
return result;
}
/* Return an uninitialized wide_int with precision PRECISION. */
inline wide_int
wide_int_storage::create (unsigned int precision)
{
wide_int x;
x.precision = precision;
if (UNLIKELY (precision > WIDE_INT_MAX_INL_PRECISION))
x.u.valp = XNEWVEC (HOST_WIDE_INT,
CEIL (precision, HOST_BITS_PER_WIDE_INT));
return x;
}
template <typename T1, typename T2>
inline wide_int
wi::int_traits <wide_int_storage>::get_binary_result (const T1 &x, const T2 &y)
{
/* This shouldn't be used for two flexible-precision inputs. */
STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
|| wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
return wide_int::create (wi::get_precision (y));
else
return wide_int::create (wi::get_precision (x));
}
template <typename T1, typename T2>
inline unsigned int
wi::int_traits <wide_int_storage>::get_binary_precision (const T1 &x,
const T2 &y)
{
/* This shouldn't be used for two flexible-precision inputs. */
STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
|| wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
return wi::get_precision (y);
else
return wi::get_precision (x);
}
/* The storage used by FIXED_WIDE_INT (N). */
template <int N>
class GTY(()) fixed_wide_int_storage
{
private:
HOST_WIDE_INT val[WIDE_INT_MAX_HWIS (N)];
unsigned int len;
public:
fixed_wide_int_storage () = default;
template <typename T>
fixed_wide_int_storage (const T &);
/* The standard generic_wide_int storage methods. */
unsigned int get_precision () const;
const HOST_WIDE_INT *get_val () const;
unsigned int get_len () const;
HOST_WIDE_INT *write_val (unsigned int);
void set_len (unsigned int, bool = false);
static FIXED_WIDE_INT (N) from (const wide_int_ref &, signop);
static FIXED_WIDE_INT (N) from_array (const HOST_WIDE_INT *, unsigned int,
bool = true);
};
namespace wi
{
template <int N>
struct int_traits < fixed_wide_int_storage <N> >
{
static const enum precision_type precision_type = INL_CONST_PRECISION;
static const bool host_dependent_precision = false;
static const bool is_sign_extended = true;
static const bool needs_write_val_arg = false;
static const unsigned int precision = N;
template <typename T1, typename T2>
static FIXED_WIDE_INT (N) get_binary_result (const T1 &, const T2 &);
template <typename T1, typename T2>
static unsigned int get_binary_precision (const T1 &, const T2 &);
};
}
/* Initialize the storage from integer X, in precision N. */
template <int N>
template <typename T>
inline fixed_wide_int_storage <N>::fixed_wide_int_storage (const T &x)
{
/* Check for type compatibility. We don't want to initialize a
fixed-width integer from something like a wide_int. */
WI_BINARY_RESULT (T, FIXED_WIDE_INT (N)) *assertion ATTRIBUTE_UNUSED;
wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N));
}
template <int N>
inline unsigned int
fixed_wide_int_storage <N>::get_precision () const
{
return N;
}
template <int N>
inline const HOST_WIDE_INT *
fixed_wide_int_storage <N>::get_val () const
{
return val;
}
template <int N>
inline unsigned int
fixed_wide_int_storage <N>::get_len () const
{
return len;
}
template <int N>
inline HOST_WIDE_INT *
fixed_wide_int_storage <N>::write_val (unsigned int)
{
return val;
}
template <int N>
inline void
fixed_wide_int_storage <N>::set_len (unsigned int l, bool)
{
len = l;
/* There are no excess bits in val[len - 1]. */
STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
}
/* Treat X as having signedness SGN and convert it to an N-bit number. */
template <int N>
inline FIXED_WIDE_INT (N)
fixed_wide_int_storage <N>::from (const wide_int_ref &x, signop sgn)
{
FIXED_WIDE_INT (N) result;
result.set_len (wi::force_to_size (result.write_val (x.len), x.val, x.len,
x.precision, N, sgn));
return result;
}
/* Create a FIXED_WIDE_INT (N) from the explicit block encoding given by
VAL and LEN. NEED_CANON_P is true if the encoding may have redundant
trailing blocks. */
template <int N>
inline FIXED_WIDE_INT (N)
fixed_wide_int_storage <N>::from_array (const HOST_WIDE_INT *val,
unsigned int len,
bool need_canon_p)
{
FIXED_WIDE_INT (N) result;
result.set_len (wi::from_array (result.write_val (len), val, len,
N, need_canon_p));
return result;
}
template <int N>
template <typename T1, typename T2>
inline FIXED_WIDE_INT (N)
wi::int_traits < fixed_wide_int_storage <N> >::
get_binary_result (const T1 &, const T2 &)
{
return FIXED_WIDE_INT (N) ();
}
template <int N>
template <typename T1, typename T2>
inline unsigned int
wi::int_traits < fixed_wide_int_storage <N> >::
get_binary_precision (const T1 &, const T2 &)
{
return N;
}
#define WIDEST_INT(N) generic_wide_int < widest_int_storage <N> >
/* The storage used by widest_int. */
template <int N>
class GTY(()) widest_int_storage
{
private:
union
{
HOST_WIDE_INT val[WIDE_INT_MAX_INL_ELTS];
HOST_WIDE_INT *valp;
} GTY((skip)) u;
unsigned int len;
public:
widest_int_storage ();
widest_int_storage (const widest_int_storage &);
template <typename T>
widest_int_storage (const T &);
~widest_int_storage ();
widest_int_storage &operator = (const widest_int_storage &);
template <typename T>
inline widest_int_storage& operator = (const T &);
/* The standard generic_wide_int storage methods. */
unsigned int get_precision () const;
const HOST_WIDE_INT *get_val () const;
unsigned int get_len () const;
HOST_WIDE_INT *write_val (unsigned int);
void set_len (unsigned int, bool = false);
static WIDEST_INT (N) from (const wide_int_ref &, signop);
static WIDEST_INT (N) from_array (const HOST_WIDE_INT *, unsigned int,
bool = true);
};
namespace wi
{
template <int N>
struct int_traits < widest_int_storage <N> >
{
static const enum precision_type precision_type = CONST_PRECISION;
static const bool host_dependent_precision = false;
static const bool is_sign_extended = true;
static const bool needs_write_val_arg = true;
static const unsigned int precision = N;
template <typename T1, typename T2>
static WIDEST_INT (N) get_binary_result (const T1 &, const T2 &);
template <typename T1, typename T2>
static unsigned int get_binary_precision (const T1 &, const T2 &);
};
}
template <int N>
inline widest_int_storage <N>::widest_int_storage () : len (0) {}
/* Initialize the storage from integer X, in precision N. */
template <int N>
template <typename T>
inline widest_int_storage <N>::widest_int_storage (const T &x) : len (0)
{
/* Check for type compatibility. We don't want to initialize a
widest integer from something like a wide_int. */
WI_BINARY_RESULT (T, WIDEST_INT (N)) *assertion ATTRIBUTE_UNUSED;
wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N));
}
template <int N>
inline
widest_int_storage <N>::widest_int_storage (const widest_int_storage &x)
{
memcpy (this, &x, sizeof (widest_int_storage));
if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
{
u.valp = XNEWVEC (HOST_WIDE_INT, len);
memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
}
}
template <int N>
inline widest_int_storage <N>::~widest_int_storage ()
{
if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
XDELETEVEC (u.valp);
}
template <int N>
inline widest_int_storage <N> &
widest_int_storage <N>::operator = (const widest_int_storage &x)
{
if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
{
if (this == &x)
return *this;
XDELETEVEC (u.valp);
}
memcpy (this, &x, sizeof (widest_int_storage));
if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
{
u.valp = XNEWVEC (HOST_WIDE_INT, len);
memcpy (u.valp, x.u.valp, len * sizeof (HOST_WIDE_INT));
}
return *this;
}
template <int N>
template <typename T>
inline widest_int_storage <N> &
widest_int_storage <N>::operator = (const T &x)
{
/* Check for type compatibility. We don't want to assign a
widest integer from something like a wide_int. */
WI_BINARY_RESULT (T, WIDEST_INT (N)) *assertion ATTRIBUTE_UNUSED;
if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
XDELETEVEC (u.valp);
len = 0;
wi::copy (*this, WIDE_INT_REF_FOR (T) (x, N));
return *this;
}
template <int N>
inline unsigned int
widest_int_storage <N>::get_precision () const
{
return N;
}
template <int N>
inline const HOST_WIDE_INT *
widest_int_storage <N>::get_val () const
{
return UNLIKELY (len > WIDE_INT_MAX_INL_ELTS) ? u.valp : u.val;
}
template <int N>
inline unsigned int
widest_int_storage <N>::get_len () const
{
return len;
}
template <int N>
inline HOST_WIDE_INT *
widest_int_storage <N>::write_val (unsigned int l)
{
if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS))
XDELETEVEC (u.valp);
len = l;
if (UNLIKELY (l > WIDE_INT_MAX_INL_ELTS))
{
u.valp = XNEWVEC (HOST_WIDE_INT, l);
return u.valp;
}
else if (CHECKING_P && l < WIDE_INT_MAX_INL_ELTS)
u.val[l] = HOST_WIDE_INT_UC (0xbaaaaaaddeadbeef);
return u.val;
}
template <int N>
inline void
widest_int_storage <N>::set_len (unsigned int l, bool)
{
gcc_checking_assert (l <= len);
if (UNLIKELY (len > WIDE_INT_MAX_INL_ELTS)
&& l <= WIDE_INT_MAX_INL_ELTS)
{
HOST_WIDE_INT *valp = u.valp;
memcpy (u.val, valp, l * sizeof (u.val[0]));
XDELETEVEC (valp);
}
else if (len && len < WIDE_INT_MAX_INL_ELTS)
gcc_checking_assert ((unsigned HOST_WIDE_INT) u.val[len]
== HOST_WIDE_INT_UC (0xbaaaaaaddeadbeef));
len = l;
/* There are no excess bits in val[len - 1]. */
STATIC_ASSERT (N % HOST_BITS_PER_WIDE_INT == 0);
}
/* Treat X as having signedness SGN and convert it to an N-bit number. */
template <int N>
inline WIDEST_INT (N)
widest_int_storage <N>::from (const wide_int_ref &x, signop sgn)
{
WIDEST_INT (N) result;
unsigned int exp_len = x.len;
unsigned int prec = result.get_precision ();
if (sgn == UNSIGNED && prec > x.precision && x.val[x.len - 1] < 0)
exp_len = CEIL (x.precision, HOST_BITS_PER_WIDE_INT) + 1;
result.set_len (wi::force_to_size (result.write_val (exp_len), x.val, x.len,
x.precision, prec, sgn));
return result;
}
/* Create a WIDEST_INT (N) from the explicit block encoding given by
VAL and LEN. NEED_CANON_P is true if the encoding may have redundant
trailing blocks. */
template <int N>
inline WIDEST_INT (N)
widest_int_storage <N>::from_array (const HOST_WIDE_INT *val,
unsigned int len,
bool need_canon_p)
{
WIDEST_INT (N) result;
result.set_len (wi::from_array (result.write_val (len), val, len,
result.get_precision (), need_canon_p));
return result;
}
template <int N>
template <typename T1, typename T2>
inline WIDEST_INT (N)
wi::int_traits < widest_int_storage <N> >::
get_binary_result (const T1 &, const T2 &)
{
return WIDEST_INT (N) ();
}
template <int N>
template <typename T1, typename T2>
inline unsigned int
wi::int_traits < widest_int_storage <N> >::
get_binary_precision (const T1 &, const T2 &)
{
return N;
}
/* A reference to one element of a trailing_wide_ints structure. */
class trailing_wide_int_storage
{
private:
/* The precision of the integer, which is a fixed property of the
parent trailing_wide_ints. */
unsigned int m_precision;
/* A pointer to the length field. */
unsigned short *m_len;
/* A pointer to the HWI array. There are enough elements to hold all
values of precision M_PRECISION. */
HOST_WIDE_INT *m_val;
public:
trailing_wide_int_storage (unsigned int, unsigned short *, HOST_WIDE_INT *);
/* The standard generic_wide_int storage methods. */
unsigned int get_len () const;
unsigned int get_precision () const;
const HOST_WIDE_INT *get_val () const;
HOST_WIDE_INT *write_val (unsigned int);
void set_len (unsigned int, bool = false);
template <typename T>
trailing_wide_int_storage &operator = (const T &);
};
typedef generic_wide_int <trailing_wide_int_storage> trailing_wide_int;
/* trailing_wide_int behaves like a wide_int. */
namespace wi
{
template <>
struct int_traits <trailing_wide_int_storage>
: public int_traits <wide_int_storage> {};
}
/* A variable-length array of wide_int-like objects that can be put
at the end of a variable-sized structure. The number of objects is
at most N and can be set at runtime by using set_precision().
Use extra_size to calculate how many bytes beyond the
sizeof need to be allocated. Use set_precision to initialize the
structure. */
template <int N>
struct GTY((user)) trailing_wide_ints
{
private:
/* The shared precision of each number. */
unsigned short m_precision;
/* The shared maximum length of each number. */
unsigned short m_max_len;
/* The number of elements. */
unsigned char m_num_elements;
/* The current length of each number. */
unsigned short m_len[N];
/* The variable-length part of the structure, which always contains
at least one HWI. Element I starts at index I * M_MAX_LEN. */
HOST_WIDE_INT m_val[1];
public:
typedef WIDE_INT_REF_FOR (trailing_wide_int_storage) const_reference;
void set_precision (unsigned int precision, unsigned int num_elements = N);
unsigned int get_precision () const { return m_precision; }
unsigned int num_elements () const { return m_num_elements; }
trailing_wide_int operator [] (unsigned int);
const_reference operator [] (unsigned int) const;
static size_t extra_size (unsigned int precision,
unsigned int num_elements = N);
size_t extra_size () const { return extra_size (m_precision,
m_num_elements); }
};
inline trailing_wide_int_storage::
trailing_wide_int_storage (unsigned int precision, unsigned short *len,
HOST_WIDE_INT *val)
: m_precision (precision), m_len (len), m_val (val)
{
}
inline unsigned int
trailing_wide_int_storage::get_len () const
{
return *m_len;
}
inline unsigned int
trailing_wide_int_storage::get_precision () const
{
return m_precision;
}
inline const HOST_WIDE_INT *
trailing_wide_int_storage::get_val () const
{
return m_val;
}
inline HOST_WIDE_INT *
trailing_wide_int_storage::write_val (unsigned int)
{
return m_val;
}
inline void
trailing_wide_int_storage::set_len (unsigned int len, bool is_sign_extended)
{
*m_len = len;
if (!is_sign_extended && len * HOST_BITS_PER_WIDE_INT > m_precision)
m_val[len - 1] = sext_hwi (m_val[len - 1],
m_precision % HOST_BITS_PER_WIDE_INT);
}
template <typename T>
inline trailing_wide_int_storage &
trailing_wide_int_storage::operator = (const T &x)
{
WIDE_INT_REF_FOR (T) xi (x, m_precision);
wi::copy (*this, xi);
return *this;
}
/* Initialize the structure and record that all elements have precision
PRECISION. NUM_ELEMENTS can be no more than N. */
template <int N>
inline void
trailing_wide_ints <N>::set_precision (unsigned int precision,
unsigned int num_elements)
{
gcc_checking_assert (num_elements <= N);
m_num_elements = num_elements;
m_precision = precision;
m_max_len = WIDE_INT_MAX_HWIS (precision);
}
/* Return a reference to element INDEX. */
template <int N>
inline trailing_wide_int
trailing_wide_ints <N>::operator [] (unsigned int index)
{
return trailing_wide_int_storage (m_precision, &m_len[index],
&m_val[index * m_max_len]);
}
template <int N>
inline typename trailing_wide_ints <N>::const_reference
trailing_wide_ints <N>::operator [] (unsigned int index) const
{
return wi::storage_ref (&m_val[index * m_max_len],
m_len[index], m_precision);
}
/* Return how many extra bytes need to be added to the end of the
structure in order to handle NUM_ELEMENTS wide_ints of precision
PRECISION. NUM_ELEMENTS is the number of elements, and defaults
to N. */
template <int N>
inline size_t
trailing_wide_ints <N>::extra_size (unsigned int precision,
unsigned int num_elements)
{
unsigned int max_len = WIDE_INT_MAX_HWIS (precision);
gcc_checking_assert (num_elements <= N);
return (num_elements * max_len - 1) * sizeof (HOST_WIDE_INT);
}
/* This macro is used in structures that end with a trailing_wide_ints field
called FIELD. It declares get_NAME() and set_NAME() methods to access
element I of FIELD. */
#define TRAILING_WIDE_INT_ACCESSOR(NAME, FIELD, I) \
trailing_wide_int get_##NAME () { return FIELD[I]; } \
template <typename T> void set_##NAME (const T &x) { FIELD[I] = x; }
namespace wi
{
/* Implementation of int_traits for primitive integer types like "int". */
template <typename T>
struct primitive_int_traits
{
static const enum precision_type precision_type = FLEXIBLE_PRECISION;
static const bool host_dependent_precision = true;
static const bool is_sign_extended = true;
static const bool needs_write_val_arg = false;
static unsigned int get_precision (T);
static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int, T);
};
}
template <typename T>
inline unsigned int
wi::primitive_int_traits <T>