into infinities [PR109008]
The following patch does two things (both related to range extension
around the boundaries).
The first part (in the 2 real_isfinite blocks) is to make the ranges
narrower when the old boundaries are minimum and/or maximum representable
finite number. In that case frange_nextafter gives -Inf or +Inf,
but then the resulting computed reverse range is very far from the actually
needed range, usually extends up to infinity or could even result in NaNs.
While infinities are really the next representable numbers in the
corresponding mode, REAL_VALUE_TYPE is actually a type with wider range
for exponent and 160-bit precision, so the patch instead uses the
nextafter value in a hypothetical floating point format with the same
mantissa precision but a wider range of exponents.  This significantly
improves the actual ranges of the reverse operations, while still making
them conservatively correct.
The second part is a fix for miscompilation of the new testcase below.
For -ffinite-math-only, without this patch we extend the minimum and/or
maximum representable finite number to -Inf or +Inf (with the patch to
some number outside of the normal exponent range of the mode), but then
we use set, which canonicalizes it and turns the boundaries back into
the minimum and/or maximum representable finite numbers.  But because
in say [__DBL_MAX__, __DBL_MAX__] = op1 + [__DBL_MAX__, __DBL_MAX__]
op1 can be larger than 0, up to the largest number which still rounds
(to nearest even) back down to __DBL_MAX__, and there are still no
infinities involved, this needs to work even with -ffinite-math-only.
So, we really need to widen the lhs range a little bit even in that
case.  The patch does that by temporarily clearing -ffinite-math-only,
so that the value with infinities or the out-of-bounds values passes
the setting and verification (the VR_VARYING case is needed because
we get ICEs otherwise, but when lhs is VR_VARYING in -ffast-math,
i.e. minimum to maximum representable finite and both signs of NaN,
then set does all we need and we don't need to or in a NaN range).
We don't really use the range later in a way where being wider than
varying would be a problem; we actually just perform maths on the
two boundaries.
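A tiny standalone illustration (not part of the patch) of why op1 in
[__DBL_MAX__, __DBL_MAX__] = op1 + [__DBL_MAX__, __DBL_MAX__] is not just
[0., 0.] even with -ffinite-math-only - adding a small enough positive
number to __DBL_MAX__ rounds back down to __DBL_MAX__ with no infinities
involved:

  #include <float.h>
  #include <stdio.h>

  int
  main (void)
  {
    /* 0x1.0p+969 is below half an ulp of DBL_MAX (one ulp there is 2^971),
       so the sum rounds back down to DBL_MAX.  */
    double op1 = 0x1.0p+969;
    double sum = DBL_MAX + op1;
    printf ("%d\n", sum == DBL_MAX);  /* Prints 1.  */
    return 0;
  }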
As I said in the PR, this doesn't fix the !MODE_HAS_INFINITIES case,
I believe we actually need to treat the boundary values as infinities
in that case because they (probably) work like that, but it is unclear
if it is just the reverse operation lhs widening that is a problem there,
or whether it is a general problem. I have zero experience with
floating points without infinities (PDP11, some ARM half type?,
what else?).
2023-03-10 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/109008
* range-op-float.cc (float_widen_lhs_range): If lb is
minimum representable finite number or ub is maximum
representable finite number, instead of widening it to
-inf or inf widen it to negative or positive 0x0.8p+(EMAX+1).
Temporarily clear flag_finite_math_only when canonicalizing
the widened range.
* gcc.dg/pr109008.c: New test.
The following testcase is reduced from miscompilation of the scipy package.
If we have say lhs = [1., 1.] - [1., 1.] and want to compute the range
of lhs from it, we correctly determine it is [0., 0.] (if computations
are exact, we generally don't try to round them further in
frange_arithmetic). In the testcase it is about a reverse operation,
[1., 1.] = op1 + [1., 1.] and we want to compute range of op1 from that.
Right now we just perform the inverse operation (there are some corner
cases around NaN and infinity handling) and so arrive at the range
[0., 0.] as well, and because it is a singleton, we optimize return eps;
to return 0. That is incorrect though, for the reverse ops we need to
take into account also rounding, the right exact range is
[-0x1.0p-54, 0x1.0p-53] in this case when rounding to nearest, i.e.
all numbers which added to 1. with round to nearest still produce 1.
The problem isn't limited to singleton ranges, nor to results around
zero.  We basically also need to consider values
where the result is up to 0.5ulp away from the lhs range boundaries
in each direction.
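A small illustration (not the actual testcase) of why that reverse range
cannot be the singleton [0., 0.]: with round to nearest, any eps in
[-0x1.0p-54, 0x1.0p-53] added to 1. still yields exactly 1., so folding
return eps; into return 0.; is wrong:

  #include <stdio.h>

  int
  main (void)
  {
    double eps_hi = 0x1.0p-53;   /* Upper bound of the exact reverse range.  */
    double eps_lo = -0x1.0p-54;  /* Lower bound.  */
    printf ("%d %d\n", 1.0 + eps_hi == 1.0, 1.0 + eps_lo == 1.0);  /* 1 1 */
    printf ("%d\n", eps_hi == 0.0);                                /* 0 */
    return 0;
  }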
The following patch fixes it by extending the lhs range for the
reverse operations by 1ulp in each direction. The PR contains
a pseudo-random test generator I've used to generate 300000 tests
of + and - and then used the same test with * and / instead of + and -
together with a hack to print the ranges discovered by the patch in
a form that another test could then verify the range is conservatively
correct and how far it is from a minimal range.
I believe the results are good enough for now, though I plan to look
incrementally into trying to do something better on the -XXX_MAX or
XXX_MAX boundaries (where I think frange_nextafter will use -inf or +inf)
and also try to increase the range just by 0.5ulp rather than 1ulp
if !flag_rounding_math. But dunno if either of those will be doable
and will pass the testing, so I think it is worth committing this fix
first.
2023-03-09 Jakub Jelinek <jakub@redhat.com>
Richard Biener <rguenther@suse.de>
PR tree-optimization/109008
* range-op-float.cc (float_widen_lhs_range): New function.
(foperator_plus::op1_range, foperator_minus::op1_range,
foperator_minus::op2_range, foperator_mult::op1_range,
foperator_div::op1_range, foperator_div::op2_range): Use it.
* gcc.c-torture/execute/ieee/pr109008.c: New test.
This patch gracefully handles undefined operand ranges for the floating
point op[12]_range operators. This is very low risk, as we would have
ICEd otherwise.
We don't have a testcase that ICEs for floating point ranges, but it's
only a matter of time. Besides, this dovetails nicely with the integer
versions Jakub is testing.
gcc/ChangeLog:
PR tree-optimization/108647
* range-op-float.cc (foperator_lt::op1_range): Handle undefined ranges.
(foperator_lt::op2_range): Same.
(foperator_le::op1_range): Same.
(foperator_le::op2_range): Same.
(foperator_gt::op1_range): Same.
(foperator_gt::op2_range): Same.
(foperator_ge::op1_range): Same.
(foperator_ge::op2_range): Same.
(foperator_unordered_lt::op1_range): Same.
(foperator_unordered_lt::op2_range): Same.
(foperator_unordered_le::op1_range): Same.
(foperator_unordered_le::op2_range): Same.
(foperator_unordered_gt::op1_range): Same.
(foperator_unordered_gt::op2_range): Same.
(foperator_unordered_ge::op1_range): Same.
(foperator_unordered_ge::op2_range): Same.
The following testcases are miscompiled, because the threader sees some
SSA_NAME would have the value -0.0, and when computing the range of
SSA_NAME == 0.0, foperator_equal::fold_range sees one operand has the
singleton range [-0.0, -0.0] and the other [0.0, 0.0]; they aren't equal
(frange operator== uses real_identical etc. rather than real comparisons)
and so it thinks they compare unequal.  With signed zeros -0.0 == 0.0 is
true though, so we need to special case the code for when both ranges
are singletons.
Similarly, if we see op1 range being say [-42.0, -0.0] and op2 range
[0.0, 42.0], we'd check that the intersection of the two ranges is empty
(that is correct) and fold the result of == between such operands to
[0, 0], which is wrong because -0.0 == 0.0; it needs to be [0, 1].
Similarly for foperator_not_equal::fold_range.
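A minimal reminder (a sketch, not one of the new testcases) of the property
the fix relies on: -0.0 and 0.0 are distinct representations which frange's
operator== (via real_identical) tells apart, yet they compare equal with
the C == operator:

  #include <stdio.h>

  int
  main (void)
  {
    double nz = -0.0;
    printf ("%d\n", nz == 0.0);                    /* Prints 1.  */
    printf ("%d\n", __builtin_signbit (nz) != 0);  /* Prints 1: still -0.0.  */
    return 0;
  }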
2023-01-26 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/108540
* range-op-float.cc (foperator_equal::fold_range): If both op1 and op2
are singletons, use range_true even if op1 != op2
when one range is [-0.0, -0.0] and another [0.0, 0.0]. Similarly,
even if intersection of the ranges is empty and one has
zero low bound and another zero high bound, use range_true_and_false
rather than range_false.
(foperator_not_equal::fold_range): If both op1 and op2
are singletons, use range_false even if op1 != op2
when one range is [-0.0, -0.0] and another [0.0, 0.0]. Similarly,
even if intersection of the ranges is empty and one has
zero low bound and another zero high bound, use range_true_and_false
rather than range_true.
* gcc.c-torture/execute/ieee/pr108540-1.c: New test.
* gcc.c-torture/execute/ieee/pr108540-2.c: New test.
As discussed in the PR, for trapping math, do not fold overflowing
operations into +-INF as doing so could elide a trap.
There is a minor adjustment to known_isinf() where it was mistakenly
returning true for an [infinity U NAN], whereas it should only return
true when the range is exclusively +INF or -INF. This is benign, as
there were no users of known_isinf up to now.
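A hedged illustration (not one of the glibc tests below) of what the change
preserves: with trapping math the overflowing multiplication must stay in
the IL so it can raise FE_OVERFLOW, rather than being folded into a +Inf
constant:

  #include <fenv.h>
  #include <stdio.h>

  double __attribute__ ((noipa))
  overflowing (void)
  {
    static const double huge = 1.0e+300;
    return huge * huge;  /* Overflows to +Inf and raises FE_OVERFLOW.  */
  }

  int
  main (void)
  {
    feclearexcept (FE_ALL_EXCEPT);
    volatile double d = overflowing ();
    (void) d;
    printf ("%d\n", fetestexcept (FE_OVERFLOW) != 0);  /* Expect 1.  */
    return 0;
  }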
Tested on x86-64 Linux.
I also ran the glibc testsuite (git sources) on x86-64 and this patch
fixes:
-FAIL: math/test-double-lgamma
-FAIL: math/test-double-log1p
-FAIL: math/test-float-lgamma
-FAIL: math/test-float-log1p
-FAIL: math/test-float128-catan
-FAIL: math/test-float128-catanh
-FAIL: math/test-float128-lgamma
-FAIL: math/test-float128-log
-FAIL: math/test-float128-log1p
-FAIL: math/test-float128-y0
-FAIL: math/test-float128-y1
-FAIL: math/test-float32-lgamma
-FAIL: math/test-float32-log1p
-FAIL: math/test-float32x-lgamma
-FAIL: math/test-float32x-log1p
-FAIL: math/test-float64-lgamma
-FAIL: math/test-float64-log1p
-FAIL: math/test-float64x-lgamma
-FAIL: math/test-ldouble-lgamma
PR tree-optimization/107608
gcc/ChangeLog:
* range-op-float.cc (range_operator_float::fold_range): Avoid
folding into INF when flag_trapping_math.
* value-range.h (frange::known_isinf): Return false for possible NANs.
As mentioned in PR107967, ibm-ldouble-format documents that
+- has 1ulp accuracy, * 2ulps and / 3ulps.
So, even if the result is exact, we need to widen the range a little bit.
The following patch does that. I just wonder what it means for reverse
division (the op1_range case), which we implement through multiplication,
when division has 3ulps error and multiplication just 2ulps. In any case,
this format is a mess and for non-default rounding modes can't be trusted
at all; instead of +inf or something close to it, it happily computes -inf.
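A hedged sketch (plain double instead of REAL_VALUE_TYPE, illustrative
helper name) of the kind of widening frange_arithmetic now applies for the
composite format - stepping an exact bound outwards by the documented
number of ulps with nextafter:

  #include <math.h>
  #include <stdio.h>

  static double
  widen_ulps (double bound, int nulps, double toward)
  {
    /* Step BOUND outwards by NULPS ulps in the direction of TOWARD,
       e.g. -INFINITY for a lower bound, +INFINITY for an upper bound.  */
    for (int i = 0; i < nulps; i++)
      bound = nextafter (bound, toward);
    return bound;
  }

  int
  main (void)
  {
    /* E.g. an upper bound of a multiplication widened by 2 ulps.  */
    printf ("%a\n", widen_ulps (1.0, 2, INFINITY));  /* 0x1.0000000000002p+0 */
    return 0;
  }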
2022-12-08 Jakub Jelinek <jakub@redhat.com>
* range-op-float.cc (frange_nextafter): For MODE_COMPOSITE_P from
denormal or zero, use real_nextafter on DFmode with conversions
around it.
(frange_arithmetic): For mode_composite, on top of rounding in the
right direction accept extra 1ulp error for PLUS/MINUS_EXPR, extra
2ulps error for MULT_EXPR and extra 3ulps error for RDIV_EXPR.
The addition of PLUS/MINUS/MULT/RDIV_EXPR frange handlers causes
miscompilation of some of the libm routines, resulting in lots of
glibc test failures. A part of them is purely PR107608 fold-overflow-1.c
etc. issues, say when the code does
return -0.5 / 0.0;
and expects division by zero to be emitted, but we propagate -Inf
and avoid the operation.
But there are also various tests where we end up with computed values
different from the expected ones.  All those cases are like:
is: inf inf
should be: 1.18973149535723176502e+4932 0xf.fffffffffffffff0p+16380
is: inf inf
should be: 1.18973149535723176508575932662800701e+4932 0x1.ffffffffffffffffffffffffffffp+16383
is: inf inf
should be: 1.7976931348623157e+308 0x1.fffffffffffffp+1023
is: inf inf
should be: 3.40282346e+38 0x1.fffffep+127
and the corresponding source looks like:
static const double huge = 1.0e+300;
double whatever (...) {
...
return huge * huge;
...
}
which for rounding to nearest or +inf should and does return +inf, but
for rounding to -inf or 0 should instead return nextafter (inf, -inf);
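A quick way to see that rounding-mode dependence (a sketch; compile with
-frounding-math so the compiler doesn't assume the default rounding mode):

  #include <fenv.h>
  #include <stdio.h>

  int
  main (void)
  {
    volatile double huge = 1.0e+300;
    printf ("%g\n", huge * huge);  /* Round to nearest: inf.  */
    fesetround (FE_DOWNWARD);
    printf ("%g\n", huge * huge);  /* Round towards -inf: 1.79769e+308.  */
    fesetround (FE_TONEAREST);
    return 0;
  }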
The rules IEEE754 has are that operations on +-Inf operands are exact
and produce +-Inf (except for the invalid ones that produce NaN) regardless
of rounding mode, while overflows:
"a) roundTiesToEven and roundTiesToAway carry all overflows to ∞ with the
sign of the intermediate result.
b) roundTowardZero carries all overflows to the format’s largest finite
number with the sign of the intermediate result.
c) roundTowardNegative carries positive overflows to the format’s largest
finite number, and carries negative overflows to −∞.
d) roundTowardPositive carries negative overflows to the format’s most
negative finite number, and carries positive overflows to +∞."
The behavior around overflows to -Inf or nextafter (-inf, inf) was actually
handled correctly; we'd construct [-INF, -MAX] ranges in those cases
because !real_less (&value, &result) holds there - value is finite
but larger in magnitude than what the format can represent (though GCC's
internal format can), while result is -INF in that case.
But the overflows to +Inf or nextafter (inf, -inf) were handled
incorrectly: the code tested real_less (&result, &value) rather than
!real_less (&result, &value).  The former test is true when the
rounding from value to result already rounded down, and in that case we
shouldn't round again; we should round down when it didn't.
So, in theory this could be fixed just by adding one ! character,
- if ((mode_composite || (real_isneg (&inf) ? real_less (&result, &value)
+ if ((mode_composite || (real_isneg (&inf) ? !real_less (&result, &value)
: !real_less (&value, &result)))
but the following patch goes further. The distance between
nextafter (inf, -inf) and inf is large (infinite) and expressions like
1.0e+300 * 1.0e+300 produce +inf in round-to-nearest mode by a wide margin,
so I think having a low bound of nextafter (inf, -inf) in that case is
unnecessary.  But if it isn't multiplication but say addition and we are
inexact and very close to the boundary between rounding to nearest
maximum representable vs. rounding to nearest +inf, still using [MAX, +INF]
etc. ranges seems safer because we don't know exactly what we lost in the
inexact computation.
The following patch implements that.
2022-12-08 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/107967
* range-op-float.cc (frange_arithmetic): Fix a thinko - if
inf is negative, use nextafter if !real_less (&result, &value)
rather than if real_less (&result, &value). If result is +-INF
while value is finite and -fno-rounding-math, don't do rounding
if !inexact or if result is significantly above max representable
value or below min representable value.
* gcc.dg/pr107967-1.c: New test.
* gcc.dg/pr107967-2.c: New test.
* gcc.dg/pr107967-3.c: New test.
On Mon, Dec 05, 2022 at 02:29:36PM +0100, Aldy Hernandez wrote:
> > So like this for multiplication op1/2_range if it passes bootstrap/regtest?
> > For division I'll need to go to a drawing board...
>
> Sure, looks good to me.
Ulrich just filed PR107972, so in the light of that PR the following patch
attempts to do that differently.
As for a testcase, I've tried both attached testcases, but unfortunately it
seems that in neither case do we actually figure out that the res range
is finite (or, for the last function, non-zero and ordered).  So there is
further work needed on that.
2022-12-06 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/107972
* range-op-float.cc (frange_drop_infs): New function.
(float_binary_op_range_finish): Add DIV_OP2 argument. If DIV_OP2 is
false and lhs is finite or if DIV_OP2 is true and lhs is non-zero and
not NAN, r must be finite too.
(foperator_div::op2_range): Pass true to DIV_OP2 of
float_binary_op_range_finish.
According to https://gcc.gnu.org/pipermail/gcc-regression/2022-December/077258.html
my patch caused some ICEs, e.g. the following testcase ICEs.
The problem is that the lower_bound and upper_bound methods on an frange
assert that the range isn't VR_NAN or VR_UNDEFINED.
All the op1_range/op2_range methods already return early if lhs.undefined_p,
but the other cases (when lhs is VR_NAN or the other opN is VR_NAN or
VR_UNDEFINED) aren't handled early.  float_binary_op_range_finish will DTRT for those
cases already.
2022-12-06 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/107975
* range-op-float.cc (foperator_mult::op1_range,
foperator_div::op1_range, foperator_div::op2_range): Just
return float_binary_op_range_finish result if lhs is known
NAN, or the other operand is known NAN or UNDEFINED.
* gcc.dg/pr107975.c: New test.
While for the normal cases it seems to be correct to implement
reverse multiplication (op1_range/op2_range) through division
with float_binary_op_range_finish, reverse division (op1_range)
through multiplication with float_binary_op_range_finish or
(op2_range) through division with float_binary_op_range_finish,
as e.g. the following testcase shows, for the corner cases it is
incorrect.
Say on the testcase we are doing reverse multiplication, we
have [-0., 0.] range (no NAN) on lhs and VARYING on op1 (or op2).
We implement that through division, because x from
lhs = x * op2
is
x = lhs / op2
For the division, [-0., 0.] / VARYING is computed (IMHO correctly)
as [-0., 0.] +-NAN, because 0 / anything but 0 or NAN is still
0 and 0 / 0 is NAN and ditto 0 / NAN. And then we just
float_binary_op_range_finish, which figures out that because lhs
can't be NAN, neither operand can be NAN. So, the end range is
[-0., 0.]. But that is not correct for the reverse multiplication.
When the result is 0, if op2 can be zero, then x can be anything
(VARYING), to be precise anything but INF (unless result can be NAN),
because anything * 0 is 0 (or NAN for INF). While if op2 must be
non-zero, then x must be 0. Of course the sign logic
(signbit(x) = signbit(lhs) ^ signbit(op2)) still holds, so it actually
isn't full VARYING if both lhs and op2 have known sign bits.
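A tiny reminder (illustrative only) of that corner case: when the result of
x * op2 is zero and op2 may itself be zero, x is not constrained to zero,
because any finite x multiplied by zero gives a (possibly signed) zero:

  #include <stdio.h>

  int
  main (void)
  {
    double x = 1.0e308;  /* Could be any finite value.  */
    double op2 = 0.0;
    printf ("%g %g\n", x * op2, -x * op2);  /* Prints 0 -0.  */
    return 0;
  }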
And going through other corner cases one by one shows other differences
between what we compute for the corresponding forward operation and
what we should compute for the reverse operations.
The following patch is slightly conservative and includes INF
(in case of result including 0 and not NAN) in the ranges or
0 in the ranges (in case of result including INF and not NAN).
The latter is what happens anyway because we flush denormals to 0,
and the former just not to deal with all the corner cases.
So, the end test is that for reverse multiplication and division
op2_range the cases we need to adjust to VARYING or VARYING positive
or VARYING negative are if lhs and op? ranges both contain 0,
or both contain some infinity, while for division op1_range the
corner case is if lhs range contains 0 and op2 range contains INF or vice
versa. Otherwise I believe ranges from the corresponding operation
are ok, or could be slightly more conservative (e.g. for
reverse multiplication, if op? range is singleton INF and lhs
range doesn't include any INF, then x's range should be UNDEFINED or
known NAN (depending on if lhs can be NAN), while the division computes
[-0., 0.] +-NAN; or similarly if op? range is only 0 and lhs range
doesn't include 0, division would compute +INF +-NAN, or -INF +-NAN,
or (for lack of multipart franges, instead of -INF +INF +-NAN, just
VARYING +-NAN), while again it is UNDEFINED or known NAN).
Oh, and I found by code inspection a wrong condition for the division's
known NAN result; due to a thinko it would trigger not just when
both operands are known to be 0 or both are known to be INF, but
when either both are known to be 0, or at least one is known to be INF.
2022-12-05 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/107879
* range-op-float.cc (foperator_mult::op1_range): If both
lhs and op2 ranges contain zero or both ranges contain
some infinity, set r range to zero_to_inf_range depending on
signbit_known_p.
(foperator_div::op2_range): Similarly for lhs and op1 ranges.
(foperator_div::op1_range): If lhs range contains zero and op2
range contains some infinity or vice versa, set r range to
zero_to_inf_range depending on signbit_known_p.
(foperator_div::rv_fold): Fix up condition for returning known NAN.
The threader is creating a scenario where we are trying to solve:
[NEGATIVES] = abs(x)
While solving this we have an intermediate value of UNDEFINED because
we have no positive numbers. But then we try to union the negative
pair to the final result by querying the bounds. Since neither
UNDEFINED nor NAN have bounds, they need to be specially handled.
PR tree-optimization/107732
gcc/ChangeLog:
* range-op-float.cc (foperator_abs::op1_range): Early exit when
result is undefined.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/pr107732.c: New test.
gcc/ChangeLog:
* range-op-float.cc (range_operator_float::fold_range): Make check
for maybe_isnan more readable.
The following testcase ICEs, because when !HONOR_NANS but
HONOR_SIGNED_ZEROS, if we see
lhs = op1 * op2;
and know that lhs is [-0.0, 0.0] and op2 is [0.0, 0.0], the
division of these two yields UNDEFINED and clear_nan () on it
fails an assert. With HONOR_NANS it would actually result in
a known NAN, but when NANs aren't honored, we clear the NAN bits.
Now, for the above case we actually don't know anything about
the op1 range (except that it isn't a NAN/INF because of
!HONOR_NANS !HONOR_INFINITIES), so I think the best is just
to return VARYING for the case we get UNDEFINED as well.
If we want, the op[12]_range methods perhaps can handle the
corner cases earlier separately, say for
lhs [0.0, 0.0] and op2 [0.0, 0.0] when HONOR_SIGNED_ZEROS this
would be just [0.0, MAX].
2022-11-16 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/107668
* range-op-float.cc (float_binary_op_range_finish): Set VARYING
also when r is UNDEFINED.
* gcc.dg/ubsan/pr107668.c: New test.
Currently we represent < and > with a closed interval. So < 3.0 is
represented as [-INF, +3.0]. This means 3.0 is included in the range,
and though not ideal, is conservatively correct. Jakub has found a
couple of cases where properly representing < and > would help
optimizations and tests, and this patch allows representing open
intervals with real_nextafter.
There are a few caveats.
First, we leave MODE_COMPOSITE_P types pessimistically as a closed interval.
Second, for -ffinite-math-only, real_nextafter will saturate the
maximum representable number into +INF. However, this will still do
the right thing, as frange::set() will crop things appropriately.
Finally, and most frustratingly, representing < and > -+0.0 is
problematic because we flush denormals to zero. Let me explain...
real_nextafter(+0.0, +INF) gives 0x0.8p-148 as expected, but setting a
range to this value yields [+0.0, 0x0.8p-148] because of the flushing.
On the other hand, real_nextafter(+0.0, -INF) (surprisingly) gives
-0x0.8p-148, but setting a range to that value yields [-0x0.8p-148,
-0.0].  I say surprising, because according to cppreference.com,
std::nextafter(+0.0, -INF) should give -0.0. But that's neither here
nor there because our flushing denormals to zero prevents us from even
representing ranges involving small values around 0.0. We ultimately
end up with ranges looking like this:
> +0.0 => [+0.0, INF]
> -0.0 => [+0.0, INF]
< +0.0 => [-INF, -0.0]
< -0.0 => [-INF, -0.0]
I suppose this is no worse off than what we had before with closed
intervals.  One could even argue that we're better because we at least
have the right sign now ;-).
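For reference, a small check (using the library nextafterf; the exact
output is implementation specific, glibc shown) of the float values
mentioned above, which frange then flushes back to zero when setting the
range:

  #include <math.h>
  #include <stdio.h>

  int
  main (void)
  {
    printf ("%a\n", nextafterf (0.0f, INFINITY));   /* 0x1p-149, i.e. 0x0.8p-148.  */
    printf ("%a\n", nextafterf (0.0f, -INFINITY));  /* -0x1p-149.  */
    return 0;
  }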
gcc/ChangeLog:
* range-op-float.cc (build_lt): Adjust with frange_nextafter
instead of default to a closed range.
(build_gt): Same.
On Wed, Nov 09, 2022 at 04:43:56PM +0100, Aldy Hernandez wrote:
> On Wed, Nov 9, 2022 at 3:58 PM Jakub Jelinek <jakub@redhat.com> wrote:
> >
> > On Wed, Nov 09, 2022 at 10:02:46AM +0100, Aldy Hernandez wrote:
> > > We can implement the op[12]_range entries for plus and minus in terms
> > > of each other. These are adapted from the integer versions.
> >
> > I think for NANs the op[12]_range shouldn't act this way.
> > For the forward binary operations, we have the (maybe/known) NAN handling
> > of one or both NAN operands resulting in VARYING sign (maybe/known) NAN
> > result, that is the somehow the case for the reverse binary operations too,
> > if result is (maybe/known) NAN and the other op is not NAN, op is
> > VARYING sign (maybe/known) NAN, if other op is (maybe/known) NAN,
> > then op is VARYING sign maybe NAN (always maybe, never known).
> > But then for + we have the -INF + INF or vice versa into NAN, and that
> > is something that shouldn't be considered. If result isn't NAN, then
> > neither operand can be NAN, regardless of whether result can be
> > +/- INF and the other op -/+ INF.
>
> Heh. I just ran into this while debugging the problem reported by Xi.
>
> We are solving NAN = op1 - VARYING, and trying to do it with op1 = NAN
> + VARYING, which returns op1 = NAN (incorrectly).
>
> I suppose in the above case op1 should ideally be
> [-INF,-INF][+INF,+INF]+-NAN, but since we can't represent that then
> [-INF,+INF] +-NAN, which is actually VARYING. Do you agree?
>
> I'm reverting this patch as attached, while I sort this out.
Here is a patch which reinstalls your change, adds the fixups I've talked
about, and also hooks up reverse operators for MULT_EXPR/RDIV_EXPR.
2022-11-12 Aldy Hernandez <aldyh@redhat.com>
Jakub Jelinek <jakub@redhat.com>
* range-op-float.cc (float_binary_op_range_finish): New function.
(foperator_plus::op1_range): New.
(foperator_plus::op2_range): New.
(foperator_minus::op1_range): New.
(foperator_minus::op2_range): New.
(foperator_mult::op1_range): New.
(foperator_mult::op2_range): New.
(foperator_div::op1_range): New.
(foperator_div::op2_range): New.
* gcc.c-torture/execute/ieee/inf-4.c: New test.
[PR107569]
Admittedly there are many similar spots with the foperator_div case
(but also significant differences), so perhaps foperator_{mult,div}
could inherit from some class derived from range_operator_float that
would define various smaller helper (static?) methods, like those
discussed in the PR - contains_zero_p, singleton_nan_p, zero_p - so
that
+ bool must_have_signbit_zero = false;
+ bool must_have_signbit_nonzero = false;
+ if (real_isneg (&lh_lb) == real_isneg (&lh_ub)
+ && real_isneg (&rh_lb) == real_isneg (&rh_ub))
+ {
+ if (real_isneg (&lh_lb) == real_isneg (&rh_ub))
+ must_have_signbit_zero = true;
+ else
+ must_have_signbit_nonzero = true;
+ }
could be returned as a -1/0/1 int, with helpers that set the result
(based on that value) to
[+INF, +INF], [-INF, -INF] or [-INF, +INF]
or
[+0, +0], [-0, -0] or [-0, +0]
or
[+0, +INF], [-INF, -0] or [-INF, +INF]
and the
+ for (int i = 1; i < 4; ++i)
+ {
+ if (real_less (&cp[i], &cp[0])
+ || (real_iszero (&cp[0]) && real_isnegzero (&cp[i])))
+ std::swap (cp[i], cp[0]);
+ if (real_less (&cp[4], &cp[i + 4])
+ || (real_isnegzero (&cp[4]) && real_iszero (&cp[i + 4])))
+ std::swap (cp[i + 4], cp[4]);
+ }
block could be smaller and more readable.
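A hedged, standalone sketch (plain doubles instead of REAL_VALUE_TYPE,
illustrative name) of the suggested -1/0/1 helper - returning -1 when the
sign of the result is unknown, 0 when its sign bit must be clear and 1 when
it must be set:

  #include <math.h>
  #include <stdio.h>

  static int
  known_result_signbit (double lh_lb, double lh_ub,
                        double rh_lb, double rh_ub)
  {
    /* The sign is only known when each operand range lies entirely on
       one side of zero.  */
    if (!signbit (lh_lb) != !signbit (lh_ub)
        || !signbit (rh_lb) != !signbit (rh_ub))
      return -1;
    /* Equal signs give a non-negative product or quotient, different
       signs a non-positive one.  */
    return !signbit (lh_lb) != !signbit (rh_lb);
  }

  int
  main (void)
  {
    printf ("%d %d %d\n",
            known_result_signbit (1.0, 5.0, 2.0, 3.0),    /* 0 */
            known_result_signbit (-5.0, -1.0, 2.0, 3.0),  /* 1 */
            known_result_signbit (-1.0, 1.0, 2.0, 3.0));  /* -1 */
    return 0;
  }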
2022-11-12 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/107569
* range-op-float.cc (zero_p, contains_p, singleton_inf_p,
signbit_known_p, zero_range, inf_range, zero_to_inf_range): New
functions.
(foperator_mult_div_base): New class.
(foperator_mult, foperator_div): Derive from that and use
protected static method from it as well as above new functions
to simplify the code.
Here is the floating point division fold_range implementation.
As I wrote in the last mail, we could outline some of the common parts
into static methods with descriptive names and share them between
foperator_div and foperator_mult.
Regressions are
+FAIL: gcc.dg/pr95115.c execution test
+FAIL: libphobos.phobos/std/math/hardware.d execution test
+FAIL: libphobos.phobos_shared/std/math/hardware.d execution test
The first test is we have:
# RANGE [frange] double [] +-NAN
_3 = Inf / Inf;
if (_3 ord _3)
goto <bb 3>; [INV]
else
goto <bb 4>; [INV]
<bb 3> :
abort ();
<bb 4> :
Before evrp, the range is correct: Inf / Inf is a known NAN of unknown
sign.  evrp correctly folds _3 ord _3 into false and the
_3 = Inf / Inf;
remains in the IL, but then dse1 comes and removes it as a dead
statement.  So, I think this is yet another example of the PR107608 problems
where DCE? removes dead statements which raise floating point exceptions.
And -fno-delete-dead-exceptions doesn't help.
2022-11-12 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/107569
* range-op-float.cc (foperator_div): New class.
(floating_op_table::floating_op_table): Use foperator_div
for RDIV_EXPR.
The following patch implements frange multiplication, including the
special case of x * x. The callers don't tell us that it is x * x,
just that it is either z = x * x or if (x == y) z = x * y;
For irange that makes no difference, but for frange it can mean
x is -0.0 and y is 0.0 if they have the same range that includes both
signs of zero, so we need to assume the result could be -0.0.
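A tiny example (illustrative, not from the patch) of that x == y case: the
operands compare equal, yet the product is -0.0:

  #include <stdio.h>

  int
  main (void)
  {
    double x = -0.0, y = 0.0;
    printf ("%d %g\n", x == y, x * y);  /* Prints 1 -0.  */
    return 0;
  }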
The patch causes one regression:
+FAIL: gcc.dg/fold-overflow-1.c scan-assembler-times 2139095040 2
but that is already tracked in PR107608 and affects not just the newly
added multiplication, but addition and other floating point operations
(and doesn't seem like a ranger bug, but rather dce or something else).
2022-11-12 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/107569
PR tree-optimization/107591
* range-op.h (range_operator_float::rv_fold): Add relation_kind
argument.
* range-op-float.cc (range_operator_float::fold_range): Name
last argument trio and pass trio.op1_op2 () as last argument to
rv_fold.
(range_operator_float::rv_fold): Add relation_kind argument.
(foperator_plus::rv_fold, foperator_minus::rv_fold): Likewise.
(foperator_mult): New class.
(floating_op_table::floating_op_table): Use foperator_mult for
MULT_EXPR.
Revert the patch below until issues are resolved:
commit 4287e8168f89e90b3dff3a50f3ada40be53e0e01
Author: Aldy Hernandez <aldyh@redhat.com>
Date: Wed Nov 9 01:00:57 2022 +0100
Implement op[12]_range operators for PLUS_EXPR and MINUS_EXPR.
We can implement the op[12]_range entries for plus and minus in terms
of each other. These are adapted from the integer versions.
gcc/ChangeLog:
* range-op-float.cc (class foperator_plus): Remove op[12]_range.
(class foperator_minus): Same.
foperator_abs::op1_range works except for the NaN handling,
from:
[frange] double [-Inf, 1.79769313486231570814527423731704356798070567525844996599e+308 (0x0.fffffffffffff8p+1024)]
lhs it computes r
[frange] double [-1.79769313486231570814527423731704356798070567525844996599e+308 (-0x0.fffffffffffff8p+1024), 1.79769313486231570814527423731704356798070567525844996599e+308 (0x0.fffffffffffff8p+1024)] +-NAN
which is correct except for the +-NAN part.
For r before the final step it makes sure to add -NAN if there is +NAN
in the lhs range, but the final r.union_ makes it unconditional +-NAN,
because the frange ctor sets +-NAN.
So, I think we need to clear it (or have some set variant which
says not to set NAN).
This patch fixes that, but isn't enough to fix the PR, something in
the assumptions handling is still broken (and the PR has other parts).
2022-11-09 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/107569
* range-op-float.cc (foperator_abs::op1_range): Clear NaNs
from the negatives frange before unioning it into r.
We can implement the op[12]_range entries for plus and minus in terms
of each other. These are adapted from the integer versions.
gcc/ChangeLog:
* range-op-float.cc (foperator_plus::op1_range): New.
(foperator_plus::op2_range): New.
(foperator_minus::op1_range): New.
(foperator_minus::op2_range): New.
Now that the generic parts of the binary operators have been
abstracted, implementing MINUS_EXPR is a cinch.
The op[12]_range entries will be submitted as a follow-up.
gcc/ChangeLog:
* range-op-float.cc (class foperator_minus): New.
(floating_op_table::floating_op_table): Add MINUS_EXPR entry.
The PLUS_EXPR was always meant to be a template for further
development, since most of the binary operators will share a similar
structure. This patch abstracts out the common bits into the default
definition for range_operator_float::fold_range() and provides an
rv_fold() to be implemented by the individual entries wishing to use
the generic folder. This is akin to what we do with fold_range() and
wi_fold() in the integer version of range-ops.
gcc/ChangeLog:
* range-op-float.cc (range_operator_float::fold_range): Abstract
out from foperator_plus.
(range_operator_float::rv_fold): New.
(foperator_plus::fold_range): Remove.
(foperator_plus::rv_fold): New.
(propagate_nans): Remove.
* range-op.h (class range_operator_float): Add rv_fold.
Some combinations of operations can yield a NAN even if no operands
have the possibility of a NAN.  For example, [-INF] + [+INF] = NAN and
vice versa.
For [-INF,+INF] + [-INF,+INF], frange_arithmetic will not return a
NAN, and since the operands have no possibility of a NAN, we will
mistakenly assume the result cannot have a NAN. This fixes the
oversight.
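A two-line check of the special case (a sketch, not the new selftest
mentioned below):

  #include <math.h>
  #include <stdio.h>

  int
  main (void)
  {
    printf ("%d\n", isnan (-INFINITY + INFINITY));  /* Prints 1.  */
    printf ("%d\n", isnan (INFINITY - INFINITY));   /* Prints 1.  */
    return 0;
  }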
gcc/ChangeLog:
* range-op-float.cc (foperator_plus::fold_range): Set NAN for
addition of different signed infinities.
(range_op_float_tests): New test.
This is the range-op entry for floating point PLUS_EXPR. It's the
most intricate range entry we have so far, because we need to keep
track of rounding and target FP formats. This will be the last FP
entry I commit, mostly to avoid disturbing the tree any further, and
also because what we have so far is enough for a solid VRP.
So far we track NANs and signs correctly. We also handle relationals
(symbolics and numeric), both ordered and unordered, ABS_EXPR and
NEGATE_EXPR which are used to fold __builtin_isinf, and __builtin_sign
(__builtin_copysign is coming up).  All in all, I think this provides
more than enough for basic VRP on floats, as well as a basis
to flesh out the rest if there's interest.
My goal with this entry is to provide a template for additional binary
operators, as they tend to follow a similar pattern: handle NANs, do
the arithmetic while keeping track of rounding, and adjust for NAN. I
may abstract the general parts as we do for irange's fold_range and
wi_fold.
PR tree-optimization/24021
gcc/ChangeLog:
* range-op-float.cc (propagate_nans): New.
(frange_nextafter): New.
(frange_arithmetic): New.
(class foperator_plus): New.
(floating_op_table::floating_op_table): Add PLUS_EXPR entry.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/vrp-float-plus.c: New test.
None of the build_<OP> functions in range-op handle NANs. This is by
design in order to force us to handle NANs specially, because
"x relop NAN" makes no sense. This patch fixes a handful of
op[12]_range entries that weren't handling NANs.
PR tree-optimization/107490
gcc/ChangeLog:
* range-op-float.cc (foperator_unordered_lt::op1_range): Handle
NANs.
(foperator_unordered_lt::op2_range): Same.
(foperator_unordered_le::op1_range): Same.
(foperator_unordered_le::op2_range): Same.
(foperator_unordered_gt::op1_range): Same.
(foperator_unordered_gt::op2_range): Same.
(foperator_unordered_ge::op1_range): Same.
(foperator_unordered_ge::op2_range): Same.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/pr107490.c: New test.
The problem here is that the threader is coming up with a path where
the only valid result is a NAN. When the abs op1_range entry is
trying to add the negative possibility, it attempts to get the bounds
of the working range. NANs don't have bounds so they need to be
special cased.
PR tree-optimization/107355
gcc/ChangeLog:
* range-op-float.cc (foperator_abs::op1_range): Handle NAN.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/pr107355.c: New test.
gcc/ChangeLog:
* range-op-float.cc (foperator_unordered_lt::op1_range): New.
(foperator_unordered_lt::op2_range): New.
The false side of UNORDERED_<cond> means neither operand can be a NAN.
Adjust all the op[12]_range entries for the UNORDERED operators such
that a known NAN on one operand means the other operand is
undefined.
gcc/ChangeLog:
* range-op-float.cc (foperator_unordered_le::op1_range): Adjust
false side with a NAN operand.
(foperator_unordered_le::op2_range): Same.
(foperator_unordered_gt::op1_range): Same.
(foperator_unordered_gt::op2_range): Same.
(foperator_unordered_ge::op1_range): Same.
(foperator_unordered_ge::op2_range): Same.
(foperator_unordered_equal::op1_range): Same.
Since NANs can't appear in ranges for !HONOR_NANS, there's no reason
to set them in a VARYING range.
gcc/ChangeLog:
* value-range.h (frange::set_varying): Do not set NAN flags for
!HONOR_NANS.
* value-range.cc (frange::normalize_kind): Adjust for no NAN when
!HONOR_NANS.
(frange::verify_range): Same.
* range-op-float.cc (maybe_isnan): Remove flag_finite_math_only check.
The finite_operands_p function was incorrectly named, as it only
returned TRUE when !NAN. This was leftover from the initial
implementation of frange. Using the maybe_isnan() nomenclature is
more consistent and easier to understand.
gcc/ChangeLog:
* range-op-float.cc (finite_operand_p): Remove.
(finite_operands_p): Rename to...
(maybe_isnan): ...this.
(frelop_early_resolve): Use maybe_isnan instead of finite_operands_p.
(foperator_equal::fold_range): Same.
(foperator_equal::op1_range): Same.
(foperator_not_equal::fold_range): Same.
(foperator_lt::fold_range): Same.
(foperator_le::fold_range): Same.
(foperator_gt::fold_range): Same.
(foperator_ge::fold_range): Same.
A result of false from build_<COND> in range-ops means the result is
final and needs no further adjustments. This patch documents this,
and changes all uses to check the result. There should be no change
in functionality.
gcc/ChangeLog:
* range-op-float.cc (build_le): Document result.
(build_lt): Same.
(build_ge): Same.
(foperator_ge::op2_range): Check result of build_*.
(foperator_unordered_le::op1_range): Same.
(foperator_unordered_le::op2_range): Same.
(foperator_unordered_gt::op1_range): Same.
(foperator_unordered_gt::op2_range): Same.
(foperator_unordered_ge::op1_range): Same.
(foperator_unordered_ge::op2_range): Same.
There are 3 possible relations range-ops might care about, but only the one
most likely to be needed is supplied. This patch provides a new class
relation_trio which allows 3 relations to be passed in a single word.
fold_range (), op1_range (), and op2_range () are adjusted to take a
relation_trio class instead of a relation_kind, then the routine can
extract which relation it wants to work with.
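A hedged sketch of the packing idea (the field width and the exact
TRIO_SHIFT/TRIO_MASK values here are assumptions, not the actual GCC
definitions) - three small relation codes stored in one word and pulled
out individually:

  /* Assume each relation code fits in 4 bits.  */
  #define TRIO_SHIFT 4
  #define TRIO_MASK  0xfu

  static inline unsigned
  trio_pack (unsigned lhs_op1, unsigned lhs_op2, unsigned op1_op2)
  {
    return lhs_op1 | (lhs_op2 << TRIO_SHIFT) | (op1_op2 << (2 * TRIO_SHIFT));
  }

  static inline unsigned
  trio_op1_op2 (unsigned trio)
  {
    return (trio >> (2 * TRIO_SHIFT)) & TRIO_MASK;
  }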
* gimple-range-fold.cc (fold_using_range::range_of_range_op):
Provide relation_trio class.
* gimple-range-gori.cc (gori_compute::refine_using_relation):
Provide relation_trio class.
(gori_compute::refine_using_relation): Ditto.
(gori_compute::compute_operand1_range): Provide lhs_op2 and
op1_op2 relations via relation_trio class.
(gori_compute::compute_operand2_range): Ditto.
* gimple-range-op.cc (gimple_range_op_handler::calc_op1): Use
relation_trio instead of relation_kind.
(gimple_range_op_handler::calc_op2): Ditto.
(*::fold_range): Ditto.
* gimple-range-op.h (gimple_range_op::calc_op1): Adjust prototypes.
(gimple_range_op::calc_op2): Adjust prototypes.
* range-op-float.cc (*::fold_range): Use relation_trio instead of
relation_kind.
(*::op1_range): Ditto.
(*::op2_range): Ditto.
* range-op.cc (*::fold_range): Use relation_trio instead of
relation_kind.
(*::op1_range): Ditto.
(*::op2_range): Ditto.
* range-op.h (class range_operator): Adjust prototypes.
(class range_operator_float): Ditto.
(class range_op_handler): Adjust prototypes.
(relop_early_resolve): Pickup op1_op2 relation from relation_trio.
* value-relation.cc (VREL_LAST): Adjust use to be one past the end of
the enum.
(relation_oracle::validate_relation): Use relation_trio in call
to fold_range.
* value-relation.h (enum relation_kind_t): Add VREL_LAST as
final element.
(class relation_trio): New.
(TRIO_VARYING, TRIO_SHIFT, TRIO_MASK): New.
Calling clear_nan on an undefined type traps, so call set_varying first.
Other tweaks for correctness.
* range-op-float.cc (foperator_not_equal::op1_range): Check for
VREL_EQ after singleton.
(foperator_unordered::op1_range): Set VARYING before calling
clear_nan().
(foperator_ordered::op1_range): Set rather than clear NAN if both
operands are the same.
op1_op2_relation can be called for relops (bool = a < b) as well as
regular binary operators (z = a + b). This patch adds the overloaded
method for floating point results.
gcc/ChangeLog:
* range-op-float.cc (range_operator_float::op1_op2_relation): New.
(class foperator_equal): Add using.
(class foperator_not_equal): Same.
(class foperator_lt): Same.
(class foperator_le): Same.
(class foperator_gt): Same.
(class foperator_ge): Same.
* range-op.cc (range_op_handler::op1_op2_relation): New.
* range-op.h (range_operator_float::op1_op2_relation): New.
gcc/ChangeLog:
* range-op-float.cc (class foperator_negate): New.
(floating_op_table::floating_op_table): Add NEGATE_EXPR
(range_op_float_tests): Add negate tests.
gcc/ChangeLog:
* range-op-float.cc (frange_float): New.
(range_op_float_tests): New.
* range-op.cc (range_op_tests): Call range_op_float_tests.
The methods from which these derive all have a default relation_kind.
This patch just adds the default, to make it easier to write unit
tests later.
gcc/ChangeLog:
* range-op-float.cc: Add relation_kind = VREL_VARYING to all
methods.
Implementing ABS_EXPR allows us to fold certain __builtin_isinf calls
since they are expanded into calls involving ABS_EXPR.
This is an adaptation of the integer version.
gcc/ChangeLog:
* range-op-float.cc (class foperator_abs): New.
(floating_op_table::floating_op_table): Add ABS_EXPR entry.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/vrp-float-abs-1.c: New test.
gcc/ChangeLog:
* range-op-float.cc (foperator_unordered_le::op1_range): New.
(foperator_unordered_le::op2_range): New.
(foperator_unordered_gt::op1_range): New.
(foperator_unordered_gt::op2_range): New.
(foperator_unordered_ge::op1_range): New.
(foperator_unordered_ge::op2_range): New.
(foperator_unordered_equal::op1_range): New.
Most unordered comparisons can use the result from the ordered
version, if the operands are known not to be NAN or if the result is
true.
gcc/ChangeLog:
* range-op-float.cc (class foperator_unordered_lt): New.
(class foperator_relop_unknown): Remove
(class foperator_unordered_le): New.
(class foperator_unordered_gt): New.
(class foperator_unordered_ge): New.
(class foperator_unordered_equal): New.
(floating_op_table::floating_op_table): Replace all UN_EXPR
entries with their appropriate fop_unordered_* counterpart.
gcc/ChangeLog:
* range-op-float.cc (class foperator_identity): Make members public.
(class foperator_equal): Same.
(class foperator_not_equal): Same.
(class foperator_lt): Same.
(class foperator_le): Same.
(class foperator_gt): Same.
(class foperator_ge): Same.
(class foperator_unordered): Same.
(class foperator_ordered): Same.
gcc/ChangeLog:
* range-op-float.cc (foperator_not_equal::op1_range): Set NAN on
TRUE side for x != x.
gcc/ChangeLog:
* range-op-float.cc (foperator_unordered::op1_range): Set NAN when
operands are equal and result is TRUE.
The uses of finite_operands_p removed here are already guarded by an
earlier call to finite_operands_p.
gcc/ChangeLog:
* range-op-float.cc (foperator_lt::fold_range): Remove extra check
to finite_operands_p.
(foperator_le::fold_range): Same.
(foperator_gt::fold_range): Same.
(foperator_ge::fold_range): Same.
Similarly to how we drop NANs to UNDEFINED when -ffinite-math-only, I
think we can drop the numbers outside of the min/max representable
numbers to the representable number.
This means the endpoints of VR_VARYING for -ffinite-math-only can now
be the min/max representable numbers, instead of -INF and +INF.
Saturating in the setter means that the upcoming implementation of the
binary operators no longer has to worry about doing the right
thing for -ffinite-math-only. If the range goes outside the limits,
it'll get chopped down.
Tested on x86-64 Linux.
gcc/ChangeLog:
* range-op-float.cc (build_le): Use vrp_val_*.
(build_lt): Same.
(build_ge): Same.
(build_gt): Same.
* value-range.cc (frange::set): Chop ranges outside of the
representable numbers for -ffinite-math-only.
(frange::normalize_kind): Use vrp_val*.
(frange::verify_range): Same.
(frange::set_nonnegative): Same.
(range_tests_floats): Remove tests that depend on -INF and +INF.
* value-range.h (real_max_representable): Add prototype.
(real_min_representable): Same.
(vrp_val_max): Set max representable number for
-ffinite-math-only.
(vrp_val_min): Same but for min.
(frange::set_varying): Use vrp_val*.
Unary operations require op2 to be the range of the type of the LHS.
This is so the type for the LHS can be properly set.
* range-op-float.cc (range_operator_float::fold_range): New base
method for "int = float op int".
* range-op.cc (range_op_handler::fold_range): New case.
* range-op.h: Update prototypes.
Since NANs can be inserted by other passes even for -ffinite-math-only,
we can't depend on the flag to determine if a NAN is a possibility.
Instead, we must explicitly check for them.
In the case of -ffinite-math-only, paths leading up to a NAN are
undefined and can be considered unreachable. I have audited all the
relational code and made sure we're handling the known NAN case before
anything else, setting undefined when appropriate.
In the process, I revamped all the relational code handling NANs to
correctly notice paths that are unreachable.
The basic structure for ordered relational operators (except != of
course) is this:
If either operand is a known NAN, return FALSE.
The true side of a relop when one operand is a NAN is
unreachable.
On the false side of a relop when one operand is a NAN, we
know nothing about the other operand.
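A one-line reminder (plain C, not the range-op code) of why the true side
is unreachable with a known NAN operand - every ordered comparison against
a NAN is false:

  #include <stdio.h>

  int
  main (void)
  {
    double n = __builtin_nan ("");
    printf ("%d %d %d\n", n < 1.0, n > 1.0, n == n);  /* Prints 0 0 0.  */
    return 0;
  }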
Regstrapped on x86-64 and ppc64le Linux.
lapack testing on x86-64 with and without -ffinite-math-only.
PR tree-optimization/106967
gcc/ChangeLog:
* range-op-float.cc (foperator_equal::fold_range): Adjust for NAN.
(foperator_equal::op1_range): Same.
(foperator_not_equal::fold_range): Same.
(foperator_not_equal::op1_range): Same.
(foperator_lt::fold_range): Same.
(foperator_lt::op1_range): Same.
(foperator_lt::op2_range): Same.
(foperator_le::fold_range): Same.
(foperator_le::op1_range): Same.
(foperator_le::op2_range): Same.
(foperator_gt::fold_range): Same.
(foperator_gt::op1_range): Same.
(foperator_gt::op2_range): Same.
(foperator_ge::fold_range): Same.
(foperator_ge::op1_range): Same.
(foperator_ge::op2_range): Same.
(foperator_unordered::op1_range): Same.
(foperator_ordered::fold_range): Same.
(foperator_ordered::op1_range): Same.
(build_le): Assert that we don't have a NAN.
(build_lt): Same.
(build_gt): Same.
(build_ge): Same.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/pr106967.c: New test.
The attached patch rewrites the NAN and sign handling, dropping both
tristates in favor of a pair of boolean flags for NANs, and nothing at
all for signs. The signs are tracked in the range itself, so now it's
possible to describe things like [-0.0, +0.0] +NAN, [+0, +0], [-5, +0],
[+0, 3] -NAN, etc.
Here is an example of the various ranges and how they are displayed:
[frange] float VARYING NAN ;; Varying includes NAN
[frange] UNDEFINED ;; Empty set as always
[frange] float [] +-NAN ;; Unknown sign NAN
[frange] float [] -NAN ;; -NAN
[frange] float [] +NAN ;; +NAN
[frange] float [-0.0, 0.0] ;; All zeros.
[frange] float [-0.0, -0.0] +-NAN ;; -0 or NAN.
[frange] float [-5.0e+0, -1.0e+0] +NAN ;; [-5, -1] or +NAN
[frange] float [-5.0e+0, -0.0] +-NAN ;; [-5, -0] or NAN
[frange] float [-5.0e+0, -0.0] ;; [-5, -0]
[frange] float [5.0e+0, 1.0e+1] ;; [5, 10]
Notice the NAN signs are decoupled from the range, so we can represent
a negative range with a positive NAN. For this range,
frange::signbit_p() would return false, as only when the signs of the
NANs and range agree can we be certain.
There is no longer any pessimization of ranges for intersects
involving NANs. Also, union and intersect work with signed zeros:
// [-0, x] U [+0, x] => [-0, x]
// [ x, -0] U [ x, +0] => [ x, +0]
// [-0, x] ^ [+0, x] => [+0, x]
// [ x, -0] ^ [ x, +0] => [ x, -0]
The special casing for signed zeros in the singleton code is gone in
favor of just making sure the signs in the range agree, that is
[-0, -0] for example.
I have removed the idea that a known NAN is a "range", so a NAN is no
longer in the endpoints itself. Requesting the bound of a known NAN
is a hard fail. For that matter, we don't store the actual NAN in the
range. The only information we have are the set of boolean flags.
This way we make sure nothing seeps into the frange. This also means
it's explicit that we don't track anything but the sign in NANs. We
can revisit this if we desire to track signalling or whatever
concoction y'all can imagine.
Regstrapped with mpfr tests on x86-64 and ppc64le Linux. Selftests
were also run with -ffinite-math-only on x86-64.
At Jakub's suggestion, I built lapack with associated tests. They
pass on x86-64 and ppc64le Linux with no regressions from mainline.
As a sanity check, I also ran them for -ffinite-math-only on x86 which
(as expected) returned:
NaN arithmetic did not perform per the ieee spec
Otherwise, all tests pass for -ffinite-math-only.
gcc/ChangeLog:
* range-op-float.cc (frange_add_zeros): Replace set_signbit with
union of zero.
* value-query.cc (range_query::get_tree_range): Remove set_signbit
use.
* value-range-pretty-print.cc (vrange_printer::print_frange_prop):
Remove.
(vrange_printer::print_frange_nan): New.
* value-range-pretty-print.h (print_frange_prop): Remove.
(print_frange_nan): New.
* value-range-storage.cc (frange_storage_slot::set_frange): Set
kind and NAN fields.
(frange_storage_slot::get_frange): Restore kind and NAN fields.
* value-range-storage.h (class frange_storage_slot): Add kind and
NAN fields.
* value-range.cc (frange::update_nan): Remove.
(frange::set_signbit): Remove.
(frange::set): Adjust for NAN fields.
(frange::normalize_kind): Remove m_props.
(frange::combine_zeros): New.
(frange::union_nans): New.
(frange::union_): Handle new NAN fields.
(frange::intersect_nans): New.
(frange::intersect): Handle new NAN fields.
(frange::operator=): Same.
(frange::operator==): Same.
(frange::contains_p): Same.
(frange::singleton_p): Remove special case for signed zeros.
(frange::verify_range): Adjust for new NAN fields.
(frange::set_zero): Handle signed zeros.
(frange::set_nonnegative): Same.
(range_tests_nan): Adjust tests.
(range_tests_signed_zeros): Same.
(range_tests_signbit): Same.
(range_tests_floats): Same.
* value-range.h (class fp_prop): Remove.
(FP_PROP_ACCESSOR): Remove.
(class frange_props): Remove
(frange::lower_bound): NANs don't have endpoints.
(frange::upper_bound): Same.
(frange_props::operator==): Remove.
(frange_props::union_): Remove.
(frange_props::intersect): Remove.
(frange::update_nan): New.
(frange::clear_nan): New.
(frange::undefined_p): New.
(frange::set_nan): New.
(frange::known_finite): Adjust for new NAN representation.
(frange::maybe_isnan): Same.
(frange::known_isnan): Same.
(frange::signbit_p): Same.
* gimple-range-fold.cc (range_of_builtin_int_call): Rename
known_signbit_p into signbit_p.