|
gcc.target/arm/pr112337.c was failing to validate that adding MVE options
was compatible with the test environment, so add the missing checks.
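The usual dejagnu idiom for that looks like this (a sketch assuming the
standard MVE effective-target names; the directives actually added may
differ):
/* { dg-require-effective-target arm_v8_1m_mve_ok } */
/* { dg-add-options arm_v8_1m_mve } */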
gcc/testsuite/ChangeLog:
PR target/112337
* gcc.target/arm/pr112337.c: Check for, then use the right MVE
options.
|
|
Register alloc may expand a 3-operand arithmetic operation X = Y o CST as
X = CST
X o= Y
where it may be better to instead emit:
X = Y
X o= CST
because 1) the first insn may use MOVW for "X = Y", and 2) the
operation may be more efficient when performed with a constant,
for example when ADIW or SBIW can be used, or some bytes of
the constant are 0x00 or 0xff.
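For illustration, a hypothetical C function that hits this (assuming
16-bit int on AVR; the exact sequences depend on register allocation):
unsigned int
add2 (unsigned int y)
{
  /* Prefer "X = Y; X += 2" (MOVW + ADIW) over "X = 2; X += Y"
     (a pair of byte additions with carry).  */
  return y + 2;
}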
gcc/
* config/avr/avr.md: Add two RTL peepholes for PLUS, IOR and AND
in HI, PSI, SI that swap operation order from "X = CST, X o= Y"
to "X = Y, X o= CST".
|
|
Fixes: 08edf85f747b ("c++/modules: relax diagnostic about GMF contents")
gcc/c-family/ChangeLog:
* c.opt.urls: Regenerate.
|
|
The psABI allows using s9 as an alias of r22.
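A sketch of what such a test can exercise (hypothetical; the actual
regname-fp-s9.c may differ):
void
clobber_s9 (void)
{
  /* "s9" is now accepted wherever a register name is, as an alias of r22.  */
  __asm__ volatile ("" ::: "s9");
}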
gcc/ChangeLog:
* config/loongarch/loongarch.h (ADDITIONAL_REGISTER_NAMES): Add
s9 as an alias of r22.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/regname-fp-s9.c: New test.
|
|
The instructions printed by insn "*insv.any_shift.<mode>_split" were
sub-optimal. The code to print the improved output is lengthy and
performed by the new function avr_out_insv. As it turns out, the function
can also handle shift offsets of zero, which is the case for "*andhi3",
"*andpsi3" and "*andsi3". Thus, these three insns get a new 3-operand
alternative where the 3rd operand is an exact power of 2.
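A hypothetical C example of the zero-shift-offset case covered by the
new alternatives:
unsigned int
keep_bit6 (unsigned int x)
{
  /* AND with an exact power of 2; the source register may differ
     from the destination.  */
  return x & (1u << 6);
}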
gcc/
* config/avr/avr-protos.h (avr_out_insv): New proto.
* config/avr/avr.cc (avr_out_insv): New function.
(avr_adjust_insn_length) [ADJUST_LEN_INSV]: Handle case.
(avr_cbranch_cost) [ZERO_EXTRACT]: Adjust rtx costs.
* config/avr/avr.md (define_attr "adjust_len"): Add insv.
(andhi3, *andhi3, andpsi3, *andpsi3, andsi3, *andsi3):
Add constraint alternative where the 3rd operand is a power
of 2, and the source register may differ from the destination.
(*insv.any_shift.<mode>_split): Call avr_out_insv to output
instructions. Set attr "length" to "insv".
* config/avr/constraints.md (Cb2, Cb3, Cb4): New constraints.
gcc/testsuite/
* gcc.target/avr/torture/insv-anyshift-hi.c: New test.
* gcc.target/avr/torture/insv-anyshift-si.c: New test.
|
|
The following makes sure to use recognized patterns when vectorizing
roots during BB SLP discovery. We need to apply those late since
during root discovery we've not yet done pattern recognition.
All parts of the vectorizer assume patterns get used; for the testcase
we mix this up when doing live lane computation.
PR tree-optimization/114231
* tree-vect-slp.cc (vect_analyze_slp): Lookup patterns when
processing a BB SLP root.
* gcc.dg/vect/pr114231.c: New testcase.
|
|
On the following testcase, we have
(insn 10 7 11 2 (set (reg/v:TI 106 [ h ])
(rotate:TI (reg/v:TI 106 [ h ])
(const_int 64 [0x40]))) "pr114211.c":8:5 1042 {rotl64ti2_doubleword}
(nil))
before subreg1 and the pass decides to use
(reg:DI 127 [ h ]) / (reg:DI 128 [ h+8 ])
register pair instead of (reg/v:TI 106 [ h ]).
resolve_operand_for_swap_move_operator implements it by pretending it is
an assignment from
(concatn (reg:DI 127 [ h ]) (reg:DI 128 [ h+8 ]))
to
(concatn (reg:DI 128 [ h+8 ]) (reg:DI 127 [ h ]))
The problem is that if the rotate argument is the same as the destination, or
if there is even an overlap between the first half of the destination and the
second half of the source, we emit incorrect code, because the store to
(reg:DI 128 [ h+8 ]) overwrites what we need as the source of the second
move. The following patch detects that case and uses a temporary pseudo
to hold the original (reg:DI 128 [ h+8 ]) value across the first store.
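The testcase boils down to something like this (a sketch, not the
actual pr114211.c):
unsigned __int128
rot64 (unsigned __int128 h)
{
  /* A double-word rotate by 64 where source and destination overlap.  */
  h = (h << 64) | (h >> 64);
  return h;
}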
2024-03-05 Jakub Jelinek <jakub@redhat.com>
PR rtl-optimization/114211
* lower-subreg.cc (resolve_simple_move): For double-word
rotates by BITS_PER_WORD if there is overlap between source
and destination use a temporary.
* gcc.dg/pr114211.c: New test.
|
|
The following patch adds support for BIT_FIELD_REF lowering with a
large/huge _BitInt lhs. BIT_FIELD_REF requires a first operand that
has a mode, so the operand shouldn't itself be a huge _BitInt.
If we only access limbs from inside of the BIT_FIELD_REF using constant
indexes, we can just create a new BIT_FIELD_REF to extract the limb,
but if we need to use a variable index in a loop, I'm afraid we need
to spill it into memory, which is what the following patch does.
If there is some bitwise type for the extraction, it extracts just
what we need and not more than that, otherwise it spills the whole
first argument of BIT_FIELD_REF and uses MEM_REF with an offset
with VIEW_CONVERT_EXPR around it.
2024-03-05 Jakub Jelinek <jakub@redhat.com>
PR middle-end/114157
* gimple-lower-bitint.cc: Include stor-layout.h.
(mergeable_op): Return true for BIT_FIELD_REF.
(struct bitint_large_huge): Declare handle_bit_field_ref method.
(bitint_large_huge::handle_bit_field_ref): New method.
(bitint_large_huge::handle_stmt): Use it for BIT_FIELD_REF.
* gcc.dg/bitint-98.c: New test.
* gcc.target/i386/avx2-pr114157.c: New test.
* gcc.target/i386/avx512f-pr114157.c: New test.
|
|
[PR114116]
As mentioned in the PR, on x86_64 currently a lot of ICEs end up
with crashes in the unwinder like:
during RTL pass: expand
pr114044-2.c: In function ‘foo’:
pr114044-2.c:5:3: internal compiler error: in expand_fn_using_insn, at internal-fn.cc:208
5 | __builtin_clzg (a);
| ^~~~~~~~~~~~~~~~~~
0x7d9246 expand_fn_using_insn
../../gcc/internal-fn.cc:208
pr114044-2.c:5:3: internal compiler error: Segmentation fault
0x1554262 crash_signal
../../gcc/toplev.cc:319
0x2b20320 x86_64_fallback_frame_state
./md-unwind-support.h:63
0x2b20320 uw_frame_state_for
../../../libgcc/unwind-dw2.c:1013
0x2b2165d _Unwind_Backtrace
../../../libgcc/unwind.inc:303
0x2acbd69 backtrace_full
../../libbacktrace/backtrace.c:127
0x2a32fa6 diagnostic_context::action_after_output(diagnostic_t)
../../gcc/diagnostic.cc:781
0x2a331bb diagnostic_action_after_output(diagnostic_context*, diagnostic_t)
../../gcc/diagnostic.h:1002
0x2a331bb diagnostic_context::report_diagnostic(diagnostic_info*)
../../gcc/diagnostic.cc:1633
0x2a33543 diagnostic_impl
../../gcc/diagnostic.cc:1767
0x2a33c26 internal_error(char const*, ...)
../../gcc/diagnostic.cc:2225
0xe232c8 fancy_abort(char const*, int, char const*)
../../gcc/diagnostic.cc:2336
0x7d9246 expand_fn_using_insn
../../gcc/internal-fn.cc:208
Segmentation fault (core dumped)
The problem is with the PR38534 r14-8470 changes, which avoid saving
call-saved registers in noreturn functions. If such a function ever touches
the bp register but, because of the r14-8470 changes, doesn't save it in the
prologue, the caller or any other function in the backtrace uses a frame
pointer, and the noreturn function or anything it calls directly or
indirectly calls backtrace, then the unwinder crashes, because the bp
register contains some unrelated value, but in the frames which do use a
frame pointer the CFA is based on the bp register.
In theory this could happen with any other call-saved register; e.g. code
written by hand in assembly with .cfi_* directives could use any other
call-saved register as the register in which to store the CFA or something
related to that. But in reality, at least for compiler-generated code and
usual assembly, just making sure bp doesn't contain garbage should be
enough for backtrace purposes. In the debugger it will of course not be
enough; the values of the arguments etc. can be lost (if DW_CFA_undefined
is emitted) or garbage.
So, I think for noreturn functions we should at least save the bp register
if we use it. If the user asks for the behavior using the
no_callee_saved_registers attribute, let's honor what is asked for (but then
it is up to the user to make sure e.g. backtrace isn't called from the
function or anything it calls). As discussed in the PR, whether to save bp
or not shouldn't be based on whether we compile with -g or -g0, because we
don't want code generation changes without/with debugging; it would also
break -fcompare-debug, and users can call backtrace(3), which doesn't use
debug info, just unwind info; even backtrace_symbols{,_fd}(3) don't use
debug info but just look at the dynamic symbol table.
The patch also adds a check for the no_caller_saved_registers attribute in
the implicit addition of not saving callee-saved registers in noreturn
functions, because I think
__attribute__((no_caller_saved_registers, noreturn)) would otherwise
error that the no_caller_saved_registers and no_callee_saved_registers
attributes are incompatible (but the user didn't specify anything like
that).
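A sketch of the problematic shape (hypothetical; backtrace(3) is
declared in <execinfo.h>):
#include <execinfo.h>
#include <stdlib.h>

__attribute__((noreturn)) void
fatal (void)
{
  void *buf[64];
  /* The unwinder walks CFAs; in frames that use a frame pointer the
     CFA is based on bp, so bp must not hold garbage at this point.  */
  backtrace (buf, 64);
  exit (1);
}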
2024-03-05 Jakub Jelinek <jakub@redhat.com>
PR target/114116
* config/i386/i386.h (enum call_saved_registers_type): Add
TYPE_NO_CALLEE_SAVED_REGISTERS_EXCEPT_BP enumerator.
* config/i386/i386-options.cc (ix86_set_func_type): Remove
has_no_callee_saved_registers variable, add no_callee_saved_registers
instead, initialize it depending on whether it is
no_callee_saved_registers function or not. Don't set it if
no_caller_saved_registers attribute is present. Adjust users.
* config/i386/i386.cc (ix86_function_ok_for_sibcall): Handle
TYPE_NO_CALLEE_SAVED_REGISTERS_EXCEPT_BP like
TYPE_NO_CALLEE_SAVED_REGISTERS.
(ix86_save_reg): Handle TYPE_NO_CALLEE_SAVED_REGISTERS_EXCEPT_BP.
* gcc.target/i386/pr38534-1.c: Allow push/pop of bp.
* gcc.target/i386/pr38534-4.c: Likewise.
* gcc.target/i386/pr38534-2.c: Likewise.
* gcc.target/i386/pr38534-3.c: Likewise.
* gcc.target/i386/pr114097-1.c: Likewise.
* gcc.target/i386/stack-check-17.c: Expect no pop on ! ia32.
|
|
Clean up mode_size related code which is not used anymore. The tests
below passed for this patch.
* The RVV full regression test.
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_v_adjust_bytesize): Cleanup unused
mode_size related code.
Signed-off-by: Pan Li <pan2.li@intel.com>
|
|
Issuing a hard error when the GMF doesn't consist only of preprocessing
directives happens to be inconvenient for automated testcase reduction
via cvise. This patch relaxes this diagnostic into a pedwarn that can
be disabled with -Wno-global-module.
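For instance (a minimal sketch):
module;
int x;   // not a preprocessing directive: formerly a hard error, now a
         // pedwarn that can be suppressed with -Wno-global-module
export module M;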
gcc/c-family/ChangeLog:
* c.opt (Wglobal-module): New warning.
gcc/cp/ChangeLog:
* parser.cc (cp_parser_translation_unit): Relax GMF contents
error into a pedwarn.
gcc/ChangeLog:
* doc/invoke.texi (-Wno-global-module): Document.
gcc/testsuite/ChangeLog:
* g++.dg/modules/friend-6_a.C: Pass -Wno-global-module instead
of -Wno-pedantic. Remove now unnecessary preprocessing
directives from GMF.
Reviewed-by: Jason Merrill <jason@redhat.com>
|
|
Currently a using-declaration bringing a name into its own namespace is
a no-op, except for functions. However, this prevents people from being
able to redeclare a name brought in from the GMF as exported, which this
patch fixes.
Apart from marking declarations as exported they are also now marked as
effectively being in the module purview (due to the using-decl) so that
they are properly processed, as 'add_binding_entity' assumes that
declarations not in the module purview cannot possibly be exported.
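A sketch of the pattern this enables (hypothetical, in the spirit of
the new tests):
module;
namespace ns { struct A { }; }   // GMF declaration
export module M;
namespace ns
{
  export using ns::A;   // redeclare A as exported; previously a no-op
}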
gcc/cp/ChangeLog:
* name-lookup.cc (walk_module_binding): Remove completed FIXME.
(do_nonmember_using_decl): Mark redeclared entities as exported
when needed. Check for re-exporting internal linkage types.
gcc/testsuite/ChangeLog:
* g++.dg/modules/using-12.C: New test.
* g++.dg/modules/using-13.h: New test.
* g++.dg/modules/using-13_a.C: New test.
* g++.dg/modules/using-13_b.C: New test.
Signed-off-by: Nathaniel Shead <nathanieloshead@gmail.com>
|
|
This patch moves the initial/termination user procedure functionality in
pim and iso versions of M2RTS into M2Dependent. This ensures that
finalization/initialization procedures will always be invoked for both -fiso
and -fpim. Prior to this patch M2Dependent called M2RTS for
termination procedure cleanup and always invoked the pim M2RTS.
gcc/m2/ChangeLog:
PR modula2/114227
* gm2-libs-iso/M2RTS.mod (ProcedureChain): Remove.
(ProcedureList): Remove.
(ExecuteReverse): Remove.
(ExecuteTerminationProcedures): Rewrite.
(ExecuteInitialProcedures): Rewrite.
(AppendProc): Remove.
(InstallTerminationProcedure): Rewrite.
(InstallInitialProcedure): Rewrite.
(InitProcList): Remove.
* gm2-libs/M2Dependent.def (InstallTerminationProcedure):
New procedure.
(ExecuteTerminationProcedures): New procedure.
(InstallInitialProcedure): New procedure.
(ExecuteInitialProcedures): New procedure.
* gm2-libs/M2Dependent.mod (ProcedureChain): New type.
(ProcedureList): New type.
(ExecuteReverse): New procedure.
(ExecuteTerminationProcedures): New procedure.
(ExecuteInitialProcedures): New procedure.
(AppendProc): New procedure.
(InstallTerminationProcedure): New procedure.
(InstallInitialProcedure): New procedure.
(InitProcList): New procedure.
* gm2-libs/M2RTS.mod (ProcedureChain): Remove.
(ProcedureList): Remove.
(ExecuteReverse): Remove.
(ExecuteTerminationProcedures): Rewrite.
(ExecuteInitialProcedures): Rewrite.
(AppendProc): Remove.
(InstallTerminationProcedure): Rewrite.
(InstallInitialProcedure): Rewrite.
(InitProcList): Remove.
Signed-off-by: Gaius Mulley <gaiusmod2@gmail.com>
|
|
I caused a regression with commit r10-908 by adding a constraint to the
non-explicit allocator-extended default constructor, but seemingly
forgot to add an explicit overload with the corresponding constraint.
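A sketch of the kind of code affected (hypothetical, in the spirit of
the new 114147.cc test):
#include <memory>
#include <tuple>

struct X { explicit X() = default; };

// The non-explicit allocator-extended default constructor is
// constrained away here, so this needs the previously missing
// explicit overload.
std::tuple<X> t{std::allocator_arg, std::allocator<X>{}};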
libstdc++-v3/ChangeLog:
PR libstdc++/114147
* include/std/tuple (tuple::tuple(allocator_arg_t, const Alloc&)):
Add missing overload of allocator-extended default constructor.
(tuple<T1,T2>::tuple(allocator_arg_t, const Alloc&)): Likewise.
* testsuite/20_util/tuple/cons/114147.cc: New test.
|
|
Similar to memmove and memcpy, the BPF backend cannot fall back on a
library call to implement __builtin_memset, and should always expand
calls to it inline if possible.
This patch implements simple inline expansion of memset in the BPF
backend in a verifier-friendly way. Similar to memcpy and memmove, the
size must be an integer constant, as is also required by clang.
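For example (a sketch):
char buf[16];

void
zero_buf (void)
{
  /* The size is a compile-time constant, so this can be expanded
     inline via the new setmemdi pattern instead of a libcall.  */
  __builtin_memset (buf, 0xa, sizeof (buf));
}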
gcc/
* config/bpf/bpf-protos.h (bpf_expand_setmem): New prototype.
* config/bpf/bpf.cc (bpf_expand_setmem): New.
* config/bpf/bpf.md (setmemdi): New define_expand.
gcc/testsuite/
* gcc.target/bpf/memset-1.c: New test.
|
|
* sv.po: Update.
|
|
On Mon, Mar 04, 2024 at 05:18:39PM +0100, Rainer Orth wrote:
> unfortunately, the patch broke Solaris/SPARC bootstrap
> (sparc-sun-solaris2.11):
>
> .../gcc/combine.cc: In function 'rtx_code simplify_comparison(rtx_code, rtx_def**, rtx_def**)':
> .../gcc/combine.cc:12101:25: error: '*(unsigned int*)((char*)&inner_mode + offsetof(scalar_int_mode, scalar_int_mode::m_mode))' may be used uninitialized [-Werror=maybe-uninitialized]
> 12101 | scalar_int_mode mode, inner_mode, tmode;
> | ^~~~~~~~~~
I don't see how it could ever work properly; inner_mode in that spot is
just uninitialized.
I think we shouldn't worry about paradoxical subregs of non-scalar_int_mode
REGs/MEMs, and for the scalar_int_mode ones we should initialize inner_mode
before we use it.
Another option would be to use
maybe_lt (GET_MODE_PRECISION (GET_MODE (SUBREG_REG (op0))), BITS_PER_WORD)
and
load_extend_op (GET_MODE (SUBREG_REG (op0))) == ZERO_EXTEND,
or set machine_mode smode = GET_MODE (SUBREG_REG (op0)); and use it in
those two spots.
2024-03-04 Jakub Jelinek <jakub@redhat.com>
PR rtl-optimization/113010
* combine.cc (simplify_comparison): Guard the
WORD_REGISTER_OPERATIONS check on scalar_int_mode of SUBREG_REG
and initialize inner_mode.
|
|
This patch fixes the erroneous use of a mode attribute without a mode iterator
in the pattern and removes unused unspecs and iterators.
gcc/ChangeLog:
* config/arm/iterators.md (supf): Remove VMLALDAVXQ_U, VMLALDAVXQ_P_U,
VMLALDAVAXQ_U cases.
(VMLALDAVXQ): Remove iterator.
(VMLALDAVXQ_P): Likewise.
(VMLALDAVAXQ): Likewise.
* config/arm/mve.md (mve_vstrwq_p_fv4sf): Replace use of <MVE_VPRED>
mode iterator attribute with V4BI mode.
* config/arm/unspecs.md (VMLALDAVXQ_U, VMLALDAVXQ_P_U,
VMLALDAVAXQ_U): Remove unused unspecs.
|
|
This patch annotates some MVE across lane instructions with a new attribute.
We use this attribute to let the compiler know that these instructions can be
safely implicitly predicated when tail predicating if their operands are
guaranteed to have zeroed tail-predicated lanes. These instructions were
selected because having the value 0 in those lanes or 'tail-predicating'
those lanes has the same effect.
gcc/ChangeLog:
* config/arm/arm.md (mve_safe_imp_xlane_pred): New attribute.
* config/arm/iterators.md (mve_vmaxmin_safe_imp): New iterator
attribute.
* config/arm/mve.md (vaddvq_s, vaddvq_u, vaddlvq_s, vaddlvq_u,
vaddvaq_s, vaddvaq_u, vmaxavq_s, vmaxvq_u, vmladavq_s, vmladavq_u,
vmladavxq_s, vmlsdavq_s, vmlsdavxq_s, vaddlvaq_s, vaddlvaq_u,
vmlaldavq_u, vmlaldavq_s, vmlaldavq_u, vmlaldavxq_s, vmlsldavq_s,
vmlsldavxq_s, vrmlaldavhq_u, vrmlaldavhq_s, vrmlaldavhxq_s,
vrmlsldavhq_s, vrmlsldavhxq_s, vrmlaldavhaq_s, vrmlaldavhaq_u,
vrmlaldavhaxq_s, vrmlsldavhaq_s, vrmlsldavhaxq_s, vabavq_s, vabavq_u,
vmladavaq_u, vmladavaq_s, vmladavaxq_s, vmlsdavaq_s, vmlsdavaxq_s,
vmlaldavaq_s, vmlaldavaq_u, vmlaldavaxq_s, vmlsldavaq_s,
vmlsldavaxq_s): Added mve_safe_imp_xlane_pred.
|
|
unpredicated insns
This patch adds an attribute to the mve md patterns to be able to identify
predicable MVE instructions and what their predicated and unpredicated variants
are. This attribute is used to encode the icode of the unpredicated variant of
an instruction in its predicated variant.
This will make it possible for us to transform VPT-predicated insns in
the insn chain into their unpredicated equivalents when transforming the loop
into a MVE Tail-Predicated Low Overhead Loop. For example:
`mve_vldrbq_z_<supf><mode> -> mve_vldrbq_<supf><mode>`.
gcc/ChangeLog:
* config/arm/arm.md (mve_unpredicated_insn): New attribute.
* config/arm/arm.h (MVE_VPT_PREDICATED_INSN_P): New define.
(MVE_VPT_UNPREDICATED_INSN_P): Likewise.
(MVE_VPT_PREDICABLE_INSN_P): Likewise.
* config/arm/vec-common.md (mve_vshlq_<supf><mode>): Add attribute.
* config/arm/mve.md (arm_vcx1q<a>_p_v16qi): Add attribute.
(arm_vcx1q<a>v16qi): Likewise.
(arm_vcx1qav16qi): Likewise.
(arm_vcx1qv16qi): Likewise.
(arm_vcx2q<a>_p_v16qi): Likewise.
(arm_vcx2q<a>v16qi): Likewise.
(arm_vcx2qav16qi): Likewise.
(arm_vcx2qv16qi): Likewise.
(arm_vcx3q<a>_p_v16qi): Likewise.
(arm_vcx3q<a>v16qi): Likewise.
(arm_vcx3qav16qi): Likewise.
(arm_vcx3qv16qi): Likewise.
(@mve_<mve_insn>q_<supf><mode>): Likewise.
(@mve_<mve_insn>q_int_<supf><mode>): Likewise.
(@mve_<mve_insn>q_<supf>v4si): Likewise.
(@mve_<mve_insn>q_n_<supf><mode>): Likewise.
(@mve_<mve_insn>q_r_<supf><mode>): Likewise.
(@mve_<mve_insn>q_f<mode>): Likewise.
(@mve_<mve_insn>q_m_<supf><mode>): Likewise.
(@mve_<mve_insn>q_m_n_<supf><mode>): Likewise.
(@mve_<mve_insn>q_m_r_<supf><mode>): Likewise.
(@mve_<mve_insn>q_m_f<mode>): Likewise.
(@mve_<mve_insn>q_int_m_<supf><mode>): Likewise.
(@mve_<mve_insn>q_p_<supf>v4si): Likewise.
(@mve_<mve_insn>q_p_<supf><mode>): Likewise.
(@mve_<mve_insn>q<mve_rot>_<supf><mode>): Likewise.
(@mve_<mve_insn>q<mve_rot>_f<mode>): Likewise.
(@mve_<mve_insn>q<mve_rot>_m_<supf><mode>): Likewise.
(@mve_<mve_insn>q<mve_rot>_m_f<mode>): Likewise.
(mve_v<absneg_str>q_f<mode>): Likewise.
(mve_<mve_addsubmul>q<mode>): Likewise.
(mve_<mve_addsubmul>q_f<mode>): Likewise.
(mve_vadciq_<supf>v4si): Likewise.
(mve_vadciq_m_<supf>v4si): Likewise.
(mve_vadcq_<supf>v4si): Likewise.
(mve_vadcq_m_<supf>v4si): Likewise.
(mve_vandq_<supf><mode>): Likewise.
(mve_vandq_f<mode>): Likewise.
(mve_vandq_m_<supf><mode>): Likewise.
(mve_vandq_m_f<mode>): Likewise.
(mve_vandq_s<mode>): Likewise.
(mve_vandq_u<mode>): Likewise.
(mve_vbicq_<supf><mode>): Likewise.
(mve_vbicq_f<mode>): Likewise.
(mve_vbicq_m_<supf><mode>): Likewise.
(mve_vbicq_m_f<mode>): Likewise.
(mve_vbicq_m_n_<supf><mode>): Likewise.
(mve_vbicq_n_<supf><mode>): Likewise.
(mve_vbicq_s<mode>): Likewise.
(mve_vbicq_u<mode>): Likewise.
(@mve_vclzq_s<mode>): Likewise.
(mve_vclzq_u<mode>): Likewise.
(@mve_vcmp_<mve_cmp_op>q_<mode>): Likewise.
(@mve_vcmp_<mve_cmp_op>q_n_<mode>): Likewise.
(@mve_vcmp_<mve_cmp_op>q_f<mode>): Likewise.
(@mve_vcmp_<mve_cmp_op>q_n_f<mode>): Likewise.
(@mve_vcmp_<mve_cmp_op1>q_m_f<mode>): Likewise.
(@mve_vcmp_<mve_cmp_op1>q_m_n_<supf><mode>): Likewise.
(@mve_vcmp_<mve_cmp_op1>q_m_<supf><mode>): Likewise.
(@mve_vcmp_<mve_cmp_op1>q_m_n_f<mode>): Likewise.
(mve_vctp<MVE_vctp>q<MVE_vpred>): Likewise.
(mve_vctp<MVE_vctp>q_m<MVE_vpred>): Likewise.
(mve_vcvtaq_<supf><mode>): Likewise.
(mve_vcvtaq_m_<supf><mode>): Likewise.
(mve_vcvtbq_f16_f32v8hf): Likewise.
(mve_vcvtbq_f32_f16v4sf): Likewise.
(mve_vcvtbq_m_f16_f32v8hf): Likewise.
(mve_vcvtbq_m_f32_f16v4sf): Likewise.
(mve_vcvtmq_<supf><mode>): Likewise.
(mve_vcvtmq_m_<supf><mode>): Likewise.
(mve_vcvtnq_<supf><mode>): Likewise.
(mve_vcvtnq_m_<supf><mode>): Likewise.
(mve_vcvtpq_<supf><mode>): Likewise.
(mve_vcvtpq_m_<supf><mode>): Likewise.
(mve_vcvtq_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_n_from_f_<supf><mode>): Likewise.
(mve_vcvtq_m_n_to_f_<supf><mode>): Likewise.
(mve_vcvtq_m_to_f_<supf><mode>): Likewise.
(mve_vcvtq_n_from_f_<supf><mode>): Likewise.
(mve_vcvtq_n_to_f_<supf><mode>): Likewise.
(mve_vcvtq_to_f_<supf><mode>): Likewise.
(mve_vcvttq_f16_f32v8hf): Likewise.
(mve_vcvttq_f32_f16v4sf): Likewise.
(mve_vcvttq_m_f16_f32v8hf): Likewise.
(mve_vcvttq_m_f32_f16v4sf): Likewise.
(mve_vdwdupq_m_wb_u<mode>_insn): Likewise.
(mve_vdwdupq_wb_u<mode>_insn): Likewise.
(mve_veorq_s<mode>): Likewise.
(mve_veorq_u<mode>): Likewise.
(mve_veorq_f<mode>): Likewise.
(mve_vidupq_m_wb_u<mode>_insn): Likewise.
(mve_vidupq_u<mode>_insn): Likewise.
(mve_viwdupq_m_wb_u<mode>_insn): Likewise.
(mve_viwdupq_wb_u<mode>_insn): Likewise.
(mve_vldrbq_<supf><mode>): Likewise.
(mve_vldrbq_gather_offset_<supf><mode>): Likewise.
(mve_vldrbq_gather_offset_z_<supf><mode>): Likewise.
(mve_vldrbq_z_<supf><mode>): Likewise.
(mve_vldrdq_gather_base_<supf>v2di): Likewise.
(mve_vldrdq_gather_base_wb_<supf>v2di_insn): Likewise.
(mve_vldrdq_gather_base_wb_z_<supf>v2di_insn): Likewise.
(mve_vldrdq_gather_base_z_<supf>v2di): Likewise.
(mve_vldrdq_gather_offset_<supf>v2di): Likewise.
(mve_vldrdq_gather_offset_z_<supf>v2di): Likewise.
(mve_vldrdq_gather_shifted_offset_<supf>v2di): Likewise.
(mve_vldrdq_gather_shifted_offset_z_<supf>v2di): Likewise.
(mve_vldrhq_<supf><mode>): Likewise.
(mve_vldrhq_fv8hf): Likewise.
(mve_vldrhq_gather_offset_<supf><mode>): Likewise.
(mve_vldrhq_gather_offset_fv8hf): Likewise.
(mve_vldrhq_gather_offset_z_<supf><mode>): Likewise.
(mve_vldrhq_gather_offset_z_fv8hf): Likewise.
(mve_vldrhq_gather_shifted_offset_<supf><mode>): Likewise.
(mve_vldrhq_gather_shifted_offset_fv8hf): Likewise.
(mve_vldrhq_gather_shifted_offset_z_<supf><mode>): Likewise.
(mve_vldrhq_gather_shifted_offset_z_fv8hf): Likewise.
(mve_vldrhq_z_<supf><mode>): Likewise.
(mve_vldrhq_z_fv8hf): Likewise.
(mve_vldrwq_<supf>v4si): Likewise.
(mve_vldrwq_fv4sf): Likewise.
(mve_vldrwq_gather_base_<supf>v4si): Likewise.
(mve_vldrwq_gather_base_fv4sf): Likewise.
(mve_vldrwq_gather_base_wb_<supf>v4si_insn): Likewise.
(mve_vldrwq_gather_base_wb_fv4sf_insn): Likewise.
(mve_vldrwq_gather_base_wb_z_<supf>v4si_insn): Likewise.
(mve_vldrwq_gather_base_wb_z_fv4sf_insn): Likewise.
(mve_vldrwq_gather_base_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_base_z_fv4sf): Likewise.
(mve_vldrwq_gather_offset_<supf>v4si): Likewise.
(mve_vldrwq_gather_offset_fv4sf): Likewise.
(mve_vldrwq_gather_offset_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_offset_z_fv4sf): Likewise.
(mve_vldrwq_gather_shifted_offset_<supf>v4si): Likewise.
(mve_vldrwq_gather_shifted_offset_fv4sf): Likewise.
(mve_vldrwq_gather_shifted_offset_z_<supf>v4si): Likewise.
(mve_vldrwq_gather_shifted_offset_z_fv4sf): Likewise.
(mve_vldrwq_z_<supf>v4si): Likewise.
(mve_vldrwq_z_fv4sf): Likewise.
(mve_vmvnq_s<mode>): Likewise.
(mve_vmvnq_u<mode>): Likewise.
(mve_vornq_<supf><mode>): Likewise.
(mve_vornq_f<mode>): Likewise.
(mve_vornq_m_<supf><mode>): Likewise.
(mve_vornq_m_f<mode>): Likewise.
(mve_vornq_s<mode>): Likewise.
(mve_vornq_u<mode>): Likewise.
(mve_vorrq_<supf><mode>): Likewise.
(mve_vorrq_f<mode>): Likewise.
(mve_vorrq_m_<supf><mode>): Likewise.
(mve_vorrq_m_f<mode>): Likewise.
(mve_vorrq_m_n_<supf><mode>): Likewise.
(mve_vorrq_n_<supf><mode>): Likewise.
(mve_vorrq_s<mode>): Likewise.
(mve_vsbciq_<supf>v4si): Likewise.
(mve_vsbciq_m_<supf>v4si): Likewise.
(mve_vsbcq_<supf>v4si): Likewise.
(mve_vsbcq_m_<supf>v4si): Likewise.
(mve_vshlcq_<supf><mode>): Likewise.
(mve_vshlcq_m_<supf><mode>): Likewise.
(mve_vshrq_m_n_<supf><mode>): Likewise.
(mve_vshrq_n_<supf><mode>): Likewise.
(mve_vstrbq_<supf><mode>): Likewise.
(mve_vstrbq_p_<supf><mode>): Likewise.
(mve_vstrbq_scatter_offset_<supf><mode>_insn): Likewise.
(mve_vstrbq_scatter_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrdq_scatter_base_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_p_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_wb_<supf>v2di): Likewise.
(mve_vstrdq_scatter_base_wb_p_<supf>v2di): Likewise.
(mve_vstrdq_scatter_offset_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_offset_p_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_shifted_offset_<supf>v2di_insn): Likewise.
(mve_vstrdq_scatter_shifted_offset_p_<supf>v2di_insn): Likewise.
(mve_vstrhq_<supf><mode>): Likewise.
(mve_vstrhq_fv8hf): Likewise.
(mve_vstrhq_p_<supf><mode>): Likewise.
(mve_vstrhq_p_fv8hf): Likewise.
(mve_vstrhq_scatter_offset_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_offset_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_offset_p_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_fv8hf_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_p_<supf><mode>_insn): Likewise.
(mve_vstrhq_scatter_shifted_offset_p_fv8hf_insn): Likewise.
(mve_vstrwq_<supf>v4si): Likewise.
(mve_vstrwq_fv4sf): Likewise.
(mve_vstrwq_p_<supf>v4si): Likewise.
(mve_vstrwq_p_fv4sf): Likewise.
(mve_vstrwq_scatter_base_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_fv4sf): Likewise.
(mve_vstrwq_scatter_base_p_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_p_fv4sf): Likewise.
(mve_vstrwq_scatter_base_wb_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_wb_fv4sf): Likewise.
(mve_vstrwq_scatter_base_wb_p_<supf>v4si): Likewise.
(mve_vstrwq_scatter_base_wb_p_fv4sf): Likewise.
(mve_vstrwq_scatter_offset_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_offset_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_offset_p_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_offset_p_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_fv4sf_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_p_<supf>v4si_insn): Likewise.
(mve_vstrwq_scatter_shifted_offset_p_fv4sf_insn): Likewise.
|
|
...to offer a more realistic example.
gcc/ChangeLog:
* doc/extend.texi: Update [[gnu::no_dangling]].
|
|
The masks and bitvectors were broken when nunits==32 on hosts where int is
32-bit.
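A sketch of the overflow class being avoided (hypothetical names; the
point is the width of the shifted constant):
unsigned long long
lane_mask (int nunits)
{
  /* With 32-bit int, "1 << 31" overflows and "1 << 32" is undefined;
     shift a full-width constant instead.  */
  return 1ULL << (nunits - 1);
}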
gcc/ChangeLog:
* dojump.cc (do_compare_and_jump): Use full-width integers for shifts.
* expr.cc (store_constructor): Likewise.
(do_store_flag): Likewise.
|
|
There were several commits that didn't regenerate the opt.urls files.
Fixes: 438ef143679e ("rs6000: Neuter option -mpower{8,9}-vector")
Fixes: 50c549ef3db6 ("gccrs: enable -Winfinite-recursion warnings by default")
Fixes: 25bb8a40abd9 ("Move docs for -Wuse-after-free and -Wuseless-cast")
Fixes: 48448055fb70 ("AVR: Support .rodata in Flash for AVR64* and AVR128*")
Fixes: 42503cc257fb ("AVR: Document option -mskip-bug")
Fixes: 7de5bb642c12 ("i386: [APX] Document inline asm behavior and new switch")
Fixes: 49a14ee488b8 ("Add -mevex512 into invoke.texi")
Fixes: 4666cbde5e6d ("Sort warning options in c-family/c.opt.")
Fixes: cda383616183 ("AVR: target/114100 - Better indirect accesses for reduced Tiny")
gcc/c-family/ChangeLog:
* c.opt.urls: Regenerate.
gcc/ChangeLog:
* common.opt.urls: Regenerate.
* config/avr/avr.opt.urls: Likewise.
* config/i386/i386.opt.urls: Likewise.
* config/pru/pru.opt.urls: Likewise.
* config/riscv/riscv.opt.urls: Likewise.
* config/rs6000/rs6000.opt.urls: Likewise.
gcc/rust/ChangeLog:
* lang.opt.urls: Regenerate.
|
|
Excerpt from gcc.sum:
[...]
PASS: gcc.c-torture/execute/20101011-1.c -O0 (test for excess errors)
FAIL: gcc.c-torture/execute/20101011-1.c -O0 execution test
PASS: gcc.c-torture/execute/20101011-1.c -O1 (test for excess errors)
FAIL: gcc.c-torture/execute/20101011-1.c -O1 execution test
[ ... ]
This is because H8 MCUs do not throw a "divide by zero" exception.
gcc/testsuite
* gcc.c-torture/execute/20101011-1.c: Do not test on H8 series.
|
|
The following avoids lowering a volatile bitfield access, and in case
the if-converted and original loops end up in different outer loops
because of enabled simplifications, scraps the result since that is not
how the vectorizer expects the loops to be laid out.
PR tree-optimization/114197
* tree-if-conv.cc (bitfields_to_lower_p): Do not lower if
there are volatile bitfield accesses.
(pass_if_conversion::execute): Throw away result if the
if-converted and original loops are not nested as expected.
* gcc.dg/torture/pr114197.c: New testcase.
|
|
The following avoids creating unsupported VEC_COND_EXPRs as part of
SIMD clone call mask argument setup during vectorization, which results
in inefficient decomposition of the operation during vector lowering.
PR tree-optimization/114164
* tree-vect-stmts.cc (vectorizable_simd_clone_call): Fail if
the code generated for mask argument setup is not supported.
|
|
[PR114216]
For the type of the target callbacks we elsewhere use void (*) (void *), and
IMHO we should use that for the reverse offload fallback as well (where the
actual callback is emitted using the same code as for host fallback or device
kernel entry routines), even though it is also ok to use void (*) () before
C23 and we aren't building libgomp with C23 yet. On some arches void (*) ()
could perhaps result in worse code generation, because calls through it, like
calls through casts to unprototyped functions, sometimes need to pass an
argument in two different spots so that it works both when passed through ...
and as a named argument.
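Roughly, the two pointer types in question (a sketch with hypothetical
names):
void (*host_fn_unprototyped) ();       /* pre-C23: unspecified arguments */
void (*host_fn_prototyped) (void *);   /* one well-defined calling idiom */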
2024-03-04 Jakub Jelinek <jakub@redhat.com>
PR libgomp/114216
* target.c (gomp_target_rev): Change host_fn type and corresponding
cast from void (*)() to void (*) (void *).
|
|
For precision less than int we apply the adjustment to make it defined
at zero after the adjustment to make it compute CLZ rather than CTZ.
That's wrong.
PR tree-optimization/114203
* tree-ssa-loop-niter.cc (build_cltz_expr): Apply CTZ->CLZ
adjustment before making the result defined at zero.
* gcc.dg/torture/pr114203.c: New testcase.
|
|
The following fixes a missing replacement of the reduction value
used in the epilog, causing the scalar reduction to be kept live
across the early break exit path.
PR tree-optimization/114192
* tree-vect-loop.cc (vect_create_epilog_for_reduction): Use the
appropriate def for the live out stmt in case of an alternate
exit.
|
|
We ICE on the following testcase due to invalid tree sharing.
The second hunk fixes that; the first one is from me looking around at
other spots which might end up with invalid tree sharing too.
2024-03-04 Jakub Jelinek <jakub@redhat.com>
PR middle-end/114209
* gimple-lower-bitint.cc (bitint_large_huge::limb_access): Call
unshare_expr when creating a MEM_REF from MEM_REF.
(bitint_large_huge::lower_stmt): Call unshare_expr.
* gcc.dg/bitint-97.c: New test.
|
|
The vect_int_mod target selector is evaluated with the options in
DEFAULT_VECTCFLAGS in effect, but these options are not automatically
passed to tests outside of the vect directories. So this test fails on
targets where the integer vector modulo operation is supported but
requires an option to enable it, for example LoongArch.
In this test case, the only expected optimization that has not already
happened in "original" is the one in corge, because it needs forward
propagation. So we can scan the forwprop2 dump (where the vector
operation is not yet expanded to scalars) instead of "optimized"; then
we don't need to consider vect_int_mod at all.
gcc/testsuite/ChangeLog:
PR testsuite/113418
* gcc.dg/pr104992.c (dg-options): Use -fdump-tree-forwprop2
instead of -fdump-tree-optimized.
(dg-final): Scan forwprop2 dump instead of optimized, and remove
the use of vect_int_mod.
* lib/target-supports.exp (check_effective_target_vect_int_mod):
Remove because it's not used anymore.
|
|
The Intel extended format has various weird number categories:
pseudo denormals, pseudo infinities, pseudo NaNs and unnormals.
Those are not representable in the GCC real_value and so neither
GIMPLE nor RTX VIEW_CONVERT_EXPR/SUBREG folding folds those into
constants.
As can be seen on the following testcase, because it isn't folded
(since GCC 12, before that we were folding it) we can end up with
a SUBREG of a CONST_VECTOR or similar constant, which isn't valid
general_operand, so we ICE during vregs pass trying to recognize
the move instruction.
Initially I thought it is a middle-end bug, the movxf instruction
has general_operand predicate, but the middle-end certainly never
tests that predicate, seems moves are special optabs.
And looking at other mov optabs, e.g. for vector modes the i386
patterns use nonimmediate_operand predicate on the input, yet
ix86_expand_vector_move deals with CONSTANT_P and SUBREG of CONSTANT_P
arguments which if the predicate was checked couldn't ever make it through.
The following patch handles this case similarly to the
ix86_expand_vector_move's SUBREG of CONSTANT_P case, does it just for XFmode
because I believe that is the only mode that needs it from the scalar ones,
others should just be folded.
2024-03-04 Jakub Jelinek <jakub@redhat.com>
PR target/114184
* config/i386/i386-expand.cc (ix86_expand_move): If XFmode op1
is SUBREG of CONSTANT_P, force the SUBREG_REG into memory or
register.
* gcc.target/i386/pr114184.c: New test.
|
|
ChangeLog:
* MAINTAINERS: Add myself.
Signed-off-by: demin.han <demin.han@starfivetech.com>
|
|
This patch fixes PR target/114187, a typo/missed optimization in simplify-rtx
that's exposed by (my) changes to x86_64's parameter passing. The context
is that construction of double word (TImode) values now uses the idiom:
(ior:TI (ashift:TI (zero_extend:TI (reg:DI x)) (const_int 64 [0x40]))
(zero_extend:TI (reg:DI y)))
Extracting the DImode highpart and lowpart halves of this complex expression
is supported by simplifications in simplify_subreg. The problem is that when
the doubleword TImode value represents two DFmode fields, there isn't a direct
simplification to extract the highpart or lowpart SUBREGs; instead GCC
uses two steps: extract the DImode {high,low} part and then cast the result
back to a floating point mode, DFmode.
The (buggy) code to do this is:
/* If the outer mode is not integral, try taking a subreg with the equivalent
integer outer mode and then bitcasting the result.
Other simplifications rely on integer to integer subregs and we'd
potentially miss out on optimizations otherwise. */
if (known_gt (GET_MODE_SIZE (innermode),
GET_MODE_SIZE (outermode))
&& SCALAR_INT_MODE_P (innermode)
&& !SCALAR_INT_MODE_P (outermode)
&& int_mode_for_size (GET_MODE_BITSIZE (outermode),
0).exists (&int_outermode))
{
rtx tem = simplify_subreg (int_outermode, op, innermode, byte);
if (tem)
return simplify_gen_subreg (outermode, tem, int_outermode, byte);
}
The issue/mistake is that the second call, to simplify_gen_subreg, shouldn't
use "byte" as the final argument; the offset has already been handled by
the first call, to simplify_subreg, and this second call is just a type
conversion from an integer mode to floating point (from DImode to DFmode).
Interestingly, this mistake was already spotted by Richard Sandiford when
the optimization was originally contributed in January 2023.
https://gcc.gnu.org/pipermail/gcc-patches/2023-January/610920.html
>> Richard Sandiford writes:
>> Also, the final line should pass 0 rather than byte.
Unfortunately a miscommunication/misunderstanding in a later thread
https://gcc.gnu.org/pipermail/gcc-patches/2023-February/612898.html
resulted in this correction being undone. Using lowpart_subreg should
avoid/reduce confusion in future.
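In essence, the second call becomes (a sketch):
rtx tem = simplify_subreg (int_outermode, op, innermode, byte);
if (tem)
  return lowpart_subreg (outermode, tem, int_outermode);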
2024-03-03 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
PR target/114187
* simplify-rtx.cc (simplify_context::simplify_subreg): Call
lowpart_subreg to perform type conversion, to avoid confusion
over the offset to use in the call to simplify_gen_subreg.
gcc/testsuite/ChangeLog
PR target/114187
* g++.target/i386/pr114187.C: New test case.
|
|
D front-end changes:
- Import dmd v2.108.1-beta-1.
D runtime changes:
- Import druntime v2.108.1-beta-1.
Phobos changes:
- Import phobos v2.108.1-beta-1.
gcc/d/ChangeLog:
* dmd/MERGE: Merge upstream dmd f8bae04558.
* dmd/VERSION: Bump version to v2.108.0-beta.1.
* d-builtins.cc (build_frontend_type): Update for new front-end
interface.
* d-codegen.cc (build_assert_call): Likewise.
* d-convert.cc (d_array_convert): Likewise.
* decl.cc (get_vtable_decl): Likewise.
* expr.cc (ExprVisitor::visit (EqualExp *)): Likewise.
(ExprVisitor::visit (VarExp *)): Likewise.
(ExprVisitor::visit (ArrayLiteralExp *)): Likewise.
(ExprVisitor::visit (AssocArrayLiteralExp *)): Likewise.
* intrinsics.cc (build_shuffle_mask_type): Likewise.
(maybe_warn_intrinsic_mismatch): Likewise.
* runtime.cc (get_libcall_type): Likewise.
* typeinfo.cc (TypeInfoVisitor::layout_string): Likewise.
(TypeInfoVisitor::visit(TypeInfoTupleDeclaration *)): Likewise.
libphobos/ChangeLog:
* libdruntime/MERGE: Merge upstream druntime 02d6d07a69.
* src/MERGE: Merge upstream phobos a2ade9dec.
|
|
WORD_REGISTER_OPERATIONS [PR113010]
The sign-bit-copies of a sign-extending load cannot be known until runtime on
WORD_REGISTER_OPERATIONS targets, except in the case of a zero-extending MEM
load. See the fix for PR112758.
gcc/
PR rtl-optimization/113010
* combine.cc (simplify_comparison): Simplify a SUBREG on
WORD_REGISTER_OPERATIONS targets only if it is a zero-extending
MEM load.
gcc/testsuite
* gcc.c-torture/execute/pr113010.c: New test.
|
|
gcc/
* config/avr/avr.cc: Resolve ATTRIBUTE_UNUSED.
Use bool in place of int for boolean logic (if possible).
Move declarations to definitions (if possible).
* config/avr/avr.md: Use C++ comments. Fix some indentation glitches.
* config/avr/avr-dimode.md: Same.
* config/avr/constraints.md: Same.
* config/avr/predicates.md: Same.
|
|
umuldi3_highpart expander does:
if (REG_P (operands[2]))
operands[2] = gen_rtx_ZERO_EXTEND (TImode, operands[2]);
on a register_operand predicate, which also allows SUBREG RTXes. So
subregs were emitted without a ZERO_EXTEND RTX.
But nowadays we have UMUL_HIGHPART, which allows us to fix this
issue while also simplifying the instruction RTX.
PR target/113720
gcc/ChangeLog:
* config/alpha/alpha.md (umuldi3_highpart): Remove expander.
(*umuldi3_highpart_reg): Rename to umuldi3_highpart and
simplify insn RTX using UMUL_HIGHPART rtx_code.
(*umuldi3_highpart_const): Remove.
|
|
Without -mfuse-add, when fake reg+offset addressing is used, the
output routines save some instructions when the base reg is unused
afterwards. This patch adds that optimization for the case
when the base is the frame pointer and the frame pointer adjustments
are split away from the move insn by -mfuse-add in .split2.
Direct usage of reg_unused_after is not possible because that
function looks at the destination of the current insn, which won't
work for offsetting the frame pointer when printing PLUS code.
It can use an extended version of _reg_unused_after though.
gcc/
PR target/114100
* config/avr/avr-protos.h (_reg_unused_after): Remove proto.
* config/avr/avr.cc (_reg_unused_after): Make static. And
add 3rd argument to skip the current insn.
(reg_unused_after): Adjust call of _reg_unused_after.
(avr_out_plus_1) [AVR_TINY && -mfuse-add >= 2]: Don't output
unneeded frame pointer adjustments.
|
|
The backend has remnants of the cc0 condition code. Unfortunately,
all that information is useless with CCmode, and its uses were
removed with the removal of NOTICE_UPDATE_CC in PR92729 with
r12-226 and r12-327.
gcc/
PR target/92729
* config/avr/avr.md (define_attr "cc"): Remove.
* config/avr/avr-protos.h (avr_out_plus): Remove pcc argument
from prototype.
* config/avr/avr.cc (avr_out_plus_1): Remove pcc argument and
its uses. Add insn argument.
(avr_out_plus_symbol): Remove pcc argument and its uses.
(avr_out_plus): Remove pcc argument and its uses.
Adjust calls of avr_out_plus_symbol and avr_out_plus_1.
(avr_out_round): Adjust call of avr_out_plus.
|
|
gcc/
* config/avr/avr.cc (avr_init_cumulative_args): Fix a typo
from r14-9273.
|
|
gcc/ChangeLog:
PR target/101737
* config/sh/sh.cc (sh_is_nott_insn): Handle case where the input
is not an insn, but e.g. a code label.
|
|
PR d/114171
gcc/d/ChangeLog:
* d-codegen.cc (lower_struct_comparison): Keep alignment of original
type in reinterpret cast for comparison.
gcc/testsuite/ChangeLog:
* gdc.dg/torture/pr114171.d: New test.
|
|
|
* Makefile.am (libbacktrace_testing_ldflags): Define.
(*_LDFLAGS): Add $(libbacktrace_testing_ldflags) for test
programs.
* Makefile.in: Regenerate.
|
|
Fixes https://github.com/ianlancetaylor/libbacktrace/issues/118
* elf.c (elf_uncompress_lzma_block): Skip all header padding bytes
and verify that they are zero.
|
|
There are some places where avr.cc uses magic numbers like 17 that
are actually register numbers. This patch defines constants like
REG_17 and uses them instead of the magic numbers when a register
number is meant.
gcc/
* config/avr/avr.md (REG_0, ... REG_36): New define_constants.
* config/avr/avr.cc: Use them instead of magic numbers where a
register number is meant.
|
|
gcc/
* config/avr/avr.cc: Adjust some comments.
|