author		Paul Brook <paul@codesourcery.com>	2007-01-03 23:48:10 +0000
committer	Paul Brook <pbrook@gcc.gnu.org>	2007-01-03 23:48:10 +0000
commit		5b3e666315f8d173a883a778dc7b500730977cd5 (patch)
tree		e7fbd881338bffa6ed838d87847ff2484257e411 /gcc
parent		f8e7718c6ff2e04fdfb21ab4cc060259c007044b (diff)
backport: thumb2.md: New file.
2007-01-03  Paul Brook  <paul@codesourcery.com>

	Merge from sourcerygxx-4_1.

	gcc/
	* config/arm/thumb2.md: New file.
	* config/arm/elf.h (JUMP_TABLES_IN_TEXT_SECTION): Return True for
	Thumb-2.
	* config/arm/coff.h (JUMP_TABLES_IN_TEXT_SECTION): Ditto.
	* config/arm/aout.h (ASM_OUTPUT_ADDR_VEC_ELT): Add !Thumb-2 assertion.
	(ASM_OUTPUT_ADDR_DIFF_ELT): Output Thumb-2 jump tables.
	* config/arm/aof.h (ASM_OUTPUT_ADDR_DIFF_ELT): Output Thumb-2 jump
	tables.
	(ASM_OUTPUT_ADDR_VEC_ELT): Add !Thumb-2 assertion.
	* config/arm/ieee754-df.S: Use macros for Thumb-2/Unified asm
	compatibility.
	* config/arm/ieee754-sf.S: Ditto.
	* config/arm/arm.c (thumb_base_register_rtx_p): Rename...
	(thumb1_base_register_rtx_p): ... to this.
	(thumb_index_register_rtx_p): Rename...
	(thumb1_index_register_rtx_p): ... to this.
	(thumb_output_function_prologue): Rename...
	(thumb1_output_function_prologue): ... to this.
	(thumb_legitimate_address_p): Rename...
	(thumb1_legitimate_address_p): ... to this.
	(thumb_rtx_costs): Rename...
	(thumb1_rtx_costs): ... to this.
	(thumb_compute_save_reg_mask): Rename...
	(thumb1_compute_save_reg_mask): ... to this.
	(thumb_final_prescan_insn): Rename...
	(thumb1_final_prescan_insn): ... to this.
	(thumb_expand_epilogue): Rename...
	(thumb1_expand_epilogue): ... to this.
	(arm_unwind_emit_stm): Rename...
	(arm_unwind_emit_sequence): ... to this.
	(thumb2_legitimate_index_p, thumb2_legitimate_address_p,
	thumb1_compute_save_reg_mask, arm_dwarf_handle_frame_unspec,
	thumb2_index_mul_operand, output_move_vfp, arm_shift_nmem,
	arm_save_coproc_regs, thumb_set_frame_pointer, arm_print_condition,
	thumb2_final_prescan_insn, thumb2_asm_output_opcode, arm_output_shift,
	thumb2_output_casesi): New functions.
	(TARGET_DWARF_HANDLE_FRAME_UNSPEC): Define.
	(FL_THUMB2, FL_NOTM, FL_DIV, FL_FOR_ARCH6T2, FL_FOR_ARCH7,
	FL_FOR_ARCH7A, FL_FOR_ARCH7R, FL_FOR_ARCH7M, ARM_LSL_NAME,
	THUMB2_WORK_REGS): Define.
	(arm_arch_notm, arm_arch_thumb2, arm_arch_hwdiv, arm_condexec_count,
	arm_condexec_mask, arm_condexec_masklen): New variables.
	(all_architectures): Add armv6t2, armv7, armv7a, armv7r and armv7m.
	(arm_override_options): Check new CPU capabilities.
	Set new architecture flag variables.
	(arm_isr_value): Handle v7m interrupt functions.
	(user_return_insn): Return 0 for v7m interrupt functions.  Handle
	Thumb-2.
	(const_ok_for_arm): Handle Thumb-2 constants.
	(arm_gen_constant): Ditto.  Use movw when available.
	(arm_function_ok_for_sibcall): Return false for v7m interrupt
	functions.
	(legitimize_pic_address, arm_call_tls_get_addr): Handle Thumb-2.
	(thumb_find_work_register, arm_load_pic_register,
	legitimize_tls_address, arm_address_cost, load_multiple_sequence,
	emit_ldm_seq, emit_stm_seq, arm_select_cc_mode, get_jump_table_size,
	print_multi_reg, output_mov_long_double_fpa_from_arm,
	output_mov_long_double_arm_from_fpa, output_mov_double_fpa_from_arm,
	output_mov_double_arm_from_fpa, output_move_double,
	arm_compute_save_reg_mask, arm_compute_save_reg0_reg12_mask,
	output_return_instruction, arm_output_function_prologue,
	arm_output_epilogue, arm_get_frame_offsets, arm_regno_class,
	arm_output_mi_thunk, thumb_set_return_address): Ditto.
	(arm_expand_prologue): Handle Thumb-2.  Use arm_save_coproc_regs.
	(arm_coproc_mem_operand): Allow POST_INC/PRE_DEC.
	(arithmetic_instr, shift_op): Use arm_shift_nmem.
	(arm_print_operand): Use arm_print_condition.  Handle '(', ')', '.',
	'!' and 'L'.
	(arm_final_prescan_insn): Use extract_constrain_insn_cached.
	(thumb_expand_prologue): Use thumb_set_frame_pointer.
	(arm_file_start): Output directive for unified syntax.
	(arm_unwind_emit_set): Handle stack alignment instruction.
	* config/arm/lib1funcs.asm: Remove default for __ARM_ARCH__.
	Add v6t2, v7, v7a, v7r and v7m.
	(RETLDM): Add Thumb-2 code.
	(do_it, shift1, do_push, do_pop, COND, THUMB_SYNTAX): New macros.
	* config/arm/arm.h (TARGET_CPU_CPP_BUILTINS): Define __thumb2__.
	(TARGET_THUMB1, TARGET_32BIT, TARGET_THUMB2, TARGET_DSP_MULTIPLY,
	TARGET_INT_SIMD, TARGET_UNIFIED_ASM, ARM_FT_STACKALIGN, IS_STACKALIGN,
	THUMB2_TRAMPOLINE_TEMPLATE, TRAMPOLINE_ADJUST_ADDRESS,
	ASM_OUTPUT_OPCODE, THUMB2_GO_IF_LEGITIMATE_ADDRESS,
	THUMB2_LEGITIMIZE_ADDRESS, CASE_VECTOR_PC_RELATIVE,
	CASE_VECTOR_SHORTEN_MODE, ADDR_VEC_ALIGN, ASM_OUTPUT_CASE_END,
	ADJUST_INSN_LENGTH): Define.
	(TARGET_REALLY_IWMMXT, TARGET_IWMMXT_ABI, CONDITIONAL_REGISTER_USAGE,
	STATIC_CHAIN_REGNUM, HARD_REGNO_NREGS, INDEX_REG_CLASS,
	BASE_REG_CLASS, MODE_BASE_REG_CLASS, SMALL_REGISTER_CLASSES,
	PREFERRED_RELOAD_CLASS, SECONDARY_OUTPUT_RELOAD_CLASS,
	SECONDARY_INPUT_RELOAD_CLASS, LIBCALL_VALUE, FUNCTION_VALUE_REGNO_P,
	TRAMPOLINE_SIZE, INITIALIZE_TRAMPOLINE, HAVE_PRE_INCREMENT,
	HAVE_POST_DECREMENT, HAVE_PRE_DECREMENT, HAVE_PRE_MODIFY_DISP,
	HAVE_POST_MODIFY_DISP, HAVE_PRE_MODIFY_REG, HAVE_POST_MODIFY_REG,
	REGNO_MODE_OK_FOR_BASE_P, LEGITIMATE_CONSTANT_P,
	REG_MODE_OK_FOR_BASE_P, REG_OK_FOR_INDEX_P, GO_IF_LEGITIMATE_ADDRESS,
	LEGITIMIZE_ADDRESS, THUMB2_LEGITIMIZE_ADDRESS,
	GO_IF_MODE_DEPENDENT_ADDRESS, MEMORY_MOVE_COST, BRANCH_COST,
	ASM_APP_OFF, ASM_OUTPUT_CASE_LABEL, ARM_DECLARE_FUNCTION_NAME,
	FINAL_PRESCAN_INSN, PRINT_OPERAND_PUNCT_VALID_P,
	PRINT_OPERAND_ADDRESS): Adjust for Thumb-2.
	(arm_arch_notm, arm_arch_thumb2, arm_arch_hwdiv): New declarations.
	* config/arm/arm-cores.def: Add arm1156t2-s, cortex-a8, cortex-r4 and
	cortex-m3.
	* config/arm/arm-tune.md: Regenerate.
	* config/arm/arm-protos.h: Update prototypes.
	* config/arm/vfp.md: Enable patterns for Thumb-2.
	(arm_movsi_vfp): Add movw alternative.  Use output_move_vfp.
	(arm_movdi_vfp, movsf_vfp, movdf_vfp): Use output_move_vfp.
	(thumb2_movsi_vfp, thumb2_movdi_vfp, thumb2_movsf_vfp,
	thumb2_movdf_vfp, thumb2_movsfcc_vfp, thumb2_movdfcc_vfp): New.
	* config/arm/libunwind.S: Add Thumb-2 code.
	* config/arm/constraints.md: Update to include Thumb-2.
	* config/arm/ieee754-sf.S: Add Thumb-2/Unified asm support.
	* config/arm/ieee754-df.S: Ditto.
	* config/arm/bpabi.S: Ditto.
	* config/arm/t-arm (MD_INCLUDES): Add thumb2.md.
	* config/arm/predicates.md (low_register_operand,
	low_reg_or_int_operand, thumb_16bit_operator): New.
	(thumb_cmp_operand, thumb_cmpneg_operand): Rename...
	(thumb1_cmp_operand, thumb1_cmpneg_operand): ... to this.
	* config/arm/t-arm-elf: Add armv7 multilib.
	* config/arm/arm.md: Update patterns for Thumb-2 and Unified asm.
	Include thumb2.md.
	(UNSPEC_STACK_ALIGN, ce_count): New.
	(arm_incscc, arm_decscc, arm_umaxsi3, arm_uminsi3,
	arm_zero_extendsidi2, arm_zero_extendqidi2): New insns/expanders.
	* config/arm/fpa.md: Update patterns for Thumb-2 and Unified asm.
	(thumb2_movsf_fpa, thumb2_movdf_fpa, thumb2_movxf_fpa,
	thumb2_movsfcc_fpa, thumb2_movdfcc_fpa): New insns.
	* config/arm/cirrus.md: Update patterns for Thumb-2 and Unified asm.
	(cirrus_thumb2_movdi, cirrus_thumb2_movsi_insn,
	thumb2_cirrus_movsf_hard_insn, thumb2_cirrus_movdf_hard_insn): New
	insns.
	* doc/extend.texi: Document ARMv7-M interrupt functions.
	* doc/invoke.texi: Document the new Thumb-2 cores and architectures.

From-SVN: r120408
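The ChangeLog above documents ARMv7-M interrupt functions in doc/extend.texi. A minimal usage sketch (the handler name is illustrative, not from the patch):

    /* Hedged sketch: on ARMv7-M the "interrupt" attribute marks an
       otherwise normal function that realigns the stack to 8 bytes on
       entry (ARM_FT_STACKALIGN below); the core itself stacks r0-r3,
       r12, lr, pc and xPSR, so no special register-save prologue is
       required.  */
    void __attribute__ ((interrupt)) timer_isr (void)
    {
      /* Handler body runs with a correctly aligned stack.  */
    }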
Diffstat (limited to 'gcc')
-rw-r--r--  gcc/ChangeLog                   |  139
-rw-r--r--  gcc/config/arm/aof.h            |   51
-rw-r--r--  gcc/config/arm/aout.h           |   39
-rw-r--r--  gcc/config/arm/arm-cores.def    |    6
-rw-r--r--  gcc/config/arm/arm-protos.h     |   18
-rw-r--r--  gcc/config/arm/arm-tune.md      |    2
-rw-r--r--  gcc/config/arm/arm.c            | 1550
-rw-r--r--  gcc/config/arm/arm.h            |  305
-rw-r--r--  gcc/config/arm/arm.md           | 1176
-rw-r--r--  gcc/config/arm/bpabi.S          |   29
-rw-r--r--  gcc/config/arm/cirrus.md        |  182
-rw-r--r--  gcc/config/arm/coff.h           |   11
-rw-r--r--  gcc/config/arm/constraints.md   |   87
-rw-r--r--  gcc/config/arm/elf.h            |    9
-rw-r--r--  gcc/config/arm/fpa.md           |  242
-rw-r--r--  gcc/config/arm/ieee754-df.S     |  276
-rw-r--r--  gcc/config/arm/ieee754-sf.S     |  166
-rw-r--r--  gcc/config/arm/iwmmxt.md        |    3
-rw-r--r--  gcc/config/arm/lib1funcs.asm    |   92
-rw-r--r--  gcc/config/arm/libunwind.S      |   25
-rw-r--r--  gcc/config/arm/predicates.md    |   20
-rw-r--r--  gcc/config/arm/t-arm            |    3
-rw-r--r--  gcc/config/arm/t-arm-elf        |   10
-rw-r--r--  gcc/config/arm/thumb2.md        | 1188
-rw-r--r--  gcc/config/arm/vfp.md           |  359
-rw-r--r--  gcc/doc/extend.texi             |    5
-rw-r--r--  gcc/doc/invoke.texi             |   16
27 files changed, 4642 insertions(+), 1367 deletions(-)
diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index db31113..ae2fc67 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,142 @@
+2007-01-03 Paul Brook <paul@codesourcery.com>
+
+ Merge from sourcerygxx-4_1.
+ * config/arm/thumb2.md: New file.
+ * config/arm/elf.h (JUMP_TABLES_IN_TEXT_SECTION): Return True for
+ Thumb-2.
+ * config/arm/coff.h (JUMP_TABLES_IN_TEXT_SECTION): Ditto.
+ * config/arm/aout.h (ASM_OUTPUT_ADDR_VEC_ELT): Add !Thumb-2 assertion.
+ (ASM_OUTPUT_ADDR_DIFF_ELT): Output Thumb-2 jump tables.
+ * config/arm/aof.h (ASM_OUTPUT_ADDR_DIFF_ELT): Output Thumb-2 jump
+ tables.
+ (ASM_OUTPUT_ADDR_VEC_ELT): Add !Thumb-2 assertion.
+ * config/arm/ieee754-df.S: Use macros for Thumb-2/Unified asm
+ compatibility.
+ * config/arm/ieee754-sf.S: Ditto.
+ * config/arm/arm.c (thumb_base_register_rtx_p): Rename...
+ (thumb1_base_register_rtx_p): ... to this.
+ (thumb_index_register_rtx_p): Rename...
+ (thumb1_index_register_rtx_p): ... to this.
+ (thumb_output_function_prologue): Rename...
+ (thumb1_output_function_prologue): ... to this.
+ (thumb_legitimate_address_p): Rename...
+ (thumb1_legitimate_address_p): ... to this.
+ (thumb_rtx_costs): Rename...
+ (thumb1_rtx_costs): ... to this.
+ (thumb_compute_save_reg_mask): Rename...
+ (thumb1_compute_save_reg_mask): ... to this.
+ (thumb_final_prescan_insn): Rename...
+ (thumb1_final_prescan_insn): ... to this.
+ (thumb_expand_epilogue): Rename...
+ (thumb1_expand_epilogue): ... to this.
+ (arm_unwind_emit_stm): Rename...
+ (arm_unwind_emit_sequence): ... to this.
+ (thumb2_legitimate_index_p, thumb2_legitimate_address_p,
+ thumb1_compute_save_reg_mask, arm_dwarf_handle_frame_unspec,
+ thumb2_index_mul_operand, output_move_vfp, arm_shift_nmem,
+ arm_save_coproc_regs, thumb_set_frame_pointer, arm_print_condition,
+ thumb2_final_prescan_insn, thumb2_asm_output_opcode, arm_output_shift,
+ thumb2_output_casesi): New functions.
+ (TARGET_DWARF_HANDLE_FRAME_UNSPEC): Define.
+ (FL_THUMB2, FL_NOTM, FL_DIV, FL_FOR_ARCH6T2, FL_FOR_ARCH7,
+ FL_FOR_ARCH7A, FL_FOR_ARCH7R, FL_FOR_ARCH7M, ARM_LSL_NAME,
+ THUMB2_WORK_REGS): Define.
+ (arm_arch_notm, arm_arch_thumb2, arm_arch_hwdiv, arm_condexec_count,
+ arm_condexec_mask, arm_condexec_masklen): New variables.
+ (all_architectures): Add armv6t2, armv7, armv7a, armv7r and armv7m.
+ (arm_override_options): Check new CPU capabilities.
+ Set new architecture flag variables.
+ (arm_isr_value): Handle v7m interrupt functions.
+ (user_return_insn): Return 0 for v7m interrupt functions. Handle
+ Thumb-2.
+ (const_ok_for_arm): Handle Thumb-2 constants.
+ (arm_gen_constant): Ditto. Use movw when available.
+ (arm_function_ok_for_sibcall): Return false for v7m interrupt
+ functions.
+ (legitimize_pic_address, arm_call_tls_get_addr): Handle Thumb-2.
+ (thumb_find_work_register, arm_load_pic_register,
+ legitimize_tls_address, arm_address_cost, load_multiple_sequence,
+ emit_ldm_seq, emit_stm_seq, arm_select_cc_mode, get_jump_table_size,
+ print_multi_reg, output_mov_long_double_fpa_from_arm,
+ output_mov_long_double_arm_from_fpa, output_mov_double_fpa_from_arm,
+ output_mov_double_arm_from_fpa, output_move_double,
+ arm_compute_save_reg_mask, arm_compute_save_reg0_reg12_mask,
+ output_return_instruction, arm_output_function_prologue,
+ arm_output_epilogue, arm_get_frame_offsets, arm_regno_class,
+ arm_output_mi_thunk, thumb_set_return_address): Ditto.
+ (arm_expand_prologue): Handle Thumb-2. Use arm_save_coproc_regs.
+ (arm_coproc_mem_operand): Allow POST_INC/PRE_DEC.
+ (arithmetic_instr, shift_op): Use arm_shift_nmem.
+ (arm_print_operand): Use arm_print_condition. Handle '(', ')', '.',
+ '!' and 'L'.
+ (arm_final_prescan_insn): Use extract_constrain_insn_cached.
+ (thumb_expand_prologue): Use thumb_set_frame_pointer.
+ (arm_file_start): Output directive for unified syntax.
+ (arm_unwind_emit_set): Handle stack alignment instruction.
+ * config/arm/lib1funcs.asm: Remove default for __ARM_ARCH__.
+ Add v6t2, v7, v7a, v7r and v7m.
+ (RETLDM): Add Thumb-2 code.
+ (do_it, shift1, do_push, do_pop, COND, THUMB_SYNTAX): New macros.
+ * config/arm/arm.h (TARGET_CPU_CPP_BUILTINS): Define __thumb2__.
+ (TARGET_THUMB1, TARGET_32BIT, TARGET_THUMB2, TARGET_DSP_MULTIPLY,
+ TARGET_INT_SIMD, TARGET_UNIFIED_ASM, ARM_FT_STACKALIGN, IS_STACKALIGN,
+ THUMB2_TRAMPOLINE_TEMPLATE, TRAMPOLINE_ADJUST_ADDRESS,
+ ASM_OUTPUT_OPCODE, THUMB2_GO_IF_LEGITIMATE_ADDRESS,
+ THUMB2_LEGITIMIZE_ADDRESS, CASE_VECTOR_PC_RELATIVE,
+ CASE_VECTOR_SHORTEN_MODE, ADDR_VEC_ALIGN, ASM_OUTPUT_CASE_END,
+ ADJUST_INSN_LENGTH): Define.
+ (TARGET_REALLY_IWMMXT, TARGET_IWMMXT_ABI, CONDITIONAL_REGISTER_USAGE,
+ STATIC_CHAIN_REGNUM, HARD_REGNO_NREGS, INDEX_REG_CLASS,
+ BASE_REG_CLASS, MODE_BASE_REG_CLASS, SMALL_REGISTER_CLASSES,
+ PREFERRED_RELOAD_CLASS, SECONDARY_OUTPUT_RELOAD_CLASS,
+ SECONDARY_INPUT_RELOAD_CLASS, LIBCALL_VALUE, FUNCTION_VALUE_REGNO_P,
+ TRAMPOLINE_SIZE, INITIALIZE_TRAMPOLINE, HAVE_PRE_INCREMENT,
+ HAVE_POST_DECREMENT, HAVE_PRE_DECREMENT, HAVE_PRE_MODIFY_DISP,
+ HAVE_POST_MODIFY_DISP, HAVE_PRE_MODIFY_REG, HAVE_POST_MODIFY_REG,
+ REGNO_MODE_OK_FOR_BASE_P, LEGITIMATE_CONSTANT_P,
+ REG_MODE_OK_FOR_BASE_P, REG_OK_FOR_INDEX_P, GO_IF_LEGITIMATE_ADDRESS,
+ LEGITIMIZE_ADDRESS, THUMB2_LEGITIMIZE_ADDRESS,
+ GO_IF_MODE_DEPENDENT_ADDRESS, MEMORY_MOVE_COST, BRANCH_COST,
+ ASM_APP_OFF, ASM_OUTPUT_CASE_LABEL, ARM_DECLARE_FUNCTION_NAME,
+ FINAL_PRESCAN_INSN, PRINT_OPERAND_PUNCT_VALID_P,
+ PRINT_OPERAND_ADDRESS): Adjust for Thumb-2.
+ (arm_arch_notm, arm_arch_thumb2, arm_arch_hwdiv): New declarations.
+ * config/arm/arm-cores.def: Add arm1156t2-s, cortex-a8, cortex-r4 and
+ cortex-m3.
+ * config/arm/arm-tune.md: Regenerate.
+ * config/arm/arm-protos.h: Update prototypes.
+ * config/arm/vfp.md: Enable patterns for Thumb-2.
+ (arm_movsi_vfp): Add movw alternative. Use output_move_vfp.
+ (arm_movdi_vfp, movsf_vfp, movdf_vfp): Use output_move_vfp.
+ (thumb2_movsi_vfp, thumb2_movdi_vfp, thumb2_movsf_vfp,
+ thumb2_movdf_vfp, thumb2_movsfcc_vfp, thumb2_movdfcc_vfp): New.
+ * config/arm/libunwind.S: Add Thumb-2 code.
+ * config/arm/constraints.md: Update to include Thumb-2.
+ * config/arm/ieee754-sf.S: Add Thumb-2/Unified asm support.
+ * config/arm/ieee754-df.S: Ditto.
+ * config/arm/bpabi.S: Ditto.
+ * config/arm/t-arm (MD_INCLUDES): Add thumb2.md.
+ * config/arm/predicates.md (low_register_operand,
+ low_reg_or_int_operand, thumb_16bit_operator): New.
+ (thumb_cmp_operand, thumb_cmpneg_operand): Rename...
+ (thumb1_cmp_operand, thumb1_cmpneg_operand): ... to this.
+ * config/arm/t-arm-elf: Add armv7 multilib.
+ * config/arm/arm.md: Update patterns for Thumb-2 and Unified asm.
+ Include thumb2.md.
+ (UNSPEC_STACK_ALIGN, ce_count): New.
+ (arm_incscc, arm_decscc, arm_umaxsi3, arm_uminsi3,
+ arm_zero_extendsidi2, arm_zero_extendqidi2): New
+ insns/expanders.
+ * config/arm/fpa.md: Update patterns for Thumb-2 and Unified asm.
+ (thumb2_movsf_fpa, thumb2_movdf_fpa, thumb2_movxf_fpa,
+ thumb2_movsfcc_fpa, thumb2_movdfcc_fpa): New insns.
+ * config/arm/cirrus.md: Update patterns for Thumb-2 and Unified asm.
+ (cirrus_thumb2_movdi, cirrus_thumb2_movsi_insn,
+ thumb2_cirrus_movsf_hard_insn, thumb2_cirrus_movdf_hard_insn): New
+ insns.
+ * doc/extend.texi: Document ARMv7-M interrupt functions.
+ * doc/invoke.texi: Document the new Thumb-2 cores and architectures.
+
2007-01-03 Jakub Jelinek <jakub@redhat.com>
* unwind-dw2.c (SIGNAL_FRAME_BIT, EXTENDED_CONTEXT_BIT): Define.
diff --git a/gcc/config/arm/aof.h b/gcc/config/arm/aof.h
index 8a1223c..e6694dc 100644
--- a/gcc/config/arm/aof.h
+++ b/gcc/config/arm/aof.h
@@ -1,6 +1,6 @@
/* Definitions of target machine for GNU compiler, for Advanced RISC Machines
ARM compilation, AOF Assembler.
- Copyright (C) 1995, 1996, 1997, 2000, 2003, 2004
+ Copyright (C) 1995, 1996, 1997, 2000, 2003, 2004, 2007
Free Software Foundation, Inc.
Contributed by Richard Earnshaw (rearnsha@armltd.co.uk)
@@ -258,18 +258,47 @@ do { \
#define ARM_MCOUNT_NAME "_mcount"
/* Output of Dispatch Tables. */
-#define ASM_OUTPUT_ADDR_DIFF_ELT(STREAM, BODY, VALUE, REL) \
- do \
- { \
- if (TARGET_ARM) \
- fprintf ((STREAM), "\tb\t|L..%d|\n", (VALUE)); \
- else \
- fprintf ((STREAM), "\tDCD\t|L..%d| - |L..%d|\n", (VALUE), (REL)); \
- } \
+#define ASM_OUTPUT_ADDR_DIFF_ELT(STREAM, BODY, VALUE, REL) \
+ do \
+ { \
+ if (TARGET_ARM) \
+ fprintf ((STREAM), "\tb\t|L..%d|\n", (VALUE)); \
+ else if (TARGET_THUMB1) \
+ fprintf ((STREAM), "\tDCD\t|L..%d| - |L..%d|\n", (VALUE), (REL)); \
+ else /* Thumb-2 */ \
+ { \
+ switch (GET_MODE (BODY)) \
+ { \
+ case QImode: /* TBB */ \
+ asm_fprintf (STREAM, "\tDCB\t(|L..%d| - |L..%d|)/2\n", \
+ VALUE, REL); \
+ break; \
+ case HImode: /* TBH */ \
+ asm_fprintf (STREAM, "\tDCW\t(|L..%d| - |L..%d|)/2\n", \
+ VALUE, REL); \
+ break; \
+ case SImode: \
+ if (flag_pic) \
+ asm_fprintf (STREAM, "\tDCD\t|L..%d| + 1 - |L..%d|\n", \
+ VALUE, REL); \
+ else \
+ asm_fprintf (STREAM, "\tDCD\t|L..%d| + 1\n", VALUE); \
+ break; \
+ default: \
+ gcc_unreachable (); \
+ } \
+ } \
+ } \
while (0)
-#define ASM_OUTPUT_ADDR_VEC_ELT(STREAM, VALUE) \
- fprintf ((STREAM), "\tDCD\t|L..%d|\n", (VALUE))
+#define ASM_OUTPUT_ADDR_VEC_ELT(STREAM, VALUE) \
+ do \
+ { \
+ gcc_assert (!TARGET_THUMB2); \
+ fprintf ((STREAM), "\tDCD\t|L..%d|\n", (VALUE)); \
+ } \
+ while (0)
+
/* A label marking the start of a jump table is a data label. */
#define ASM_OUTPUT_CASE_LABEL(STREAM, PREFIX, NUM, TABLE) \
diff --git a/gcc/config/arm/aout.h b/gcc/config/arm/aout.h
index 903afa7..b910318 100644
--- a/gcc/config/arm/aout.h
+++ b/gcc/config/arm/aout.h
@@ -1,5 +1,5 @@
/* Definitions of target machine for GNU compiler, for ARM with a.out
- Copyright (C) 1995, 1996, 1997, 1998, 1999, 2000, 2004
+ Copyright (C) 1995, 1996, 1997, 1998, 1999, 2000, 2004, 2007
Free Software Foundation, Inc.
Contributed by Richard Earnshaw (rearnsha@armltd.co.uk).
@@ -214,16 +214,47 @@
#endif
/* Output an element of a dispatch table. */
-#define ASM_OUTPUT_ADDR_VEC_ELT(STREAM, VALUE) \
- asm_fprintf (STREAM, "\t.word\t%LL%d\n", VALUE)
+#define ASM_OUTPUT_ADDR_VEC_ELT(STREAM, VALUE) \
+ do \
+ { \
+ gcc_assert (!TARGET_THUMB2); \
+ asm_fprintf (STREAM, "\t.word\t%LL%d\n", VALUE); \
+ } \
+ while (0)
+
+/* Thumb-2 always uses addr_diff_elt so that the Table Branch instructions
+ can be used. For non-pic code where the offsets are not suitable for
+ TBB/TBH the elements are output as absolute labels. */
#define ASM_OUTPUT_ADDR_DIFF_ELT(STREAM, BODY, VALUE, REL) \
do \
{ \
if (TARGET_ARM) \
asm_fprintf (STREAM, "\tb\t%LL%d\n", VALUE); \
- else \
+ else if (TARGET_THUMB1) \
asm_fprintf (STREAM, "\t.word\t%LL%d-%LL%d\n", VALUE, REL); \
+ else /* Thumb-2 */ \
+ { \
+ switch (GET_MODE (BODY)) \
+ { \
+ case QImode: /* TBB */ \
+ asm_fprintf (STREAM, "\t.byte\t(%LL%d-%LL%d)/2\n", \
+ VALUE, REL); \
+ break; \
+ case HImode: /* TBH */ \
+ asm_fprintf (STREAM, "\t.2byte\t(%LL%d-%LL%d)/2\n", \
+ VALUE, REL); \
+ break; \
+ case SImode: \
+ if (flag_pic) \
+ asm_fprintf (STREAM, "\t.word\t%LL%d+1-%LL%d\n", VALUE, REL); \
+ else \
+ asm_fprintf (STREAM, "\t.word\t%LL%d+1\n", VALUE); \
+ break; \
+ default: \
+ gcc_unreachable (); \
+ } \
+ } \
} \
while (0)
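The division by two in the TBB/TBH cases above reflects that the Table Branch instructions add twice the table entry to the PC. A standalone sketch of the emission logic (the helper name is hypothetical, not GCC's):

    #include <stdio.h>

    /* Emit one Thumb-2 dispatch-table element: a byte for TBB or a
       halfword for TBH, each holding half the label difference.  */
    static void emit_tb_element (FILE *stream, int value, int rel, int is_tbh)
    {
      if (is_tbh)
        fprintf (stream, "\t.2byte\t(.L%d-.L%d)/2\n", value, rel);
      else
        fprintf (stream, "\t.byte\t(.L%d-.L%d)/2\n", value, rel);
    }

    int main (void)
    {
      emit_tb_element (stdout, 4, 3, 0);  /* prints: .byte (.L4-.L3)/2 */
      return 0;
    }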
diff --git a/gcc/config/arm/arm-cores.def b/gcc/config/arm/arm-cores.def
index 3f9b7ba..0241cdf 100644
--- a/gcc/config/arm/arm-cores.def
+++ b/gcc/config/arm/arm-cores.def
@@ -1,5 +1,5 @@
/* ARM CPU Cores
- Copyright (C) 2003, 2005 Free Software Foundation, Inc.
+ Copyright (C) 2003, 2005, 2006, 2007 Free Software Foundation, Inc.
Written by CodeSourcery, LLC
This file is part of GCC.
@@ -115,3 +115,7 @@ ARM_CORE("arm1176jz-s", arm1176jzs, 6ZK, FL_LDSCHED, 9e)
ARM_CORE("arm1176jzf-s", arm1176jzfs, 6ZK, FL_LDSCHED | FL_VFPV2, 9e)
ARM_CORE("mpcorenovfp", mpcorenovfp, 6K, FL_LDSCHED, 9e)
ARM_CORE("mpcore", mpcore, 6K, FL_LDSCHED | FL_VFPV2, 9e)
+ARM_CORE("arm1156t2-s", arm1156t2s, 6T2, FL_LDSCHED, 9e)
+ARM_CORE("cortex-a8", cortexa8, 7A, FL_LDSCHED, 9e)
+ARM_CORE("cortex-r4", cortexr4, 7R, FL_LDSCHED, 9e)
+ARM_CORE("cortex-m3", cortexm3, 7M, FL_LDSCHED, 9e)
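arm-cores.def is an X-macro file: each consumer defines ARM_CORE, includes the file, and collects one record per core (the all_cores table, arm-tune.md, and so on). A self-contained sketch of the pattern, with one of the new entries inlined so it compiles on its own:

    #include <stdio.h>

    struct core_entry { const char *name; const char *arch; };

    /* Simplified expansion; GCC's real tables also record the flags and
       tuning fields.  IDENT, FLAGS and TUNE are discarded here.  */
    #define ARM_CORE(NAME, IDENT, ARCH, FLAGS, TUNE) { NAME, #ARCH },
    static const struct core_entry cores[] =
    {
      /* In GCC this line is:  #include "arm-cores.def"  */
      ARM_CORE ("cortex-m3", cortexm3, 7M, FL_LDSCHED, 9e)
      { 0, 0 }
    };
    #undef ARM_CORE

    int main (void)
    {
      printf ("%s implements ARMv%s\n", cores[0].name, cores[0].arch);
      return 0;
    }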
diff --git a/gcc/config/arm/arm-protos.h b/gcc/config/arm/arm-protos.h
index 032c4e6..6a9a549 100644
--- a/gcc/config/arm/arm-protos.h
+++ b/gcc/config/arm/arm-protos.h
@@ -1,5 +1,5 @@
/* Prototypes for exported functions defined in arm.c and pe.c
- Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005
+ Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
Free Software Foundation, Inc.
Contributed by Richard Earnshaw (rearnsha@arm.com)
Minor hacks by Nick Clifton (nickc@cygnus.com)
@@ -33,6 +33,7 @@ extern const char *arm_output_epilogue (rtx);
extern void arm_expand_prologue (void);
extern const char *arm_strip_name_encoding (const char *);
extern void arm_asm_output_labelref (FILE *, const char *);
+extern void thumb2_asm_output_opcode (FILE *);
extern unsigned long arm_current_func_type (void);
extern HOST_WIDE_INT arm_compute_initial_elimination_offset (unsigned int,
unsigned int);
@@ -58,7 +59,8 @@ extern int legitimate_pic_operand_p (rtx);
extern rtx legitimize_pic_address (rtx, enum machine_mode, rtx);
extern rtx legitimize_tls_address (rtx, rtx);
extern int arm_legitimate_address_p (enum machine_mode, rtx, RTX_CODE, int);
-extern int thumb_legitimate_address_p (enum machine_mode, rtx, int);
+extern int thumb1_legitimate_address_p (enum machine_mode, rtx, int);
+extern int thumb2_legitimate_address_p (enum machine_mode, rtx, int);
extern int thumb_legitimate_offset_p (enum machine_mode, HOST_WIDE_INT);
extern rtx arm_legitimize_address (rtx, rtx, enum machine_mode);
extern rtx thumb_legitimize_address (rtx, rtx, enum machine_mode);
@@ -108,6 +110,7 @@ extern const char *output_mov_long_double_arm_from_arm (rtx *);
extern const char *output_mov_double_fpa_from_arm (rtx *);
extern const char *output_mov_double_arm_from_fpa (rtx *);
extern const char *output_move_double (rtx *);
+extern const char *output_move_vfp (rtx *operands);
extern const char *output_add_immediate (rtx *);
extern const char *arithmetic_instr (rtx, int);
extern void output_ascii_pseudo_op (FILE *, const unsigned char *, int);
@@ -116,7 +119,6 @@ extern void arm_poke_function_name (FILE *, const char *);
extern void arm_print_operand (FILE *, rtx, int);
extern void arm_print_operand_address (FILE *, rtx);
extern void arm_final_prescan_insn (rtx);
-extern int arm_go_if_legitimate_address (enum machine_mode, rtx);
extern int arm_debugger_arg_offset (int, rtx);
extern int arm_is_longcall_p (rtx, int, int);
extern int arm_emit_vector_const (FILE *, rtx);
@@ -124,6 +126,7 @@ extern const char * arm_output_load_gr (rtx *);
extern const char *vfp_output_fstmd (rtx *);
extern void arm_set_return_address (rtx, rtx);
extern int arm_eliminable_register (rtx);
+extern const char *arm_output_shift (rtx *, int);
extern bool arm_output_addr_const_extra (FILE *, rtx);
@@ -151,23 +154,24 @@ extern int arm_float_words_big_endian (void);
/* Thumb functions. */
extern void arm_init_expanders (void);
extern const char *thumb_unexpanded_epilogue (void);
-extern void thumb_expand_prologue (void);
-extern void thumb_expand_epilogue (void);
+extern void thumb1_expand_prologue (void);
+extern void thumb1_expand_epilogue (void);
#ifdef TREE_CODE
extern int is_called_in_ARM_mode (tree);
#endif
extern int thumb_shiftable_const (unsigned HOST_WIDE_INT);
#ifdef RTX_CODE
-extern void thumb_final_prescan_insn (rtx);
+extern void thumb1_final_prescan_insn (rtx);
+extern void thumb2_final_prescan_insn (rtx);
extern const char *thumb_load_double_from_address (rtx *);
extern const char *thumb_output_move_mem_multiple (int, rtx *);
extern const char *thumb_call_via_reg (rtx);
extern void thumb_expand_movmemqi (rtx *);
-extern int thumb_go_if_legitimate_address (enum machine_mode, rtx);
extern rtx arm_return_addr (int, rtx);
extern void thumb_reload_out_hi (rtx *);
extern void thumb_reload_in_hi (rtx *);
extern void thumb_set_return_address (rtx, rtx);
+extern const char *thumb2_output_casesi (rtx *);
#endif
/* Defined in pe.c. */
diff --git a/gcc/config/arm/arm-tune.md b/gcc/config/arm/arm-tune.md
index 950cd91..5b4c46f 100644
--- a/gcc/config/arm/arm-tune.md
+++ b/gcc/config/arm/arm-tune.md
@@ -1,5 +1,5 @@
;; -*- buffer-read-only: t -*-
;; Generated automatically by gentune.sh from arm-cores.def
(define_attr "tune"
- "arm2,arm250,arm3,arm6,arm60,arm600,arm610,arm620,arm7,arm7d,arm7di,arm70,arm700,arm700i,arm710,arm720,arm710c,arm7100,arm7500,arm7500fe,arm7m,arm7dm,arm7dmi,arm8,arm810,strongarm,strongarm110,strongarm1100,strongarm1110,arm7tdmi,arm7tdmis,arm710t,arm720t,arm740t,arm9,arm9tdmi,arm920,arm920t,arm922t,arm940t,ep9312,arm10tdmi,arm1020t,arm9e,arm946es,arm966es,arm968es,arm10e,arm1020e,arm1022e,xscale,iwmmxt,arm926ejs,arm1026ejs,arm1136js,arm1136jfs,arm1176jzs,arm1176jzfs,mpcorenovfp,mpcore"
+ "arm2,arm250,arm3,arm6,arm60,arm600,arm610,arm620,arm7,arm7d,arm7di,arm70,arm700,arm700i,arm710,arm720,arm710c,arm7100,arm7500,arm7500fe,arm7m,arm7dm,arm7dmi,arm8,arm810,strongarm,strongarm110,strongarm1100,strongarm1110,arm7tdmi,arm7tdmis,arm710t,arm720t,arm740t,arm9,arm9tdmi,arm920,arm920t,arm922t,arm940t,ep9312,arm10tdmi,arm1020t,arm9e,arm946es,arm966es,arm968es,arm10e,arm1020e,arm1022e,xscale,iwmmxt,arm926ejs,arm1026ejs,arm1136js,arm1136jfs,arm1176jzs,arm1176jzfs,mpcorenovfp,mpcore,arm1156t2s,cortexa8,cortexr4,cortexm3"
(const (symbol_ref "arm_tune")))
diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
index c7a1ec2..9558275 100644
--- a/gcc/config/arm/arm.c
+++ b/gcc/config/arm/arm.c
@@ -1,6 +1,6 @@
/* Output routines for GCC for ARM.
Copyright (C) 1991, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001,
- 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
+ 2002, 2003, 2004, 2005, 2006, 2007 Free Software Foundation, Inc.
Contributed by Pieter `Tiggr' Schoenmakers (rcpieter@win.tue.nl)
and Martin Simmons (@harleqn.co.uk).
More major hacks by Richard Earnshaw (rearnsha@arm.com).
@@ -67,10 +67,12 @@ static int arm_gen_constant (enum rtx_code, enum machine_mode, rtx,
static unsigned bit_count (unsigned long);
static int arm_address_register_rtx_p (rtx, int);
static int arm_legitimate_index_p (enum machine_mode, rtx, RTX_CODE, int);
-static int thumb_base_register_rtx_p (rtx, enum machine_mode, int);
-inline static int thumb_index_register_rtx_p (rtx, int);
+static int thumb2_legitimate_index_p (enum machine_mode, rtx, int);
+static int thumb1_base_register_rtx_p (rtx, enum machine_mode, int);
+inline static int thumb1_index_register_rtx_p (rtx, int);
static int thumb_far_jump_used_p (void);
static bool thumb_force_lr_save (void);
+static unsigned long thumb1_compute_save_reg_mask (void);
static int const_ok_for_op (HOST_WIDE_INT, enum rtx_code);
static rtx emit_sfm (int, int);
static int arm_size_return_regs (void);
@@ -114,7 +116,7 @@ static tree arm_handle_notshared_attribute (tree *, tree, tree, int, bool *);
#endif
static void arm_output_function_epilogue (FILE *, HOST_WIDE_INT);
static void arm_output_function_prologue (FILE *, HOST_WIDE_INT);
-static void thumb_output_function_prologue (FILE *, HOST_WIDE_INT);
+static void thumb1_output_function_prologue (FILE *, HOST_WIDE_INT);
static int arm_comp_type_attributes (tree, tree);
static void arm_set_default_type_attributes (tree);
static int arm_adjust_cost (rtx, rtx, rtx, int);
@@ -177,6 +179,7 @@ static bool arm_must_pass_in_stack (enum machine_mode, tree);
static void arm_unwind_emit (FILE *, rtx);
static bool arm_output_ttype (rtx);
#endif
+static void arm_dwarf_handle_frame_unspec (const char *, rtx, int);
static tree arm_cxx_guard_type (void);
static bool arm_cxx_guard_mask_bit (void);
@@ -205,7 +208,6 @@ static bool arm_tls_symbol_p (rtx x);
#undef TARGET_ASM_FILE_START
#define TARGET_ASM_FILE_START arm_file_start
-
#undef TARGET_ASM_FILE_END
#define TARGET_ASM_FILE_END arm_file_end
@@ -361,6 +363,9 @@ static bool arm_tls_symbol_p (rtx x);
#define TARGET_ARM_EABI_UNWINDER true
#endif /* TARGET_UNWIND_INFO */
+#undef TARGET_DWARF_HANDLE_FRAME_UNSPEC
+#define TARGET_DWARF_HANDLE_FRAME_UNSPEC arm_dwarf_handle_frame_unspec
+
#undef TARGET_CANNOT_COPY_INSN_P
#define TARGET_CANNOT_COPY_INSN_P arm_cannot_copy_insn_p
@@ -441,11 +446,15 @@ static int thumb_call_reg_needed;
#define FL_WBUF (1 << 14) /* Schedule for write buffer ops.
Note: ARM6 & 7 derivatives only. */
#define FL_ARCH6K (1 << 15) /* Architecture rel 6 K extensions. */
+#define FL_THUMB2 (1 << 16) /* Thumb-2. */
+#define FL_NOTM (1 << 17) /* Instructions not present in the 'M'
+ profile. */
+#define FL_DIV (1 << 18) /* Hardware divide. */
#define FL_IWMMXT (1 << 29) /* XScale v2 or "Intel Wireless MMX technology". */
-#define FL_FOR_ARCH2 0
-#define FL_FOR_ARCH3 FL_MODE32
+#define FL_FOR_ARCH2 FL_NOTM
+#define FL_FOR_ARCH3 (FL_FOR_ARCH2 | FL_MODE32)
#define FL_FOR_ARCH3M (FL_FOR_ARCH3 | FL_ARCH3M)
#define FL_FOR_ARCH4 (FL_FOR_ARCH3M | FL_ARCH4)
#define FL_FOR_ARCH4T (FL_FOR_ARCH4 | FL_THUMB)
@@ -459,6 +468,11 @@ static int thumb_call_reg_needed;
#define FL_FOR_ARCH6K (FL_FOR_ARCH6 | FL_ARCH6K)
#define FL_FOR_ARCH6Z FL_FOR_ARCH6
#define FL_FOR_ARCH6ZK FL_FOR_ARCH6K
+#define FL_FOR_ARCH6T2 (FL_FOR_ARCH6 | FL_THUMB2)
+#define FL_FOR_ARCH7 (FL_FOR_ARCH6T2 &~ FL_NOTM)
+#define FL_FOR_ARCH7A (FL_FOR_ARCH7 | FL_NOTM)
+#define FL_FOR_ARCH7R (FL_FOR_ARCH7A | FL_DIV)
+#define FL_FOR_ARCH7M (FL_FOR_ARCH7 | FL_DIV)
/* The bits in this mask specify which
instructions we are allowed to generate. */
@@ -492,6 +506,9 @@ int arm_arch6 = 0;
/* Nonzero if this chip supports the ARM 6K extensions. */
int arm_arch6k = 0;
+/* Nonzero if instructions not present in the 'M' profile can be used. */
+int arm_arch_notm = 0;
+
/* Nonzero if this chip can benefit from load scheduling. */
int arm_ld_sched = 0;
@@ -524,6 +541,12 @@ int thumb_code = 0;
interworking clean. */
int arm_cpp_interwork = 0;
+/* Nonzero if chip supports Thumb 2. */
+int arm_arch_thumb2;
+
+/* Nonzero if chip supports integer division instruction. */
+int arm_arch_hwdiv;
+
/* In case of a PRE_INC, POST_INC, PRE_DEC, POST_DEC memory reference, we
must report the mode of the memory reference from PRINT_OPERAND to
PRINT_OPERAND_ADDRESS. */
@@ -545,9 +568,17 @@ static int arm_constant_limit = 3;
/* For an explanation of these variables, see final_prescan_insn below. */
int arm_ccfsm_state;
+/* arm_current_cc is also used for Thumb-2 cond_exec blocks. */
enum arm_cond_code arm_current_cc;
rtx arm_target_insn;
int arm_target_label;
+/* The number of conditionally executed insns, including the current insn. */
+int arm_condexec_count = 0;
+/* A bitmask specifying the patterns for the IT block.
+ Zero means do not output an IT block before this insn. */
+int arm_condexec_mask = 0;
+/* The number of bits used in arm_condexec_mask. */
+int arm_condexec_masklen = 0;
/* The condition codes of the ARM, and the inverse function. */
static const char * const arm_condition_codes[] =
@@ -556,7 +587,12 @@ static const char * const arm_condition_codes[] =
"hi", "ls", "ge", "lt", "gt", "le", "al", "nv"
};
+#define ARM_LSL_NAME (TARGET_UNIFIED_ASM ? "lsl" : "asl")
#define streq(string1, string2) (strcmp (string1, string2) == 0)
+
+#define THUMB2_WORK_REGS (0xff & ~( (1 << THUMB_HARD_FRAME_POINTER_REGNUM) \
+ | (1 << SP_REGNUM) | (1 << PC_REGNUM) \
+ | (1 << PIC_OFFSET_TABLE_REGNUM)))
/* Initialization code. */
@@ -604,6 +640,11 @@ static const struct processors all_architectures[] =
{"armv6k", mpcore, "6K", FL_CO_PROC | FL_FOR_ARCH6K, NULL},
{"armv6z", arm1176jzs, "6Z", FL_CO_PROC | FL_FOR_ARCH6Z, NULL},
{"armv6zk", arm1176jzs, "6ZK", FL_CO_PROC | FL_FOR_ARCH6ZK, NULL},
+ {"armv6t2", arm1156t2s, "6T2", FL_CO_PROC | FL_FOR_ARCH6T2, NULL},
+ {"armv7", cortexa8, "7", FL_CO_PROC | FL_FOR_ARCH7, NULL},
+ {"armv7-a", cortexa8, "7A", FL_CO_PROC | FL_FOR_ARCH7A, NULL},
+ {"armv7-r", cortexr4, "7R", FL_CO_PROC | FL_FOR_ARCH7R, NULL},
+ {"armv7-m", cortexm3, "7M", FL_CO_PROC | FL_FOR_ARCH7M, NULL},
{"ep9312", ep9312, "4T", FL_LDSCHED | FL_CIRRUS | FL_FOR_ARCH4, NULL},
{"iwmmxt", iwmmxt, "5TE", FL_LDSCHED | FL_STRONG | FL_FOR_ARCH5TE | FL_XSCALE | FL_IWMMXT , NULL},
{NULL, arm_none, NULL, 0 , NULL}
@@ -1044,6 +1085,9 @@ arm_override_options (void)
/* Make sure that the processor choice does not conflict with any of the
other command line choices. */
+ if (TARGET_ARM && !(insn_flags & FL_NOTM))
+ error ("target CPU does not support ARM mode");
+
if (TARGET_INTERWORK && !(insn_flags & FL_THUMB))
{
warning (0, "target CPU does not support interworking" );
@@ -1117,6 +1161,8 @@ arm_override_options (void)
arm_arch5e = (insn_flags & FL_ARCH5E) != 0;
arm_arch6 = (insn_flags & FL_ARCH6) != 0;
arm_arch6k = (insn_flags & FL_ARCH6K) != 0;
+ arm_arch_notm = (insn_flags & FL_NOTM) != 0;
+ arm_arch_thumb2 = (insn_flags & FL_THUMB2) != 0;
arm_arch_xscale = (insn_flags & FL_XSCALE) != 0;
arm_arch_cirrus = (insn_flags & FL_CIRRUS) != 0;
@@ -1126,6 +1172,7 @@ arm_override_options (void)
arm_tune_wbuf = (tune_flags & FL_WBUF) != 0;
arm_tune_xscale = (tune_flags & FL_XSCALE) != 0;
arm_arch_iwmmxt = (insn_flags & FL_IWMMXT) != 0;
+ arm_arch_hwdiv = (insn_flags & FL_DIV) != 0;
/* V5 code we generate is completely interworking capable, so we turn off
TARGET_INTERWORK here to avoid many tests later on. */
@@ -1240,6 +1287,10 @@ arm_override_options (void)
if (TARGET_IWMMXT && !TARGET_SOFT_FLOAT)
sorry ("iWMMXt and hardware floating point");
+ /* ??? iWMMXt insn patterns need auditing for Thumb-2. */
+ if (TARGET_THUMB2 && TARGET_IWMMXT)
+ sorry ("Thumb-2 iWMMXt");
+
/* If soft-float is specified then don't use FPU. */
if (TARGET_SOFT_FLOAT)
arm_fpu_arch = FPUTYPE_NONE;
@@ -1273,8 +1324,8 @@ arm_override_options (void)
target_thread_pointer = TP_SOFT;
}
- if (TARGET_HARD_TP && TARGET_THUMB)
- error ("can not use -mtp=cp15 with -mthumb");
+ if (TARGET_HARD_TP && TARGET_THUMB1)
+ error ("can not use -mtp=cp15 with 16-bit Thumb");
/* Override the default structure alignment for AAPCS ABI. */
if (TARGET_AAPCS_BASED)
@@ -1309,6 +1360,7 @@ arm_override_options (void)
arm_pic_register = pic_register;
}
+ /* ??? We might want scheduling for thumb2. */
if (TARGET_THUMB && flag_schedule_insns)
{
/* Don't warn since it's on by default in -O2. */
@@ -1390,6 +1442,9 @@ arm_isr_value (tree argument)
const isr_attribute_arg * ptr;
const char * arg;
+ if (!arm_arch_notm)
+ return ARM_FT_NORMAL | ARM_FT_STACKALIGN;
+
/* No argument - default to IRQ. */
if (argument == NULL_TREE)
return ARM_FT_ISR;
@@ -1483,9 +1538,9 @@ use_return_insn (int iscond, rtx sibling)
func_type = arm_current_func_type ();
- /* Naked functions and volatile functions need special
+ /* Naked, volatile and stack alignment functions need special
consideration. */
- if (func_type & (ARM_FT_VOLATILE | ARM_FT_NAKED))
+ if (func_type & (ARM_FT_VOLATILE | ARM_FT_NAKED | ARM_FT_STACKALIGN))
return 0;
/* So do interrupt functions that use the frame pointer. */
@@ -1522,7 +1577,7 @@ use_return_insn (int iscond, rtx sibling)
We test for !arm_arch5 here, because code for any architecture
less than this could potentially be run on one of the buggy
chips. */
- if (stack_adjust == 4 && !arm_arch5)
+ if (stack_adjust == 4 && !arm_arch5 && TARGET_ARM)
{
/* Validate that r3 is a call-clobbered register (always true in
the default abi) ... */
@@ -1616,17 +1671,36 @@ const_ok_for_arm (HOST_WIDE_INT i)
if ((i & ~(unsigned HOST_WIDE_INT) 0xff) == 0)
return TRUE;
- /* Get the number of trailing zeros, rounded down to the nearest even
- number. */
- lowbit = (ffs ((int) i) - 1) & ~1;
+ /* Get the number of trailing zeros. */
+ lowbit = ffs ((int) i) - 1;
+
+ /* Only even shifts are allowed in ARM mode so round down to the
+ nearest even number. */
+ if (TARGET_ARM)
+ lowbit &= ~1;
if ((i & ~(((unsigned HOST_WIDE_INT) 0xff) << lowbit)) == 0)
return TRUE;
- else if (lowbit <= 4
+
+ if (TARGET_ARM)
+ {
+ /* Allow rotated constants in ARM mode. */
+ if (lowbit <= 4
&& ((i & ~0xc000003f) == 0
|| (i & ~0xf000000f) == 0
|| (i & ~0xfc000003) == 0))
- return TRUE;
+ return TRUE;
+ }
+ else
+ {
+ HOST_WIDE_INT v;
+
+ /* Allow repeated pattern. */
+ v = i & 0xff;
+ v |= v << 16;
+ if (i == v || i == (v | (v << 8)))
+ return TRUE;
+ }
return FALSE;
}
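The new else-branch above encodes two of the Thumb-2 replicated-immediate forms. Restated as a standalone predicate (a sketch, not GCC code):

    #include <stdint.h>

    /* Thumb-2 modified immediates include 0x00XY00XY and 0xXYXYXYXY;
       this mirrors the pattern test added to const_ok_for_arm.  */
    static int thumb2_replicated_const_p (uint32_t i)
    {
      uint32_t v = i & 0xff;
      v |= v << 16;                          /* 0x00XY00XY */
      return i == v || i == (v | (v << 8));  /* or 0xXYXYXYXY */
    }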
@@ -1666,6 +1740,7 @@ const_ok_for_op (HOST_WIDE_INT i, enum rtx_code code)
either produce a simpler sequence, or we will want to cse the values.
Return value is the number of insns emitted. */
+/* ??? Tweak this for thumb2. */
int
arm_split_constant (enum rtx_code code, enum machine_mode mode, rtx insn,
HOST_WIDE_INT val, rtx target, rtx source, int subtargets)
@@ -1724,6 +1799,8 @@ arm_split_constant (enum rtx_code code, enum machine_mode mode, rtx insn,
1);
}
+/* Return the number of ARM instructions required to synthesize the given
+ constant. */
static int
count_insns_for_constant (HOST_WIDE_INT remainder, int i)
{
@@ -1765,6 +1842,7 @@ emit_constant_insn (rtx cond, rtx pattern)
/* As above, but extra parameter GENERATE which, if clear, suppresses
RTL generation. */
+/* ??? This needs more work for thumb2. */
static int
arm_gen_constant (enum rtx_code code, enum machine_mode mode, rtx cond,
@@ -1941,6 +2019,15 @@ arm_gen_constant (enum rtx_code code, enum machine_mode mode, rtx cond,
switch (code)
{
case SET:
+ /* See if we can use movw. */
+ if (arm_arch_thumb2 && (remainder & 0xffff0000) == 0)
+ {
+ if (generate)
+ emit_constant_insn (cond, gen_rtx_SET (VOIDmode, target,
+ GEN_INT (val)));
+ return 1;
+ }
+
/* See if we can do this by sign_extending a constant that is known
to be negative. This is a good way of doing it, since the shift
may well merge into a subsequent insn. */
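The movw case above means any constant that fits in 16 bits costs a single instruction on ARMv6T2 and later. The test, as a standalone sketch:

    /* A constant whose high halfword is clear can be loaded with one
       movw on arm_arch_thumb2 targets, mirroring the check above.  */
    static int can_use_movw (unsigned int val)
    {
      return (val & 0xffff0000u) == 0;
    }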
@@ -2282,59 +2369,67 @@ arm_gen_constant (enum rtx_code code, enum machine_mode mode, rtx cond,
We start by looking for the largest block of zeros that are aligned on
a 2-bit boundary, we then fill up the temps, wrapping around to the
top of the word when we drop off the bottom.
- In the worst case this code should produce no more than four insns. */
+ In the worst case this code should produce no more than four insns.
+ Thumb-2 constants are shifted, not rotated, so the MSB is always the
+ best place to start. */
+
+ /* ??? Use thumb2 replicated constants when the high and low halfwords are
+ the same. */
{
int best_start = 0;
- int best_consecutive_zeros = 0;
-
- for (i = 0; i < 32; i += 2)
+ if (!TARGET_THUMB2)
{
- int consecutive_zeros = 0;
+ int best_consecutive_zeros = 0;
- if (!(remainder & (3 << i)))
+ for (i = 0; i < 32; i += 2)
{
- while ((i < 32) && !(remainder & (3 << i)))
- {
- consecutive_zeros += 2;
- i += 2;
- }
- if (consecutive_zeros > best_consecutive_zeros)
+ int consecutive_zeros = 0;
+
+ if (!(remainder & (3 << i)))
{
- best_consecutive_zeros = consecutive_zeros;
- best_start = i - consecutive_zeros;
+ while ((i < 32) && !(remainder & (3 << i)))
+ {
+ consecutive_zeros += 2;
+ i += 2;
+ }
+ if (consecutive_zeros > best_consecutive_zeros)
+ {
+ best_consecutive_zeros = consecutive_zeros;
+ best_start = i - consecutive_zeros;
+ }
+ i -= 2;
}
- i -= 2;
}
- }
-
- /* So long as it won't require any more insns to do so, it's
- desirable to emit a small constant (in bits 0...9) in the last
- insn. This way there is more chance that it can be combined with
- a later addressing insn to form a pre-indexed load or store
- operation. Consider:
-
- *((volatile int *)0xe0000100) = 1;
- *((volatile int *)0xe0000110) = 2;
- We want this to wind up as:
-
- mov rA, #0xe0000000
- mov rB, #1
- str rB, [rA, #0x100]
- mov rB, #2
- str rB, [rA, #0x110]
-
- rather than having to synthesize both large constants from scratch.
-
- Therefore, we calculate how many insns would be required to emit
- the constant starting from `best_start', and also starting from
- zero (i.e. with bit 31 first to be output). If `best_start' doesn't
- yield a shorter sequence, we may as well use zero. */
- if (best_start != 0
- && ((((unsigned HOST_WIDE_INT) 1) << best_start) < remainder)
- && (count_insns_for_constant (remainder, 0) <=
- count_insns_for_constant (remainder, best_start)))
- best_start = 0;
+ /* So long as it won't require any more insns to do so, it's
+ desirable to emit a small constant (in bits 0...9) in the last
+ insn. This way there is more chance that it can be combined with
+ a later addressing insn to form a pre-indexed load or store
+ operation. Consider:
+
+ *((volatile int *)0xe0000100) = 1;
+ *((volatile int *)0xe0000110) = 2;
+
+ We want this to wind up as:
+
+ mov rA, #0xe0000000
+ mov rB, #1
+ str rB, [rA, #0x100]
+ mov rB, #2
+ str rB, [rA, #0x110]
+
+ rather than having to synthesize both large constants from scratch.
+
+ Therefore, we calculate how many insns would be required to emit
+ the constant starting from `best_start', and also starting from
+ zero (i.e. with bit 31 first to be output). If `best_start' doesn't
+ yield a shorter sequence, we may as well use zero. */
+ if (best_start != 0
+ && ((((unsigned HOST_WIDE_INT) 1) << best_start) < remainder)
+ && (count_insns_for_constant (remainder, 0) <=
+ count_insns_for_constant (remainder, best_start)))
+ best_start = 0;
+ }
/* Now start emitting the insns. */
i = best_start;
@@ -2400,9 +2495,17 @@ arm_gen_constant (enum rtx_code code, enum machine_mode mode, rtx cond,
code = PLUS;
insns++;
- i -= 6;
+ if (TARGET_ARM)
+ i -= 6;
+ else
+ i -= 7;
}
- i -= 2;
+ /* ARM allows rotates by a multiple of two. Thumb-2 allows arbitrary
+ shifts. */
+ if (TARGET_ARM)
+ i -= 2;
+ else
+ i--;
}
while (remainder);
}
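The asymmetric decrements above both advance the scan window by eight bits per emitted chunk; only the split between the emission branch and the loop tail differs. A sketch of the arithmetic:

    /* ARM immediates rotate by even amounts, so the loop tail steps by 2
       and the emission branch adds 6; Thumb-2 shifts by any amount, so
       the split is 7 + 1.  Either way one 8-bit chunk is consumed.  */
    static int window_step_per_chunk (int target_arm)
    {
      return target_arm ? 6 + 2 : 7 + 1;   /* both total 8 */
    }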
@@ -3149,6 +3252,7 @@ static bool
arm_function_ok_for_sibcall (tree decl, tree exp ATTRIBUTE_UNUSED)
{
int call_type = TARGET_LONG_CALLS ? CALL_LONG : CALL_NORMAL;
+ unsigned long func_type;
if (cfun->machine->sibcall_blocked)
return false;
@@ -3176,8 +3280,13 @@ arm_function_ok_for_sibcall (tree decl, tree exp ATTRIBUTE_UNUSED)
if (TARGET_INTERWORK && TREE_PUBLIC (decl) && !TREE_ASM_WRITTEN (decl))
return false;
+ func_type = arm_current_func_type ();
/* Never tailcall from an ISR routine - it needs a special exit sequence. */
- if (IS_INTERRUPT (arm_current_func_type ()))
+ if (IS_INTERRUPT (func_type))
+ return false;
+
+ /* Never tailcall if function may be called with a misaligned SP. */
+ if (IS_STACKALIGN (func_type))
return false;
/* Everything else is ok. */
@@ -3276,8 +3385,10 @@ legitimize_pic_address (rtx orig, enum machine_mode mode, rtx reg)
if (TARGET_ARM)
emit_insn (gen_pic_load_addr_arm (address, orig));
- else
- emit_insn (gen_pic_load_addr_thumb (address, orig));
+ else if (TARGET_THUMB2)
+ emit_insn (gen_pic_load_addr_thumb2 (address, orig));
+ else /* TARGET_THUMB1 */
+ emit_insn (gen_pic_load_addr_thumb1 (address, orig));
if ((GET_CODE (orig) == LABEL_REF
|| (GET_CODE (orig) == SYMBOL_REF &&
@@ -3352,7 +3463,7 @@ legitimize_pic_address (rtx orig, enum machine_mode mode, rtx reg)
}
-/* Find a spare low register to use during the prolog of a function. */
+/* Find a spare register to use during the prologue of a function. */
static int
thumb_find_work_register (unsigned long pushed_regs_mask)
@@ -3401,6 +3512,13 @@ thumb_find_work_register (unsigned long pushed_regs_mask)
if (pushed_regs_mask & (1 << reg))
return reg;
+ if (TARGET_THUMB2)
+ {
+ /* Thumb-2 can use high regs. */
+ for (reg = FIRST_HI_REGNUM; reg < 15; reg++)
+ if (pushed_regs_mask & (1 << reg))
+ return reg;
+ }
/* Something went wrong - thumb_compute_save_reg_mask()
should have arranged for a suitable register to be pushed. */
gcc_unreachable ();
@@ -3448,7 +3566,27 @@ arm_load_pic_register (unsigned long saved_regs ATTRIBUTE_UNUSED)
emit_insn (gen_pic_add_dot_plus_eight (cfun->machine->pic_reg,
cfun->machine->pic_reg, labelno));
}
- else
+ else if (TARGET_THUMB2)
+ {
+ /* Thumb-2 only allows very limited access to the PC. Calculate the
+ address in a temporary register. */
+ if (arm_pic_register != INVALID_REGNUM)
+ {
+ pic_tmp = gen_rtx_REG (SImode,
+ thumb_find_work_register (saved_regs));
+ }
+ else
+ {
+ gcc_assert (!no_new_pseudos);
+ pic_tmp = gen_reg_rtx (Pmode);
+ }
+
+ emit_insn (gen_pic_load_addr_thumb2 (cfun->machine->pic_reg, pic_rtx));
+ emit_insn (gen_pic_load_dot_plus_four (pic_tmp, labelno));
+ emit_insn (gen_addsi3 (cfun->machine->pic_reg, cfun->machine->pic_reg,
+ pic_tmp));
+ }
+ else /* TARGET_THUMB1 */
{
if (arm_pic_register != INVALID_REGNUM
&& REGNO (cfun->machine->pic_reg) > LAST_LO_REGNUM)
@@ -3457,11 +3595,11 @@ arm_load_pic_register (unsigned long saved_regs ATTRIBUTE_UNUSED)
able to find a work register. */
pic_tmp = gen_rtx_REG (SImode,
thumb_find_work_register (saved_regs));
- emit_insn (gen_pic_load_addr_thumb (pic_tmp, pic_rtx));
+ emit_insn (gen_pic_load_addr_thumb1 (pic_tmp, pic_rtx));
emit_insn (gen_movsi (pic_offset_table_rtx, pic_tmp));
}
else
- emit_insn (gen_pic_load_addr_thumb (cfun->machine->pic_reg, pic_rtx));
+ emit_insn (gen_pic_load_addr_thumb1 (cfun->machine->pic_reg, pic_rtx));
emit_insn (gen_pic_add_dot_plus_four (cfun->machine->pic_reg,
cfun->machine->pic_reg, labelno));
}
@@ -3591,6 +3729,80 @@ arm_legitimate_address_p (enum machine_mode mode, rtx x, RTX_CODE outer,
return 0;
}
+/* Return nonzero if X is a valid Thumb-2 address operand. */
+int
+thumb2_legitimate_address_p (enum machine_mode mode, rtx x, int strict_p)
+{
+ bool use_ldrd;
+ enum rtx_code code = GET_CODE (x);
+
+ if (arm_address_register_rtx_p (x, strict_p))
+ return 1;
+
+ use_ldrd = (TARGET_LDRD
+ && (mode == DImode
+ || (mode == DFmode && (TARGET_SOFT_FLOAT || TARGET_VFP))));
+
+ if (code == POST_INC || code == PRE_DEC
+ || ((code == PRE_INC || code == POST_DEC)
+ && (use_ldrd || GET_MODE_SIZE (mode) <= 4)))
+ return arm_address_register_rtx_p (XEXP (x, 0), strict_p);
+
+ else if ((code == POST_MODIFY || code == PRE_MODIFY)
+ && arm_address_register_rtx_p (XEXP (x, 0), strict_p)
+ && GET_CODE (XEXP (x, 1)) == PLUS
+ && rtx_equal_p (XEXP (XEXP (x, 1), 0), XEXP (x, 0)))
+ {
+ /* Thumb-2 only has autoincrement by constant. */
+ rtx addend = XEXP (XEXP (x, 1), 1);
+ HOST_WIDE_INT offset;
+
+ if (GET_CODE (addend) != CONST_INT)
+ return 0;
+
+ offset = INTVAL (addend);
+ if (GET_MODE_SIZE (mode) <= 4)
+ return (offset > -256 && offset < 256);
+
+ return (use_ldrd && offset > -1024 && offset < 1024
+ && (offset & 3) == 0);
+ }
+
+ /* After reload constants split into minipools will have addresses
+ from a LABEL_REF. */
+ else if (reload_completed
+ && (code == LABEL_REF
+ || (code == CONST
+ && GET_CODE (XEXP (x, 0)) == PLUS
+ && GET_CODE (XEXP (XEXP (x, 0), 0)) == LABEL_REF
+ && GET_CODE (XEXP (XEXP (x, 0), 1)) == CONST_INT)))
+ return 1;
+
+ else if (mode == TImode)
+ return 0;
+
+ else if (code == PLUS)
+ {
+ rtx xop0 = XEXP (x, 0);
+ rtx xop1 = XEXP (x, 1);
+
+ return ((arm_address_register_rtx_p (xop0, strict_p)
+ && thumb2_legitimate_index_p (mode, xop1, strict_p))
+ || (arm_address_register_rtx_p (xop1, strict_p)
+ && thumb2_legitimate_index_p (mode, xop0, strict_p)));
+ }
+
+ else if (GET_MODE_CLASS (mode) != MODE_FLOAT
+ && code == SYMBOL_REF
+ && CONSTANT_POOL_ADDRESS_P (x)
+ && ! (flag_pic
+ && symbol_mentioned_p (get_pool_constant (x))
+ && ! pcrel_constant_p (get_pool_constant (x))))
+ return 1;
+
+ return 0;
+}
+
/* Return nonzero if INDEX is valid for an address index operand in
ARM state. */
static int
@@ -3678,9 +3890,87 @@ arm_legitimate_index_p (enum machine_mode mode, rtx index, RTX_CODE outer,
&& INTVAL (index) > -range);
}
-/* Return nonzero if X is valid as a Thumb state base register. */
+/* Return true if OP is a valid scaling factor for a Thumb-2 address
+ index operand, i.e. 1, 2, 4 or 8. */
+static bool
+thumb2_index_mul_operand (rtx op)
+{
+ HOST_WIDE_INT val;
+
+ if (GET_CODE (op) != CONST_INT)
+ return false;
+
+ val = INTVAL (op);
+ return (val == 1 || val == 2 || val == 4 || val == 8);
+}
+
+/* Return nonzero if INDEX is a valid Thumb-2 address index operand. */
+static int
+thumb2_legitimate_index_p (enum machine_mode mode, rtx index, int strict_p)
+{
+ enum rtx_code code = GET_CODE (index);
+
+ /* ??? Combine arm and thumb2 coprocessor addressing modes. */
+ /* Standard coprocessor addressing modes. */
+ if (TARGET_HARD_FLOAT
+ && (TARGET_FPA || TARGET_MAVERICK)
+ && (GET_MODE_CLASS (mode) == MODE_FLOAT
+ || (TARGET_MAVERICK && mode == DImode)))
+ return (code == CONST_INT && INTVAL (index) < 1024
+ && INTVAL (index) > -1024
+ && (INTVAL (index) & 3) == 0);
+
+ if (TARGET_REALLY_IWMMXT && VALID_IWMMXT_REG_MODE (mode))
+ return (code == CONST_INT
+ && INTVAL (index) < 1024
+ && INTVAL (index) > -1024
+ && (INTVAL (index) & 3) == 0);
+
+ if (arm_address_register_rtx_p (index, strict_p)
+ && (GET_MODE_SIZE (mode) <= 4))
+ return 1;
+
+ if (mode == DImode || mode == DFmode)
+ {
+ HOST_WIDE_INT val;
+ /* ??? Can we assume ldrd for thumb2? */
+ /* Thumb-2 ldrd only has reg+const addressing modes. */
+ if (code != CONST_INT)
+ return 0;
+
+ val = INTVAL (index);
+ /* ldrd supports offsets of +-1020; the ldr fallback does not. */
+ return val > -256 && val < 256 && (val & 3) == 0;
+ }
+
+ if (code == MULT)
+ {
+ rtx xiop0 = XEXP (index, 0);
+ rtx xiop1 = XEXP (index, 1);
+
+ return ((arm_address_register_rtx_p (xiop0, strict_p)
+ && thumb2_index_mul_operand (xiop1))
+ || (arm_address_register_rtx_p (xiop1, strict_p)
+ && thumb2_index_mul_operand (xiop0)));
+ }
+ else if (code == ASHIFT)
+ {
+ rtx op = XEXP (index, 1);
+
+ return (arm_address_register_rtx_p (XEXP (index, 0), strict_p)
+ && GET_CODE (op) == CONST_INT
+ && INTVAL (op) > 0
+ && INTVAL (op) <= 3);
+ }
+
+ return (code == CONST_INT
+ && INTVAL (index) < 4096
+ && INTVAL (index) > -256);
+}
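The constant-offset ranges accepted by the two routines above, condensed into one standalone predicate (a sketch; the real code also validates the RTX shape and register classes):

    /* Offset ranges from thumb2_legitimate_index_p: +/-255 and
       word-aligned for DImode/DFmode (the ldrd path), otherwise
       -255..4095 for plain ldr/str.  */
    static int thumb2_const_offset_ok (long offset, int bytes)
    {
      if (bytes == 8)
        return offset > -256 && offset < 256 && (offset & 3) == 0;
      return offset > -256 && offset < 4096;
    }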
+
+/* Return nonzero if X is valid as a 16-bit Thumb state base register. */
static int
-thumb_base_register_rtx_p (rtx x, enum machine_mode mode, int strict_p)
+thumb1_base_register_rtx_p (rtx x, enum machine_mode mode, int strict_p)
{
int regno;
@@ -3690,7 +3980,7 @@ thumb_base_register_rtx_p (rtx x, enum machine_mode mode, int strict_p)
regno = REGNO (x);
if (strict_p)
- return THUMB_REGNO_MODE_OK_FOR_BASE_P (regno, mode);
+ return THUMB1_REGNO_MODE_OK_FOR_BASE_P (regno, mode);
return (regno <= LAST_LO_REGNUM
|| regno > LAST_VIRTUAL_REGISTER
@@ -3705,12 +3995,12 @@ thumb_base_register_rtx_p (rtx x, enum machine_mode mode, int strict_p)
/* Return nonzero if x is a legitimate index register. This is the case
for any base register that can access a QImode object. */
inline static int
-thumb_index_register_rtx_p (rtx x, int strict_p)
+thumb1_index_register_rtx_p (rtx x, int strict_p)
{
- return thumb_base_register_rtx_p (x, QImode, strict_p);
+ return thumb1_base_register_rtx_p (x, QImode, strict_p);
}
-/* Return nonzero if x is a legitimate Thumb-state address.
+/* Return nonzero if x is a legitimate 16-bit Thumb-state address.
The AP may be eliminated to either the SP or the FP, so we use the
least common denominator, e.g. SImode, and offsets from 0 to 64.
@@ -3728,7 +4018,7 @@ thumb_index_register_rtx_p (rtx x, int strict_p)
reload pass starts. This is so that eliminating such addresses
into stack based ones won't produce impossible code. */
int
-thumb_legitimate_address_p (enum machine_mode mode, rtx x, int strict_p)
+thumb1_legitimate_address_p (enum machine_mode mode, rtx x, int strict_p)
{
/* ??? Not clear if this is right. Experiment. */
if (GET_MODE_SIZE (mode) < 4
@@ -3742,7 +4032,7 @@ thumb_legitimate_address_p (enum machine_mode mode, rtx x, int strict_p)
return 0;
/* Accept any base register. SP only in SImode or larger. */
- else if (thumb_base_register_rtx_p (x, mode, strict_p))
+ else if (thumb1_base_register_rtx_p (x, mode, strict_p))
return 1;
/* This is PC relative data before arm_reorg runs. */
@@ -3762,7 +4052,7 @@ thumb_legitimate_address_p (enum machine_mode mode, rtx x, int strict_p)
/* Post-inc indexing only supported for SImode and larger. */
else if (GET_CODE (x) == POST_INC && GET_MODE_SIZE (mode) >= 4
- && thumb_index_register_rtx_p (XEXP (x, 0), strict_p))
+ && thumb1_index_register_rtx_p (XEXP (x, 0), strict_p))
return 1;
else if (GET_CODE (x) == PLUS)
@@ -3774,12 +4064,12 @@ thumb_legitimate_address_p (enum machine_mode mode, rtx x, int strict_p)
if (GET_MODE_SIZE (mode) <= 4
&& XEXP (x, 0) != frame_pointer_rtx
&& XEXP (x, 1) != frame_pointer_rtx
- && thumb_index_register_rtx_p (XEXP (x, 0), strict_p)
- && thumb_index_register_rtx_p (XEXP (x, 1), strict_p))
+ && thumb1_index_register_rtx_p (XEXP (x, 0), strict_p)
+ && thumb1_index_register_rtx_p (XEXP (x, 1), strict_p))
return 1;
/* REG+const has 5-7 bit offset for non-SP registers. */
- else if ((thumb_index_register_rtx_p (XEXP (x, 0), strict_p)
+ else if ((thumb1_index_register_rtx_p (XEXP (x, 0), strict_p)
|| XEXP (x, 0) == arg_pointer_rtx)
&& GET_CODE (XEXP (x, 1)) == CONST_INT
&& thumb_legitimate_offset_p (mode, INTVAL (XEXP (x, 1))))
@@ -3914,7 +4204,16 @@ arm_call_tls_get_addr (rtx x, rtx reg, rtx *valuep, int reloc)
if (TARGET_ARM)
emit_insn (gen_pic_add_dot_plus_eight (reg, reg, labelno));
- else
+ else if (TARGET_THUMB2)
+ {
+ rtx tmp;
+ /* Thumb-2 only allows very limited access to the PC. Calculate
+ the address in a temporary register. */
+ tmp = gen_reg_rtx (SImode);
+ emit_insn (gen_pic_load_dot_plus_four (tmp, labelno));
+ emit_insn (gen_addsi3 (reg, reg, tmp));
+ }
+ else /* TARGET_THUMB1 */
emit_insn (gen_pic_add_dot_plus_four (reg, reg, labelno));
*valuep = emit_library_call_value (get_tls_get_addr (), NULL_RTX, LCT_PURE, /* LCT_CONST? */
@@ -3968,6 +4267,16 @@ legitimize_tls_address (rtx x, rtx reg)
if (TARGET_ARM)
emit_insn (gen_tls_load_dot_plus_eight (reg, reg, labelno));
+ else if (TARGET_THUMB2)
+ {
+ rtx tmp;
+ /* Thumb-2 only allows very limited access to the PC. Calculate
+ the address in a temporary register. */
+ tmp = gen_reg_rtx (SImode);
+ emit_insn (gen_pic_load_dot_plus_four (tmp, labelno));
+ emit_insn (gen_addsi3 (reg, reg, tmp));
+ emit_move_insn (reg, gen_const_mem (SImode, reg));
+ }
else
{
emit_insn (gen_pic_add_dot_plus_four (reg, reg, labelno));
@@ -4275,7 +4584,7 @@ arm_tls_referenced_p (rtx x)
#define COSTS_N_INSNS(N) ((N) * 4 - 2)
#endif
static inline int
-thumb_rtx_costs (rtx x, enum rtx_code code, enum rtx_code outer)
+thumb1_rtx_costs (rtx x, enum rtx_code code, enum rtx_code outer)
{
enum machine_mode mode = GET_MODE (x);
@@ -4393,6 +4702,7 @@ thumb_rtx_costs (rtx x, enum rtx_code code, enum rtx_code outer)
/* Worker routine for arm_rtx_costs. */
+/* ??? This needs updating for thumb2. */
static inline int
arm_rtx_costs_1 (rtx x, enum rtx_code code, enum rtx_code outer)
{
@@ -4574,6 +4884,7 @@ arm_rtx_costs_1 (rtx x, enum rtx_code code, enum rtx_code outer)
return 4 + (mode == DImode ? 4 : 0);
case SIGN_EXTEND:
+ /* ??? value extensions are cheaper on armv6. */
if (GET_MODE (XEXP (x, 0)) == QImode)
return (4 + (mode == DImode ? 4 : 0)
+ (GET_CODE (XEXP (x, 0)) == MEM ? 10 : 0));
@@ -4644,7 +4955,7 @@ arm_size_rtx_costs (rtx x, int code, int outer_code, int *total)
if (TARGET_THUMB)
{
/* XXX TBD. For now, use the standard costs. */
- *total = thumb_rtx_costs (x, code, outer_code);
+ *total = thumb1_rtx_costs (x, code, outer_code);
return true;
}
@@ -4856,7 +5167,8 @@ arm_size_rtx_costs (rtx x, int code, int outer_code, int *total)
}
}
-/* RTX costs for cores with a slow MUL implementation. */
+/* RTX costs for cores with a slow MUL implementation. Thumb-2 is not
+ supported on any "slowmul" cores, so it can be ignored. */
static bool
arm_slowmul_rtx_costs (rtx x, int code, int outer_code, int *total)
@@ -4865,7 +5177,7 @@ arm_slowmul_rtx_costs (rtx x, int code, int outer_code, int *total)
if (TARGET_THUMB)
{
- *total = thumb_rtx_costs (x, code, outer_code);
+ *total = thumb1_rtx_costs (x, code, outer_code);
return true;
}
@@ -4917,12 +5229,13 @@ arm_fastmul_rtx_costs (rtx x, int code, int outer_code, int *total)
{
enum machine_mode mode = GET_MODE (x);
- if (TARGET_THUMB)
+ if (TARGET_THUMB1)
{
- *total = thumb_rtx_costs (x, code, outer_code);
+ *total = thumb1_rtx_costs (x, code, outer_code);
return true;
}
+ /* ??? should thumb2 use different costs? */
switch (code)
{
case MULT:
@@ -4976,7 +5289,8 @@ arm_fastmul_rtx_costs (rtx x, int code, int outer_code, int *total)
}
-/* RTX cost for XScale CPUs. */
+/* RTX cost for XScale CPUs. Thumb-2 is not supported on any xscale cores,
+ so it can be ignored. */
static bool
arm_xscale_rtx_costs (rtx x, int code, int outer_code, int *total)
@@ -4985,7 +5299,7 @@ arm_xscale_rtx_costs (rtx x, int code, int outer_code, int *total)
if (TARGET_THUMB)
{
- *total = thumb_rtx_costs (x, code, outer_code);
+ *total = thumb1_rtx_costs (x, code, outer_code);
return true;
}
@@ -5066,7 +5380,7 @@ arm_9e_rtx_costs (rtx x, int code, int outer_code, int *total)
int nonreg_cost;
int cost;
- if (TARGET_THUMB)
+ if (TARGET_THUMB1)
{
switch (code)
{
@@ -5075,7 +5389,7 @@ arm_9e_rtx_costs (rtx x, int code, int outer_code, int *total)
return true;
default:
- *total = thumb_rtx_costs (x, code, outer_code);
+ *total = thumb1_rtx_costs (x, code, outer_code);
return true;
}
}
@@ -5167,7 +5481,7 @@ arm_thumb_address_cost (rtx x)
static int
arm_address_cost (rtx x)
{
- return TARGET_ARM ? arm_arm_address_cost (x) : arm_thumb_address_cost (x);
+ return TARGET_32BIT ? arm_arm_address_cost (x) : arm_thumb_address_cost (x);
}
static int
@@ -5360,7 +5674,9 @@ cirrus_memory_offset (rtx op)
}
/* Return TRUE if OP is a valid coprocessor memory address pattern.
- WB if true if writeback address modes are allowed. */
+ WB is true if full writeback address modes are allowed; if it is
+ false, only the limited writeback address modes (POST_INC and
+ PRE_DEC) are allowed. */
int
arm_coproc_mem_operand (rtx op, bool wb)
@@ -5395,12 +5711,15 @@ arm_coproc_mem_operand (rtx op, bool wb)
if (GET_CODE (ind) == REG)
return arm_address_register_rtx_p (ind, 0);
- /* Autoincremment addressing modes. */
- if (wb
- && (GET_CODE (ind) == PRE_INC
- || GET_CODE (ind) == POST_INC
- || GET_CODE (ind) == PRE_DEC
- || GET_CODE (ind) == POST_DEC))
+ /* Autoincrement addressing modes. POST_INC and PRE_DEC are
+ acceptable in any case (subject to verification by
+ arm_address_register_rtx_p). We need WB to be true to accept
+ PRE_INC and POST_DEC. */
+ if (GET_CODE (ind) == POST_INC
+ || GET_CODE (ind) == PRE_DEC
+ || (wb
+ && (GET_CODE (ind) == PRE_INC
+ || GET_CODE (ind) == POST_DEC)))
return arm_address_register_rtx_p (XEXP (ind, 0), 0);
if (wb
@@ -5948,10 +6267,10 @@ load_multiple_sequence (rtx *operands, int nops, int *regs, int *base,
if (unsorted_offsets[order[0]] == 0)
return 1; /* ldmia */
- if (unsorted_offsets[order[0]] == 4)
+ if (TARGET_ARM && unsorted_offsets[order[0]] == 4)
return 2; /* ldmib */
- if (unsorted_offsets[order[nops - 1]] == 0)
+ if (TARGET_ARM && unsorted_offsets[order[nops - 1]] == 0)
return 3; /* ldmda */
if (unsorted_offsets[order[nops - 1]] == -4)
@@ -6007,19 +6326,19 @@ emit_ldm_seq (rtx *operands, int nops)
switch (load_multiple_sequence (operands, nops, regs, &base_reg, &offset))
{
case 1:
- strcpy (buf, "ldm%?ia\t");
+ strcpy (buf, "ldm%(ia%)\t");
break;
case 2:
- strcpy (buf, "ldm%?ib\t");
+ strcpy (buf, "ldm%(ib%)\t");
break;
case 3:
- strcpy (buf, "ldm%?da\t");
+ strcpy (buf, "ldm%(da%)\t");
break;
case 4:
- strcpy (buf, "ldm%?db\t");
+ strcpy (buf, "ldm%(db%)\t");
break;
case 5:
@@ -6033,7 +6352,7 @@ emit_ldm_seq (rtx *operands, int nops)
(long) -offset);
output_asm_insn (buf, operands);
base_reg = regs[0];
- strcpy (buf, "ldm%?ia\t");
+ strcpy (buf, "ldm%(ia%)\t");
break;
default:
@@ -6196,19 +6515,19 @@ emit_stm_seq (rtx *operands, int nops)
switch (store_multiple_sequence (operands, nops, regs, &base_reg, &offset))
{
case 1:
- strcpy (buf, "stm%?ia\t");
+ strcpy (buf, "stm%(ia%)\t");
break;
case 2:
- strcpy (buf, "stm%?ib\t");
+ strcpy (buf, "stm%(ib%)\t");
break;
case 3:
- strcpy (buf, "stm%?da\t");
+ strcpy (buf, "stm%(da%)\t");
break;
case 4:
- strcpy (buf, "stm%?db\t");
+ strcpy (buf, "stm%(db%)\t");
break;
default:
@@ -6769,7 +7088,7 @@ arm_select_cc_mode (enum rtx_code op, rtx x, rtx y)
/* An operation (on Thumb) where we want to test for a single bit.
This is done by shifting that bit up into the top bit of a
scratch register; we can then branch on the sign bit. */
- if (TARGET_THUMB
+ if (TARGET_THUMB1
&& GET_MODE (x) == SImode
&& (op == EQ || op == NE)
&& GET_CODE (x) == ZERO_EXTRACT
@@ -6780,6 +7099,7 @@ arm_select_cc_mode (enum rtx_code op, rtx x, rtx y)
V flag is not set correctly, so we can only use comparisons where
this doesn't matter. (For LT and GE we can use "mi" and "pl"
instead.) */
+ /* ??? Does the ZERO_EXTRACT case really apply to thumb2? */
if (GET_MODE (x) == SImode
&& y == const0_rtx
&& (op == EQ || op == NE || op == LT || op == GE)
@@ -6790,7 +7110,7 @@ arm_select_cc_mode (enum rtx_code op, rtx x, rtx y)
|| GET_CODE (x) == LSHIFTRT
|| GET_CODE (x) == ASHIFT || GET_CODE (x) == ASHIFTRT
|| GET_CODE (x) == ROTATERT
- || (TARGET_ARM && GET_CODE (x) == ZERO_EXTRACT)))
+ || (TARGET_32BIT && GET_CODE (x) == ZERO_EXTRACT)))
return CC_NOOVmode;
if (GET_MODE (x) == QImode && (op == EQ || op == NE))
@@ -7373,8 +7693,29 @@ get_jump_table_size (rtx insn)
{
rtx body = PATTERN (insn);
int elt = GET_CODE (body) == ADDR_DIFF_VEC ? 1 : 0;
+ HOST_WIDE_INT size;
+ HOST_WIDE_INT modesize;
- return GET_MODE_SIZE (GET_MODE (body)) * XVECLEN (body, elt);
+ modesize = GET_MODE_SIZE (GET_MODE (body));
+ size = modesize * XVECLEN (body, elt);
+ switch (modesize)
+ {
+ case 1:
+ /* Round up size of TBB table to a halfword boundary. */
+ size = (size + 1) & ~(HOST_WIDE_INT)1;
+ break;
+ case 2:
+ /* No padding necessary for TBH. */
+ break;
+ case 4:
+ /* Add two bytes for alignment on Thumb. */
+ if (TARGET_THUMB)
+ size += 2;
+ break;
+ default:
+ gcc_unreachable ();
+ }
+ return size;
}
return 0;
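A minimal sketch of the sizing rule above (a hypothetical helper, not part of the patch; TBB and TBH are the Thumb-2 table-branch instructions):

    /* Bytes occupied by a dispatch table of N entries, each MODESIZE
       bytes wide, mirroring the switch in get_jump_table_size.  */
    static long
    jump_table_bytes (long modesize, long n, int thumb)
    {
      long size = modesize * n;
      if (modesize == 1)
        size = (size + 1) & ~1L;    /* TBB: pad to a halfword boundary.  */
      else if (modesize == 4 && thumb)
        size += 2;                  /* Word table: 2 alignment bytes.  */
      return size;                  /* MODESIZE == 2 (TBH): no padding.  */
    }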
@@ -8409,7 +8750,7 @@ print_multi_reg (FILE *stream, const char *instr, unsigned reg,
fputc ('\t', stream);
asm_fprintf (stream, instr, reg);
- fputs (", {", stream);
+ fputc ('{', stream);
for (i = 0; i <= LAST_ARM_REGNUM; i++)
if (mask & (1 << i))
@@ -8634,7 +8975,7 @@ output_mov_long_double_fpa_from_arm (rtx *operands)
ops[1] = gen_rtx_REG (SImode, 1 + arm_reg0);
ops[2] = gen_rtx_REG (SImode, 2 + arm_reg0);
- output_asm_insn ("stm%?fd\t%|sp!, {%0, %1, %2}", ops);
+ output_asm_insn ("stm%(fd%)\t%|sp!, {%0, %1, %2}", ops);
output_asm_insn ("ldf%?e\t%0, [%|sp], #12", operands);
return "";
@@ -8656,7 +8997,7 @@ output_mov_long_double_arm_from_fpa (rtx *operands)
ops[2] = gen_rtx_REG (SImode, 2 + arm_reg0);
output_asm_insn ("stf%?e\t%1, [%|sp, #-12]!", operands);
- output_asm_insn ("ldm%?fd\t%|sp!, {%0, %1, %2}", ops);
+ output_asm_insn ("ldm%(fd%)\t%|sp!, {%0, %1, %2}", ops);
return "";
}
@@ -8708,7 +9049,7 @@ output_mov_double_fpa_from_arm (rtx *operands)
ops[0] = gen_rtx_REG (SImode, arm_reg0);
ops[1] = gen_rtx_REG (SImode, 1 + arm_reg0);
- output_asm_insn ("stm%?fd\t%|sp!, {%0, %1}", ops);
+ output_asm_insn ("stm%(fd%)\t%|sp!, {%0, %1}", ops);
output_asm_insn ("ldf%?d\t%0, [%|sp], #8", operands);
return "";
}
@@ -8727,7 +9068,7 @@ output_mov_double_arm_from_fpa (rtx *operands)
ops[0] = gen_rtx_REG (SImode, arm_reg0);
ops[1] = gen_rtx_REG (SImode, 1 + arm_reg0);
output_asm_insn ("stf%?d\t%1, [%|sp, #-8]!", operands);
- output_asm_insn ("ldm%?fd\t%|sp!, {%0, %1}", ops);
+ output_asm_insn ("ldm%(fd%)\t%|sp!, {%0, %1}", ops);
return "";
}
@@ -8752,25 +9093,28 @@ output_move_double (rtx *operands)
switch (GET_CODE (XEXP (operands[1], 0)))
{
case REG:
- output_asm_insn ("ldm%?ia\t%m1, %M0", operands);
+ output_asm_insn ("ldm%(ia%)\t%m1, %M0", operands);
break;
case PRE_INC:
gcc_assert (TARGET_LDRD);
- output_asm_insn ("ldr%?d\t%0, [%m1, #8]!", operands);
+ output_asm_insn ("ldr%(d%)\t%0, [%m1, #8]!", operands);
break;
case PRE_DEC:
- output_asm_insn ("ldm%?db\t%m1!, %M0", operands);
+ if (TARGET_LDRD)
+ output_asm_insn ("ldr%(d%)\t%0, [%m1, #-8]!", operands);
+ else
+ output_asm_insn ("ldm%(db%)\t%m1!, %M0", operands);
break;
case POST_INC:
- output_asm_insn ("ldm%?ia\t%m1!, %M0", operands);
+ output_asm_insn ("ldm%(ia%)\t%m1!, %M0", operands);
break;
case POST_DEC:
gcc_assert (TARGET_LDRD);
- output_asm_insn ("ldr%?d\t%0, [%m1], #-8", operands);
+ output_asm_insn ("ldr%(d%)\t%0, [%m1], #-8", operands);
break;
case PRE_MODIFY:
@@ -8785,24 +9129,25 @@ output_move_double (rtx *operands)
{
/* Registers overlap so split out the increment. */
output_asm_insn ("add%?\t%1, %1, %2", otherops);
- output_asm_insn ("ldr%?d\t%0, [%1] @split", otherops);
+ output_asm_insn ("ldr%(d%)\t%0, [%1] @split", otherops);
}
else
- output_asm_insn ("ldr%?d\t%0, [%1, %2]!", otherops);
+ output_asm_insn ("ldr%(d%)\t%0, [%1, %2]!", otherops);
}
else
{
/* We only allow constant increments, so this is safe. */
- output_asm_insn ("ldr%?d\t%0, [%1], %2", otherops);
+ output_asm_insn ("ldr%(d%)\t%0, [%1], %2", otherops);
}
break;
case LABEL_REF:
case CONST:
output_asm_insn ("adr%?\t%0, %1", operands);
- output_asm_insn ("ldm%?ia\t%0, %M0", operands);
+ output_asm_insn ("ldm%(ia%)\t%0, %M0", operands);
break;
+ /* ??? This needs checking for thumb2. */
default:
if (arm_add_operand (XEXP (XEXP (operands[1], 0), 1),
GET_MODE (XEXP (XEXP (operands[1], 0), 1))))
@@ -8818,13 +9163,17 @@ output_move_double (rtx *operands)
switch ((int) INTVAL (otherops[2]))
{
case -8:
- output_asm_insn ("ldm%?db\t%1, %M0", otherops);
+ output_asm_insn ("ldm%(db%)\t%1, %M0", otherops);
return "";
case -4:
- output_asm_insn ("ldm%?da\t%1, %M0", otherops);
+ if (TARGET_THUMB2)
+ break;
+ output_asm_insn ("ldm%(da%)\t%1, %M0", otherops);
return "";
case 4:
- output_asm_insn ("ldm%?ib\t%1, %M0", otherops);
+ if (TARGET_THUMB2)
+ break;
+ output_asm_insn ("ldm%(ib%)\t%1, %M0", otherops);
return "";
}
}
@@ -8847,11 +9196,11 @@ output_move_double (rtx *operands)
if (reg_overlap_mentioned_p (otherops[0], otherops[2]))
{
output_asm_insn ("add%?\t%1, %1, %2", otherops);
- output_asm_insn ("ldr%?d\t%0, [%1]",
+ output_asm_insn ("ldr%(d%)\t%0, [%1]",
otherops);
}
else
- output_asm_insn ("ldr%?d\t%0, [%1, %2]", otherops);
+ output_asm_insn ("ldr%(d%)\t%0, [%1, %2]", otherops);
return "";
}
@@ -8868,7 +9217,7 @@ output_move_double (rtx *operands)
else
output_asm_insn ("sub%?\t%0, %1, %2", otherops);
- return "ldm%?ia\t%0, %M0";
+ return "ldm%(ia%)\t%0, %M0";
}
else
{
@@ -8896,25 +9245,28 @@ output_move_double (rtx *operands)
switch (GET_CODE (XEXP (operands[0], 0)))
{
case REG:
- output_asm_insn ("stm%?ia\t%m0, %M1", operands);
+ output_asm_insn ("stm%(ia%)\t%m0, %M1", operands);
break;
case PRE_INC:
gcc_assert (TARGET_LDRD);
- output_asm_insn ("str%?d\t%1, [%m0, #8]!", operands);
+ output_asm_insn ("str%(d%)\t%1, [%m0, #8]!", operands);
break;
case PRE_DEC:
- output_asm_insn ("stm%?db\t%m0!, %M1", operands);
+ if (TARGET_LDRD)
+ output_asm_insn ("str%(d%)\t%1, [%m0, #-8]!", operands);
+ else
+ output_asm_insn ("stm%(db%)\t%m0!, %M1", operands);
break;
case POST_INC:
- output_asm_insn ("stm%?ia\t%m0!, %M1", operands);
+ output_asm_insn ("stm%(ia%)\t%m0!, %M1", operands);
break;
case POST_DEC:
gcc_assert (TARGET_LDRD);
- output_asm_insn ("str%?d\t%1, [%m0], #-8", operands);
+ output_asm_insn ("str%(d%)\t%1, [%m0], #-8", operands);
break;
case PRE_MODIFY:
@@ -8924,9 +9276,9 @@ output_move_double (rtx *operands)
otherops[2] = XEXP (XEXP (XEXP (operands[0], 0), 1), 1);
if (GET_CODE (XEXP (operands[0], 0)) == PRE_MODIFY)
- output_asm_insn ("str%?d\t%0, [%1, %2]!", otherops);
+ output_asm_insn ("str%(d%)\t%0, [%1, %2]!", otherops);
else
- output_asm_insn ("str%?d\t%0, [%1], %2", otherops);
+ output_asm_insn ("str%(d%)\t%0, [%1], %2", otherops);
break;
case PLUS:
@@ -8936,15 +9288,19 @@ output_move_double (rtx *operands)
switch ((int) INTVAL (XEXP (XEXP (operands[0], 0), 1)))
{
case -8:
- output_asm_insn ("stm%?db\t%m0, %M1", operands);
+ output_asm_insn ("stm%(db%)\t%m0, %M1", operands);
return "";
case -4:
- output_asm_insn ("stm%?da\t%m0, %M1", operands);
+ if (TARGET_THUMB2)
+ break;
+ output_asm_insn ("stm%(da%)\t%m0, %M1", operands);
return "";
case 4:
- output_asm_insn ("stm%?ib\t%m0, %M1", operands);
+ if (TARGET_THUMB2)
+ break;
+ output_asm_insn ("stm%(ib%)\t%m0, %M1", operands);
return "";
}
}
@@ -8956,7 +9312,7 @@ output_move_double (rtx *operands)
{
otherops[0] = operands[1];
otherops[1] = XEXP (XEXP (operands[0], 0), 0);
- output_asm_insn ("str%?d\t%0, [%1, %2]", otherops);
+ output_asm_insn ("str%(d%)\t%0, [%1, %2]", otherops);
return "";
}
/* Fall through */
@@ -8972,6 +9328,62 @@ output_move_double (rtx *operands)
return "";
}
+/* Output a VFP load or store instruction. */
+
+const char *
+output_move_vfp (rtx *operands)
+{
+ rtx reg, mem, addr, ops[2];
+ int load = REG_P (operands[0]);
+ int dp = GET_MODE_SIZE (GET_MODE (operands[0])) == 8;
+ int integer_p = GET_MODE_CLASS (GET_MODE (operands[0])) == MODE_INT;
+ const char *template;
+ char buff[50];
+
+ reg = operands[!load];
+ mem = operands[load];
+
+ gcc_assert (REG_P (reg));
+ gcc_assert (IS_VFP_REGNUM (REGNO (reg)));
+ gcc_assert (GET_MODE (reg) == SFmode
+ || GET_MODE (reg) == DFmode
+ || GET_MODE (reg) == SImode
+ || GET_MODE (reg) == DImode);
+ gcc_assert (MEM_P (mem));
+
+ addr = XEXP (mem, 0);
+
+ switch (GET_CODE (addr))
+ {
+ case PRE_DEC:
+ template = "f%smdb%c%%?\t%%0!, {%%%s1}%s";
+ ops[0] = XEXP (addr, 0);
+ ops[1] = reg;
+ break;
+
+ case POST_INC:
+ template = "f%smia%c%%?\t%%0!, {%%%s1}%s";
+ ops[0] = XEXP (addr, 0);
+ ops[1] = reg;
+ break;
+
+ default:
+ template = "f%s%c%%?\t%%%s0, %%1%s";
+ ops[0] = reg;
+ ops[1] = mem;
+ break;
+ }
+
+ sprintf (buff, template,
+ load ? "ld" : "st",
+ dp ? 'd' : 's',
+ dp ? "P" : "",
+ integer_p ? "\t%@ int" : "");
+ output_asm_insn (buff, ops);
+
+ return "";
+}
+
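For concreteness, the POST_INC template above expands as follows for a DFmode load (illustrative only; the register numbers are hypothetical):

    /* load = 1, dp = 1, integer_p = 0:
         "f%smia%c%%?\t%%0!, {%%%s1}%s"  =>  "fldmiad%?\t%0!, {%P1}"
       which the operand printer renders as, e.g.:
         fldmiad  r3!, {d7}  */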
/* Output an ADD r, s, #n where n may be too big for one instruction.
If adding zero to one register, output nothing. */
const char *
@@ -9035,6 +9447,29 @@ output_multi_immediate (rtx *operands, const char *instr1, const char *instr2,
return "";
}
+/* Return the name of a shifter operation. */
+static const char *
+arm_shift_nmem (enum rtx_code code)
+{
+ switch (code)
+ {
+ case ASHIFT:
+ return ARM_LSL_NAME;
+
+ case ASHIFTRT:
+ return "asr";
+
+ case LSHIFTRT:
+ return "lsr";
+
+ case ROTATERT:
+ return "ror";
+
+ default:
+ gcc_unreachable ();
+ }
+}
+
/* Return the appropriate ARM instruction for the operation code.
The returned result should not be overwritten. OP is the rtx of the
operation. SHIFT_FIRST_ARG is TRUE if the first argument of the operator
@@ -9059,6 +9494,12 @@ arithmetic_instr (rtx op, int shift_first_arg)
case AND:
return "and";
+ case ASHIFT:
+ case ASHIFTRT:
+ case LSHIFTRT:
+ case ROTATERT:
+ return arm_shift_nmem (GET_CODE (op));
+
default:
gcc_unreachable ();
}
@@ -9092,26 +9533,18 @@ shift_op (rtx op, HOST_WIDE_INT *amountp)
switch (code)
{
- case ASHIFT:
- mnem = "asl";
- break;
-
- case ASHIFTRT:
- mnem = "asr";
- break;
-
- case LSHIFTRT:
- mnem = "lsr";
- break;
-
case ROTATE:
gcc_assert (*amountp != -1);
*amountp = 32 - *amountp;
+ code = ROTATERT;
/* Fall through. */
+ case ASHIFT:
+ case ASHIFTRT:
+ case LSHIFTRT:
case ROTATERT:
- mnem = "ror";
+ mnem = arm_shift_nmem (code);
break;
case MULT:
@@ -9119,7 +9552,7 @@ shift_op (rtx op, HOST_WIDE_INT *amountp)
power of 2, since this case can never be reloaded from a reg. */
gcc_assert (*amountp != -1);
*amountp = int_log2 (*amountp);
- return "asl";
+ return ARM_LSL_NAME;
default:
gcc_unreachable ();
@@ -9129,7 +9562,7 @@ shift_op (rtx op, HOST_WIDE_INT *amountp)
{
/* This is not 100% correct, but follows from the desire to merge
multiplication by a power of 2 with the recognizer for a
- shift. >=32 is not a valid shift for "asl", so we must try and
+ shift. >=32 is not a valid shift for "lsl", so we must try and
output a shift that produces the correct arithmetical result.
Using lsr #32 is identical except for the fact that the carry bit
is not set correctly if we set the flags; but we never use the
@@ -9259,17 +9692,22 @@ arm_compute_save_reg0_reg12_mask (void)
}
else
{
+ /* In ARM mode we handle r11 (FP) as a special case. */
+ unsigned last_reg = TARGET_ARM ? 10 : 11;
+
/* In the normal case we only need to save those registers
which are call saved and which are used by this function. */
- for (reg = 0; reg <= 10; reg++)
+ for (reg = 0; reg <= last_reg; reg++)
if (regs_ever_live[reg] && ! call_used_regs [reg])
save_reg_mask |= (1 << reg);
/* Handle the frame pointer as a special case. */
- if (! TARGET_APCS_FRAME
- && ! frame_pointer_needed
- && regs_ever_live[HARD_FRAME_POINTER_REGNUM]
- && ! call_used_regs[HARD_FRAME_POINTER_REGNUM])
+ if (TARGET_THUMB2 && frame_pointer_needed)
+ save_reg_mask |= 1 << HARD_FRAME_POINTER_REGNUM;
+ else if (! TARGET_APCS_FRAME
+ && ! frame_pointer_needed
+ && regs_ever_live[HARD_FRAME_POINTER_REGNUM]
+ && ! call_used_regs[HARD_FRAME_POINTER_REGNUM])
save_reg_mask |= 1 << HARD_FRAME_POINTER_REGNUM;
/* If we aren't loading the PIC register,
@@ -9280,6 +9718,10 @@ arm_compute_save_reg0_reg12_mask (void)
&& (regs_ever_live[PIC_OFFSET_TABLE_REGNUM]
|| current_function_uses_pic_offset_table))
save_reg_mask |= 1 << PIC_OFFSET_TABLE_REGNUM;
+
+ /* The prologue will copy SP into R0, so save it. */
+ if (IS_STACKALIGN (func_type))
+ save_reg_mask |= 1;
}
/* Save registers so the exception handler can modify them. */
@@ -9299,6 +9741,7 @@ arm_compute_save_reg0_reg12_mask (void)
return save_reg_mask;
}
+
/* Compute a bit mask of which registers need to be
saved on the stack for the current function. */
@@ -9307,6 +9750,7 @@ arm_compute_save_reg_mask (void)
{
unsigned int save_reg_mask = 0;
unsigned long func_type = arm_current_func_type ();
+ unsigned int reg;
if (IS_NAKED (func_type))
/* This should never really happen. */
@@ -9314,7 +9758,7 @@ arm_compute_save_reg_mask (void)
/* If we are creating a stack frame, then we must save the frame pointer,
IP (which will hold the old stack pointer), LR and the PC. */
- if (frame_pointer_needed)
+ if (frame_pointer_needed && TARGET_ARM)
save_reg_mask |=
(1 << ARM_HARD_FRAME_POINTER_REGNUM)
| (1 << IP_REGNUM)
@@ -9351,8 +9795,6 @@ arm_compute_save_reg_mask (void)
&& ((bit_count (save_reg_mask)
+ ARM_NUM_INTS (current_function_pretend_args_size)) % 2) != 0)
{
- unsigned int reg;
-
/* The total number of registers that are going to be pushed
onto the stack is odd. We need to ensure that the stack
is 64-bit aligned before we start to save iWMMXt registers,
@@ -9375,6 +9817,16 @@ arm_compute_save_reg_mask (void)
}
}
+ /* We may need to push an additional register to use when initializing
+ the PIC base register. */
+ if (TARGET_THUMB2 && IS_NESTED (func_type) && flag_pic
+ && (save_reg_mask & THUMB2_WORK_REGS) == 0)
+ {
+ reg = thumb_find_work_register (1 << 4);
+ if (!call_used_regs[reg])
+ save_reg_mask |= (1 << reg);
+ }
+
return save_reg_mask;
}
@@ -9382,7 +9834,7 @@ arm_compute_save_reg_mask (void)
/* Compute a bit mask of which registers need to be
saved on the stack for the current function. */
static unsigned long
-thumb_compute_save_reg_mask (void)
+thumb1_compute_save_reg_mask (void)
{
unsigned long mask;
unsigned reg;
@@ -9578,7 +10030,7 @@ output_return_instruction (rtx operand, int really_return, int reverse)
stack_adjust = offsets->outgoing_args - offsets->saved_regs;
gcc_assert (stack_adjust == 0 || stack_adjust == 4);
- if (stack_adjust && arm_arch5)
+ if (stack_adjust && arm_arch5 && TARGET_ARM)
sprintf (instr, "ldm%sib\t%%|sp, {", conditional);
else
{
@@ -9643,6 +10095,7 @@ output_return_instruction (rtx operand, int really_return, int reverse)
{
case ARM_FT_ISR:
case ARM_FT_FIQ:
+ /* ??? This is wrong for unified assembly syntax. */
sprintf (instr, "sub%ss\t%%|pc, %%|lr, #4", conditional);
break;
@@ -9651,6 +10104,7 @@ output_return_instruction (rtx operand, int really_return, int reverse)
break;
case ARM_FT_EXCEPTION:
+ /* ??? This is wrong for unified assembly syntax. */
sprintf (instr, "mov%ss\t%%|pc, %%|lr", conditional);
break;
@@ -9718,9 +10172,9 @@ arm_output_function_prologue (FILE *f, HOST_WIDE_INT frame_size)
{
unsigned long func_type;
- if (!TARGET_ARM)
+ if (TARGET_THUMB1)
{
- thumb_output_function_prologue (f, frame_size);
+ thumb1_output_function_prologue (f, frame_size);
return;
}
@@ -9756,6 +10210,8 @@ arm_output_function_prologue (FILE *f, HOST_WIDE_INT frame_size)
if (IS_NESTED (func_type))
asm_fprintf (f, "\t%@ Nested: function declared inside another function.\n");
+ if (IS_STACKALIGN (func_type))
+ asm_fprintf (f, "\t%@ Stack Align: May be called with mis-aligned SP.\n");
asm_fprintf (f, "\t%@ args = %d, pretend = %d, frame = %wd\n",
current_function_args_size,
@@ -9834,7 +10290,7 @@ arm_output_epilogue (rtx sibling)
if (saved_regs_mask & (1 << reg))
floats_offset += 4;
- if (frame_pointer_needed)
+ if (frame_pointer_needed && TARGET_ARM)
{
/* This variable is for the Virtual Frame Pointer, not VFP regs. */
int vfp_offset = offsets->frame;
@@ -9971,22 +10427,40 @@ arm_output_epilogue (rtx sibling)
|| current_function_calls_alloca)
asm_fprintf (f, "\tsub\t%r, %r, #%d\n", SP_REGNUM, FP_REGNUM,
4 * bit_count (saved_regs_mask));
- print_multi_reg (f, "ldmfd\t%r", SP_REGNUM, saved_regs_mask);
+ print_multi_reg (f, "ldmfd\t%r, ", SP_REGNUM, saved_regs_mask);
if (IS_INTERRUPT (func_type))
/* Interrupt handlers will have pushed the
IP onto the stack, so restore it now. */
- print_multi_reg (f, "ldmfd\t%r!", SP_REGNUM, 1 << IP_REGNUM);
+ print_multi_reg (f, "ldmfd\t%r!, ", SP_REGNUM, 1 << IP_REGNUM);
}
else
{
+ HOST_WIDE_INT amount;
/* Restore stack pointer if necessary. */
- if (offsets->outgoing_args != offsets->saved_regs)
+ if (frame_pointer_needed)
{
- operands[0] = operands[1] = stack_pointer_rtx;
- operands[2] = GEN_INT (offsets->outgoing_args - offsets->saved_regs);
+ /* For Thumb-2 restore SP from the frame pointer.
+ Operand restrictions mean we have to increment FP, then copy
+ to SP. */
+ amount = offsets->locals_base - offsets->saved_regs;
+ operands[0] = hard_frame_pointer_rtx;
+ }
+ else
+ {
+ operands[0] = stack_pointer_rtx;
+ amount = offsets->outgoing_args - offsets->saved_regs;
+ }
+
+ if (amount)
+ {
+ operands[1] = operands[0];
+ operands[2] = GEN_INT (amount);
output_add_immediate (operands);
}
+ if (frame_pointer_needed)
+ asm_fprintf (f, "\tmov\t%r, %r\n",
+ SP_REGNUM, HARD_FRAME_POINTER_REGNUM);
if (arm_fpu_arch == FPUTYPE_FPA_EMU2)
{
@@ -10054,6 +10528,7 @@ arm_output_epilogue (rtx sibling)
/* If we can, restore the LR into the PC. */
if (ARM_FUNC_TYPE (func_type) == ARM_FT_NORMAL
+ && !IS_STACKALIGN (func_type)
&& really_return
&& current_function_pretend_args_size == 0
&& saved_regs_mask & (1 << LR_REGNUM)
@@ -10064,8 +10539,9 @@ arm_output_epilogue (rtx sibling)
}
/* Load the registers off the stack. If we only have one register
- to load use the LDR instruction - it is faster. */
- if (saved_regs_mask == (1 << LR_REGNUM))
+ to load use the LDR instruction - it is faster. For Thumb-2
+ always use pop and the assembler will pick the best instruction. */
+ if (TARGET_ARM && saved_regs_mask == (1 << LR_REGNUM))
{
asm_fprintf (f, "\tldr\t%r, [%r], #4\n", LR_REGNUM, SP_REGNUM);
}
@@ -10076,9 +10552,11 @@ arm_output_epilogue (rtx sibling)
(i.e. "ldmfd sp!..."). We know that the stack pointer is
in the list of registers and if we add writeback the
instruction becomes UNPREDICTABLE. */
- print_multi_reg (f, "ldmfd\t%r", SP_REGNUM, saved_regs_mask);
+ print_multi_reg (f, "ldmfd\t%r, ", SP_REGNUM, saved_regs_mask);
+ else if (TARGET_ARM)
+ print_multi_reg (f, "ldmfd\t%r!, ", SP_REGNUM, saved_regs_mask);
else
- print_multi_reg (f, "ldmfd\t%r!", SP_REGNUM, saved_regs_mask);
+ print_multi_reg (f, "pop\t", SP_REGNUM, saved_regs_mask);
}
if (current_function_pretend_args_size)
@@ -10116,6 +10594,11 @@ arm_output_epilogue (rtx sibling)
break;
default:
+ if (IS_STACKALIGN (func_type))
+ {
+ /* See comment in arm_expand_prologue. */
+ asm_fprintf (f, "\tmov\t%r, %r\n", SP_REGNUM, 0);
+ }
if (arm_arch5 || arm_arch4t)
asm_fprintf (f, "\tbx\t%r\n", LR_REGNUM);
else
@@ -10132,7 +10615,7 @@ arm_output_function_epilogue (FILE *file ATTRIBUTE_UNUSED,
{
arm_stack_offsets *offsets;
- if (TARGET_THUMB)
+ if (TARGET_THUMB1)
{
int regno;
@@ -10156,7 +10639,7 @@ arm_output_function_epilogue (FILE *file ATTRIBUTE_UNUSED,
RTL for it. This does not happen for inline functions. */
return_used_this_function = 0;
}
- else
+ else /* TARGET_32BIT */
{
/* We need to take into account any stack-frame rounding. */
offsets = arm_get_frame_offsets ();
@@ -10464,9 +10947,10 @@ arm_get_frame_offsets (void)
/* Space for variadic functions. */
offsets->saved_args = current_function_pretend_args_size;
+ /* In Thumb mode this is incorrect, but never used. */
offsets->frame = offsets->saved_args + (frame_pointer_needed ? 4 : 0);
- if (TARGET_ARM)
+ if (TARGET_32BIT)
{
unsigned int regno;
@@ -10499,9 +10983,9 @@ arm_get_frame_offsets (void)
saved += arm_get_vfp_saved_size ();
}
}
- else /* TARGET_THUMB */
+ else /* TARGET_THUMB1 */
{
- saved = bit_count (thumb_compute_save_reg_mask ()) * 4;
+ saved = bit_count (thumb1_compute_save_reg_mask ()) * 4;
if (TARGET_BACKTRACE)
saved += 16;
}
@@ -10618,11 +11102,132 @@ arm_compute_initial_elimination_offset (unsigned int from, unsigned int to)
}
-/* Generate the prologue instructions for entry into an ARM function. */
+/* Emit RTL to save coprocessor registers on function entry. Returns the
+ number of bytes pushed. */
+
+static int
+arm_save_coproc_regs (void)
+{
+ int saved_size = 0;
+ unsigned reg;
+ unsigned start_reg;
+ rtx insn;
+
+ for (reg = LAST_IWMMXT_REGNUM; reg >= FIRST_IWMMXT_REGNUM; reg--)
+ if (regs_ever_live[reg] && ! call_used_regs [reg])
+ {
+ insn = gen_rtx_PRE_DEC (V2SImode, stack_pointer_rtx);
+ insn = gen_rtx_MEM (V2SImode, insn);
+ insn = emit_set_insn (insn, gen_rtx_REG (V2SImode, reg));
+ RTX_FRAME_RELATED_P (insn) = 1;
+ saved_size += 8;
+ }
+
+ /* Save any floating point call-saved registers used by this
+ function. */
+ if (arm_fpu_arch == FPUTYPE_FPA_EMU2)
+ {
+ for (reg = LAST_FPA_REGNUM; reg >= FIRST_FPA_REGNUM; reg--)
+ if (regs_ever_live[reg] && !call_used_regs[reg])
+ {
+ insn = gen_rtx_PRE_DEC (XFmode, stack_pointer_rtx);
+ insn = gen_rtx_MEM (XFmode, insn);
+ insn = emit_set_insn (insn, gen_rtx_REG (XFmode, reg));
+ RTX_FRAME_RELATED_P (insn) = 1;
+ saved_size += 12;
+ }
+ }
+ else
+ {
+ start_reg = LAST_FPA_REGNUM;
+
+ for (reg = LAST_FPA_REGNUM; reg >= FIRST_FPA_REGNUM; reg--)
+ {
+ if (regs_ever_live[reg] && !call_used_regs[reg])
+ {
+ if (start_reg - reg == 3)
+ {
+ insn = emit_sfm (reg, 4);
+ RTX_FRAME_RELATED_P (insn) = 1;
+ saved_size += 48;
+ start_reg = reg - 1;
+ }
+ }
+ else
+ {
+ if (start_reg != reg)
+ {
+ insn = emit_sfm (reg + 1, start_reg - reg);
+ RTX_FRAME_RELATED_P (insn) = 1;
+ saved_size += (start_reg - reg) * 12;
+ }
+ start_reg = reg - 1;
+ }
+ }
+
+ if (start_reg != reg)
+ {
+ insn = emit_sfm (reg + 1, start_reg - reg);
+ saved_size += (start_reg - reg) * 12;
+ RTX_FRAME_RELATED_P (insn) = 1;
+ }
+ }
+ if (TARGET_HARD_FLOAT && TARGET_VFP)
+ {
+ start_reg = FIRST_VFP_REGNUM;
+
+ for (reg = FIRST_VFP_REGNUM; reg < LAST_VFP_REGNUM; reg += 2)
+ {
+ if ((!regs_ever_live[reg] || call_used_regs[reg])
+ && (!regs_ever_live[reg + 1] || call_used_regs[reg + 1]))
+ {
+ if (start_reg != reg)
+ saved_size += vfp_emit_fstmd (start_reg,
+ (reg - start_reg) / 2);
+ start_reg = reg + 2;
+ }
+ }
+ if (start_reg != reg)
+ saved_size += vfp_emit_fstmd (start_reg,
+ (reg - start_reg) / 2);
+ }
+ return saved_size;
+}
+
+
+/* Set the Thumb frame pointer from the stack pointer. */
+
+static void
+thumb_set_frame_pointer (arm_stack_offsets *offsets)
+{
+ HOST_WIDE_INT amount;
+ rtx insn, dwarf;
+
+ amount = offsets->outgoing_args - offsets->locals_base;
+ if (amount < 1024)
+ insn = emit_insn (gen_addsi3 (hard_frame_pointer_rtx,
+ stack_pointer_rtx, GEN_INT (amount)));
+ else
+ {
+ emit_insn (gen_movsi (hard_frame_pointer_rtx, GEN_INT (amount)));
+ insn = emit_insn (gen_addsi3 (hard_frame_pointer_rtx,
+ hard_frame_pointer_rtx,
+ stack_pointer_rtx));
+ dwarf = gen_rtx_SET (VOIDmode, hard_frame_pointer_rtx,
+ plus_constant (stack_pointer_rtx, amount));
+ RTX_FRAME_RELATED_P (dwarf) = 1;
+ REG_NOTES (insn) = gen_rtx_EXPR_LIST (REG_FRAME_RELATED_EXPR, dwarf,
+ REG_NOTES (insn));
+ }
+
+ RTX_FRAME_RELATED_P (insn) = 1;
+}
+
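As an illustration (assuming the Thumb hard frame pointer is r7), a small frame takes the single-add path while a large one routes the constant through the frame pointer first:

    add   r7, sp, #16         @ amount < 1024: single add

    mov   r7, #4096           @ amount >= 1024: load the constant,
    add   r7, r7, sp          @ then add SP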
+/* Generate the prologue instructions for entry into an ARM or Thumb-2
+ function. */
void
arm_expand_prologue (void)
{
- int reg;
rtx amount;
rtx insn;
rtx ip_rtx;
@@ -10648,7 +11253,38 @@ arm_expand_prologue (void)
ip_rtx = gen_rtx_REG (SImode, IP_REGNUM);
- if (frame_pointer_needed)
+ if (IS_STACKALIGN (func_type))
+ {
+ rtx dwarf;
+ rtx r0;
+ rtx r1;
+ /* Handle a word-aligned stack pointer. We generate the following:
+
+ mov r0, sp
+ bic r1, r0, #7
+ mov sp, r1
+ <save and restore r0 in normal prologue/epilogue>
+ mov sp, r0
+ bx lr
+
+ The unwinder doesn't need to know about the stack realignment.
+ Just tell it we saved SP in r0. */
+ gcc_assert (TARGET_THUMB2 && !arm_arch_notm && args_to_push == 0);
+
+ r0 = gen_rtx_REG (SImode, 0);
+ r1 = gen_rtx_REG (SImode, 1);
+ dwarf = gen_rtx_UNSPEC (SImode, NULL_RTVEC, UNSPEC_STACK_ALIGN);
+ dwarf = gen_rtx_SET (VOIDmode, r0, dwarf);
+ insn = gen_movsi (r0, stack_pointer_rtx);
+ RTX_FRAME_RELATED_P (insn) = 1;
+ REG_NOTES (insn) = gen_rtx_EXPR_LIST (REG_FRAME_RELATED_EXPR,
+ dwarf, REG_NOTES (insn));
+ emit_insn (insn);
+ emit_insn (gen_andsi3 (r1, r0, GEN_INT (~(HOST_WIDE_INT)7)));
+ emit_insn (gen_movsi (stack_pointer_rtx, r1));
+ }
+
+ if (frame_pointer_needed && TARGET_ARM)
{
if (IS_INTERRUPT (func_type))
{
@@ -10767,113 +11403,32 @@ arm_expand_prologue (void)
RTX_FRAME_RELATED_P (insn) = 1;
}
- if (TARGET_IWMMXT)
- for (reg = LAST_IWMMXT_REGNUM; reg >= FIRST_IWMMXT_REGNUM; reg--)
- if (regs_ever_live[reg] && ! call_used_regs [reg])
- {
- insn = gen_rtx_PRE_DEC (V2SImode, stack_pointer_rtx);
- insn = gen_frame_mem (V2SImode, insn);
- insn = emit_set_insn (insn, gen_rtx_REG (V2SImode, reg));
- RTX_FRAME_RELATED_P (insn) = 1;
- saved_regs += 8;
- }
-
if (! IS_VOLATILE (func_type))
- {
- int start_reg;
-
- /* Save any floating point call-saved registers used by this
- function. */
- if (arm_fpu_arch == FPUTYPE_FPA_EMU2)
- {
- for (reg = LAST_FPA_REGNUM; reg >= FIRST_FPA_REGNUM; reg--)
- if (regs_ever_live[reg] && !call_used_regs[reg])
- {
- insn = gen_rtx_PRE_DEC (XFmode, stack_pointer_rtx);
- insn = gen_frame_mem (XFmode, insn);
- insn = emit_set_insn (insn, gen_rtx_REG (XFmode, reg));
- RTX_FRAME_RELATED_P (insn) = 1;
- saved_regs += 12;
- }
- }
- else
- {
- start_reg = LAST_FPA_REGNUM;
+ saved_regs += arm_save_coproc_regs ();
- for (reg = LAST_FPA_REGNUM; reg >= FIRST_FPA_REGNUM; reg--)
- {
- if (regs_ever_live[reg] && !call_used_regs[reg])
- {
- if (start_reg - reg == 3)
- {
- insn = emit_sfm (reg, 4);
- RTX_FRAME_RELATED_P (insn) = 1;
- saved_regs += 48;
- start_reg = reg - 1;
- }
- }
- else
- {
- if (start_reg != reg)
- {
- insn = emit_sfm (reg + 1, start_reg - reg);
- RTX_FRAME_RELATED_P (insn) = 1;
- saved_regs += (start_reg - reg) * 12;
- }
- start_reg = reg - 1;
- }
- }
-
- if (start_reg != reg)
- {
- insn = emit_sfm (reg + 1, start_reg - reg);
- saved_regs += (start_reg - reg) * 12;
- RTX_FRAME_RELATED_P (insn) = 1;
- }
- }
- if (TARGET_HARD_FLOAT && TARGET_VFP)
+ if (frame_pointer_needed && TARGET_ARM)
+ {
+ /* Create the new frame pointer. */
{
- start_reg = FIRST_VFP_REGNUM;
+ insn = GEN_INT (-(4 + args_to_push + fp_offset));
+ insn = emit_insn (gen_addsi3 (hard_frame_pointer_rtx, ip_rtx, insn));
+ RTX_FRAME_RELATED_P (insn) = 1;
- for (reg = FIRST_VFP_REGNUM; reg < LAST_VFP_REGNUM; reg += 2)
+ if (IS_NESTED (func_type))
{
- if ((!regs_ever_live[reg] || call_used_regs[reg])
- && (!regs_ever_live[reg + 1] || call_used_regs[reg + 1]))
+ /* Recover the static chain register. */
+ if (regs_ever_live [3] == 0
+ || saved_pretend_args)
+ insn = gen_rtx_REG (SImode, 3);
+ else /* if (current_function_pretend_args_size == 0) */
{
- if (start_reg != reg)
- saved_regs += vfp_emit_fstmd (start_reg,
- (reg - start_reg) / 2);
- start_reg = reg + 2;
+ insn = plus_constant (hard_frame_pointer_rtx, 4);
+ insn = gen_frame_mem (SImode, insn);
}
+ emit_set_insn (ip_rtx, insn);
+ /* Add a USE to stop propagate_one_insn() from barfing. */
+ emit_insn (gen_prologue_use (ip_rtx));
}
- if (start_reg != reg)
- saved_regs += vfp_emit_fstmd (start_reg,
- (reg - start_reg) / 2);
- }
- }
-
- if (frame_pointer_needed)
- {
- /* Create the new frame pointer. */
- insn = GEN_INT (-(4 + args_to_push + fp_offset));
- insn = emit_insn (gen_addsi3 (hard_frame_pointer_rtx, ip_rtx, insn));
- RTX_FRAME_RELATED_P (insn) = 1;
-
- if (IS_NESTED (func_type))
- {
- /* Recover the static chain register. */
- if (regs_ever_live [3] == 0
- || saved_pretend_args)
- insn = gen_rtx_REG (SImode, 3);
- else /* if (current_function_pretend_args_size == 0) */
- {
- insn = plus_constant (hard_frame_pointer_rtx, 4);
- insn = gen_frame_mem (SImode, insn);
- }
-
- emit_set_insn (ip_rtx, insn);
- /* Add a USE to stop propagate_one_insn() from barfing. */
- emit_insn (gen_prologue_use (ip_rtx));
}
}
@@ -10905,8 +11460,19 @@ arm_expand_prologue (void)
}
+ if (frame_pointer_needed && TARGET_THUMB2)
+ thumb_set_frame_pointer (offsets);
+
if (flag_pic && arm_pic_register != INVALID_REGNUM)
- arm_load_pic_register (0UL);
+ {
+ unsigned long mask;
+
+ mask = live_regs_mask;
+ mask &= THUMB2_WORK_REGS;
+ if (!IS_NESTED (func_type))
+ mask |= (1 << IP_REGNUM);
+ arm_load_pic_register (mask);
+ }
/* If we are profiling, make sure no instructions are scheduled before
the call to mcount. Similarly if the user has requested no
@@ -10926,6 +11492,43 @@ arm_expand_prologue (void)
}
}
+/* Print condition code to STREAM. Helper function for arm_print_operand. */
+static void
+arm_print_condition (FILE *stream)
+{
+ if (arm_ccfsm_state == 3 || arm_ccfsm_state == 4)
+ {
+ /* Branch conversion is not implemented for Thumb-2. */
+ if (TARGET_THUMB)
+ {
+ output_operand_lossage ("predicated Thumb instruction");
+ return;
+ }
+ if (current_insn_predicate != NULL)
+ {
+ output_operand_lossage
+ ("predicated instruction in conditional sequence");
+ return;
+ }
+
+ fputs (arm_condition_codes[arm_current_cc], stream);
+ }
+ else if (current_insn_predicate)
+ {
+ enum arm_cond_code code;
+
+ if (TARGET_THUMB1)
+ {
+ output_operand_lossage ("predicated Thumb instruction");
+ return;
+ }
+
+ code = get_arm_condition_code (current_insn_predicate);
+ fputs (arm_condition_codes[code], stream);
+ }
+}
+
+
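These escapes let a single output pattern serve both assembler dialects. Assuming the current condition is "eq" (illustrative), patterns used elsewhere in this patch print as:

    pattern         divided syntax    unified syntax
    "ldm%(ia%)"     ldmeqia           ldmiaeq
    "add%."         addeqs            addseq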
/* If CODE is 'd', then the X is a condition operand and the instruction
should only be executed if the condition is true.
if CODE is 'D', then the X is a condition operand and the instruction
@@ -10957,37 +11560,46 @@ arm_print_operand (FILE *stream, rtx x, int code)
return;
case '?':
- if (arm_ccfsm_state == 3 || arm_ccfsm_state == 4)
- {
- if (TARGET_THUMB)
- {
- output_operand_lossage ("predicated Thumb instruction");
- break;
- }
- if (current_insn_predicate != NULL)
- {
- output_operand_lossage
- ("predicated instruction in conditional sequence");
- break;
- }
+ arm_print_condition (stream);
+ return;
+
+ case '(':
+ /* Nothing in unified syntax, otherwise the current condition code. */
+ if (!TARGET_UNIFIED_ASM)
+ arm_print_condition (stream);
+ break;
- fputs (arm_condition_codes[arm_current_cc], stream);
+ case ')':
+ /* The current condition code in unified syntax, otherwise nothing. */
+ if (TARGET_UNIFIED_ASM)
+ arm_print_condition (stream);
+ break;
+
+ case '.':
+ /* The current condition code for a condition code setting instruction.
+ Preceded by 's' in unified syntax, otherwise followed by 's'. */
+ if (TARGET_UNIFIED_ASM)
+ {
+ fputc ('s', stream);
+ arm_print_condition (stream);
}
- else if (current_insn_predicate)
+ else
{
- enum arm_cond_code code;
-
- if (TARGET_THUMB)
- {
- output_operand_lossage ("predicated Thumb instruction");
- break;
- }
-
- code = get_arm_condition_code (current_insn_predicate);
- fputs (arm_condition_codes[code], stream);
+ arm_print_condition (stream);
+ fputc ('s', stream);
}
return;
+ case '!':
+ /* If the instruction is conditionally executed then print
+ the current condition code, otherwise print 's'. */
+ gcc_assert (TARGET_THUMB2 && TARGET_UNIFIED_ASM);
+ if (current_insn_predicate)
+ arm_print_condition (stream);
+ else
+ fputc ('s', stream);
+ break;
+
case 'N':
{
REAL_VALUE_TYPE r;
@@ -11011,6 +11623,11 @@ arm_print_operand (FILE *stream, rtx x, int code)
}
return;
+ case 'L':
+ /* The low 16 bits of an immediate constant. */
+ fprintf (stream, HOST_WIDE_INT_PRINT_DEC, INTVAL (x) & 0xffff);
+ return;
+
case 'i':
fprintf (stream, "%s", arithmetic_instr (x, 1));
return;
@@ -11419,6 +12036,13 @@ arm_elf_asm_constructor (rtx symbol, int priority)
time. But then, I want to reduce the code size to somewhere near what
/bin/cc produces. */
+/* In addition to this, state is maintained for Thumb-2 COND_EXEC
+ instructions. When a COND_EXEC instruction is seen the subsequent
+ instructions are scanned so that multiple conditional instructions can be
+ combined into a single IT block. arm_condexec_count and arm_condexec_mask
+ specify the length and true/false mask for the IT block. These will be
+ decremented/zeroed by arm_asm_output_opcode as the insns are output. */
+
/* Returns the index of the ARM condition code string in
`arm_condition_codes'. COMPARISON should be an rtx like
`(eq (...) (...))'. */
@@ -11548,6 +12172,88 @@ get_arm_condition_code (rtx comparison)
}
}
+/* Tell arm_asm_output_opcode to output IT blocks for conditionally executed
+ instructions. */
+void
+thumb2_final_prescan_insn (rtx insn)
+{
+ rtx first_insn = insn;
+ rtx body = PATTERN (insn);
+ rtx predicate;
+ enum arm_cond_code code;
+ int n;
+ int mask;
+
+ /* Remove the previous insn from the count of insns to be output. */
+ if (arm_condexec_count)
+ arm_condexec_count--;
+
+ /* Nothing to do if we are already inside a conditional block. */
+ if (arm_condexec_count)
+ return;
+
+ if (GET_CODE (body) != COND_EXEC)
+ return;
+
+ /* Conditional jumps are implemented directly. */
+ if (GET_CODE (insn) == JUMP_INSN)
+ return;
+
+ predicate = COND_EXEC_TEST (body);
+ arm_current_cc = get_arm_condition_code (predicate);
+
+ n = get_attr_ce_count (insn);
+ arm_condexec_count = 1;
+ arm_condexec_mask = (1 << n) - 1;
+ arm_condexec_masklen = n;
+ /* See if subsequent instructions can be combined into the same block. */
+ for (;;)
+ {
+ insn = next_nonnote_insn (insn);
+
+ /* Jumping into the middle of an IT block is illegal, so a label or
+ barrier terminates the block. */
+ if (GET_CODE (insn) != INSN && GET_CODE (insn) != JUMP_INSN)
+ break;
+
+ body = PATTERN (insn);
+ /* USE and CLOBBER aren't really insns, so just skip them. */
+ if (GET_CODE (body) == USE
+ || GET_CODE (body) == CLOBBER)
+ {
+ arm_condexec_count++;
+ continue;
+ }
+
+ /* ??? Recognise conditional jumps, and combine them with IT blocks. */
+ if (GET_CODE (body) != COND_EXEC)
+ break;
+ /* Allow up to 4 conditionally executed instructions in a block. */
+ n = get_attr_ce_count (insn);
+ if (arm_condexec_masklen + n > 4)
+ break;
+
+ predicate = COND_EXEC_TEST (body);
+ code = get_arm_condition_code (predicate);
+ mask = (1 << n) - 1;
+ if (arm_current_cc == code)
+ arm_condexec_mask |= (mask << arm_condexec_masklen);
+ else if (arm_current_cc != ARM_INVERSE_CONDITION_CODE (code))
+ break;
+
+ arm_condexec_count++;
+ arm_condexec_masklen += n;
+
+ /* A jump must be the last instruction in a conditional block. */
+ if (GET_CODE (insn) == JUMP_INSN)
+ break;
+ }
+ /* Restore recog_data (getting the attributes of other insns can
+ destroy this array, but final.c assumes that it remains intact
+ across this call). */
+ extract_constrain_insn_cached (first_insn);
+}
+
void
arm_final_prescan_insn (rtx insn)
{
@@ -11852,7 +12558,7 @@ arm_final_prescan_insn (rtx insn)
if (!this_insn)
{
/* Oh, dear! we ran off the end.. give up. */
- recog (PATTERN (insn), insn, NULL);
+ extract_constrain_insn_cached (insn);
arm_ccfsm_state = 0;
arm_target_insn = NULL;
return;
@@ -11885,9 +12591,26 @@ arm_final_prescan_insn (rtx insn)
/* Restore recog_data (getting the attributes of other insns can
destroy this array, but final.c assumes that it remains intact
- across this call; since the insn has been recognized already we
- call recog direct). */
- recog (PATTERN (insn), insn, NULL);
+ across this call). */
+ extract_constrain_insn_cached (insn);
+ }
+}
+
+/* Output IT instructions. */
+void
+thumb2_asm_output_opcode (FILE * stream)
+{
+ char buff[5];
+ int n;
+
+ if (arm_condexec_mask)
+ {
+ for (n = 0; n < arm_condexec_masklen; n++)
+ buff[n] = (arm_condexec_mask & (1 << n)) ? 't' : 'e';
+ buff[n] = 0;
+ asm_fprintf (stream, "i%s\t%s\n\t", buff,
+ arm_condition_codes[arm_current_cc]);
+ arm_condexec_mask = 0;
}
}
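Taken together with thumb2_final_prescan_insn, a hypothetical block of three conditional instructions (eq, eq, ne) gives arm_condexec_masklen == 3 and arm_condexec_mask == 0b011, so the hook above emits:

    itte    eq
    addeq   r0, r0, #1        @ mask bit 0 set   -> 't'
    addeq   r1, r1, #1        @ mask bit 1 set   -> 't'
    movne   r2, #0            @ mask bit 2 clear -> 'e'

Only the IT line comes from this function; the mnemonics below it are illustrative.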
@@ -11901,7 +12624,7 @@ arm_hard_regno_mode_ok (unsigned int regno, enum machine_mode mode)
|| (TARGET_HARD_FLOAT && TARGET_VFP
&& regno == VFPCC_REGNUM));
- if (TARGET_THUMB)
+ if (TARGET_THUMB1)
/* For the Thumb we only allow values bigger than SImode in
registers 0 - 6, so that there is always a second low
register available to hold the upper part of the value.
@@ -11958,10 +12681,12 @@ arm_hard_regno_mode_ok (unsigned int regno, enum machine_mode mode)
&& regno <= LAST_FPA_REGNUM);
}
+/* For efficiency and historical reasons LO_REGS, HI_REGS and CC_REGS are
+ not used in ARM mode. */
int
arm_regno_class (int regno)
{
- if (TARGET_THUMB)
+ if (TARGET_THUMB1)
{
if (regno == STACK_POINTER_REGNUM)
return STACK_REG;
@@ -11972,13 +12697,16 @@ arm_regno_class (int regno)
return HI_REGS;
}
+ if (TARGET_THUMB2 && regno < 8)
+ return LO_REGS;
+
if ( regno <= LAST_ARM_REGNUM
|| regno == FRAME_POINTER_REGNUM
|| regno == ARG_POINTER_REGNUM)
- return GENERAL_REGS;
+ return TARGET_THUMB2 ? HI_REGS : GENERAL_REGS;
if (regno == CC_REGNUM || regno == VFPCC_REGNUM)
- return NO_REGS;
+ return TARGET_THUMB2 ? CC_REG : NO_REGS;
if (IS_CIRRUS_REGNUM (regno))
return CIRRUS_REGS;
@@ -12017,6 +12745,7 @@ arm_debugger_arg_offset (int value, rtx addr)
/* If we are using the stack pointer to point at the
argument, then an offset of 0 is correct. */
+ /* ??? Check this is consistent with thumb2 frame layout. */
if ((TARGET_THUMB || !frame_pointer_needed)
&& REGNO (addr) == SP_REGNUM)
return 0;
@@ -13293,7 +14022,7 @@ thumb_exit (FILE *f, int reg_containing_return_addr)
void
-thumb_final_prescan_insn (rtx insn)
+thumb1_final_prescan_insn (rtx insn)
{
if (flag_print_asm_name)
asm_fprintf (asm_out_file, "%@ 0x%04x\n",
@@ -13420,7 +14149,7 @@ thumb_unexpanded_epilogue (void)
if (IS_NAKED (arm_current_func_type ()))
return "";
- live_regs_mask = thumb_compute_save_reg_mask ();
+ live_regs_mask = thumb1_compute_save_reg_mask ();
high_regs_pushed = bit_count (live_regs_mask & 0x0f00);
/* If we can deduce the registers used from the function's return value.
@@ -13658,10 +14387,9 @@ thumb_compute_initial_elimination_offset (unsigned int from, unsigned int to)
}
}
-
/* Generate the rest of a function's prologue. */
void
-thumb_expand_prologue (void)
+thumb1_expand_prologue (void)
{
rtx insn, dwarf;
@@ -13683,7 +14411,7 @@ thumb_expand_prologue (void)
return;
}
- live_regs_mask = thumb_compute_save_reg_mask ();
+ live_regs_mask = thumb1_compute_save_reg_mask ();
/* Load the pic register before setting the frame pointer,
so we can use r7 as a temporary work register. */
if (flag_pic && arm_pic_register != INVALID_REGNUM)
@@ -13782,27 +14510,7 @@ thumb_expand_prologue (void)
}
if (frame_pointer_needed)
- {
- amount = offsets->outgoing_args - offsets->locals_base;
-
- if (amount < 1024)
- insn = emit_insn (gen_addsi3 (hard_frame_pointer_rtx,
- stack_pointer_rtx, GEN_INT (amount)));
- else
- {
- emit_insn (gen_movsi (hard_frame_pointer_rtx, GEN_INT (amount)));
- insn = emit_insn (gen_addsi3 (hard_frame_pointer_rtx,
- hard_frame_pointer_rtx,
- stack_pointer_rtx));
- dwarf = gen_rtx_SET (VOIDmode, hard_frame_pointer_rtx,
- plus_constant (stack_pointer_rtx, amount));
- RTX_FRAME_RELATED_P (dwarf) = 1;
- REG_NOTES (insn) = gen_rtx_EXPR_LIST (REG_FRAME_RELATED_EXPR, dwarf,
- REG_NOTES (insn));
- }
-
- RTX_FRAME_RELATED_P (insn) = 1;
- }
+ thumb_set_frame_pointer (offsets);
/* If we are profiling, make sure no instructions are scheduled before
the call to mcount. Similarly if the user has requested no
@@ -13825,7 +14533,7 @@ thumb_expand_prologue (void)
void
-thumb_expand_epilogue (void)
+thumb1_expand_epilogue (void)
{
HOST_WIDE_INT amount;
arm_stack_offsets *offsets;
@@ -13877,7 +14585,7 @@ thumb_expand_epilogue (void)
}
static void
-thumb_output_function_prologue (FILE *f, HOST_WIDE_INT size ATTRIBUTE_UNUSED)
+thumb1_output_function_prologue (FILE *f, HOST_WIDE_INT size ATTRIBUTE_UNUSED)
{
unsigned long live_regs_mask = 0;
unsigned long l_mask;
@@ -13963,7 +14671,7 @@ thumb_output_function_prologue (FILE *f, HOST_WIDE_INT size ATTRIBUTE_UNUSED)
}
/* Get the registers we are going to push. */
- live_regs_mask = thumb_compute_save_reg_mask ();
+ live_regs_mask = thumb1_compute_save_reg_mask ();
/* Extract a mask of the ones we can give to the Thumb's push instruction. */
l_mask = live_regs_mask & 0x40ff;
/* Then count how many other high registers will need to be pushed. */
@@ -14428,6 +15136,9 @@ arm_file_start (void)
{
int val;
+ if (TARGET_UNIFIED_ASM)
+ asm_fprintf (asm_out_file, "\t.syntax unified\n");
+
if (TARGET_BPABI)
{
const char *fpu_name;
@@ -14846,7 +15557,8 @@ arm_output_mi_thunk (FILE *file, tree thunk ATTRIBUTE_UNUSED,
? 1 : 0);
if (mi_delta < 0)
mi_delta = - mi_delta;
- if (TARGET_THUMB)
+ /* When generating 16-bit Thumb code, thunks are entered in ARM mode. */
+ if (TARGET_THUMB1)
{
int labelno = thunk_label++;
ASM_GENERATE_INTERNAL_LABEL (label, "LTHUMBFUNC", labelno);
@@ -14871,6 +15583,7 @@ arm_output_mi_thunk (FILE *file, tree thunk ATTRIBUTE_UNUSED,
fputs ("\tadd\tr12, pc, r12\n", file);
}
}
+ /* TODO: Use movw/movt for large constants when available. */
while (mi_delta != 0)
{
if ((mi_delta & (3 << shift)) == 0)
@@ -14884,7 +15597,7 @@ arm_output_mi_thunk (FILE *file, tree thunk ATTRIBUTE_UNUSED,
shift += 8;
}
}
- if (TARGET_THUMB)
+ if (TARGET_THUMB1)
{
fprintf (file, "\tbx\tr12\n");
ASM_OUTPUT_ALIGN (file, 2);
@@ -15273,22 +15986,26 @@ thumb_set_return_address (rtx source, rtx scratch)
{
arm_stack_offsets *offsets;
HOST_WIDE_INT delta;
+ HOST_WIDE_INT limit;
int reg;
rtx addr;
unsigned long mask;
emit_insn (gen_rtx_USE (VOIDmode, source));
- mask = thumb_compute_save_reg_mask ();
+ mask = thumb1_compute_save_reg_mask ();
if (mask & (1 << LR_REGNUM))
{
offsets = arm_get_frame_offsets ();
+ limit = 1024;
/* Find the saved regs. */
if (frame_pointer_needed)
{
delta = offsets->soft_frame - offsets->saved_args;
reg = THUMB_HARD_FRAME_POINTER_REGNUM;
+ if (TARGET_THUMB1)
+ limit = 128;
}
else
{
@@ -15296,15 +16013,14 @@ thumb_set_return_address (rtx source, rtx scratch)
reg = SP_REGNUM;
}
/* Allow for the stack frame. */
- if (TARGET_BACKTRACE)
+ if (TARGET_THUMB1 && TARGET_BACKTRACE)
delta -= 16;
/* The link register is always the first saved register. */
delta -= 4;
/* Construct the address. */
addr = gen_rtx_REG (SImode, reg);
- if ((reg != SP_REGNUM && delta >= 128)
- || delta >= 1024)
+ if (delta > limit)
{
emit_insn (gen_movsi (scratch, GEN_INT (delta)));
emit_insn (gen_addsi3 (scratch, scratch, stack_pointer_rtx));
@@ -15370,12 +16086,13 @@ arm_dbx_register_number (unsigned int regno)
#ifdef TARGET_UNWIND_INFO
-/* Emit unwind directives for a store-multiple instruction. This should
- only ever be generated by the function prologue code, so we expect it
- to have a particular form. */
+/* Emit unwind directives for a store-multiple instruction or stack pointer
+ push during alignment.
+ These should only ever be generated by the function prologue code, so
+ we expect them to have a particular form. */
static void
-arm_unwind_emit_stm (FILE * asm_out_file, rtx p)
+arm_unwind_emit_sequence (FILE * asm_out_file, rtx p)
{
int i;
HOST_WIDE_INT offset;
@@ -15385,8 +16102,11 @@ arm_unwind_emit_stm (FILE * asm_out_file, rtx p)
unsigned lastreg;
rtx e;
- /* First insn will adjust the stack pointer. */
e = XVECEXP (p, 0, 0);
+ if (GET_CODE (e) != SET)
+ abort ();
+
+ /* First insn will adjust the stack pointer. */
if (GET_CODE (e) != SET
|| GET_CODE (XEXP (e, 0)) != REG
|| REGNO (XEXP (e, 0)) != SP_REGNUM
@@ -15483,6 +16203,7 @@ arm_unwind_emit_set (FILE * asm_out_file, rtx p)
{
rtx e0;
rtx e1;
+ unsigned reg;
e0 = XEXP (p, 0);
e1 = XEXP (p, 1);
@@ -15519,7 +16240,6 @@ arm_unwind_emit_set (FILE * asm_out_file, rtx p)
else if (REGNO (e0) == HARD_FRAME_POINTER_REGNUM)
{
HOST_WIDE_INT offset;
- unsigned reg;
if (GET_CODE (e1) == PLUS)
{
@@ -15555,6 +16275,13 @@ arm_unwind_emit_set (FILE * asm_out_file, rtx p)
asm_fprintf (asm_out_file, "\t.movsp %r, #%d\n",
REGNO (e0), (int)INTVAL(XEXP (e1, 1)));
}
+ else if (GET_CODE (e1) == UNSPEC && XINT (e1, 1) == UNSPEC_STACK_ALIGN)
+ {
+ /* Stack pointer save before alignment. */
+ reg = REGNO (e0);
+ asm_fprintf (asm_out_file, "\t.unwind_raw 0, 0x%x @ vsp = r%d\n",
+ reg + 0x90, reg);
+ }
else
abort ();
break;
@@ -15592,7 +16319,7 @@ arm_unwind_emit (FILE * asm_out_file, rtx insn)
case SEQUENCE:
/* Store multiple. */
- arm_unwind_emit_stm (asm_out_file, pat);
+ arm_unwind_emit_sequence (asm_out_file, pat);
break;
default:
@@ -15620,6 +16347,30 @@ arm_output_ttype (rtx x)
#endif /* TARGET_UNWIND_INFO */
+/* Handle UNSPEC DWARF call frame instructions. These are needed for dynamic
+ stack alignment. */
+
+static void
+arm_dwarf_handle_frame_unspec (const char *label, rtx pattern, int index)
+{
+ rtx unspec = SET_SRC (pattern);
+ gcc_assert (GET_CODE (unspec) == UNSPEC);
+
+ switch (index)
+ {
+ case UNSPEC_STACK_ALIGN:
+ /* ??? We should set the CFA = (SP & ~7). At this point we haven't
+ put anything on the stack, so hopefully it won't matter.
+ CFA = SP will be correct after alignment. */
+ dwarf2out_reg_save_reg (label, stack_pointer_rtx,
+ SET_DEST (pattern));
+ break;
+ default:
+ gcc_unreachable ();
+ }
+}
+
+
/* Output unwind directives for the start/end of a function. */
void
@@ -15705,4 +16456,71 @@ arm_output_addr_const_extra (FILE *fp, rtx x)
return FALSE;
}
+/* Output assembly for a shift instruction.
+ SET_FLAGS determines how the instruction modifies the condition codes.
+ 0 - Do not set condition codes.
+ 1 - Set condition codes.
+ 2 - Use smallest instruction. */
+const char *
+arm_output_shift (rtx *operands, int set_flags)
+{
+ char pattern[100];
+ static const char flag_chars[3] = {'?', '.', '!'};
+ const char *shift;
+ HOST_WIDE_INT val;
+ char c;
+
+ c = flag_chars[set_flags];
+ if (TARGET_UNIFIED_ASM)
+ {
+ shift = shift_op (operands[3], &val);
+ if (shift)
+ {
+ if (val != -1)
+ operands[2] = GEN_INT (val);
+ sprintf (pattern, "%s%%%c\t%%0, %%1, %%2", shift, c);
+ }
+ else
+ sprintf (pattern, "mov%%%c\t%%0, %%1", c);
+ }
+ else
+ sprintf (pattern, "mov%%%c\t%%0, %%1%%S3", c);
+ output_asm_insn (pattern, operands);
+ return "";
+}
+
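As an illustration (hypothetical operands, unified syntax, shift by a constant), the three SET_FLAGS values select:

    set_flags == 0:  "lsl%?"  ->  lsl    r0, r1, #2
    set_flags == 1:  "lsl%."  ->  lsls   r0, r1, #2
    set_flags == 2:  "lsl%!"  ->  lsls   r0, r1, #2 outside an IT block,
                                  or the conditional lsl form inside one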
+/* Output a Thumb-2 casesi instruction. */
+const char *
+thumb2_output_casesi (rtx *operands)
+{
+ rtx diff_vec = PATTERN (next_real_insn (operands[2]));
+
+ gcc_assert (GET_CODE (diff_vec) == ADDR_DIFF_VEC);
+
+ output_asm_insn ("cmp\t%0, %1", operands);
+ output_asm_insn ("bhi\t%l3", operands);
+ switch (GET_MODE (diff_vec))
+ {
+ case QImode:
+ return "tbb\t[%|pc, %0]";
+ case HImode:
+ return "tbh\t[%|pc, %0, lsl #1]";
+ case SImode:
+ if (flag_pic)
+ {
+ output_asm_insn ("adr\t%4, %l2", operands);
+ output_asm_insn ("ldr\t%5, [%4, %0, lsl #2]", operands);
+ output_asm_insn ("add\t%4, %4, %5", operands);
+ return "bx\t%4";
+ }
+ else
+ {
+ output_asm_insn ("adr\t%4, %l2", operands);
+ return "ldr\t%|pc, [%4, %0, lsl #2]";
+ }
+ default:
+ gcc_unreachable ();
+ }
+}
+
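For a QImode table the routine emits roughly the following (the bound and label are hypothetical; the byte-offset branch table itself follows the TBB):

    cmp   r0, #9              @ operands[0], operands[1]
    bhi   .Ldefault           @ operands[3]
    tbb   [pc, r0]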
#include "gt-arm.h"
diff --git a/gcc/config/arm/arm.h b/gcc/config/arm/arm.h
index 468b5b3..d2f493b 100644
--- a/gcc/config/arm/arm.h
+++ b/gcc/config/arm/arm.h
@@ -1,6 +1,6 @@
/* Definitions of target machine for GNU compiler, for ARM.
Copyright (C) 1991, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000,
- 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
+ 2001, 2002, 2003, 2004, 2005, 2006, 2007 Free Software Foundation, Inc.
Contributed by Pieter `Tiggr' Schoenmakers (rcpieter@win.tue.nl)
and Martin Simmons (@harleqn.co.uk).
More major hacks by Richard Earnshaw (rearnsha@arm.com)
@@ -39,6 +39,8 @@ extern char arm_arch_name[];
builtin_define ("__APCS_32__"); \
if (TARGET_THUMB) \
builtin_define ("__thumb__"); \
+ if (TARGET_THUMB2) \
+ builtin_define ("__thumb2__"); \
\
if (TARGET_BIG_END) \
{ \
@@ -181,8 +183,8 @@ extern GTY(()) rtx aof_pic_label;
#define TARGET_MAVERICK (arm_fp_model == ARM_FP_MODEL_MAVERICK)
#define TARGET_VFP (arm_fp_model == ARM_FP_MODEL_VFP)
#define TARGET_IWMMXT (arm_arch_iwmmxt)
-#define TARGET_REALLY_IWMMXT (TARGET_IWMMXT && TARGET_ARM)
-#define TARGET_IWMMXT_ABI (TARGET_ARM && arm_abi == ARM_ABI_IWMMXT)
+#define TARGET_REALLY_IWMMXT (TARGET_IWMMXT && TARGET_32BIT)
+#define TARGET_IWMMXT_ABI (TARGET_32BIT && arm_abi == ARM_ABI_IWMMXT)
#define TARGET_ARM (! TARGET_THUMB)
#define TARGET_EITHER 1 /* (TARGET_ARM | TARGET_THUMB) */
#define TARGET_BACKTRACE (leaf_function_p () \
@@ -195,6 +197,25 @@ extern GTY(()) rtx aof_pic_label;
#define TARGET_HARD_TP (target_thread_pointer == TP_CP15)
#define TARGET_SOFT_TP (target_thread_pointer == TP_SOFT)
+/* Only 16-bit Thumb code. */
+#define TARGET_THUMB1 (TARGET_THUMB && !arm_arch_thumb2)
+/* ARM or Thumb-2 32-bit code. */
+#define TARGET_32BIT (TARGET_ARM || arm_arch_thumb2)
+/* 32-bit Thumb-2 code. */
+#define TARGET_THUMB2 (TARGET_THUMB && arm_arch_thumb2)
+
+/* "DSP" multiply instructions, eg. SMULxy. */
+#define TARGET_DSP_MULTIPLY \
+ (TARGET_32BIT && arm_arch5e && arm_arch_notm)
+/* Integer SIMD instructions, and extend-accumulate instructions. */
+#define TARGET_INT_SIMD \
+ (TARGET_32BIT && arm_arch6 && arm_arch_notm)
+
+/* We could use unified syntax for ARM mode, but for now we just use it
+ for Thumb-2. */
+#define TARGET_UNIFIED_ASM TARGET_THUMB2
+
+
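A truth table derived from the three definitions above (for reference only):

    configuration              TARGET_THUMB1  TARGET_THUMB2  TARGET_32BIT
    ARM state                        0              0              1
    Thumb, pre-Thumb-2 arch          1              0              0
    Thumb, Thumb-2 arch              0              1              1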
/* True iff the full BPABI is being used. If TARGET_BPABI is true,
then TARGET_AAPCS_BASED must be true -- but the converse does not
hold. TARGET_BPABI implies the use of the BPABI runtime library,
@@ -320,6 +341,9 @@ extern int arm_arch5e;
/* Nonzero if this chip supports the ARM Architecture 6 extensions. */
extern int arm_arch6;
+/* Nonzero if instructions not present in the 'M' profile can be used. */
+extern int arm_arch_notm;
+
/* Nonzero if this chip can benefit from load scheduling. */
extern int arm_ld_sched;
@@ -351,6 +375,12 @@ extern int arm_tune_wbuf;
interworking clean. */
extern int arm_cpp_interwork;
+/* Nonzero if chip supports Thumb-2. */
+extern int arm_arch_thumb2;
+
+/* Nonzero if chip supports the integer division instructions. */
+extern int arm_arch_hwdiv;
+
#ifndef TARGET_DEFAULT
#define TARGET_DEFAULT (MASK_APCS_FRAME)
#endif
@@ -648,7 +678,7 @@ extern int arm_structure_size_boundary;
{ \
int regno; \
\
- if (TARGET_SOFT_FLOAT || TARGET_THUMB || !TARGET_FPA) \
+ if (TARGET_SOFT_FLOAT || TARGET_THUMB1 || !TARGET_FPA) \
{ \
for (regno = FIRST_FPA_REGNUM; \
regno <= LAST_FPA_REGNUM; ++regno) \
@@ -660,6 +690,7 @@ extern int arm_structure_size_boundary;
/* When optimizing for size, it's better not to use \
the HI regs, because of the overhead of stacking \
them. */ \
+ /* ??? Is this still true for thumb2? */ \
for (regno = FIRST_HI_REGNUM; \
regno <= LAST_HI_REGNUM; ++regno) \
fixed_regs[regno] = call_used_regs[regno] = 1; \
@@ -668,10 +699,10 @@ extern int arm_structure_size_boundary;
/* The link register can be clobbered by any branch insn, \
but we have no way to track that at present, so mark \
it as unavailable. */ \
- if (TARGET_THUMB) \
+ if (TARGET_THUMB1) \
fixed_regs[LR_REGNUM] = call_used_regs[LR_REGNUM] = 1; \
\
- if (TARGET_ARM && TARGET_HARD_FLOAT) \
+ if (TARGET_32BIT && TARGET_HARD_FLOAT) \
{ \
if (TARGET_MAVERICK) \
{ \
@@ -807,7 +838,7 @@ extern int arm_structure_size_boundary;
/* The native (Norcroft) Pascal compiler for the ARM passes the static chain
as an invisible last argument (possible since varargs don't exist in
Pascal), so the following is not true. */
-#define STATIC_CHAIN_REGNUM (TARGET_ARM ? 12 : 9)
+#define STATIC_CHAIN_REGNUM 12
/* Define this to be where the real frame pointer is if it is not possible to
work out the offset between the frame pointer and the automatic variables
@@ -901,7 +932,7 @@ extern int arm_structure_size_boundary;
On the ARM regs are UNITS_PER_WORD bits wide; FPA regs can hold any FP
mode. */
#define HARD_REGNO_NREGS(REGNO, MODE) \
- ((TARGET_ARM \
+ ((TARGET_32BIT \
&& REGNO >= FIRST_FPA_REGNUM \
&& REGNO != FRAME_POINTER_REGNUM \
&& REGNO != ARG_POINTER_REGNUM) \
@@ -1042,14 +1073,14 @@ enum reg_class
|| (CLASS) == CC_REG)
/* The class value for index registers, and the one for base regs. */
-#define INDEX_REG_CLASS (TARGET_THUMB ? LO_REGS : GENERAL_REGS)
-#define BASE_REG_CLASS (TARGET_THUMB ? LO_REGS : GENERAL_REGS)
+#define INDEX_REG_CLASS (TARGET_THUMB1 ? LO_REGS : GENERAL_REGS)
+#define BASE_REG_CLASS (TARGET_THUMB1 ? LO_REGS : GENERAL_REGS)
/* For the Thumb the high registers cannot be used as base registers
when addressing quantities in QI or HI mode; if we don't know the
mode, then we must be conservative. */
#define MODE_BASE_REG_CLASS(MODE) \
- (TARGET_ARM ? GENERAL_REGS : \
+ (TARGET_32BIT ? GENERAL_REGS : \
(((MODE) == SImode) ? BASE_REGS : LO_REGS))
/* For Thumb we can not support SP+reg addressing, so we return LO_REGS
@@ -1060,15 +1091,16 @@ enum reg_class
registers explicitly used in the rtl to be used as spill registers
but prevents the compiler from extending the lifetime of these
registers. */
-#define SMALL_REGISTER_CLASSES TARGET_THUMB
+#define SMALL_REGISTER_CLASSES TARGET_THUMB1
/* Given an rtx X being reloaded into a reg required to be
in class CLASS, return the class of reg to actually use.
- In general this is just CLASS, but for the Thumb we prefer
- a LO_REGS class or a subset. */
-#define PREFERRED_RELOAD_CLASS(X, CLASS) \
- (TARGET_ARM ? (CLASS) : \
- ((CLASS) == BASE_REGS ? (CLASS) : LO_REGS))
+ In general this is just CLASS, but for the Thumb core registers and
+ immediate constants we prefer a LO_REGS class or a subset. */
+#define PREFERRED_RELOAD_CLASS(X, CLASS) \
+ (TARGET_ARM ? (CLASS) : \
+ ((CLASS) == GENERAL_REGS || (CLASS) == HI_REGS \
+ || (CLASS) == NO_REGS ? LO_REGS : (CLASS)))
/* Must leave BASE_REGS reloads alone */
#define THUMB_SECONDARY_INPUT_RELOAD_CLASS(CLASS, MODE, X) \
@@ -1093,7 +1125,7 @@ enum reg_class
((TARGET_VFP && TARGET_HARD_FLOAT \
&& (CLASS) == VFP_REGS) \
? vfp_secondary_reload_class (MODE, X) \
- : TARGET_ARM \
+ : TARGET_32BIT \
? (((MODE) == HImode && ! arm_arch4 && true_regnum (X) == -1) \
? GENERAL_REGS : NO_REGS) \
: THUMB_SECONDARY_OUTPUT_RELOAD_CLASS (CLASS, MODE, X))
@@ -1109,7 +1141,7 @@ enum reg_class
&& (CLASS) == CIRRUS_REGS \
&& (CONSTANT_P (X) || GET_CODE (X) == SYMBOL_REF)) \
? GENERAL_REGS : \
- (TARGET_ARM ? \
+ (TARGET_32BIT ? \
(((CLASS) == IWMMXT_REGS || (CLASS) == IWMMXT_GR_REGS) \
&& CONSTANT_P (X)) \
? GENERAL_REGS : \
@@ -1188,6 +1220,7 @@ enum reg_class
/* We could probably achieve better results by defining PROMOTE_MODE to help
cope with the variances between the Thumb's signed and unsigned byte and
halfword load instructions. */
+/* ??? This should be safe for thumb2, but we may be able to do better. */
#define THUMB_LEGITIMIZE_RELOAD_ADDRESS(X, MODE, OPNUM, TYPE, IND_L, WIN) \
do { \
rtx new_x = thumb_legitimize_reload_address (&X, MODE, OPNUM, TYPE, IND_L); \
@@ -1215,7 +1248,7 @@ do { \
/* Moves between FPA_REGS and GENERAL_REGS are two memory insns. */
#define REGISTER_MOVE_COST(MODE, FROM, TO) \
- (TARGET_ARM ? \
+ (TARGET_32BIT ? \
((FROM) == FPA_REGS && (TO) != FPA_REGS ? 20 : \
(FROM) != FPA_REGS && (TO) == FPA_REGS ? 20 : \
(FROM) == VFP_REGS && (TO) != VFP_REGS ? 10 : \
@@ -1289,10 +1322,10 @@ do { \
/* Define how to find the value returned by a library function
assuming the value has mode MODE. */
#define LIBCALL_VALUE(MODE) \
- (TARGET_ARM && TARGET_HARD_FLOAT_ABI && TARGET_FPA \
+ (TARGET_32BIT && TARGET_HARD_FLOAT_ABI && TARGET_FPA \
&& GET_MODE_CLASS (MODE) == MODE_FLOAT \
? gen_rtx_REG (MODE, FIRST_FPA_REGNUM) \
- : TARGET_ARM && TARGET_HARD_FLOAT_ABI && TARGET_MAVERICK \
+ : TARGET_32BIT && TARGET_HARD_FLOAT_ABI && TARGET_MAVERICK \
&& GET_MODE_CLASS (MODE) == MODE_FLOAT \
? gen_rtx_REG (MODE, FIRST_CIRRUS_FP_REGNUM) \
: TARGET_IWMMXT_ABI && arm_vector_mode_supported_p (MODE) \
@@ -1311,10 +1344,10 @@ do { \
/* On a Cirrus chip, mvf0 can return results. */
#define FUNCTION_VALUE_REGNO_P(REGNO) \
((REGNO) == ARG_REGISTER (1) \
- || (TARGET_ARM && ((REGNO) == FIRST_CIRRUS_FP_REGNUM) \
+ || (TARGET_32BIT && ((REGNO) == FIRST_CIRRUS_FP_REGNUM) \
&& TARGET_HARD_FLOAT_ABI && TARGET_MAVERICK) \
|| ((REGNO) == FIRST_IWMMXT_REGNUM && TARGET_IWMMXT_ABI) \
- || (TARGET_ARM && ((REGNO) == FIRST_FPA_REGNUM) \
+ || (TARGET_32BIT && ((REGNO) == FIRST_FPA_REGNUM) \
&& TARGET_HARD_FLOAT_ABI && TARGET_FPA))
/* Amount of memory needed for an untyped call to save all possible return
@@ -1362,6 +1395,7 @@ do { \
#define ARM_FT_NAKED (1 << 3) /* No prologue or epilogue. */
#define ARM_FT_VOLATILE (1 << 4) /* Does not return. */
#define ARM_FT_NESTED (1 << 5) /* Embedded inside another func. */
+#define ARM_FT_STACKALIGN (1 << 6) /* Called with misaligned stack. */
/* Some macros to test these flags. */
#define ARM_FUNC_TYPE(t) (t & ARM_FT_TYPE_MASK)
@@ -1369,6 +1403,7 @@ do { \
#define IS_VOLATILE(t) (t & ARM_FT_VOLATILE)
#define IS_NAKED(t) (t & ARM_FT_NAKED)
#define IS_NESTED(t) (t & ARM_FT_NESTED)
+#define IS_STACKALIGN(t) (t & ARM_FT_STACKALIGN)
/* Structure used to hold the function stack frame layout. Offsets are
@@ -1570,6 +1605,8 @@ typedef struct
/* Determine if the epilogue should be output as RTL.
You should override this if you define FUNCTION_EXTRA_EPILOGUE. */
+/* This is disabled for Thumb-2 because it will confuse the
+ conditional insn counter. */
#define USE_RETURN_INSN(ISCOND) \
(TARGET_ARM ? use_return_insn (ISCOND, NULL) : 0)
@@ -1646,37 +1683,57 @@ typedef struct
assemble_aligned_integer (UNITS_PER_WORD, const0_rtx); \
}
-/* On the Thumb we always switch into ARM mode to execute the trampoline.
- Why - because it is easier. This code will always be branched to via
- a BX instruction and since the compiler magically generates the address
- of the function the linker has no opportunity to ensure that the
- bottom bit is set. Thus the processor will be in ARM mode when it
- reaches this code. So we duplicate the ARM trampoline code and add
- a switch into Thumb mode as well. */
-#define THUMB_TRAMPOLINE_TEMPLATE(FILE) \
+/* The Thumb-2 trampoline is similar to the arm implementation.
+ Unlike 16-bit Thumb, we enter the stub in thumb mode. */
+#define THUMB2_TRAMPOLINE_TEMPLATE(FILE) \
+{ \
+ asm_fprintf (FILE, "\tldr.w\t%r, [%r, #4]\n", \
+ STATIC_CHAIN_REGNUM, PC_REGNUM); \
+ asm_fprintf (FILE, "\tldr.w\t%r, [%r, #4]\n", \
+ PC_REGNUM, PC_REGNUM); \
+ assemble_aligned_integer (UNITS_PER_WORD, const0_rtx); \
+ assemble_aligned_integer (UNITS_PER_WORD, const0_rtx); \
+}
+
+#define THUMB1_TRAMPOLINE_TEMPLATE(FILE) \
{ \
- fprintf (FILE, "\t.code 32\n"); \
+ ASM_OUTPUT_ALIGN(FILE, 2); \
+ fprintf (FILE, "\t.code\t16\n"); \
fprintf (FILE, ".Ltrampoline_start:\n"); \
- asm_fprintf (FILE, "\tldr\t%r, [%r, #8]\n", \
- STATIC_CHAIN_REGNUM, PC_REGNUM); \
- asm_fprintf (FILE, "\tldr\t%r, [%r, #8]\n", \
- IP_REGNUM, PC_REGNUM); \
- asm_fprintf (FILE, "\torr\t%r, %r, #1\n", \
- IP_REGNUM, IP_REGNUM); \
- asm_fprintf (FILE, "\tbx\t%r\n", IP_REGNUM); \
- fprintf (FILE, "\t.word\t0\n"); \
- fprintf (FILE, "\t.word\t0\n"); \
- fprintf (FILE, "\t.code 16\n"); \
+ asm_fprintf (FILE, "\tpush\t{r0, r1}\n"); \
+ asm_fprintf (FILE, "\tldr\tr0, [%r, #8]\n", \
+ PC_REGNUM); \
+ asm_fprintf (FILE, "\tmov\t%r, r0\n", \
+ STATIC_CHAIN_REGNUM); \
+ asm_fprintf (FILE, "\tldr\tr0, [%r, #8]\n", \
+ PC_REGNUM); \
+ asm_fprintf (FILE, "\tstr\tr0, [%r, #4]\n", \
+ SP_REGNUM); \
+ asm_fprintf (FILE, "\tpop\t{r0, %r}\n", \
+ PC_REGNUM); \
+ assemble_aligned_integer (UNITS_PER_WORD, const0_rtx); \
+ assemble_aligned_integer (UNITS_PER_WORD, const0_rtx); \
}
#define TRAMPOLINE_TEMPLATE(FILE) \
if (TARGET_ARM) \
ARM_TRAMPOLINE_TEMPLATE (FILE) \
+ else if (TARGET_THUMB2) \
+ THUMB2_TRAMPOLINE_TEMPLATE (FILE) \
else \
- THUMB_TRAMPOLINE_TEMPLATE (FILE)
+ THUMB1_TRAMPOLINE_TEMPLATE (FILE)
+
+/* Thumb trampolines should be entered in thumb mode, so set the bottom bit
+ of the address. */
+#define TRAMPOLINE_ADJUST_ADDRESS(ADDR) do \
+{ \
+ if (TARGET_THUMB) \
+ (ADDR) = expand_simple_binop (Pmode, IOR, (ADDR), GEN_INT(1), \
+ gen_reg_rtx (Pmode), 0, OPTAB_LIB_WIDEN); \
+} while(0)
/* Length in units of the trampoline for entering a nested function. */
-#define TRAMPOLINE_SIZE (TARGET_ARM ? 16 : 24)
+#define TRAMPOLINE_SIZE (TARGET_32BIT ? 16 : 20)
/* Alignment required for a trampoline in bits. */
#define TRAMPOLINE_ALIGNMENT 32
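The Thumb-2 template above accounts exactly for the 16 bytes in TRAMPOLINE_SIZE: two 4-byte ldr.w instructions followed by two literal words, which INITIALIZE_TRAMPOLINE (below) patches at offsets 8 and 12. A sketch of the generated stub, assuming r12 (ip) as the static chain per STATIC_CHAIN_REGNUM:

    ldr.w  ip, [pc, #4]   @ pc reads as .+4, so this loads the word at .+8
    ldr.w  pc, [pc, #4]   @ loads the word at .+12 and jumps there
    .word  0              @ patched with the static chain (CXT)
    .word  0              @ patched with the target address (FNADDR)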
@@ -1690,11 +1747,11 @@ typedef struct
{ \
emit_move_insn (gen_rtx_MEM (SImode, \
plus_constant (TRAMP, \
- TARGET_ARM ? 8 : 16)), \
+ TARGET_32BIT ? 8 : 12)), \
CXT); \
emit_move_insn (gen_rtx_MEM (SImode, \
plus_constant (TRAMP, \
- TARGET_ARM ? 12 : 20)), \
+ TARGET_32BIT ? 12 : 16)), \
FNADDR); \
emit_library_call (gen_rtx_SYMBOL_REF (Pmode, "__clear_cache"), \
0, VOIDmode, 2, TRAMP, Pmode, \
@@ -1705,13 +1762,13 @@ typedef struct
/* Addressing modes, and classification of registers for them. */
#define HAVE_POST_INCREMENT 1
-#define HAVE_PRE_INCREMENT TARGET_ARM
-#define HAVE_POST_DECREMENT TARGET_ARM
-#define HAVE_PRE_DECREMENT TARGET_ARM
-#define HAVE_PRE_MODIFY_DISP TARGET_ARM
-#define HAVE_POST_MODIFY_DISP TARGET_ARM
-#define HAVE_PRE_MODIFY_REG TARGET_ARM
-#define HAVE_POST_MODIFY_REG TARGET_ARM
+#define HAVE_PRE_INCREMENT TARGET_32BIT
+#define HAVE_POST_DECREMENT TARGET_32BIT
+#define HAVE_PRE_DECREMENT TARGET_32BIT
+#define HAVE_PRE_MODIFY_DISP TARGET_32BIT
+#define HAVE_POST_MODIFY_DISP TARGET_32BIT
+#define HAVE_PRE_MODIFY_REG TARGET_32BIT
+#define HAVE_POST_MODIFY_REG TARGET_32BIT
/* Macros to check register numbers against specific register classes. */
@@ -1723,20 +1780,20 @@ typedef struct
#define TEST_REGNO(R, TEST, VALUE) \
((R TEST VALUE) || ((unsigned) reg_renumber[R] TEST VALUE))
-/* On the ARM, don't allow the pc to be used. */
+/* Don't allow the pc to be used. */
#define ARM_REGNO_OK_FOR_BASE_P(REGNO) \
(TEST_REGNO (REGNO, <, PC_REGNUM) \
|| TEST_REGNO (REGNO, ==, FRAME_POINTER_REGNUM) \
|| TEST_REGNO (REGNO, ==, ARG_POINTER_REGNUM))
-#define THUMB_REGNO_MODE_OK_FOR_BASE_P(REGNO, MODE) \
+#define THUMB1_REGNO_MODE_OK_FOR_BASE_P(REGNO, MODE) \
(TEST_REGNO (REGNO, <=, LAST_LO_REGNUM) \
|| (GET_MODE_SIZE (MODE) >= 4 \
&& TEST_REGNO (REGNO, ==, STACK_POINTER_REGNUM)))
#define REGNO_MODE_OK_FOR_BASE_P(REGNO, MODE) \
- (TARGET_THUMB \
- ? THUMB_REGNO_MODE_OK_FOR_BASE_P (REGNO, MODE) \
+ (TARGET_THUMB1 \
+ ? THUMB1_REGNO_MODE_OK_FOR_BASE_P (REGNO, MODE) \
: ARM_REGNO_OK_FOR_BASE_P (REGNO))
/* Nonzero if X can be the base register in a reg+reg addressing mode.
@@ -1763,6 +1820,7 @@ typedef struct
#else
+/* ??? Should the TARGET_ARM here also apply to thumb2? */
#define CONSTANT_ADDRESS_P(X) \
(GET_CODE (X) == SYMBOL_REF \
&& (CONSTANT_POOL_ADDRESS_P (X) \
@@ -1788,8 +1846,8 @@ typedef struct
#define LEGITIMATE_CONSTANT_P(X) \
(!arm_tls_referenced_p (X) \
- && (TARGET_ARM ? ARM_LEGITIMATE_CONSTANT_P (X) \
- : THUMB_LEGITIMATE_CONSTANT_P (X)))
+ && (TARGET_32BIT ? ARM_LEGITIMATE_CONSTANT_P (X) \
+ : THUMB_LEGITIMATE_CONSTANT_P (X)))
/* Special characters prefixed to function names
in order to encode attribute like information.
@@ -1823,6 +1881,11 @@ typedef struct
#define ASM_OUTPUT_LABELREF(FILE, NAME) \
arm_asm_output_labelref (FILE, NAME)
+/* Output IT instructions for conditionally executed Thumb-2 instructions. */
+#define ASM_OUTPUT_OPCODE(STREAM, PTR) \
+ if (TARGET_THUMB2) \
+ thumb2_asm_output_opcode (STREAM);
+
/* The EABI specifies that constructors should go in .init_array.
Other targets use .ctors for compatibility. */
#ifndef ARM_EABI_CTORS_SECTION_OP
@@ -1898,7 +1961,8 @@ typedef struct
We have two alternate definitions for each of them.
The usual definition accepts all pseudo regs; the other rejects
them unless they have been allocated suitable hard regs.
- The symbol REG_OK_STRICT causes the latter definition to be used. */
+ The symbol REG_OK_STRICT causes the latter definition to be used.
+ Thumb-2 has the same restrictions as arm. */
#ifndef REG_OK_STRICT
#define ARM_REG_OK_FOR_BASE_P(X) \
@@ -1907,7 +1971,7 @@ typedef struct
|| REGNO (X) == FRAME_POINTER_REGNUM \
|| REGNO (X) == ARG_POINTER_REGNUM)
-#define THUMB_REG_MODE_OK_FOR_BASE_P(X, MODE) \
+#define THUMB1_REG_MODE_OK_FOR_BASE_P(X, MODE) \
(REGNO (X) <= LAST_LO_REGNUM \
|| REGNO (X) >= FIRST_PSEUDO_REGISTER \
|| (GET_MODE_SIZE (MODE) >= 4 \
@@ -1922,8 +1986,8 @@ typedef struct
#define ARM_REG_OK_FOR_BASE_P(X) \
ARM_REGNO_OK_FOR_BASE_P (REGNO (X))
-#define THUMB_REG_MODE_OK_FOR_BASE_P(X, MODE) \
- THUMB_REGNO_MODE_OK_FOR_BASE_P (REGNO (X), MODE)
+#define THUMB1_REG_MODE_OK_FOR_BASE_P(X, MODE) \
+ THUMB1_REGNO_MODE_OK_FOR_BASE_P (REGNO (X), MODE)
#define REG_STRICT_P 1
@@ -1932,22 +1996,23 @@ typedef struct
/* Now define some helpers in terms of the above. */
#define REG_MODE_OK_FOR_BASE_P(X, MODE) \
- (TARGET_THUMB \
- ? THUMB_REG_MODE_OK_FOR_BASE_P (X, MODE) \
+ (TARGET_THUMB1 \
+ ? THUMB1_REG_MODE_OK_FOR_BASE_P (X, MODE) \
: ARM_REG_OK_FOR_BASE_P (X))
#define ARM_REG_OK_FOR_INDEX_P(X) ARM_REG_OK_FOR_BASE_P (X)
-/* For Thumb, a valid index register is anything that can be used in
+/* For 16-bit Thumb, a valid index register is anything that can be used in
a byte load instruction. */
-#define THUMB_REG_OK_FOR_INDEX_P(X) THUMB_REG_MODE_OK_FOR_BASE_P (X, QImode)
+#define THUMB1_REG_OK_FOR_INDEX_P(X) \
+ THUMB1_REG_MODE_OK_FOR_BASE_P (X, QImode)
/* Nonzero if X is a hard reg that can be used as an index
or if it is a pseudo reg. On the Thumb, the stack pointer
is not suitable. */
#define REG_OK_FOR_INDEX_P(X) \
- (TARGET_THUMB \
- ? THUMB_REG_OK_FOR_INDEX_P (X) \
+ (TARGET_THUMB1 \
+ ? THUMB1_REG_OK_FOR_INDEX_P (X) \
: ARM_REG_OK_FOR_INDEX_P (X))
/* Nonzero if X can be the base register in a reg+reg addressing mode.
@@ -1972,17 +2037,25 @@ typedef struct
goto WIN; \
}
-#define THUMB_GO_IF_LEGITIMATE_ADDRESS(MODE,X,WIN) \
+#define THUMB2_GO_IF_LEGITIMATE_ADDRESS(MODE,X,WIN) \
{ \
- if (thumb_legitimate_address_p (MODE, X, REG_STRICT_P)) \
+ if (thumb2_legitimate_address_p (MODE, X, REG_STRICT_P)) \
+ goto WIN; \
+ }
+
+#define THUMB1_GO_IF_LEGITIMATE_ADDRESS(MODE,X,WIN) \
+ { \
+ if (thumb1_legitimate_address_p (MODE, X, REG_STRICT_P)) \
goto WIN; \
}
#define GO_IF_LEGITIMATE_ADDRESS(MODE, X, WIN) \
if (TARGET_ARM) \
ARM_GO_IF_LEGITIMATE_ADDRESS (MODE, X, WIN) \
- else /* if (TARGET_THUMB) */ \
- THUMB_GO_IF_LEGITIMATE_ADDRESS (MODE, X, WIN)
+ else if (TARGET_THUMB2) \
+ THUMB2_GO_IF_LEGITIMATE_ADDRESS (MODE, X, WIN) \
+ else /* if (TARGET_THUMB1) */ \
+ THUMB1_GO_IF_LEGITIMATE_ADDRESS (MODE, X, WIN)
/* Try machine-dependent ways of modifying an illegitimate address
@@ -1992,7 +2065,12 @@ do { \
X = arm_legitimize_address (X, OLDX, MODE); \
} while (0)
-#define THUMB_LEGITIMIZE_ADDRESS(X, OLDX, MODE, WIN) \
+/* ??? Implement LEGITIMIZE_ADDRESS for thumb2. */
+#define THUMB2_LEGITIMIZE_ADDRESS(X, OLDX, MODE, WIN) \
+do { \
+} while (0)
+
+#define THUMB1_LEGITIMIZE_ADDRESS(X, OLDX, MODE, WIN) \
do { \
X = thumb_legitimize_address (X, OLDX, MODE); \
} while (0)
@@ -2001,8 +2079,10 @@ do { \
do { \
if (TARGET_ARM) \
ARM_LEGITIMIZE_ADDRESS (X, OLDX, MODE, WIN); \
+ else if (TARGET_THUMB2) \
+ THUMB2_LEGITIMIZE_ADDRESS (X, OLDX, MODE, WIN); \
else \
- THUMB_LEGITIMIZE_ADDRESS (X, OLDX, MODE, WIN); \
+ THUMB1_LEGITIMIZE_ADDRESS (X, OLDX, MODE, WIN); \
\
if (memory_address_p (MODE, X)) \
goto WIN; \
@@ -2019,7 +2099,7 @@ do { \
/* Nothing helpful to do for the Thumb */
#define GO_IF_MODE_DEPENDENT_ADDRESS(ADDR, LABEL) \
- if (TARGET_ARM) \
+ if (TARGET_32BIT) \
ARM_GO_IF_MODE_DEPENDENT_ADDRESS (ADDR, LABEL)
@@ -2027,6 +2107,13 @@ do { \
for the index in the tablejump instruction. */
#define CASE_VECTOR_MODE Pmode
+#define CASE_VECTOR_PC_RELATIVE TARGET_THUMB2
+
+#define CASE_VECTOR_SHORTEN_MODE(min, max, body) \
+ ((min < 0 || max >= 0x2000 || !TARGET_THUMB2) ? SImode \
+ : (max >= 0x200) ? HImode \
+ : QImode)
+
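The cutoffs line up with the Thumb-2 table-branch encodings (TBB/TBH), whose entries count halfwords: a QImode (byte) entry reaches at most 2 * 0xFF = 0x1FE bytes, hence QImode below 0x200 and HImode from there; vectors with negative offsets or spans of 0x2000 and up stay ordinary SImode word tables.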
/* signed 'char' is most compatible, but RISC OS wants it unsigned.
unsigned is probably best, but may break some code. */
#ifndef DEFAULT_SIGNED_CHAR
@@ -2085,14 +2172,14 @@ do { \
/* Moves to and from memory are quite expensive */
#define MEMORY_MOVE_COST(M, CLASS, IN) \
- (TARGET_ARM ? 10 : \
+ (TARGET_32BIT ? 10 : \
((GET_MODE_SIZE (M) < 4 ? 8 : 2 * GET_MODE_SIZE (M)) \
* (CLASS == LO_REGS ? 1 : 2)))
/* Try to generate sequences that don't involve branches, we can then use
conditional instructions */
#define BRANCH_COST \
- (TARGET_ARM ? 4 : (optimize > 0 ? 2 : 0))
+ (TARGET_32BIT ? 4 : (optimize > 0 ? 2 : 0))
/* Position Independent Code. */
/* We decide which register to use based on the compilation options and
@@ -2160,7 +2247,8 @@ extern int making_const_table;
#define CLZ_DEFINED_VALUE_AT_ZERO(MODE, VALUE) ((VALUE) = 32, 1)
#undef ASM_APP_OFF
-#define ASM_APP_OFF (TARGET_THUMB ? "\t.code\t16\n" : "")
+#define ASM_APP_OFF (TARGET_THUMB1 ? "\t.code\t16\n" : \
+ TARGET_THUMB2 ? "\t.thumb\n" : "")
/* Output a push or a pop instruction (only used when profiling). */
#define ASM_OUTPUT_REG_PUSH(STREAM, REGNO) \
@@ -2184,16 +2272,28 @@ extern int making_const_table;
asm_fprintf (STREAM, "\tpop {%r}\n", REGNO); \
} while (0)
+/* Jump table alignment is explicit in ASM_OUTPUT_CASE_LABEL. */
+#define ADDR_VEC_ALIGN(JUMPTABLE) 0
+
/* This is how to output a label which precedes a jumptable. Since
Thumb instructions are 2 bytes, we may need explicit alignment here. */
#undef ASM_OUTPUT_CASE_LABEL
-#define ASM_OUTPUT_CASE_LABEL(FILE, PREFIX, NUM, JUMPTABLE) \
- do \
- { \
- if (TARGET_THUMB) \
- ASM_OUTPUT_ALIGN (FILE, 2); \
- (*targetm.asm_out.internal_label) (FILE, PREFIX, NUM); \
- } \
+#define ASM_OUTPUT_CASE_LABEL(FILE, PREFIX, NUM, JUMPTABLE) \
+ do \
+ { \
+ if (TARGET_THUMB && GET_MODE (PATTERN (JUMPTABLE)) == SImode) \
+ ASM_OUTPUT_ALIGN (FILE, 2); \
+ (*targetm.asm_out.internal_label) (FILE, PREFIX, NUM); \
+ } \
+ while (0)
+
+/* Make sure subsequent insns are aligned after a TBB. */
+#define ASM_OUTPUT_CASE_END(FILE, NUM, JUMPTABLE) \
+ do \
+ { \
+ if (GET_MODE (PATTERN (JUMPTABLE)) == QImode) \
+ ASM_OUTPUT_ALIGN (FILE, 1); \
+ } \
while (0)
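Concretely, a QImode vector is dispatched with TBB, and an odd number of byte entries would leave the next instruction on an odd address; the hook above restores 2-byte alignment. An illustrative GAS-style sketch:

    tbb    [pc, r0]         @ branch using byte entries
    .byte  (.L3 - .L1) / 2  @ entries are halfword counts
    .byte  (.L4 - .L1) / 2
    .byte  (.L5 - .L1) / 2  @ odd table length
    .align 1                @ realign before the next insn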
#define ARM_DECLARE_FUNCTION_NAME(STREAM, NAME, DECL) \
@@ -2201,11 +2301,13 @@ extern int making_const_table;
{ \
if (TARGET_THUMB) \
{ \
- if (is_called_in_ARM_mode (DECL) \
- || current_function_is_thunk) \
+ if (is_called_in_ARM_mode (DECL) \
+ || (TARGET_THUMB1 && current_function_is_thunk)) \
fprintf (STREAM, "\t.code 32\n") ; \
+ else if (TARGET_THUMB1) \
+ fprintf (STREAM, "\t.code\t16\n\t.thumb_func\n") ; \
else \
- fprintf (STREAM, "\t.code 16\n\t.thumb_func\n") ; \
+ fprintf (STREAM, "\t.thumb\n\t.thumb_func\n") ; \
} \
if (TARGET_POKE_FUNCTION_NAME) \
arm_poke_function_name (STREAM, (char *) NAME); \
@@ -2247,17 +2349,28 @@ extern int making_const_table;
}
#endif
+/* Add two bytes to the length of conditionally executed Thumb-2
+ instructions for the IT instruction. */
+#define ADJUST_INSN_LENGTH(insn, length) \
+ if (TARGET_THUMB2 && GET_CODE (PATTERN (insn)) == COND_EXEC) \
+ length += 2;
+
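For example (a sketch of the emitted assembly, not part of the patch), a COND_EXEC insn gets a 2-byte IT prefix from thumb2_asm_output_opcode via ASM_OUTPUT_OPCODE above, so its length estimate must grow by 2:

    cmp    r2, #0
    it     ne              @ the extra 2 bytes added by ADJUST_INSN_LENGTH
    addne  r0, r0, r1      @ the COND_EXEC insn itself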
/* Only perform branch elimination (by making instructions conditional) if
- we're optimizing. Otherwise it's of no use anyway. */
+ we're optimizing. For Thumb-2 check if any IT instructions need
+ outputting. */
#define FINAL_PRESCAN_INSN(INSN, OPVEC, NOPERANDS) \
if (TARGET_ARM && optimize) \
arm_final_prescan_insn (INSN); \
- else if (TARGET_THUMB) \
- thumb_final_prescan_insn (INSN)
+ else if (TARGET_THUMB2) \
+ thumb2_final_prescan_insn (INSN); \
+ else if (TARGET_THUMB1) \
+ thumb1_final_prescan_insn (INSN)
#define PRINT_OPERAND_PUNCT_VALID_P(CODE) \
- (CODE == '@' || CODE == '|' \
- || (TARGET_ARM && (CODE == '?')) \
+ (CODE == '@' || CODE == '|' || CODE == '.' \
+ || CODE == '(' || CODE == ')' \
+ || (TARGET_32BIT && (CODE == '?')) \
+ || (TARGET_THUMB2 && (CODE == '!')) \
|| (TARGET_THUMB && (CODE == '_')))
/* Output an operand of an instruction. */
@@ -2390,7 +2503,7 @@ extern int making_const_table;
}
#define PRINT_OPERAND_ADDRESS(STREAM, X) \
- if (TARGET_ARM) \
+ if (TARGET_32BIT) \
ARM_PRINT_OPERAND_ADDRESS (STREAM, X) \
else \
THUMB_PRINT_OPERAND_ADDRESS (STREAM, X)
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index 83a3cf4..7253f0c 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -1,6 +1,6 @@
;;- Machine description for ARM for GNU compiler
;; Copyright 1991, 1993, 1994, 1995, 1996, 1996, 1997, 1998, 1999, 2000,
-;; 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
+;; 2001, 2002, 2003, 2004, 2005, 2006, 2007 Free Software Foundation, Inc.
;; Contributed by Pieter `Tiggr' Schoenmakers (rcpieter@win.tue.nl)
;; and Martin Simmons (@harleqn.co.uk).
;; More major hacks by Richard Earnshaw (rearnsha@arm.com).
@@ -93,6 +93,8 @@
(UNSPEC_TLS 20) ; A symbol that has been treated properly for TLS usage.
(UNSPEC_PIC_LABEL 21) ; A label used for PIC access that does not appear in the
; instruction stream.
+ (UNSPEC_STACK_ALIGN 20) ; Doubleword aligned stack pointer. Used to
+ ; generate correct unwind information.
]
)
@@ -297,6 +299,10 @@
(define_attr "far_jump" "yes,no" (const_string "no"))
+;; The number of machine instructions this pattern expands to.
+;; Used for Thumb-2 conditional execution.
+(define_attr "ce_count" "" (const_int 1))
+
;;---------------------------------------------------------------------------
;; Mode macros
@@ -368,7 +374,7 @@
DONE;
}
- if (TARGET_THUMB)
+ if (TARGET_THUMB1)
{
if (GET_CODE (operands[1]) != REG)
operands[1] = force_reg (SImode, operands[1]);
@@ -378,13 +384,13 @@
"
)
-(define_insn "*thumb_adddi3"
+(define_insn "*thumb1_adddi3"
[(set (match_operand:DI 0 "register_operand" "=l")
(plus:DI (match_operand:DI 1 "register_operand" "%0")
(match_operand:DI 2 "register_operand" "l")))
(clobber (reg:CC CC_REGNUM))
]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"add\\t%Q0, %Q0, %Q2\;adc\\t%R0, %R0, %R2"
[(set_attr "length" "4")]
)
@@ -394,9 +400,9 @@
(plus:DI (match_operand:DI 1 "s_register_operand" "%0, 0")
(match_operand:DI 2 "s_register_operand" "r, 0")))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_ARM && !(TARGET_HARD_FLOAT && TARGET_MAVERICK)"
+ "TARGET_32BIT && !(TARGET_HARD_FLOAT && TARGET_MAVERICK)"
"#"
- "TARGET_ARM && reload_completed"
+ "TARGET_32BIT && reload_completed"
[(parallel [(set (reg:CC_C CC_REGNUM)
(compare:CC_C (plus:SI (match_dup 1) (match_dup 2))
(match_dup 1)))
@@ -422,9 +428,9 @@
(match_operand:SI 2 "s_register_operand" "r,r"))
(match_operand:DI 1 "s_register_operand" "r,0")))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_ARM && !(TARGET_HARD_FLOAT && TARGET_MAVERICK)"
+ "TARGET_32BIT && !(TARGET_HARD_FLOAT && TARGET_MAVERICK)"
"#"
- "TARGET_ARM && reload_completed"
+ "TARGET_32BIT && reload_completed"
[(parallel [(set (reg:CC_C CC_REGNUM)
(compare:CC_C (plus:SI (match_dup 1) (match_dup 2))
(match_dup 1)))
@@ -451,9 +457,9 @@
(match_operand:SI 2 "s_register_operand" "r,r"))
(match_operand:DI 1 "s_register_operand" "r,0")))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_ARM && !(TARGET_HARD_FLOAT && TARGET_MAVERICK)"
+ "TARGET_32BIT && !(TARGET_HARD_FLOAT && TARGET_MAVERICK)"
"#"
- "TARGET_ARM && reload_completed"
+ "TARGET_32BIT && reload_completed"
[(parallel [(set (reg:CC_C CC_REGNUM)
(compare:CC_C (plus:SI (match_dup 1) (match_dup 2))
(match_dup 1)))
@@ -478,7 +484,7 @@
(match_operand:SI 2 "reg_or_int_operand" "")))]
"TARGET_EITHER"
"
- if (TARGET_ARM && GET_CODE (operands[2]) == CONST_INT)
+ if (TARGET_32BIT && GET_CODE (operands[2]) == CONST_INT)
{
arm_split_constant (PLUS, SImode, NULL_RTX,
INTVAL (operands[2]), operands[0], operands[1],
@@ -495,7 +501,7 @@
(set (match_operand:SI 0 "arm_general_register_operand" "")
(plus:SI (match_operand:SI 1 "arm_general_register_operand" "")
(match_operand:SI 2 "const_int_operand" "")))]
- "TARGET_ARM &&
+ "TARGET_32BIT &&
!(const_ok_for_arm (INTVAL (operands[2]))
|| const_ok_for_arm (-INTVAL (operands[2])))
&& const_ok_for_arm (~INTVAL (operands[2]))"
@@ -508,12 +514,12 @@
[(set (match_operand:SI 0 "s_register_operand" "=r,r,r")
(plus:SI (match_operand:SI 1 "s_register_operand" "%r,r,r")
(match_operand:SI 2 "reg_or_int_operand" "rI,L,?n")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
add%?\\t%0, %1, %2
sub%?\\t%0, %1, #%n2
#"
- "TARGET_ARM &&
+ "TARGET_32BIT &&
GET_CODE (operands[2]) == CONST_INT
&& !(const_ok_for_arm (INTVAL (operands[2]))
|| const_ok_for_arm (-INTVAL (operands[2])))"
@@ -532,11 +538,11 @@
;; register. Trying to reload it will always fail catastrophically,
;; so never allow those alternatives to match if reloading is needed.
-(define_insn "*thumb_addsi3"
+(define_insn "*thumb1_addsi3"
[(set (match_operand:SI 0 "register_operand" "=l,l,l,*r,*h,l,!k")
(plus:SI (match_operand:SI 1 "register_operand" "%0,0,l,*0,*0,!k,!k")
(match_operand:SI 2 "nonmemory_operand" "I,J,lL,*h,*r,!M,!O")))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
static const char * const asms[] =
{
@@ -564,13 +570,14 @@
(match_operand:SI 1 "const_int_operand" ""))
(set (match_dup 0)
(plus:SI (match_dup 0) (reg:SI SP_REGNUM)))]
- "TARGET_THUMB
+ "TARGET_THUMB1
&& (unsigned HOST_WIDE_INT) (INTVAL (operands[1])) < 1024
&& (INTVAL (operands[1]) & 3) == 0"
[(set (match_dup 0) (plus:SI (reg:SI SP_REGNUM) (match_dup 1)))]
""
)
+;; ??? Make Thumb-2 variants which prefer low regs
(define_insn "*addsi3_compare0"
[(set (reg:CC_NOOV CC_REGNUM)
(compare:CC_NOOV
@@ -579,10 +586,10 @@
(const_int 0)))
(set (match_operand:SI 0 "s_register_operand" "=r,r")
(plus:SI (match_dup 1) (match_dup 2)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
- add%?s\\t%0, %1, %2
- sub%?s\\t%0, %1, #%n2"
+ add%.\\t%0, %1, %2
+ sub%.\\t%0, %1, #%n2"
[(set_attr "conds" "set")]
)
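The new '%.' substitution (handled by arm_print_operand in this patch) prints the flag-setting 's' in whichever position the active assembler syntax expects, where the old 'add%?s' hard-coded the divided-syntax order. Roughly, for an eq-conditional add:

    divided syntax:  addeqs  r0, r1, r2   @ condition, then 's'
    unified syntax:  addseq  r0, r1, r2   @ 's', then condition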
@@ -592,7 +599,7 @@
(plus:SI (match_operand:SI 0 "s_register_operand" "r, r")
(match_operand:SI 1 "arm_add_operand" "rI,L"))
(const_int 0)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
cmn%?\\t%0, %1
cmp%?\\t%0, #%n1"
@@ -604,7 +611,7 @@
(compare:CC_Z
(neg:SI (match_operand:SI 0 "s_register_operand" "r"))
(match_operand:SI 1 "s_register_operand" "r")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"cmn%?\\t%1, %0"
[(set_attr "conds" "set")]
)
@@ -619,10 +626,10 @@
(set (match_operand:SI 0 "s_register_operand" "=r,r")
(plus:SI (match_dup 1)
(match_operand:SI 3 "arm_addimm_operand" "L,I")))]
- "TARGET_ARM && INTVAL (operands[2]) == -INTVAL (operands[3])"
+ "TARGET_32BIT && INTVAL (operands[2]) == -INTVAL (operands[3])"
"@
- sub%?s\\t%0, %1, %2
- add%?s\\t%0, %1, #%n2"
+ sub%.\\t%0, %1, %2
+ add%.\\t%0, %1, #%n2"
[(set_attr "conds" "set")]
)
@@ -646,7 +653,7 @@
[(match_dup 2) (const_int 0)])
(match_operand 4 "" "")
(match_operand 5 "" "")))]
- "TARGET_ARM && peep2_reg_dead_p (3, operands[2])"
+ "TARGET_32BIT && peep2_reg_dead_p (3, operands[2])"
[(parallel[
(set (match_dup 2)
(compare:CC
@@ -675,10 +682,10 @@
(match_dup 1)))
(set (match_operand:SI 0 "s_register_operand" "=r,r")
(plus:SI (match_dup 1) (match_dup 2)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
- add%?s\\t%0, %1, %2
- sub%?s\\t%0, %1, #%n2"
+ add%.\\t%0, %1, %2
+ sub%.\\t%0, %1, #%n2"
[(set_attr "conds" "set")]
)
@@ -690,10 +697,10 @@
(match_dup 2)))
(set (match_operand:SI 0 "s_register_operand" "=r,r")
(plus:SI (match_dup 1) (match_dup 2)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
- add%?s\\t%0, %1, %2
- sub%?s\\t%0, %1, #%n2"
+ add%.\\t%0, %1, %2
+ sub%.\\t%0, %1, #%n2"
[(set_attr "conds" "set")]
)
@@ -703,7 +710,7 @@
(plus:SI (match_operand:SI 0 "s_register_operand" "r,r")
(match_operand:SI 1 "arm_add_operand" "rI,L"))
(match_dup 0)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
cmn%?\\t%0, %1
cmp%?\\t%0, #%n1"
@@ -716,7 +723,7 @@
(plus:SI (match_operand:SI 0 "s_register_operand" "r,r")
(match_operand:SI 1 "arm_add_operand" "rI,L"))
(match_dup 1)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
cmn%?\\t%0, %1
cmp%?\\t%0, #%n1"
@@ -728,7 +735,7 @@
(plus:SI (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0))
(plus:SI (match_operand:SI 1 "s_register_operand" "r")
(match_operand:SI 2 "arm_rhs_operand" "rI"))))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"adc%?\\t%0, %1, %2"
[(set_attr "conds" "use")]
)
@@ -741,7 +748,7 @@
[(match_operand:SI 3 "s_register_operand" "r")
(match_operand:SI 4 "reg_or_int_operand" "rM")])
(match_operand:SI 1 "s_register_operand" "r"))))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"adc%?\\t%0, %1, %3%S2"
[(set_attr "conds" "use")
(set (attr "type") (if_then_else (match_operand 4 "const_int_operand" "")
@@ -754,7 +761,7 @@
(plus:SI (plus:SI (match_operand:SI 1 "s_register_operand" "r")
(match_operand:SI 2 "arm_rhs_operand" "rI"))
(ltu:SI (reg:CC_C CC_REGNUM) (const_int 0))))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"adc%?\\t%0, %1, %2"
[(set_attr "conds" "use")]
)
@@ -764,7 +771,7 @@
(plus:SI (plus:SI (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0))
(match_operand:SI 1 "s_register_operand" "r"))
(match_operand:SI 2 "arm_rhs_operand" "rI")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"adc%?\\t%0, %1, %2"
[(set_attr "conds" "use")]
)
@@ -774,12 +781,21 @@
(plus:SI (plus:SI (ltu:SI (reg:CC_C CC_REGNUM) (const_int 0))
(match_operand:SI 2 "arm_rhs_operand" "rI"))
(match_operand:SI 1 "s_register_operand" "r")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"adc%?\\t%0, %1, %2"
[(set_attr "conds" "use")]
)
-(define_insn "incscc"
+(define_expand "incscc"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r")
+ (plus:SI (match_operator:SI 2 "arm_comparison_operator"
+ [(match_operand:CC 3 "cc_register" "") (const_int 0)])
+ (match_operand:SI 1 "s_register_operand" "0,?r")))]
+ "TARGET_32BIT"
+ ""
+)
+
+(define_insn "*arm_incscc"
[(set (match_operand:SI 0 "s_register_operand" "=r,r")
(plus:SI (match_operator:SI 2 "arm_comparison_operator"
[(match_operand:CC 3 "cc_register" "") (const_int 0)])
@@ -799,7 +815,7 @@
(match_operand:SI 2 "s_register_operand" ""))
(const_int -1)))
(clobber (match_operand:SI 3 "s_register_operand" ""))]
- "TARGET_ARM"
+ "TARGET_32BIT"
[(set (match_dup 3) (match_dup 1))
(set (match_dup 0) (not:SI (ashift:SI (match_dup 3) (match_dup 2))))]
"
@@ -810,7 +826,7 @@
[(set (match_operand:SF 0 "s_register_operand" "")
(plus:SF (match_operand:SF 1 "s_register_operand" "")
(match_operand:SF 2 "arm_float_add_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT"
+ "TARGET_32BIT && TARGET_HARD_FLOAT"
"
if (TARGET_MAVERICK
&& !cirrus_fp_register (operands[2], SFmode))
@@ -821,7 +837,7 @@
[(set (match_operand:DF 0 "s_register_operand" "")
(plus:DF (match_operand:DF 1 "s_register_operand" "")
(match_operand:DF 2 "arm_float_add_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT"
+ "TARGET_32BIT && TARGET_HARD_FLOAT"
"
if (TARGET_MAVERICK
&& !cirrus_fp_register (operands[2], DFmode))
@@ -837,7 +853,7 @@
"TARGET_EITHER"
"
if (TARGET_HARD_FLOAT && TARGET_MAVERICK
- && TARGET_ARM
+ && TARGET_32BIT
&& cirrus_fp_register (operands[0], DImode)
&& cirrus_fp_register (operands[1], DImode))
{
@@ -845,7 +861,7 @@
DONE;
}
- if (TARGET_THUMB)
+ if (TARGET_THUMB1)
{
if (GET_CODE (operands[1]) != REG)
operands[1] = force_reg (SImode, operands[1]);
@@ -860,7 +876,7 @@
(minus:DI (match_operand:DI 1 "s_register_operand" "0,r,0")
(match_operand:DI 2 "s_register_operand" "r,0,0")))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"subs\\t%Q0, %Q1, %Q2\;sbc\\t%R0, %R1, %R2"
[(set_attr "conds" "clob")
(set_attr "length" "8")]
@@ -871,7 +887,7 @@
(minus:DI (match_operand:DI 1 "register_operand" "0")
(match_operand:DI 2 "register_operand" "l")))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"sub\\t%Q0, %Q0, %Q2\;sbc\\t%R0, %R0, %R2"
[(set_attr "length" "4")]
)
@@ -882,7 +898,7 @@
(zero_extend:DI
(match_operand:SI 2 "s_register_operand" "r,r"))))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"subs\\t%Q0, %Q1, %2\;sbc\\t%R0, %R1, #0"
[(set_attr "conds" "clob")
(set_attr "length" "8")]
@@ -894,7 +910,7 @@
(sign_extend:DI
(match_operand:SI 2 "s_register_operand" "r,r"))))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"subs\\t%Q0, %Q1, %2\;sbc\\t%R0, %R1, %2, asr #31"
[(set_attr "conds" "clob")
(set_attr "length" "8")]
@@ -931,8 +947,8 @@
(zero_extend:DI
(match_operand:SI 2 "s_register_operand" "r"))))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_ARM"
- "subs\\t%Q0, %1, %2\;rsc\\t%R0, %1, %1"
+ "TARGET_32BIT"
+ "subs\\t%Q0, %1, %2\;sbc\\t%R0, %1, %1"
[(set_attr "conds" "clob")
(set_attr "length" "8")]
)
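Thumb-2 has no RSC instruction, and with identical source operands SBC computes the same value, so the substitution is safe in both ARM and Thumb-2 state:

    rsc  Rd, Rn, Rm   @ Rd = Rm - Rn - !C
    sbc  Rd, Rn, Rm   @ Rd = Rn - Rm - !C

With Rn == Rm (both %1 here), either form yields -(!C): 0 or -1 according to the borrow from the low-word subs.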
@@ -945,37 +961,38 @@
"
if (GET_CODE (operands[1]) == CONST_INT)
{
- if (TARGET_ARM)
+ if (TARGET_32BIT)
{
arm_split_constant (MINUS, SImode, NULL_RTX,
INTVAL (operands[1]), operands[0],
operands[2], optimize && !no_new_pseudos);
DONE;
}
- else /* TARGET_THUMB */
+ else /* TARGET_THUMB1 */
operands[1] = force_reg (SImode, operands[1]);
}
"
)
-(define_insn "*thumb_subsi3_insn"
+(define_insn "*thumb1_subsi3_insn"
[(set (match_operand:SI 0 "register_operand" "=l")
(minus:SI (match_operand:SI 1 "register_operand" "l")
(match_operand:SI 2 "register_operand" "l")))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"sub\\t%0, %1, %2"
[(set_attr "length" "2")]
)
+; ??? Check Thumb-2 split length
(define_insn_and_split "*arm_subsi3_insn"
[(set (match_operand:SI 0 "s_register_operand" "=r,r")
(minus:SI (match_operand:SI 1 "reg_or_int_operand" "rI,?n")
(match_operand:SI 2 "s_register_operand" "r,r")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
rsb%?\\t%0, %2, %1
#"
- "TARGET_ARM
+ "TARGET_32BIT
&& GET_CODE (operands[1]) == CONST_INT
&& !const_ok_for_arm (INTVAL (operands[1]))"
[(clobber (const_int 0))]
@@ -993,7 +1010,7 @@
(set (match_operand:SI 0 "arm_general_register_operand" "")
(minus:SI (match_operand:SI 1 "const_int_operand" "")
(match_operand:SI 2 "arm_general_register_operand" "")))]
- "TARGET_ARM
+ "TARGET_32BIT
&& !const_ok_for_arm (INTVAL (operands[1]))
&& const_ok_for_arm (~INTVAL (operands[1]))"
[(set (match_dup 3) (match_dup 1))
@@ -1009,14 +1026,23 @@
(const_int 0)))
(set (match_operand:SI 0 "s_register_operand" "=r,r")
(minus:SI (match_dup 1) (match_dup 2)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
- sub%?s\\t%0, %1, %2
- rsb%?s\\t%0, %2, %1"
+ sub%.\\t%0, %1, %2
+ rsb%.\\t%0, %2, %1"
[(set_attr "conds" "set")]
)
-(define_insn "decscc"
+(define_expand "decscc"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r")
+ (minus:SI (match_operand:SI 1 "s_register_operand" "0,?r")
+ (match_operator:SI 2 "arm_comparison_operator"
+ [(match_operand 3 "cc_register" "") (const_int 0)])))]
+ "TARGET_32BIT"
+ ""
+)
+
+(define_insn "*arm_decscc"
[(set (match_operand:SI 0 "s_register_operand" "=r,r")
(minus:SI (match_operand:SI 1 "s_register_operand" "0,?r")
(match_operator:SI 2 "arm_comparison_operator"
@@ -1033,7 +1059,7 @@
[(set (match_operand:SF 0 "s_register_operand" "")
(minus:SF (match_operand:SF 1 "arm_float_rhs_operand" "")
(match_operand:SF 2 "arm_float_rhs_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT"
+ "TARGET_32BIT && TARGET_HARD_FLOAT"
"
if (TARGET_MAVERICK)
{
@@ -1048,7 +1074,7 @@
[(set (match_operand:DF 0 "s_register_operand" "")
(minus:DF (match_operand:DF 1 "arm_float_rhs_operand" "")
(match_operand:DF 2 "arm_float_rhs_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT"
+ "TARGET_32BIT && TARGET_HARD_FLOAT"
"
if (TARGET_MAVERICK)
{
@@ -1075,7 +1101,7 @@
[(set (match_operand:SI 0 "s_register_operand" "=&r,&r")
(mult:SI (match_operand:SI 2 "s_register_operand" "r,r")
(match_operand:SI 1 "s_register_operand" "%?r,0")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"mul%?\\t%0, %2, %1"
[(set_attr "insn" "mul")
(set_attr "predicable" "yes")]
@@ -1090,7 +1116,7 @@
[(set (match_operand:SI 0 "register_operand" "=&l,&l,&l")
(mult:SI (match_operand:SI 1 "register_operand" "%l,*h,0")
(match_operand:SI 2 "register_operand" "l,l,l")))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
if (which_alternative < 2)
return \"mov\\t%0, %1\;mul\\t%0, %2\";
@@ -1110,7 +1136,7 @@
(set (match_operand:SI 0 "s_register_operand" "=&r,&r")
(mult:SI (match_dup 2) (match_dup 1)))]
"TARGET_ARM"
- "mul%?s\\t%0, %2, %1"
+ "mul%.\\t%0, %2, %1"
[(set_attr "conds" "set")
(set_attr "insn" "muls")]
)
@@ -1123,7 +1149,7 @@
(const_int 0)))
(clobber (match_scratch:SI 0 "=&r,&r"))]
"TARGET_ARM"
- "mul%?s\\t%0, %2, %1"
+ "mul%.\\t%0, %2, %1"
[(set_attr "conds" "set")
(set_attr "insn" "muls")]
)
@@ -1136,7 +1162,7 @@
(mult:SI (match_operand:SI 2 "s_register_operand" "r,r,r,r")
(match_operand:SI 1 "s_register_operand" "%r,0,r,0"))
(match_operand:SI 3 "s_register_operand" "?r,r,0,0")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"mla%?\\t%0, %2, %1, %3"
[(set_attr "insn" "mla")
(set_attr "predicable" "yes")]
@@ -1154,7 +1180,7 @@
(plus:SI (mult:SI (match_dup 2) (match_dup 1))
(match_dup 3)))]
"TARGET_ARM"
- "mla%?s\\t%0, %2, %1, %3"
+ "mla%.\\t%0, %2, %1, %3"
[(set_attr "conds" "set")
(set_attr "insn" "mlas")]
)
@@ -1169,7 +1195,7 @@
(const_int 0)))
(clobber (match_scratch:SI 0 "=&r,&r,&r,&r"))]
"TARGET_ARM"
- "mla%?s\\t%0, %2, %1, %3"
+ "mla%.\\t%0, %2, %1, %3"
[(set_attr "conds" "set")
(set_attr "insn" "mlas")]
)
@@ -1183,7 +1209,7 @@
(sign_extend:DI (match_operand:SI 2 "s_register_operand" "%r"))
(sign_extend:DI (match_operand:SI 3 "s_register_operand" "r")))
(match_operand:DI 1 "s_register_operand" "0")))]
- "TARGET_ARM && arm_arch3m"
+ "TARGET_32BIT && arm_arch3m"
"smlal%?\\t%Q0, %R0, %3, %2"
[(set_attr "insn" "smlal")
(set_attr "predicable" "yes")]
@@ -1194,7 +1220,7 @@
(mult:DI
(sign_extend:DI (match_operand:SI 1 "s_register_operand" "%r"))
(sign_extend:DI (match_operand:SI 2 "s_register_operand" "r"))))]
- "TARGET_ARM && arm_arch3m"
+ "TARGET_32BIT && arm_arch3m"
"smull%?\\t%Q0, %R0, %1, %2"
[(set_attr "insn" "smull")
(set_attr "predicable" "yes")]
@@ -1205,7 +1231,7 @@
(mult:DI
(zero_extend:DI (match_operand:SI 1 "s_register_operand" "%r"))
(zero_extend:DI (match_operand:SI 2 "s_register_operand" "r"))))]
- "TARGET_ARM && arm_arch3m"
+ "TARGET_32BIT && arm_arch3m"
"umull%?\\t%Q0, %R0, %1, %2"
[(set_attr "insn" "umull")
(set_attr "predicable" "yes")]
@@ -1220,7 +1246,7 @@
(zero_extend:DI (match_operand:SI 2 "s_register_operand" "%r"))
(zero_extend:DI (match_operand:SI 3 "s_register_operand" "r")))
(match_operand:DI 1 "s_register_operand" "0")))]
- "TARGET_ARM && arm_arch3m"
+ "TARGET_32BIT && arm_arch3m"
"umlal%?\\t%Q0, %R0, %3, %2"
[(set_attr "insn" "umlal")
(set_attr "predicable" "yes")]
@@ -1235,7 +1261,7 @@
(sign_extend:DI (match_operand:SI 2 "s_register_operand" "r,r")))
(const_int 32))))
(clobber (match_scratch:SI 3 "=&r,&r"))]
- "TARGET_ARM && arm_arch3m"
+ "TARGET_32BIT && arm_arch3m"
"smull%?\\t%3, %0, %2, %1"
[(set_attr "insn" "smull")
(set_attr "predicable" "yes")]
@@ -1250,7 +1276,7 @@
(zero_extend:DI (match_operand:SI 2 "s_register_operand" "r,r")))
(const_int 32))))
(clobber (match_scratch:SI 3 "=&r,&r"))]
- "TARGET_ARM && arm_arch3m"
+ "TARGET_32BIT && arm_arch3m"
"umull%?\\t%3, %0, %2, %1"
[(set_attr "insn" "umull")
(set_attr "predicable" "yes")]
@@ -1262,7 +1288,7 @@
(match_operand:HI 1 "s_register_operand" "%r"))
(sign_extend:SI
(match_operand:HI 2 "s_register_operand" "r"))))]
- "TARGET_ARM && arm_arch5e"
+ "TARGET_DSP_MULTIPLY"
"smulbb%?\\t%0, %1, %2"
[(set_attr "insn" "smulxy")
(set_attr "predicable" "yes")]
@@ -1275,7 +1301,7 @@
(const_int 16))
(sign_extend:SI
(match_operand:HI 2 "s_register_operand" "r"))))]
- "TARGET_ARM && arm_arch5e"
+ "TARGET_DSP_MULTIPLY"
"smultb%?\\t%0, %1, %2"
[(set_attr "insn" "smulxy")
(set_attr "predicable" "yes")]
@@ -1288,7 +1314,7 @@
(ashiftrt:SI
(match_operand:SI 2 "s_register_operand" "r")
(const_int 16))))]
- "TARGET_ARM && arm_arch5e"
+ "TARGET_DSP_MULTIPLY"
"smulbt%?\\t%0, %1, %2"
[(set_attr "insn" "smulxy")
(set_attr "predicable" "yes")]
@@ -1302,7 +1328,7 @@
(ashiftrt:SI
(match_operand:SI 2 "s_register_operand" "r")
(const_int 16))))]
- "TARGET_ARM && arm_arch5e"
+ "TARGET_DSP_MULTIPLY"
"smultt%?\\t%0, %1, %2"
[(set_attr "insn" "smulxy")
(set_attr "predicable" "yes")]
@@ -1315,7 +1341,7 @@
(match_operand:HI 2 "s_register_operand" "%r"))
(sign_extend:SI
(match_operand:HI 3 "s_register_operand" "r")))))]
- "TARGET_ARM && arm_arch5e"
+ "TARGET_DSP_MULTIPLY"
"smlabb%?\\t%0, %2, %3, %1"
[(set_attr "insn" "smlaxy")
(set_attr "predicable" "yes")]
@@ -1329,7 +1355,7 @@
(match_operand:HI 2 "s_register_operand" "%r"))
(sign_extend:DI
(match_operand:HI 3 "s_register_operand" "r")))))]
- "TARGET_ARM && arm_arch5e"
+ "TARGET_DSP_MULTIPLY"
"smlalbb%?\\t%Q0, %R0, %2, %3"
[(set_attr "insn" "smlalxy")
(set_attr "predicable" "yes")])
@@ -1338,7 +1364,7 @@
[(set (match_operand:SF 0 "s_register_operand" "")
(mult:SF (match_operand:SF 1 "s_register_operand" "")
(match_operand:SF 2 "arm_float_rhs_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT"
+ "TARGET_32BIT && TARGET_HARD_FLOAT"
"
if (TARGET_MAVERICK
&& !cirrus_fp_register (operands[2], SFmode))
@@ -1349,7 +1375,7 @@
[(set (match_operand:DF 0 "s_register_operand" "")
(mult:DF (match_operand:DF 1 "s_register_operand" "")
(match_operand:DF 2 "arm_float_rhs_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT"
+ "TARGET_32BIT && TARGET_HARD_FLOAT"
"
if (TARGET_MAVERICK
&& !cirrus_fp_register (operands[2], DFmode))
@@ -1362,14 +1388,14 @@
[(set (match_operand:SF 0 "s_register_operand" "")
(div:SF (match_operand:SF 1 "arm_float_rhs_operand" "")
(match_operand:SF 2 "arm_float_rhs_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"")
(define_expand "divdf3"
[(set (match_operand:DF 0 "s_register_operand" "")
(div:DF (match_operand:DF 1 "arm_float_rhs_operand" "")
(match_operand:DF 2 "arm_float_rhs_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"")
;; Modulo insns
@@ -1378,14 +1404,14 @@
[(set (match_operand:SF 0 "s_register_operand" "")
(mod:SF (match_operand:SF 1 "s_register_operand" "")
(match_operand:SF 2 "arm_float_rhs_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"")
(define_expand "moddf3"
[(set (match_operand:DF 0 "s_register_operand" "")
(mod:DF (match_operand:DF 1 "s_register_operand" "")
(match_operand:DF 2 "arm_float_rhs_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"")
;; Boolean and,ior,xor insns
@@ -1399,7 +1425,8 @@
(match_operator:DI 6 "logical_binary_operator"
[(match_operand:DI 1 "s_register_operand" "")
(match_operand:DI 2 "s_register_operand" "")]))]
- "TARGET_ARM && reload_completed && ! IS_IWMMXT_REGNUM (REGNO (operands[0]))"
+ "TARGET_32BIT && reload_completed
+ && ! IS_IWMMXT_REGNUM (REGNO (operands[0]))"
[(set (match_dup 0) (match_op_dup:SI 6 [(match_dup 1) (match_dup 2)]))
(set (match_dup 3) (match_op_dup:SI 6 [(match_dup 4) (match_dup 5)]))]
"
@@ -1418,7 +1445,7 @@
(match_operator:DI 6 "logical_binary_operator"
[(sign_extend:DI (match_operand:SI 2 "s_register_operand" ""))
(match_operand:DI 1 "s_register_operand" "")]))]
- "TARGET_ARM && reload_completed"
+ "TARGET_32BIT && reload_completed"
[(set (match_dup 0) (match_op_dup:SI 6 [(match_dup 1) (match_dup 2)]))
(set (match_dup 3) (match_op_dup:SI 6
[(ashiftrt:SI (match_dup 2) (const_int 31))
@@ -1441,7 +1468,7 @@
(ior:DI
(zero_extend:DI (match_operand:SI 2 "s_register_operand" ""))
(match_operand:DI 1 "s_register_operand" "")))]
- "TARGET_ARM && operands[0] != operands[1] && reload_completed"
+ "TARGET_32BIT && operands[0] != operands[1] && reload_completed"
[(set (match_dup 0) (ior:SI (match_dup 1) (match_dup 2)))
(set (match_dup 3) (match_dup 4))]
"
@@ -1460,7 +1487,7 @@
(xor:DI
(zero_extend:DI (match_operand:SI 2 "s_register_operand" ""))
(match_operand:DI 1 "s_register_operand" "")))]
- "TARGET_ARM && operands[0] != operands[1] && reload_completed"
+ "TARGET_32BIT && operands[0] != operands[1] && reload_completed"
[(set (match_dup 0) (xor:SI (match_dup 1) (match_dup 2)))
(set (match_dup 3) (match_dup 4))]
"
@@ -1476,7 +1503,7 @@
[(set (match_operand:DI 0 "s_register_operand" "=&r,&r")
(and:DI (match_operand:DI 1 "s_register_operand" "%0,r")
(match_operand:DI 2 "s_register_operand" "r,r")))]
- "TARGET_ARM && ! TARGET_IWMMXT"
+ "TARGET_32BIT && ! TARGET_IWMMXT"
"#"
[(set_attr "length" "8")]
)
@@ -1486,9 +1513,9 @@
(and:DI (zero_extend:DI
(match_operand:SI 2 "s_register_operand" "r,r"))
(match_operand:DI 1 "s_register_operand" "?r,0")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"#"
- "TARGET_ARM && reload_completed"
+ "TARGET_32BIT && reload_completed"
; The zero extend of operand 2 clears the high word of the output
; operand.
[(set (match_dup 0) (and:SI (match_dup 1) (match_dup 2)))
@@ -1507,7 +1534,7 @@
(and:DI (sign_extend:DI
(match_operand:SI 2 "s_register_operand" "r,r"))
(match_operand:DI 1 "s_register_operand" "?r,0")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"#"
[(set_attr "length" "8")]
)
@@ -1518,7 +1545,7 @@
(match_operand:SI 2 "reg_or_int_operand" "")))]
"TARGET_EITHER"
"
- if (TARGET_ARM)
+ if (TARGET_32BIT)
{
if (GET_CODE (operands[2]) == CONST_INT)
{
@@ -1529,7 +1556,7 @@
DONE;
}
}
- else /* TARGET_THUMB */
+ else /* TARGET_THUMB1 */
{
if (GET_CODE (operands[2]) != CONST_INT)
operands[2] = force_reg (SImode, operands[2]);
@@ -1574,16 +1601,17 @@
"
)
+; ??? Check split length for Thumb-2
(define_insn_and_split "*arm_andsi3_insn"
[(set (match_operand:SI 0 "s_register_operand" "=r,r,r")
(and:SI (match_operand:SI 1 "s_register_operand" "r,r,r")
(match_operand:SI 2 "reg_or_int_operand" "rI,K,?n")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
and%?\\t%0, %1, %2
bic%?\\t%0, %1, #%B2
#"
- "TARGET_ARM
+ "TARGET_32BIT
&& GET_CODE (operands[2]) == CONST_INT
&& !(const_ok_for_arm (INTVAL (operands[2]))
|| const_ok_for_arm (~INTVAL (operands[2])))"
@@ -1597,11 +1625,11 @@
(set_attr "predicable" "yes")]
)
-(define_insn "*thumb_andsi3_insn"
+(define_insn "*thumb1_andsi3_insn"
[(set (match_operand:SI 0 "register_operand" "=l")
(and:SI (match_operand:SI 1 "register_operand" "%0")
(match_operand:SI 2 "register_operand" "l")))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"and\\t%0, %0, %2"
[(set_attr "length" "2")]
)
@@ -1614,10 +1642,10 @@
(const_int 0)))
(set (match_operand:SI 0 "s_register_operand" "=r,r")
(and:SI (match_dup 1) (match_dup 2)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
- and%?s\\t%0, %1, %2
- bic%?s\\t%0, %1, #%B2"
+ and%.\\t%0, %1, %2
+ bic%.\\t%0, %1, #%B2"
[(set_attr "conds" "set")]
)
@@ -1628,10 +1656,10 @@
(match_operand:SI 1 "arm_not_operand" "rI,K"))
(const_int 0)))
(clobber (match_scratch:SI 2 "=X,r"))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
tst%?\\t%0, %1
- bic%?s\\t%2, %0, #%B1"
+ bic%.\\t%2, %0, #%B1"
[(set_attr "conds" "set")]
)
@@ -1642,7 +1670,7 @@
(match_operand 1 "const_int_operand" "n")
(match_operand 2 "const_int_operand" "n"))
(const_int 0)))]
- "TARGET_ARM
+ "TARGET_32BIT
&& (INTVAL (operands[2]) >= 0 && INTVAL (operands[2]) < 32
&& INTVAL (operands[1]) > 0
&& INTVAL (operands[1]) + (INTVAL (operands[2]) & 1) <= 8
@@ -1664,13 +1692,13 @@
(match_operand:SI 3 "const_int_operand" "n"))
(const_int 0)))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_ARM
+ "TARGET_32BIT
&& (INTVAL (operands[3]) >= 0 && INTVAL (operands[3]) < 32
&& INTVAL (operands[2]) > 0
&& INTVAL (operands[2]) + (INTVAL (operands[3]) & 1) <= 8
&& INTVAL (operands[2]) + INTVAL (operands[3]) <= 32)"
"#"
- "TARGET_ARM
+ "TARGET_32BIT
&& (INTVAL (operands[3]) >= 0 && INTVAL (operands[3]) < 32
&& INTVAL (operands[2]) > 0
&& INTVAL (operands[2]) + (INTVAL (operands[3]) & 1) <= 8
@@ -1687,7 +1715,10 @@
<< INTVAL (operands[3]));
"
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set (attr "length")
+ (if_then_else (eq_attr "is_thumb" "yes")
+ (const_int 12)
+ (const_int 8)))]
)
(define_insn_and_split "*ne_zeroextractsi_shifted"
@@ -1786,7 +1817,7 @@
(match_operand:SI 2 "const_int_operand" "")
(match_operand:SI 3 "const_int_operand" "")))
(clobber (match_operand:SI 4 "s_register_operand" ""))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
[(set (match_dup 4) (ashift:SI (match_dup 1) (match_dup 2)))
(set (match_dup 0) (lshiftrt:SI (match_dup 4) (match_dup 3)))]
"{
@@ -1797,6 +1828,7 @@
}"
)
+;; ??? Use Thumb-2 bitfield insert/extract instructions.
(define_split
[(set (match_operand:SI 0 "s_register_operand" "")
(match_operator:SI 1 "shiftable_operator"
@@ -1824,7 +1856,7 @@
(sign_extract:SI (match_operand:SI 1 "s_register_operand" "")
(match_operand:SI 2 "const_int_operand" "")
(match_operand:SI 3 "const_int_operand" "")))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
[(set (match_dup 0) (ashift:SI (match_dup 1) (match_dup 2)))
(set (match_dup 0) (ashiftrt:SI (match_dup 0) (match_dup 3)))]
"{
@@ -1866,6 +1898,7 @@
;;; the value before we insert. This loses some of the advantage of having
;;; this insv pattern, so this pattern needs to be reevalutated.
+; ??? Use Thumb-2 bitfield insert/extract instructions
(define_expand "insv"
[(set (zero_extract:SI (match_operand:SI 0 "s_register_operand" "")
(match_operand:SI 1 "general_operand" "")
@@ -2007,9 +2040,9 @@
[(set (match_operand:DI 0 "s_register_operand" "=&r,&r")
(and:DI (not:DI (match_operand:DI 1 "s_register_operand" "r,0"))
(match_operand:DI 2 "s_register_operand" "0,r")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"#"
- "TARGET_ARM && reload_completed && ! IS_IWMMXT_REGNUM (REGNO (operands[0]))"
+ "TARGET_32BIT && reload_completed && ! IS_IWMMXT_REGNUM (REGNO (operands[0]))"
[(set (match_dup 0) (and:SI (not:SI (match_dup 1)) (match_dup 2)))
(set (match_dup 3) (and:SI (not:SI (match_dup 4)) (match_dup 5)))]
"
@@ -2030,13 +2063,13 @@
(and:DI (not:DI (zero_extend:DI
(match_operand:SI 2 "s_register_operand" "r,r")))
(match_operand:DI 1 "s_register_operand" "0,?r")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
bic%?\\t%Q0, %Q1, %2
#"
; (not (zero_extend ...)) allows us to just copy the high word from
; operand1 to operand0.
- "TARGET_ARM
+ "TARGET_32BIT
&& reload_completed
&& operands[0] != operands[1]"
[(set (match_dup 0) (and:SI (not:SI (match_dup 2)) (match_dup 1)))
@@ -2057,9 +2090,9 @@
(and:DI (not:DI (sign_extend:DI
(match_operand:SI 2 "s_register_operand" "r,r")))
(match_operand:DI 1 "s_register_operand" "0,r")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"#"
- "TARGET_ARM && reload_completed"
+ "TARGET_32BIT && reload_completed"
[(set (match_dup 0) (and:SI (not:SI (match_dup 2)) (match_dup 1)))
(set (match_dup 3) (and:SI (not:SI
(ashiftrt:SI (match_dup 2) (const_int 31)))
@@ -2079,7 +2112,7 @@
[(set (match_operand:SI 0 "s_register_operand" "=r")
(and:SI (not:SI (match_operand:SI 2 "s_register_operand" "r"))
(match_operand:SI 1 "s_register_operand" "r")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"bic%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")]
)
@@ -2088,7 +2121,7 @@
[(set (match_operand:SI 0 "register_operand" "=l")
(and:SI (not:SI (match_operand:SI 1 "register_operand" "l"))
(match_operand:SI 2 "register_operand" "0")))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"bic\\t%0, %0, %1"
[(set_attr "length" "2")]
)
@@ -2116,8 +2149,8 @@
(const_int 0)))
(set (match_operand:SI 0 "s_register_operand" "=r")
(and:SI (not:SI (match_dup 2)) (match_dup 1)))]
- "TARGET_ARM"
- "bic%?s\\t%0, %1, %2"
+ "TARGET_32BIT"
+ "bic%.\\t%0, %1, %2"
[(set_attr "conds" "set")]
)
@@ -2128,8 +2161,8 @@
(match_operand:SI 1 "s_register_operand" "r"))
(const_int 0)))
(clobber (match_scratch:SI 0 "=r"))]
- "TARGET_ARM"
- "bic%?s\\t%0, %1, %2"
+ "TARGET_32BIT"
+ "bic%.\\t%0, %1, %2"
[(set_attr "conds" "set")]
)
@@ -2137,7 +2170,7 @@
[(set (match_operand:DI 0 "s_register_operand" "=&r,&r")
(ior:DI (match_operand:DI 1 "s_register_operand" "%0,r")
(match_operand:DI 2 "s_register_operand" "r,r")))]
- "TARGET_ARM && ! TARGET_IWMMXT"
+ "TARGET_32BIT && ! TARGET_IWMMXT"
"#"
[(set_attr "length" "8")
(set_attr "predicable" "yes")]
@@ -2148,7 +2181,7 @@
(ior:DI (zero_extend:DI
(match_operand:SI 2 "s_register_operand" "r,r"))
(match_operand:DI 1 "s_register_operand" "0,?r")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
orr%?\\t%Q0, %Q1, %2
#"
@@ -2161,7 +2194,7 @@
(ior:DI (sign_extend:DI
(match_operand:SI 2 "s_register_operand" "r,r"))
(match_operand:DI 1 "s_register_operand" "?r,0")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"#"
[(set_attr "length" "8")
(set_attr "predicable" "yes")]
@@ -2175,14 +2208,14 @@
"
if (GET_CODE (operands[2]) == CONST_INT)
{
- if (TARGET_ARM)
+ if (TARGET_32BIT)
{
arm_split_constant (IOR, SImode, NULL_RTX,
INTVAL (operands[2]), operands[0], operands[1],
optimize && !no_new_pseudos);
DONE;
}
- else /* TARGET_THUMB */
+ else /* TARGET_THUMB1 */
operands [2] = force_reg (SImode, operands [2]);
}
"
@@ -2192,11 +2225,11 @@
[(set (match_operand:SI 0 "s_register_operand" "=r,r")
(ior:SI (match_operand:SI 1 "s_register_operand" "r,r")
(match_operand:SI 2 "reg_or_int_operand" "rI,?n")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
orr%?\\t%0, %1, %2
#"
- "TARGET_ARM
+ "TARGET_32BIT
&& GET_CODE (operands[2]) == CONST_INT
&& !const_ok_for_arm (INTVAL (operands[2]))"
[(clobber (const_int 0))]
@@ -2209,11 +2242,11 @@
(set_attr "predicable" "yes")]
)
-(define_insn "*thumb_iorsi3"
+(define_insn "*thumb1_iorsi3"
[(set (match_operand:SI 0 "register_operand" "=l")
(ior:SI (match_operand:SI 1 "register_operand" "%0")
(match_operand:SI 2 "register_operand" "l")))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"orr\\t%0, %0, %2"
[(set_attr "length" "2")]
)
@@ -2223,7 +2256,7 @@
(set (match_operand:SI 0 "arm_general_register_operand" "")
(ior:SI (match_operand:SI 1 "arm_general_register_operand" "")
(match_operand:SI 2 "const_int_operand" "")))]
- "TARGET_ARM
+ "TARGET_32BIT
&& !const_ok_for_arm (INTVAL (operands[2]))
&& const_ok_for_arm (~INTVAL (operands[2]))"
[(set (match_dup 3) (match_dup 2))
@@ -2238,8 +2271,8 @@
(const_int 0)))
(set (match_operand:SI 0 "s_register_operand" "=r")
(ior:SI (match_dup 1) (match_dup 2)))]
- "TARGET_ARM"
- "orr%?s\\t%0, %1, %2"
+ "TARGET_32BIT"
+ "orr%.\\t%0, %1, %2"
[(set_attr "conds" "set")]
)
@@ -2249,8 +2282,8 @@
(match_operand:SI 2 "arm_rhs_operand" "rI"))
(const_int 0)))
(clobber (match_scratch:SI 0 "=r"))]
- "TARGET_ARM"
- "orr%?s\\t%0, %1, %2"
+ "TARGET_32BIT"
+ "orr%.\\t%0, %1, %2"
[(set_attr "conds" "set")]
)
@@ -2258,7 +2291,7 @@
[(set (match_operand:DI 0 "s_register_operand" "=&r,&r")
(xor:DI (match_operand:DI 1 "s_register_operand" "%0,r")
(match_operand:DI 2 "s_register_operand" "r,r")))]
- "TARGET_ARM && !TARGET_IWMMXT"
+ "TARGET_32BIT && !TARGET_IWMMXT"
"#"
[(set_attr "length" "8")
(set_attr "predicable" "yes")]
@@ -2269,7 +2302,7 @@
(xor:DI (zero_extend:DI
(match_operand:SI 2 "s_register_operand" "r,r"))
(match_operand:DI 1 "s_register_operand" "0,?r")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
eor%?\\t%Q0, %Q1, %2
#"
@@ -2282,7 +2315,7 @@
(xor:DI (sign_extend:DI
(match_operand:SI 2 "s_register_operand" "r,r"))
(match_operand:DI 1 "s_register_operand" "?r,0")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"#"
[(set_attr "length" "8")
(set_attr "predicable" "yes")]
@@ -2293,7 +2326,7 @@
(xor:SI (match_operand:SI 1 "s_register_operand" "")
(match_operand:SI 2 "arm_rhs_operand" "")))]
"TARGET_EITHER"
- "if (TARGET_THUMB)
+ "if (TARGET_THUMB1)
if (GET_CODE (operands[2]) == CONST_INT)
operands[2] = force_reg (SImode, operands[2]);
"
@@ -2303,16 +2336,16 @@
[(set (match_operand:SI 0 "s_register_operand" "=r")
(xor:SI (match_operand:SI 1 "s_register_operand" "r")
(match_operand:SI 2 "arm_rhs_operand" "rI")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"eor%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")]
)
-(define_insn "*thumb_xorsi3"
+(define_insn "*thumb1_xorsi3"
[(set (match_operand:SI 0 "register_operand" "=l")
(xor:SI (match_operand:SI 1 "register_operand" "%0")
(match_operand:SI 2 "register_operand" "l")))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"eor\\t%0, %0, %2"
[(set_attr "length" "2")]
)
@@ -2324,8 +2357,8 @@
(const_int 0)))
(set (match_operand:SI 0 "s_register_operand" "=r")
(xor:SI (match_dup 1) (match_dup 2)))]
- "TARGET_ARM"
- "eor%?s\\t%0, %1, %2"
+ "TARGET_32BIT"
+ "eor%.\\t%0, %1, %2"
[(set_attr "conds" "set")]
)
@@ -2334,7 +2367,7 @@
(compare:CC_NOOV (xor:SI (match_operand:SI 0 "s_register_operand" "r")
(match_operand:SI 1 "arm_rhs_operand" "rI"))
(const_int 0)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"teq%?\\t%0, %1"
[(set_attr "conds" "set")]
)
@@ -2349,7 +2382,7 @@
(not:SI (match_operand:SI 2 "arm_rhs_operand" "")))
(match_operand:SI 3 "arm_rhs_operand" "")))
(clobber (match_operand:SI 4 "s_register_operand" ""))]
- "TARGET_ARM"
+ "TARGET_32BIT"
[(set (match_dup 4) (and:SI (ior:SI (match_dup 1) (match_dup 2))
(not:SI (match_dup 3))))
(set (match_dup 0) (not:SI (match_dup 4)))]
@@ -2361,12 +2394,15 @@
(and:SI (ior:SI (match_operand:SI 1 "s_register_operand" "r,r,0")
(match_operand:SI 2 "arm_rhs_operand" "rI,0,rI"))
(not:SI (match_operand:SI 3 "arm_rhs_operand" "rI,rI,rI"))))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"orr%?\\t%0, %1, %2\;bic%?\\t%0, %0, %3"
[(set_attr "length" "8")
+ (set_attr "ce_count" "2")
(set_attr "predicable" "yes")]
)
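When this pattern is predicated under Thumb-2, both constituent instructions need slots in the covering IT block, which is what the new ce_count attribute (declared earlier in this patch) tracks. An illustrative eq-conditional rendering:

    itt    eq              @ two slots for the two machine insns
    orreq  r0, r1, r2
    biceq  r0, r0, r3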
+; ??? Are these four splitters still beneficial when the Thumb-2 bitfield
+; insns are available?
(define_split
[(set (match_operand:SI 0 "s_register_operand" "")
(match_operator:SI 1 "logical_binary_operator"
@@ -2378,7 +2414,7 @@
(match_operand:SI 6 "const_int_operand" ""))
(match_operand:SI 7 "s_register_operand" "")])]))
(clobber (match_operand:SI 8 "s_register_operand" ""))]
- "TARGET_ARM
+ "TARGET_32BIT
&& GET_CODE (operands[1]) == GET_CODE (operands[9])
&& INTVAL (operands[3]) == 32 - INTVAL (operands[6])"
[(set (match_dup 8)
@@ -2404,7 +2440,7 @@
(match_operand:SI 3 "const_int_operand" "")
(match_operand:SI 4 "const_int_operand" ""))]))
(clobber (match_operand:SI 8 "s_register_operand" ""))]
- "TARGET_ARM
+ "TARGET_32BIT
&& GET_CODE (operands[1]) == GET_CODE (operands[9])
&& INTVAL (operands[3]) == 32 - INTVAL (operands[6])"
[(set (match_dup 8)
@@ -2430,7 +2466,7 @@
(match_operand:SI 6 "const_int_operand" ""))
(match_operand:SI 7 "s_register_operand" "")])]))
(clobber (match_operand:SI 8 "s_register_operand" ""))]
- "TARGET_ARM
+ "TARGET_32BIT
&& GET_CODE (operands[1]) == GET_CODE (operands[9])
&& INTVAL (operands[3]) == 32 - INTVAL (operands[6])"
[(set (match_dup 8)
@@ -2456,7 +2492,7 @@
(match_operand:SI 3 "const_int_operand" "")
(match_operand:SI 4 "const_int_operand" ""))]))
(clobber (match_operand:SI 8 "s_register_operand" ""))]
- "TARGET_ARM
+ "TARGET_32BIT
&& GET_CODE (operands[1]) == GET_CODE (operands[9])
&& INTVAL (operands[3]) == 32 - INTVAL (operands[6])"
[(set (match_dup 8)
@@ -2480,7 +2516,7 @@
(smax:SI (match_operand:SI 1 "s_register_operand" "")
(match_operand:SI 2 "arm_rhs_operand" "")))
(clobber (reg:CC CC_REGNUM))])]
- "TARGET_ARM"
+ "TARGET_32BIT"
"
if (operands[2] == const0_rtx || operands[2] == constm1_rtx)
{
@@ -2496,7 +2532,7 @@
[(set (match_operand:SI 0 "s_register_operand" "=r")
(smax:SI (match_operand:SI 1 "s_register_operand" "r")
(const_int 0)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"bic%?\\t%0, %1, %1, asr #31"
[(set_attr "predicable" "yes")]
)
@@ -2505,12 +2541,12 @@
[(set (match_operand:SI 0 "s_register_operand" "=r")
(smax:SI (match_operand:SI 1 "s_register_operand" "r")
(const_int -1)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"orr%?\\t%0, %1, %1, asr #31"
[(set_attr "predicable" "yes")]
)
-(define_insn "*smax_insn"
+(define_insn "*arm_smax_insn"
[(set (match_operand:SI 0 "s_register_operand" "=r,r")
(smax:SI (match_operand:SI 1 "s_register_operand" "%0,?r")
(match_operand:SI 2 "arm_rhs_operand" "rI,rI")))
@@ -2529,7 +2565,7 @@
(smin:SI (match_operand:SI 1 "s_register_operand" "")
(match_operand:SI 2 "arm_rhs_operand" "")))
(clobber (reg:CC CC_REGNUM))])]
- "TARGET_ARM"
+ "TARGET_32BIT"
"
if (operands[2] == const0_rtx)
{
@@ -2545,12 +2581,12 @@
[(set (match_operand:SI 0 "s_register_operand" "=r")
(smin:SI (match_operand:SI 1 "s_register_operand" "r")
(const_int 0)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"and%?\\t%0, %1, %1, asr #31"
[(set_attr "predicable" "yes")]
)
-(define_insn "*smin_insn"
+(define_insn "*arm_smin_insn"
[(set (match_operand:SI 0 "s_register_operand" "=r,r")
(smin:SI (match_operand:SI 1 "s_register_operand" "%0,?r")
(match_operand:SI 2 "arm_rhs_operand" "rI,rI")))
@@ -2563,7 +2599,17 @@
(set_attr "length" "8,12")]
)
-(define_insn "umaxsi3"
+(define_expand "umaxsi3"
+ [(parallel [
+ (set (match_operand:SI 0 "s_register_operand" "")
+ (umax:SI (match_operand:SI 1 "s_register_operand" "")
+ (match_operand:SI 2 "arm_rhs_operand" "")))
+ (clobber (reg:CC CC_REGNUM))])]
+ "TARGET_32BIT"
+ ""
+)
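+; umaxsi3 (and uminsi3 below) are now expanders and the ARM insns are
+; renamed *arm_umaxsi3/*arm_uminsi3, presumably so that thumb2.md can
+; supply its own implementations of the named patterns.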
+
+(define_insn "*arm_umaxsi3"
[(set (match_operand:SI 0 "s_register_operand" "=r,r,r")
(umax:SI (match_operand:SI 1 "s_register_operand" "0,r,?r")
(match_operand:SI 2 "arm_rhs_operand" "rI,0,rI")))
@@ -2577,7 +2623,17 @@
(set_attr "length" "8,8,12")]
)
-(define_insn "uminsi3"
+(define_expand "uminsi3"
+ [(parallel [
+ (set (match_operand:SI 0 "s_register_operand" "")
+ (umin:SI (match_operand:SI 1 "s_register_operand" "")
+ (match_operand:SI 2 "arm_rhs_operand" "")))
+ (clobber (reg:CC CC_REGNUM))])]
+ "TARGET_32BIT"
+ ""
+)
+
+(define_insn "*arm_uminsi3"
[(set (match_operand:SI 0 "s_register_operand" "=r,r,r")
(umin:SI (match_operand:SI 1 "s_register_operand" "0,r,?r")
(match_operand:SI 2 "arm_rhs_operand" "rI,0,rI")))
@@ -2597,17 +2653,22 @@
[(match_operand:SI 1 "s_register_operand" "r")
(match_operand:SI 2 "s_register_operand" "r")]))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"*
operands[3] = gen_rtx_fmt_ee (minmax_code (operands[3]), SImode,
operands[1], operands[2]);
output_asm_insn (\"cmp\\t%1, %2\", operands);
+ if (TARGET_THUMB2)
+ output_asm_insn (\"ite\t%d3\", operands);
output_asm_insn (\"str%d3\\t%1, %0\", operands);
output_asm_insn (\"str%D3\\t%2, %0\", operands);
return \"\";
"
[(set_attr "conds" "clob")
- (set_attr "length" "12")
+ (set (attr "length")
+ (if_then_else (eq_attr "is_thumb" "yes")
+ (const_int 14)
+ (const_int 12)))
(set_attr "type" "store1")]
)
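; On Thumb-2 the two conditional stores must sit in an IT block, hence
; the extra "ite" and the larger length.  For smax this emits, roughly:
;   cmp   r1, r2
;   ite   ge
;   strge r1, [r3]
;   strlt r2, [r3]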
@@ -2621,22 +2682,38 @@
(match_operand:SI 3 "arm_rhs_operand" "rI,rI")])
(match_operand:SI 1 "s_register_operand" "0,?r")]))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_ARM && !arm_eliminable_register (operands[1])"
+ "TARGET_32BIT && !arm_eliminable_register (operands[1])"
"*
{
enum rtx_code code = GET_CODE (operands[4]);
+ bool need_else;
+
+ if (which_alternative != 0 || operands[3] != const0_rtx
+ || (code != PLUS && code != MINUS && code != IOR && code != XOR))
+ need_else = true;
+ else
+ need_else = false;
operands[5] = gen_rtx_fmt_ee (minmax_code (operands[5]), SImode,
operands[2], operands[3]);
output_asm_insn (\"cmp\\t%2, %3\", operands);
+ if (TARGET_THUMB2)
+ {
+ if (need_else)
+ output_asm_insn (\"ite\\t%d5\", operands);
+ else
+ output_asm_insn (\"it\\t%d5\", operands);
+ }
output_asm_insn (\"%i4%d5\\t%0, %1, %2\", operands);
- if (which_alternative != 0 || operands[3] != const0_rtx
- || (code != PLUS && code != MINUS && code != IOR && code != XOR))
+ if (need_else)
output_asm_insn (\"%i4%D5\\t%0, %1, %3\", operands);
return \"\";
}"
[(set_attr "conds" "clob")
- (set_attr "length" "12")]
+ (set (attr "length")
+ (if_then_else (eq_attr "is_thumb" "yes")
+ (const_int 14)
+ (const_int 12)))]
)
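; When %0 is tied to %1 (alternative 0) and the operation against
; operand 3 == 0 is PLUS, MINUS, IOR or XOR, the else arm would just
; recompute %0 = %1, so a plain "it" covers the single conditional
; insn; otherwise both arms execute under an "ite".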
@@ -2646,7 +2723,7 @@
[(set (match_operand:DI 0 "s_register_operand" "")
(ashift:DI (match_operand:DI 1 "s_register_operand" "")
(match_operand:SI 2 "reg_or_int_operand" "")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"
if (GET_CODE (operands[2]) == CONST_INT)
{
@@ -2671,7 +2748,7 @@
(ashift:DI (match_operand:DI 1 "s_register_operand" "?r,0")
(const_int 1)))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"movs\\t%Q0, %Q1, asl #1\;adc\\t%R0, %R1, %R1"
[(set_attr "conds" "clob")
(set_attr "length" "8")]
@@ -2692,11 +2769,11 @@
"
)
-(define_insn "*thumb_ashlsi3"
+(define_insn "*thumb1_ashlsi3"
[(set (match_operand:SI 0 "register_operand" "=l,l")
(ashift:SI (match_operand:SI 1 "register_operand" "l,0")
(match_operand:SI 2 "nonmemory_operand" "N,l")))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"lsl\\t%0, %1, %2"
[(set_attr "length" "2")]
)
@@ -2705,7 +2782,7 @@
[(set (match_operand:DI 0 "s_register_operand" "")
(ashiftrt:DI (match_operand:DI 1 "s_register_operand" "")
(match_operand:SI 2 "reg_or_int_operand" "")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"
if (GET_CODE (operands[2]) == CONST_INT)
{
@@ -2730,7 +2807,7 @@
(ashiftrt:DI (match_operand:DI 1 "s_register_operand" "?r,0")
(const_int 1)))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"movs\\t%R0, %R1, asr #1\;mov\\t%Q0, %Q1, rrx"
[(set_attr "conds" "clob")
(set_attr "length" "8")]
@@ -2748,11 +2825,11 @@
"
)
-(define_insn "*thumb_ashrsi3"
+(define_insn "*thumb1_ashrsi3"
[(set (match_operand:SI 0 "register_operand" "=l,l")
(ashiftrt:SI (match_operand:SI 1 "register_operand" "l,0")
(match_operand:SI 2 "nonmemory_operand" "N,l")))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"asr\\t%0, %1, %2"
[(set_attr "length" "2")]
)
@@ -2761,7 +2838,7 @@
[(set (match_operand:DI 0 "s_register_operand" "")
(lshiftrt:DI (match_operand:DI 1 "s_register_operand" "")
(match_operand:SI 2 "reg_or_int_operand" "")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"
if (GET_CODE (operands[2]) == CONST_INT)
{
@@ -2786,7 +2863,7 @@
(lshiftrt:DI (match_operand:DI 1 "s_register_operand" "?r,0")
(const_int 1)))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"movs\\t%R0, %R1, lsr #1\;mov\\t%Q0, %Q1, rrx"
[(set_attr "conds" "clob")
(set_attr "length" "8")]
@@ -2807,11 +2884,11 @@
"
)
-(define_insn "*thumb_lshrsi3"
+(define_insn "*thumb1_lshrsi3"
[(set (match_operand:SI 0 "register_operand" "=l,l")
(lshiftrt:SI (match_operand:SI 1 "register_operand" "l,0")
(match_operand:SI 2 "nonmemory_operand" "N,l")))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"lsr\\t%0, %1, %2"
[(set_attr "length" "2")]
)
@@ -2820,7 +2897,7 @@
[(set (match_operand:SI 0 "s_register_operand" "")
(rotatert:SI (match_operand:SI 1 "s_register_operand" "")
(match_operand:SI 2 "reg_or_int_operand" "")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"
if (GET_CODE (operands[2]) == CONST_INT)
operands[2] = GEN_INT ((32 - INTVAL (operands[2])) % 32);
@@ -2839,13 +2916,13 @@
(match_operand:SI 2 "arm_rhs_operand" "")))]
"TARGET_EITHER"
"
- if (TARGET_ARM)
+ if (TARGET_32BIT)
{
if (GET_CODE (operands[2]) == CONST_INT
&& ((unsigned HOST_WIDE_INT) INTVAL (operands[2])) > 31)
operands[2] = GEN_INT (INTVAL (operands[2]) % 32);
}
- else /* TARGET_THUMB */
+ else /* TARGET_THUMB1 */
{
if (GET_CODE (operands [2]) == CONST_INT)
operands [2] = force_reg (SImode, operands[2]);
@@ -2853,11 +2930,11 @@
"
)
-(define_insn "*thumb_rotrsi3"
+(define_insn "*thumb1_rotrsi3"
[(set (match_operand:SI 0 "register_operand" "=l")
(rotatert:SI (match_operand:SI 1 "register_operand" "0")
(match_operand:SI 2 "register_operand" "l")))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"ror\\t%0, %0, %2"
[(set_attr "length" "2")]
)
@@ -2867,8 +2944,8 @@
(match_operator:SI 3 "shift_operator"
[(match_operand:SI 1 "s_register_operand" "r")
(match_operand:SI 2 "reg_or_int_operand" "rM")]))]
- "TARGET_ARM"
- "mov%?\\t%0, %1%S3"
+ "TARGET_32BIT"
+ "* return arm_output_shift(operands, 0);"
[(set_attr "predicable" "yes")
(set_attr "shift" "1")
(set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "")
@@ -2884,8 +2961,8 @@
(const_int 0)))
(set (match_operand:SI 0 "s_register_operand" "=r")
(match_op_dup 3 [(match_dup 1) (match_dup 2)]))]
- "TARGET_ARM"
- "mov%?s\\t%0, %1%S3"
+ "TARGET_32BIT"
+ "* return arm_output_shift(operands, 1);"
[(set_attr "conds" "set")
(set_attr "shift" "1")
(set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "")
@@ -2900,13 +2977,13 @@
(match_operand:SI 2 "arm_rhs_operand" "rM")])
(const_int 0)))
(clobber (match_scratch:SI 0 "=r"))]
- "TARGET_ARM"
- "mov%?s\\t%0, %1%S3"
+ "TARGET_32BIT"
+ "* return arm_output_shift(operands, 1);"
[(set_attr "conds" "set")
(set_attr "shift" "1")]
)
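; arm_output_shift (new in this patch) picks the spelling at output
; time: "mov r0, r1, lsl #2"-style shifter operands in divided ARM
; syntax, or the separate lsl/lsr/asr/ror mnemonics of unified
; (Thumb-2) assembly.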
-(define_insn "*notsi_shiftsi"
+(define_insn "*arm_notsi_shiftsi"
[(set (match_operand:SI 0 "s_register_operand" "=r")
(not:SI (match_operator:SI 3 "shift_operator"
[(match_operand:SI 1 "s_register_operand" "r")
@@ -2920,7 +2997,7 @@
(const_string "alu_shift_reg")))]
)
-(define_insn "*notsi_shiftsi_compare0"
+(define_insn "*arm_notsi_shiftsi_compare0"
[(set (reg:CC_NOOV CC_REGNUM)
(compare:CC_NOOV (not:SI (match_operator:SI 3 "shift_operator"
[(match_operand:SI 1 "s_register_operand" "r")
@@ -2929,7 +3006,7 @@
(set (match_operand:SI 0 "s_register_operand" "=r")
(not:SI (match_op_dup 3 [(match_dup 1) (match_dup 2)])))]
"TARGET_ARM"
- "mvn%?s\\t%0, %1%S3"
+ "mvn%.\\t%0, %1%S3"
[(set_attr "conds" "set")
(set_attr "shift" "1")
(set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "")
@@ -2937,7 +3014,7 @@
(const_string "alu_shift_reg")))]
)
-(define_insn "*not_shiftsi_compare0_scratch"
+(define_insn "*arm_not_shiftsi_compare0_scratch"
[(set (reg:CC_NOOV CC_REGNUM)
(compare:CC_NOOV (not:SI (match_operator:SI 3 "shift_operator"
[(match_operand:SI 1 "s_register_operand" "r")
@@ -2945,7 +3022,7 @@
(const_int 0)))
(clobber (match_scratch:SI 0 "=r"))]
"TARGET_ARM"
- "mvn%?s\\t%0, %1%S3"
+ "mvn%.\\t%0, %1%S3"
[(set_attr "conds" "set")
(set_attr "shift" "1")
(set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "")
@@ -2963,7 +3040,7 @@
(set (match_operand:SI 0 "register_operand" "")
(lshiftrt:SI (match_dup 4)
(match_operand:SI 3 "const_int_operand" "")))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"
{
HOST_WIDE_INT lshift = 32 - INTVAL (operands[2]) - INTVAL (operands[3]);
@@ -2992,7 +3069,7 @@
(clobber (reg:CC CC_REGNUM))])]
"TARGET_EITHER"
"
- if (TARGET_THUMB)
+ if (TARGET_THUMB1)
{
if (GET_CODE (operands[1]) != REG)
operands[1] = force_reg (SImode, operands[1]);
@@ -3012,11 +3089,11 @@
(set_attr "length" "8")]
)
-(define_insn "*thumb_negdi2"
+(define_insn "*thumb1_negdi2"
[(set (match_operand:DI 0 "register_operand" "=&l")
(neg:DI (match_operand:DI 1 "register_operand" "l")))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"mov\\t%R0, #0\;neg\\t%Q0, %Q1\;sbc\\t%R0, %R1"
[(set_attr "length" "6")]
)
@@ -3031,15 +3108,15 @@
(define_insn "*arm_negsi2"
[(set (match_operand:SI 0 "s_register_operand" "=r")
(neg:SI (match_operand:SI 1 "s_register_operand" "r")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"rsb%?\\t%0, %1, #0"
[(set_attr "predicable" "yes")]
)
-(define_insn "*thumb_negsi2"
+(define_insn "*thumb1_negsi2"
[(set (match_operand:SI 0 "register_operand" "=l")
(neg:SI (match_operand:SI 1 "register_operand" "l")))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"neg\\t%0, %1"
[(set_attr "length" "2")]
)
@@ -3047,14 +3124,14 @@
(define_expand "negsf2"
[(set (match_operand:SF 0 "s_register_operand" "")
(neg:SF (match_operand:SF 1 "s_register_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
""
)
(define_expand "negdf2"
[(set (match_operand:DF 0 "s_register_operand" "")
(neg:DF (match_operand:DF 1 "s_register_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"")
;; abssi2 doesn't really clobber the condition codes if a different register
@@ -3069,7 +3146,7 @@
(clobber (match_dup 2))])]
"TARGET_EITHER"
"
- if (TARGET_THUMB)
+ if (TARGET_THUMB1)
operands[2] = gen_rtx_SCRATCH (SImode);
else
operands[2] = gen_rtx_REG (CCmode, CC_REGNUM);
@@ -3089,13 +3166,13 @@
(set_attr "length" "8")]
)
-(define_insn_and_split "*thumb_abssi2"
+(define_insn_and_split "*thumb1_abssi2"
[(set (match_operand:SI 0 "s_register_operand" "=l")
(abs:SI (match_operand:SI 1 "s_register_operand" "l")))
(clobber (match_scratch:SI 2 "=&l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"#"
- "TARGET_THUMB && reload_completed"
+ "TARGET_THUMB1 && reload_completed"
[(set (match_dup 2) (ashiftrt:SI (match_dup 1) (const_int 31)))
(set (match_dup 0) (plus:SI (match_dup 1) (match_dup 2)))
(set (match_dup 0) (xor:SI (match_dup 0) (match_dup 2)))]
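;; The split above is the branchless abs:
;;   abs (x) = (x + (x >> 31)) ^ (x >> 31)
;; where the arithmetic shift leaves 0 or -1 in the scratch.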
@@ -3117,13 +3194,13 @@
(set_attr "length" "8")]
)
-(define_insn_and_split "*thumb_neg_abssi2"
+(define_insn_and_split "*thumb1_neg_abssi2"
[(set (match_operand:SI 0 "s_register_operand" "=l")
(neg:SI (abs:SI (match_operand:SI 1 "s_register_operand" "l"))))
(clobber (match_scratch:SI 2 "=&l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"#"
- "TARGET_THUMB && reload_completed"
+ "TARGET_THUMB1 && reload_completed"
[(set (match_dup 2) (ashiftrt:SI (match_dup 1) (const_int 31)))
(set (match_dup 0) (minus:SI (match_dup 2) (match_dup 1)))
(set (match_dup 0) (xor:SI (match_dup 0) (match_dup 2)))]
@@ -3134,33 +3211,33 @@
(define_expand "abssf2"
[(set (match_operand:SF 0 "s_register_operand" "")
(abs:SF (match_operand:SF 1 "s_register_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT"
+ "TARGET_32BIT && TARGET_HARD_FLOAT"
"")
(define_expand "absdf2"
[(set (match_operand:DF 0 "s_register_operand" "")
(abs:DF (match_operand:DF 1 "s_register_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT"
+ "TARGET_32BIT && TARGET_HARD_FLOAT"
"")
(define_expand "sqrtsf2"
[(set (match_operand:SF 0 "s_register_operand" "")
(sqrt:SF (match_operand:SF 1 "s_register_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"")
(define_expand "sqrtdf2"
[(set (match_operand:DF 0 "s_register_operand" "")
(sqrt:DF (match_operand:DF 1 "s_register_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"")
(define_insn_and_split "one_cmpldi2"
[(set (match_operand:DI 0 "s_register_operand" "=&r,&r")
(not:DI (match_operand:DI 1 "s_register_operand" "?r,0")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"#"
- "TARGET_ARM && reload_completed"
+ "TARGET_32BIT && reload_completed"
[(set (match_dup 0) (not:SI (match_dup 1)))
(set (match_dup 2) (not:SI (match_dup 3)))]
"
@@ -3184,15 +3261,15 @@
(define_insn "*arm_one_cmplsi2"
[(set (match_operand:SI 0 "s_register_operand" "=r")
(not:SI (match_operand:SI 1 "s_register_operand" "r")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"mvn%?\\t%0, %1"
[(set_attr "predicable" "yes")]
)
-(define_insn "*thumb_one_cmplsi2"
+(define_insn "*thumb1_one_cmplsi2"
[(set (match_operand:SI 0 "register_operand" "=l")
(not:SI (match_operand:SI 1 "register_operand" "l")))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"mvn\\t%0, %1"
[(set_attr "length" "2")]
)
@@ -3203,8 +3280,8 @@
(const_int 0)))
(set (match_operand:SI 0 "s_register_operand" "=r")
(not:SI (match_dup 1)))]
- "TARGET_ARM"
- "mvn%?s\\t%0, %1"
+ "TARGET_32BIT"
+ "mvn%.\\t%0, %1"
[(set_attr "conds" "set")]
)
@@ -3213,8 +3290,8 @@
(compare:CC_NOOV (not:SI (match_operand:SI 1 "s_register_operand" "r"))
(const_int 0)))
(clobber (match_scratch:SI 0 "=r"))]
- "TARGET_ARM"
- "mvn%?s\\t%0, %1"
+ "TARGET_32BIT"
+ "mvn%.\\t%0, %1"
[(set_attr "conds" "set")]
)
@@ -3223,7 +3300,7 @@
(define_expand "floatsisf2"
[(set (match_operand:SF 0 "s_register_operand" "")
(float:SF (match_operand:SI 1 "s_register_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT"
+ "TARGET_32BIT && TARGET_HARD_FLOAT"
"
if (TARGET_MAVERICK)
{
@@ -3235,7 +3312,7 @@
(define_expand "floatsidf2"
[(set (match_operand:DF 0 "s_register_operand" "")
(float:DF (match_operand:SI 1 "s_register_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT"
+ "TARGET_32BIT && TARGET_HARD_FLOAT"
"
if (TARGET_MAVERICK)
{
@@ -3247,7 +3324,7 @@
(define_expand "fix_truncsfsi2"
[(set (match_operand:SI 0 "s_register_operand" "")
(fix:SI (fix:SF (match_operand:SF 1 "s_register_operand" ""))))]
- "TARGET_ARM && TARGET_HARD_FLOAT"
+ "TARGET_32BIT && TARGET_HARD_FLOAT"
"
if (TARGET_MAVERICK)
{
@@ -3263,7 +3340,7 @@
(define_expand "fix_truncdfsi2"
[(set (match_operand:SI 0 "s_register_operand" "")
(fix:SI (fix:DF (match_operand:DF 1 "s_register_operand" ""))))]
- "TARGET_ARM && TARGET_HARD_FLOAT"
+ "TARGET_32BIT && TARGET_HARD_FLOAT"
"
if (TARGET_MAVERICK)
{
@@ -3280,13 +3357,20 @@
[(set (match_operand:SF 0 "s_register_operand" "")
(float_truncate:SF
(match_operand:DF 1 "s_register_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT"
+ "TARGET_32BIT && TARGET_HARD_FLOAT"
""
)
;; Zero and sign extension instructions.
-(define_insn "zero_extendsidi2"
+(define_expand "zero_extendsidi2"
+ [(set (match_operand:DI 0 "s_register_operand" "")
+ (zero_extend:DI (match_operand:SI 1 "s_register_operand" "")))]
+ "TARGET_32BIT"
+ ""
+)
+
+(define_insn "*arm_zero_extendsidi2"
[(set (match_operand:DI 0 "s_register_operand" "=r")
(zero_extend:DI (match_operand:SI 1 "s_register_operand" "r")))]
"TARGET_ARM"
@@ -3300,13 +3384,20 @@
(set_attr "predicable" "yes")]
)
-(define_insn "zero_extendqidi2"
+(define_expand "zero_extendqidi2"
+ [(set (match_operand:DI 0 "s_register_operand" "")
+ (zero_extend:DI (match_operand:QI 1 "nonimmediate_operand" "")))]
+ "TARGET_32BIT"
+ ""
+)
+
+(define_insn "*arm_zero_extendqidi2"
[(set (match_operand:DI 0 "s_register_operand" "=r,r")
(zero_extend:DI (match_operand:QI 1 "nonimmediate_operand" "r,m")))]
"TARGET_ARM"
"@
and%?\\t%Q0, %1, #255\;mov%?\\t%R0, #0
- ldr%?b\\t%Q0, %1\;mov%?\\t%R0, #0"
+ ldr%(b%)\\t%Q0, %1\;mov%?\\t%R0, #0"
[(set_attr "length" "8")
(set_attr "predicable" "yes")
(set_attr "type" "*,load_byte")
@@ -3314,7 +3405,14 @@
(set_attr "neg_pool_range" "*,4084")]
)
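; The new %(...%) brackets mark the mnemonic syllable that trades
; places with the condition: divided syntax writes "ldreqb" while
; unified syntax writes "ldrbeq"; plain %? stays where there is no
; suffix to reorder.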
-(define_insn "extendsidi2"
+(define_expand "extendsidi2"
+ [(set (match_operand:DI 0 "s_register_operand" "")
+ (sign_extend:DI (match_operand:SI 1 "s_register_operand" "")))]
+ "TARGET_32BIT"
+ ""
+)
+
+(define_insn "*arm_extendsidi2"
[(set (match_operand:DI 0 "s_register_operand" "=r")
(sign_extend:DI (match_operand:SI 1 "s_register_operand" "r")))]
"TARGET_ARM"
@@ -3338,7 +3436,7 @@
"TARGET_EITHER"
"
{
- if ((TARGET_THUMB || arm_arch4) && GET_CODE (operands[1]) == MEM)
+ if ((TARGET_THUMB1 || arm_arch4) && GET_CODE (operands[1]) == MEM)
{
emit_insn (gen_rtx_SET (VOIDmode, operands[0],
gen_rtx_ZERO_EXTEND (SImode, operands[1])));
@@ -3366,10 +3464,10 @@
}"
)
-(define_insn "*thumb_zero_extendhisi2"
+(define_insn "*thumb1_zero_extendhisi2"
[(set (match_operand:SI 0 "register_operand" "=l")
(zero_extend:SI (match_operand:HI 1 "memory_operand" "m")))]
- "TARGET_THUMB && !arm_arch6"
+ "TARGET_THUMB1 && !arm_arch6"
"*
rtx mem = XEXP (operands[1], 0);
@@ -3408,10 +3506,10 @@
(set_attr "pool_range" "60")]
)
-(define_insn "*thumb_zero_extendhisi2_v6"
+(define_insn "*thumb1_zero_extendhisi2_v6"
[(set (match_operand:SI 0 "register_operand" "=l,l")
(zero_extend:SI (match_operand:HI 1 "nonimmediate_operand" "l,m")))]
- "TARGET_THUMB && arm_arch6"
+ "TARGET_THUMB1 && arm_arch6"
"*
rtx mem;
@@ -3459,7 +3557,7 @@
[(set (match_operand:SI 0 "s_register_operand" "=r")
(zero_extend:SI (match_operand:HI 1 "memory_operand" "m")))]
"TARGET_ARM && arm_arch4 && !arm_arch6"
- "ldr%?h\\t%0, %1"
+ "ldr%(h%)\\t%0, %1"
[(set_attr "type" "load_byte")
(set_attr "predicable" "yes")
(set_attr "pool_range" "256")
@@ -3472,7 +3570,7 @@
"TARGET_ARM && arm_arch6"
"@
uxth%?\\t%0, %1
- ldr%?h\\t%0, %1"
+ ldr%(h%)\\t%0, %1"
[(set_attr "type" "alu_shift,load_byte")
(set_attr "predicable" "yes")
(set_attr "pool_range" "*,256")
@@ -3483,7 +3581,7 @@
[(set (match_operand:SI 0 "s_register_operand" "=r")
(plus:SI (zero_extend:SI (match_operand:HI 1 "s_register_operand" "r"))
(match_operand:SI 2 "s_register_operand" "r")))]
- "TARGET_ARM && arm_arch6"
+ "TARGET_INT_SIMD"
"uxtah%?\\t%0, %2, %1"
[(set_attr "type" "alu_shift")
(set_attr "predicable" "yes")]
@@ -3529,20 +3627,20 @@
"
)
-(define_insn "*thumb_zero_extendqisi2"
+(define_insn "*thumb1_zero_extendqisi2"
[(set (match_operand:SI 0 "register_operand" "=l")
(zero_extend:SI (match_operand:QI 1 "memory_operand" "m")))]
- "TARGET_THUMB && !arm_arch6"
+ "TARGET_THUMB1 && !arm_arch6"
"ldrb\\t%0, %1"
[(set_attr "length" "2")
(set_attr "type" "load_byte")
(set_attr "pool_range" "32")]
)
-(define_insn "*thumb_zero_extendqisi2_v6"
+(define_insn "*thumb1_zero_extendqisi2_v6"
[(set (match_operand:SI 0 "register_operand" "=l,l")
(zero_extend:SI (match_operand:QI 1 "nonimmediate_operand" "l,m")))]
- "TARGET_THUMB && arm_arch6"
+ "TARGET_THUMB1 && arm_arch6"
"@
uxtb\\t%0, %1
ldrb\\t%0, %1"
@@ -3555,7 +3653,7 @@
[(set (match_operand:SI 0 "s_register_operand" "=r")
(zero_extend:SI (match_operand:QI 1 "memory_operand" "m")))]
"TARGET_ARM && !arm_arch6"
- "ldr%?b\\t%0, %1\\t%@ zero_extendqisi2"
+ "ldr%(b%)\\t%0, %1\\t%@ zero_extendqisi2"
[(set_attr "type" "load_byte")
(set_attr "predicable" "yes")
(set_attr "pool_range" "4096")
@@ -3567,8 +3665,8 @@
(zero_extend:SI (match_operand:QI 1 "nonimmediate_operand" "r,m")))]
"TARGET_ARM && arm_arch6"
"@
- uxtb%?\\t%0, %1
- ldr%?b\\t%0, %1\\t%@ zero_extendqisi2"
+ uxtb%(%)\\t%0, %1
+ ldr%(b%)\\t%0, %1\\t%@ zero_extendqisi2"
[(set_attr "type" "alu_shift,load_byte")
(set_attr "predicable" "yes")
(set_attr "pool_range" "*,4096")
@@ -3579,7 +3677,7 @@
[(set (match_operand:SI 0 "s_register_operand" "=r")
(plus:SI (zero_extend:SI (match_operand:QI 1 "s_register_operand" "r"))
(match_operand:SI 2 "s_register_operand" "r")))]
- "TARGET_ARM && arm_arch6"
+ "TARGET_INT_SIMD"
"uxtab%?\\t%0, %2, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "alu_shift")]
@@ -3589,7 +3687,7 @@
[(set (match_operand:SI 0 "s_register_operand" "")
(zero_extend:SI (subreg:QI (match_operand:SI 1 "" "") 0)))
(clobber (match_operand:SI 2 "s_register_operand" ""))]
- "TARGET_ARM && (GET_CODE (operands[1]) != MEM) && ! BYTES_BIG_ENDIAN"
+ "TARGET_32BIT && (GET_CODE (operands[1]) != MEM) && ! BYTES_BIG_ENDIAN"
[(set (match_dup 2) (match_dup 1))
(set (match_dup 0) (and:SI (match_dup 2) (const_int 255)))]
""
@@ -3599,7 +3697,7 @@
[(set (match_operand:SI 0 "s_register_operand" "")
(zero_extend:SI (subreg:QI (match_operand:SI 1 "" "") 3)))
(clobber (match_operand:SI 2 "s_register_operand" ""))]
- "TARGET_ARM && (GET_CODE (operands[1]) != MEM) && BYTES_BIG_ENDIAN"
+ "TARGET_32BIT && (GET_CODE (operands[1]) != MEM) && BYTES_BIG_ENDIAN"
[(set (match_dup 2) (match_dup 1))
(set (match_dup 0) (and:SI (match_dup 2) (const_int 255)))]
""
@@ -3609,7 +3707,7 @@
[(set (reg:CC_Z CC_REGNUM)
(compare:CC_Z (match_operand:QI 0 "s_register_operand" "r")
(const_int 0)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"tst\\t%0, #255"
[(set_attr "conds" "set")]
)
@@ -3626,9 +3724,9 @@
{
if (GET_CODE (operands[1]) == MEM)
{
- if (TARGET_THUMB)
+ if (TARGET_THUMB1)
{
- emit_insn (gen_thumb_extendhisi2 (operands[0], operands[1]));
+ emit_insn (gen_thumb1_extendhisi2 (operands[0], operands[1]));
DONE;
}
else if (arm_arch4)
@@ -3650,8 +3748,8 @@
if (arm_arch6)
{
- if (TARGET_THUMB)
- emit_insn (gen_thumb_extendhisi2 (operands[0], operands[1]));
+ if (TARGET_THUMB1)
+ emit_insn (gen_thumb1_extendhisi2 (operands[0], operands[1]));
else
emit_insn (gen_rtx_SET (VOIDmode, operands[0],
gen_rtx_SIGN_EXTEND (SImode, operands[1])));
@@ -3664,11 +3762,11 @@
}"
)
-(define_insn "thumb_extendhisi2"
+(define_insn "thumb1_extendhisi2"
[(set (match_operand:SI 0 "register_operand" "=l")
(sign_extend:SI (match_operand:HI 1 "memory_operand" "m")))
(clobber (match_scratch:SI 2 "=&l"))]
- "TARGET_THUMB && !arm_arch6"
+ "TARGET_THUMB1 && !arm_arch6"
"*
{
rtx ops[4];
@@ -3725,11 +3823,11 @@
;; we try to verify the operands. Fortunately, we don't really need
;; the early-clobber: we can always use operand 0 if operand 2
;; overlaps the address.
-(define_insn "*thumb_extendhisi2_insn_v6"
+(define_insn "*thumb1_extendhisi2_insn_v6"
[(set (match_operand:SI 0 "register_operand" "=l,l")
(sign_extend:SI (match_operand:HI 1 "nonimmediate_operand" "l,m")))
(clobber (match_scratch:SI 2 "=X,l"))]
- "TARGET_THUMB && arm_arch6"
+ "TARGET_THUMB1 && arm_arch6"
"*
{
rtx ops[4];
@@ -3787,6 +3885,7 @@
(set_attr "pool_range" "*,1020")]
)
+;; This pattern will only be used when ldrsh is not available
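+;; (i.e. before ARMv4); it synthesizes the sign extension from a
+;; zero-extending load followed by an asl/asr #16 pair, roughly:
+;;   mov r0, r0, asl #16
+;;   mov r0, r0, asr #16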
(define_expand "extendhisi2_mem"
[(set (match_dup 2) (zero_extend:SI (match_operand:HI 1 "" "")))
(set (match_dup 3)
@@ -3826,20 +3925,21 @@
[(set (match_operand:SI 0 "s_register_operand" "=r")
(sign_extend:SI (match_operand:HI 1 "memory_operand" "m")))]
"TARGET_ARM && arm_arch4 && !arm_arch6"
- "ldr%?sh\\t%0, %1"
+ "ldr%(sh%)\\t%0, %1"
[(set_attr "type" "load_byte")
(set_attr "predicable" "yes")
(set_attr "pool_range" "256")
(set_attr "neg_pool_range" "244")]
)
+;; ??? Check Thumb-2 pool range
(define_insn "*arm_extendhisi2_v6"
[(set (match_operand:SI 0 "s_register_operand" "=r,r")
(sign_extend:SI (match_operand:HI 1 "nonimmediate_operand" "r,m")))]
- "TARGET_ARM && arm_arch6"
+ "TARGET_32BIT && arm_arch6"
"@
sxth%?\\t%0, %1
- ldr%?sh\\t%0, %1"
+ ldr%(sh%)\\t%0, %1"
[(set_attr "type" "alu_shift,load_byte")
(set_attr "predicable" "yes")
(set_attr "pool_range" "*,256")
@@ -3850,7 +3950,7 @@
[(set (match_operand:SI 0 "s_register_operand" "=r")
(plus:SI (sign_extend:SI (match_operand:HI 1 "s_register_operand" "r"))
(match_operand:SI 2 "s_register_operand" "r")))]
- "TARGET_ARM && arm_arch6"
+ "TARGET_INT_SIMD"
"sxtah%?\\t%0, %2, %1"
)
@@ -3879,11 +3979,11 @@
}"
)
-(define_insn "*extendqihi_insn"
+(define_insn "*arm_extendqihi_insn"
[(set (match_operand:HI 0 "s_register_operand" "=r")
(sign_extend:HI (match_operand:QI 1 "memory_operand" "Uq")))]
"TARGET_ARM && arm_arch4"
- "ldr%?sb\\t%0, %1"
+ "ldr%(sb%)\\t%0, %1"
[(set_attr "type" "load_byte")
(set_attr "predicable" "yes")
(set_attr "pool_range" "256")
@@ -3926,7 +4026,7 @@
[(set (match_operand:SI 0 "s_register_operand" "=r")
(sign_extend:SI (match_operand:QI 1 "memory_operand" "Uq")))]
"TARGET_ARM && arm_arch4 && !arm_arch6"
- "ldr%?sb\\t%0, %1"
+ "ldr%(sb%)\\t%0, %1"
[(set_attr "type" "load_byte")
(set_attr "predicable" "yes")
(set_attr "pool_range" "256")
@@ -3939,7 +4039,7 @@
"TARGET_ARM && arm_arch6"
"@
sxtb%?\\t%0, %1
- ldr%?sb\\t%0, %1"
+ ldr%(sb%)\\t%0, %1"
[(set_attr "type" "alu_shift,load_byte")
(set_attr "predicable" "yes")
(set_attr "pool_range" "*,256")
@@ -3950,16 +4050,16 @@
[(set (match_operand:SI 0 "s_register_operand" "=r")
(plus:SI (sign_extend:SI (match_operand:QI 1 "s_register_operand" "r"))
(match_operand:SI 2 "s_register_operand" "r")))]
- "TARGET_ARM && arm_arch6"
+ "TARGET_INT_SIMD"
"sxtab%?\\t%0, %2, %1"
[(set_attr "type" "alu_shift")
(set_attr "predicable" "yes")]
)
-(define_insn "*thumb_extendqisi2"
+(define_insn "*thumb1_extendqisi2"
[(set (match_operand:SI 0 "register_operand" "=l,l")
(sign_extend:SI (match_operand:QI 1 "memory_operand" "V,m")))]
- "TARGET_THUMB && !arm_arch6"
+ "TARGET_THUMB1 && !arm_arch6"
"*
{
rtx ops[3];
@@ -4034,10 +4134,10 @@
(set_attr "pool_range" "32,32")]
)
-(define_insn "*thumb_extendqisi2_v6"
+(define_insn "*thumb1_extendqisi2_v6"
[(set (match_operand:SI 0 "register_operand" "=l,l,l")
(sign_extend:SI (match_operand:QI 1 "nonimmediate_operand" "l,V,m")))]
- "TARGET_THUMB && arm_arch6"
+ "TARGET_THUMB1 && arm_arch6"
"*
{
rtx ops[3];
@@ -4117,7 +4217,7 @@
(define_expand "extendsfdf2"
[(set (match_operand:DF 0 "s_register_operand" "")
(float_extend:DF (match_operand:SF 1 "s_register_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT"
+ "TARGET_32BIT && TARGET_HARD_FLOAT"
""
)
@@ -4223,7 +4323,7 @@
(define_split
[(set (match_operand:ANY64 0 "arm_general_register_operand" "")
(match_operand:ANY64 1 "const_double_operand" ""))]
- "TARGET_ARM
+ "TARGET_32BIT
&& reload_completed
&& (arm_const_double_inline_cost (operands[1])
<= ((optimize_size || arm_ld_sched) ? 3 : 4))"
@@ -4315,10 +4415,10 @@
;;; ??? This was originally identical to the movdf_insn pattern.
;;; ??? The 'i' constraint looks funny, but it should always be replaced by
;;; thumb_reorg with a memory reference.
-(define_insn "*thumb_movdi_insn"
+(define_insn "*thumb1_movdi_insn"
[(set (match_operand:DI 0 "nonimmediate_operand" "=l,l,l,l,>,l, m,*r")
(match_operand:DI 1 "general_operand" "l, I,J,>,l,mi,l,*r"))]
- "TARGET_THUMB
+ "TARGET_THUMB1
&& !(TARGET_HARD_FLOAT && TARGET_MAVERICK)
&& ( register_operand (operands[0], DImode)
|| register_operand (operands[1], DImode))"
@@ -4363,7 +4463,7 @@
(match_operand:SI 1 "general_operand" ""))]
"TARGET_EITHER"
"
- if (TARGET_ARM)
+ if (TARGET_32BIT)
{
/* Everything except mem = const or mem = mem can be done easily. */
if (GET_CODE (operands[0]) == MEM)
@@ -4379,7 +4479,7 @@
DONE;
}
}
- else /* TARGET_THUMB.... */
+ else /* TARGET_THUMB1... */
{
if (!no_new_pseudos)
{
@@ -4422,8 +4522,8 @@
)
(define_insn "*arm_movsi_insn"
- [(set (match_operand:SI 0 "nonimmediate_operand" "=r,r,r, m")
- (match_operand:SI 1 "general_operand" "rI,K,mi,r"))]
+ [(set (match_operand:SI 0 "nonimmediate_operand" "=r,r,r,r, m")
+ (match_operand:SI 1 "general_operand" "rI,K,N,mi,r"))]
"TARGET_ARM && ! TARGET_IWMMXT
&& !(TARGET_HARD_FLOAT && TARGET_VFP)
&& ( register_operand (operands[0], SImode)
@@ -4431,18 +4531,19 @@
"@
mov%?\\t%0, %1
mvn%?\\t%0, #%B1
+ movw%?\\t%0, %1
ldr%?\\t%0, %1
str%?\\t%1, %0"
- [(set_attr "type" "*,*,load1,store1")
+ [(set_attr "type" "*,*,*,load1,store1")
(set_attr "predicable" "yes")
- (set_attr "pool_range" "*,*,4096,*")
- (set_attr "neg_pool_range" "*,*,4084,*")]
+ (set_attr "pool_range" "*,*,*,4096,*")
+ (set_attr "neg_pool_range" "*,*,*,4084,*")]
)
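; The new third alternative emits "movw", which builds a 16-bit
; immediate in a single instruction on cores that have it (v6t2 and
; later), so no literal-pool load and no pool_range entry is needed.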
(define_split
[(set (match_operand:SI 0 "arm_general_register_operand" "")
(match_operand:SI 1 "const_int_operand" ""))]
- "TARGET_ARM
+ "TARGET_32BIT
&& (!(const_ok_for_arm (INTVAL (operands[1]))
|| const_ok_for_arm (~INTVAL (operands[1]))))"
[(clobber (const_int 0))]
@@ -4453,10 +4554,10 @@
"
)
-(define_insn "*thumb_movsi_insn"
+(define_insn "*thumb1_movsi_insn"
[(set (match_operand:SI 0 "nonimmediate_operand" "=l,l,l,l,l,>,l, m,*lh")
(match_operand:SI 1 "general_operand" "l, I,J,K,>,l,mi,l,*lh"))]
- "TARGET_THUMB
+ "TARGET_THUMB1
&& ( register_operand (operands[0], SImode)
|| register_operand (operands[1], SImode))"
"@
@@ -4477,7 +4578,7 @@
(define_split
[(set (match_operand:SI 0 "register_operand" "")
(match_operand:SI 1 "const_int_operand" ""))]
- "TARGET_THUMB && satisfies_constraint_J (operands[1])"
+ "TARGET_THUMB1 && satisfies_constraint_J (operands[1])"
[(set (match_dup 0) (match_dup 1))
(set (match_dup 0) (neg:SI (match_dup 0)))]
"operands[1] = GEN_INT (- INTVAL (operands[1]));"
@@ -4486,7 +4587,7 @@
(define_split
[(set (match_operand:SI 0 "register_operand" "")
(match_operand:SI 1 "const_int_operand" ""))]
- "TARGET_THUMB && satisfies_constraint_K (operands[1])"
+ "TARGET_THUMB1 && satisfies_constraint_K (operands[1])"
[(set (match_dup 0) (match_dup 1))
(set (match_dup 0) (ashift:SI (match_dup 0) (match_dup 2)))]
"
@@ -4527,10 +4628,10 @@
(set (attr "neg_pool_range") (const_int 4084))]
)
-(define_insn "pic_load_addr_thumb"
+(define_insn "pic_load_addr_thumb1"
[(set (match_operand:SI 0 "s_register_operand" "=l")
(unspec:SI [(match_operand:SI 1 "" "mX")] UNSPEC_PIC_SYM))]
- "TARGET_THUMB && flag_pic"
+ "TARGET_THUMB1 && flag_pic"
"ldr\\t%0, %1"
[(set_attr "type" "load1")
(set (attr "pool_range") (const_int 1024))]
@@ -4575,7 +4676,7 @@
(const (plus:SI (pc) (const_int 4))))]
UNSPEC_PIC_BASE))
(use (match_operand 2 "" ""))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
(*targetm.asm_out.internal_label) (asm_out_file, \"LPIC\",
INTVAL (operands[2]));
@@ -4656,10 +4757,10 @@
(const_int 0)))
(set (match_operand:SI 0 "s_register_operand" "=r,r")
(match_dup 1))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
cmp%?\\t%0, #0
- sub%?s\\t%0, %1, #0"
+ sub%.\\t%0, %1, #0"
[(set_attr "conds" "set")]
)
@@ -4775,7 +4876,7 @@
(define_expand "storehi_single_op"
[(set (match_operand:HI 0 "memory_operand" "")
(match_operand:HI 1 "general_operand" ""))]
- "TARGET_ARM && arm_arch4"
+ "TARGET_32BIT && arm_arch4"
"
if (!s_register_operand (operands[1], HImode))
operands[1] = copy_to_mode_reg (HImode, operands[1]);
@@ -4893,7 +4994,25 @@
DONE;
}
}
- else /* TARGET_THUMB */
+ else if (TARGET_THUMB2)
+ {
+ /* Thumb-2 can do everything except mem=mem and mem=const easily. */
+ if (!no_new_pseudos)
+ {
+ if (GET_CODE (operands[0]) != REG)
+ operands[1] = force_reg (HImode, operands[1]);
+ /* Zero extend a constant, and keep it in an SImode reg. */
+ else if (GET_CODE (operands[1]) == CONST_INT)
+ {
+ rtx reg = gen_reg_rtx (SImode);
+ HOST_WIDE_INT val = INTVAL (operands[1]) & 0xffff;
+
+ emit_insn (gen_movsi (reg, GEN_INT (val)));
+ operands[1] = gen_lowpart (HImode, reg);
+ }
+ }
+ }
+ else /* TARGET_THUMB1 */
{
if (!no_new_pseudos)
{
@@ -4955,10 +5074,10 @@
"
)
-(define_insn "*thumb_movhi_insn"
+(define_insn "*thumb1_movhi_insn"
[(set (match_operand:HI 0 "nonimmediate_operand" "=l,l,m,*r,*h,l")
(match_operand:HI 1 "general_operand" "l,m,l,*h,*r,I"))]
- "TARGET_THUMB
+ "TARGET_THUMB1
&& ( register_operand (operands[0], HImode)
|| register_operand (operands[1], HImode))"
"*
@@ -5054,8 +5173,8 @@
"@
mov%?\\t%0, %1\\t%@ movhi
mvn%?\\t%0, #%B1\\t%@ movhi
- str%?h\\t%1, %0\\t%@ movhi
- ldr%?h\\t%0, %1\\t%@ movhi"
+ str%(h%)\\t%1, %0\\t%@ movhi
+ ldr%(h%)\\t%0, %1\\t%@ movhi"
[(set_attr "type" "*,*,store1,load1")
(set_attr "predicable" "yes")
(set_attr "pool_range" "*,*,*,256")
@@ -5076,7 +5195,7 @@
[(set (match_operand:HI 0 "memory_operand" "")
(match_operand:HI 1 "register_operand" ""))
(clobber (match_operand:DI 2 "register_operand" ""))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"
if (strict_memory_address_p (HImode, XEXP (operands[0], 0))
&& REGNO (operands[1]) <= LAST_LO_REGNUM)
@@ -5191,22 +5310,22 @@
(define_insn "*arm_movqi_insn"
[(set (match_operand:QI 0 "nonimmediate_operand" "=r,r,r,m")
(match_operand:QI 1 "general_operand" "rI,K,m,r"))]
- "TARGET_ARM
+ "TARGET_32BIT
&& ( register_operand (operands[0], QImode)
|| register_operand (operands[1], QImode))"
"@
mov%?\\t%0, %1
mvn%?\\t%0, #%B1
- ldr%?b\\t%0, %1
- str%?b\\t%1, %0"
+ ldr%(b%)\\t%0, %1
+ str%(b%)\\t%1, %0"
[(set_attr "type" "*,*,load1,store1")
(set_attr "predicable" "yes")]
)
-(define_insn "*thumb_movqi_insn"
+(define_insn "*thumb1_movqi_insn"
[(set (match_operand:QI 0 "nonimmediate_operand" "=l,l,m,*r,*h,l")
(match_operand:QI 1 "general_operand" "l, m,l,*h,*r,I"))]
- "TARGET_THUMB
+ "TARGET_THUMB1
&& ( register_operand (operands[0], QImode)
|| register_operand (operands[1], QImode))"
"@
@@ -5226,12 +5345,12 @@
(match_operand:SF 1 "general_operand" ""))]
"TARGET_EITHER"
"
- if (TARGET_ARM)
+ if (TARGET_32BIT)
{
if (GET_CODE (operands[0]) == MEM)
operands[1] = force_reg (SFmode, operands[1]);
}
- else /* TARGET_THUMB */
+ else /* TARGET_THUMB1 */
{
if (!no_new_pseudos)
{
@@ -5247,7 +5366,7 @@
(define_split
[(set (match_operand:SF 0 "arm_general_register_operand" "")
(match_operand:SF 1 "immediate_operand" ""))]
- "TARGET_ARM
+ "TARGET_32BIT
&& reload_completed
&& GET_CODE (operands[1]) == CONST_DOUBLE"
[(set (match_dup 2) (match_dup 3))]
@@ -5278,10 +5397,10 @@
)
;;; ??? This should have alternatives for constants.
-(define_insn "*thumb_movsf_insn"
+(define_insn "*thumb1_movsf_insn"
[(set (match_operand:SF 0 "nonimmediate_operand" "=l,l,>,l, m,*r,*h")
(match_operand:SF 1 "general_operand" "l, >,l,mF,l,*h,*r"))]
- "TARGET_THUMB
+ "TARGET_THUMB1
&& ( register_operand (operands[0], SFmode)
|| register_operand (operands[1], SFmode))"
"@
@@ -5302,7 +5421,7 @@
(match_operand:DF 1 "general_operand" ""))]
"TARGET_EITHER"
"
- if (TARGET_ARM)
+ if (TARGET_32BIT)
{
if (GET_CODE (operands[0]) == MEM)
operands[1] = force_reg (DFmode, operands[1]);
@@ -5324,7 +5443,7 @@
[(match_operand:DF 0 "arm_reload_memory_operand" "=o")
(match_operand:DF 1 "s_register_operand" "r")
(match_operand:SI 2 "s_register_operand" "=&r")]
- "TARGET_ARM"
+ "TARGET_32BIT"
"
{
enum rtx_code code = GET_CODE (XEXP (operands[0], 0));
@@ -5392,7 +5511,7 @@
(define_insn "*thumb_movdf_insn"
[(set (match_operand:DF 0 "nonimmediate_operand" "=l,l,>,l, m,*r")
(match_operand:DF 1 "general_operand" "l, >,l,mF,l,*r"))]
- "TARGET_THUMB
+ "TARGET_THUMB1
&& ( register_operand (operands[0], DFmode)
|| register_operand (operands[1], DFmode))"
"*
@@ -5428,7 +5547,7 @@
(define_expand "movxf"
[(set (match_operand:XF 0 "general_operand" "")
(match_operand:XF 1 "general_operand" ""))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"
if (GET_CODE (operands[0]) == MEM)
operands[1] = force_reg (XFmode, operands[1]);
@@ -5466,7 +5585,7 @@
[(match_par_dup 3 [(set (match_operand:SI 0 "" "")
(match_operand:SI 1 "" ""))
(use (match_operand:SI 2 "" ""))])]
- "TARGET_ARM"
+ "TARGET_32BIT"
{
HOST_WIDE_INT offset = 0;
@@ -5501,13 +5620,13 @@
(mem:SI (plus:SI (match_dup 2) (const_int 8))))
(set (match_operand:SI 6 "arm_hard_register_operand" "")
(mem:SI (plus:SI (match_dup 2) (const_int 12))))])]
- "TARGET_ARM && XVECLEN (operands[0], 0) == 5"
- "ldm%?ia\\t%1!, {%3, %4, %5, %6}"
+ "TARGET_32BIT && XVECLEN (operands[0], 0) == 5"
+ "ldm%(ia%)\\t%1!, {%3, %4, %5, %6}"
[(set_attr "type" "load4")
(set_attr "predicable" "yes")]
)
-(define_insn "*ldmsi_postinc4_thumb"
+(define_insn "*ldmsi_postinc4_thumb1"
[(match_parallel 0 "load_multiple_operation"
[(set (match_operand:SI 1 "s_register_operand" "=l")
(plus:SI (match_operand:SI 2 "s_register_operand" "1")
@@ -5520,7 +5639,7 @@
(mem:SI (plus:SI (match_dup 2) (const_int 8))))
(set (match_operand:SI 6 "arm_hard_register_operand" "")
(mem:SI (plus:SI (match_dup 2) (const_int 12))))])]
- "TARGET_THUMB && XVECLEN (operands[0], 0) == 5"
+ "TARGET_THUMB1 && XVECLEN (operands[0], 0) == 5"
"ldmia\\t%1!, {%3, %4, %5, %6}"
[(set_attr "type" "load4")]
)
@@ -5536,8 +5655,8 @@
(mem:SI (plus:SI (match_dup 2) (const_int 4))))
(set (match_operand:SI 5 "arm_hard_register_operand" "")
(mem:SI (plus:SI (match_dup 2) (const_int 8))))])]
- "TARGET_ARM && XVECLEN (operands[0], 0) == 4"
- "ldm%?ia\\t%1!, {%3, %4, %5}"
+ "TARGET_32BIT && XVECLEN (operands[0], 0) == 4"
+ "ldm%(ia%)\\t%1!, {%3, %4, %5}"
[(set_attr "type" "load3")
(set_attr "predicable" "yes")]
)
@@ -5551,8 +5670,8 @@
(mem:SI (match_dup 2)))
(set (match_operand:SI 4 "arm_hard_register_operand" "")
(mem:SI (plus:SI (match_dup 2) (const_int 4))))])]
- "TARGET_ARM && XVECLEN (operands[0], 0) == 3"
- "ldm%?ia\\t%1!, {%3, %4}"
+ "TARGET_32BIT && XVECLEN (operands[0], 0) == 3"
+ "ldm%(ia%)\\t%1!, {%3, %4}"
[(set_attr "type" "load2")
(set_attr "predicable" "yes")]
)
@@ -5569,8 +5688,8 @@
(mem:SI (plus:SI (match_dup 1) (const_int 8))))
(set (match_operand:SI 5 "arm_hard_register_operand" "")
(mem:SI (plus:SI (match_dup 1) (const_int 12))))])]
- "TARGET_ARM && XVECLEN (operands[0], 0) == 4"
- "ldm%?ia\\t%1, {%2, %3, %4, %5}"
+ "TARGET_32BIT && XVECLEN (operands[0], 0) == 4"
+ "ldm%(ia%)\\t%1, {%2, %3, %4, %5}"
[(set_attr "type" "load4")
(set_attr "predicable" "yes")]
)
@@ -5583,8 +5702,8 @@
(mem:SI (plus:SI (match_dup 1) (const_int 4))))
(set (match_operand:SI 4 "arm_hard_register_operand" "")
(mem:SI (plus:SI (match_dup 1) (const_int 8))))])]
- "TARGET_ARM && XVECLEN (operands[0], 0) == 3"
- "ldm%?ia\\t%1, {%2, %3, %4}"
+ "TARGET_32BIT && XVECLEN (operands[0], 0) == 3"
+ "ldm%(ia%)\\t%1, {%2, %3, %4}"
[(set_attr "type" "load3")
(set_attr "predicable" "yes")]
)
@@ -5595,8 +5714,8 @@
(mem:SI (match_operand:SI 1 "s_register_operand" "r")))
(set (match_operand:SI 3 "arm_hard_register_operand" "")
(mem:SI (plus:SI (match_dup 1) (const_int 4))))])]
- "TARGET_ARM && XVECLEN (operands[0], 0) == 2"
- "ldm%?ia\\t%1, {%2, %3}"
+ "TARGET_32BIT && XVECLEN (operands[0], 0) == 2"
+ "ldm%(ia%)\\t%1, {%2, %3}"
[(set_attr "type" "load2")
(set_attr "predicable" "yes")]
)
@@ -5605,7 +5724,7 @@
[(match_par_dup 3 [(set (match_operand:SI 0 "" "")
(match_operand:SI 1 "" ""))
(use (match_operand:SI 2 "" ""))])]
- "TARGET_ARM"
+ "TARGET_32BIT"
{
HOST_WIDE_INT offset = 0;
@@ -5640,13 +5759,13 @@
(match_operand:SI 5 "arm_hard_register_operand" ""))
(set (mem:SI (plus:SI (match_dup 2) (const_int 12)))
(match_operand:SI 6 "arm_hard_register_operand" ""))])]
- "TARGET_ARM && XVECLEN (operands[0], 0) == 5"
- "stm%?ia\\t%1!, {%3, %4, %5, %6}"
+ "TARGET_32BIT && XVECLEN (operands[0], 0) == 5"
+ "stm%(ia%)\\t%1!, {%3, %4, %5, %6}"
[(set_attr "predicable" "yes")
(set_attr "type" "store4")]
)
-(define_insn "*stmsi_postinc4_thumb"
+(define_insn "*stmsi_postinc4_thumb1"
[(match_parallel 0 "store_multiple_operation"
[(set (match_operand:SI 1 "s_register_operand" "=l")
(plus:SI (match_operand:SI 2 "s_register_operand" "1")
@@ -5659,7 +5778,7 @@
(match_operand:SI 5 "arm_hard_register_operand" ""))
(set (mem:SI (plus:SI (match_dup 2) (const_int 12)))
(match_operand:SI 6 "arm_hard_register_operand" ""))])]
- "TARGET_THUMB && XVECLEN (operands[0], 0) == 5"
+ "TARGET_THUMB1 && XVECLEN (operands[0], 0) == 5"
"stmia\\t%1!, {%3, %4, %5, %6}"
[(set_attr "type" "store4")]
)
@@ -5675,8 +5794,8 @@
(match_operand:SI 4 "arm_hard_register_operand" ""))
(set (mem:SI (plus:SI (match_dup 2) (const_int 8)))
(match_operand:SI 5 "arm_hard_register_operand" ""))])]
- "TARGET_ARM && XVECLEN (operands[0], 0) == 4"
- "stm%?ia\\t%1!, {%3, %4, %5}"
+ "TARGET_32BIT && XVECLEN (operands[0], 0) == 4"
+ "stm%(ia%)\\t%1!, {%3, %4, %5}"
[(set_attr "predicable" "yes")
(set_attr "type" "store3")]
)
@@ -5690,8 +5809,8 @@
(match_operand:SI 3 "arm_hard_register_operand" ""))
(set (mem:SI (plus:SI (match_dup 2) (const_int 4)))
(match_operand:SI 4 "arm_hard_register_operand" ""))])]
- "TARGET_ARM && XVECLEN (operands[0], 0) == 3"
- "stm%?ia\\t%1!, {%3, %4}"
+ "TARGET_32BIT && XVECLEN (operands[0], 0) == 3"
+ "stm%(ia%)\\t%1!, {%3, %4}"
[(set_attr "predicable" "yes")
(set_attr "type" "store2")]
)
@@ -5708,8 +5827,8 @@
(match_operand:SI 4 "arm_hard_register_operand" ""))
(set (mem:SI (plus:SI (match_dup 1) (const_int 12)))
(match_operand:SI 5 "arm_hard_register_operand" ""))])]
- "TARGET_ARM && XVECLEN (operands[0], 0) == 4"
- "stm%?ia\\t%1, {%2, %3, %4, %5}"
+ "TARGET_32BIT && XVECLEN (operands[0], 0) == 4"
+ "stm%(ia%)\\t%1, {%2, %3, %4, %5}"
[(set_attr "predicable" "yes")
(set_attr "type" "store4")]
)
@@ -5722,8 +5841,8 @@
(match_operand:SI 3 "arm_hard_register_operand" ""))
(set (mem:SI (plus:SI (match_dup 1) (const_int 8)))
(match_operand:SI 4 "arm_hard_register_operand" ""))])]
- "TARGET_ARM && XVECLEN (operands[0], 0) == 3"
- "stm%?ia\\t%1, {%2, %3, %4}"
+ "TARGET_32BIT && XVECLEN (operands[0], 0) == 3"
+ "stm%(ia%)\\t%1, {%2, %3, %4}"
[(set_attr "predicable" "yes")
(set_attr "type" "store3")]
)
@@ -5734,8 +5853,8 @@
(match_operand:SI 2 "arm_hard_register_operand" ""))
(set (mem:SI (plus:SI (match_dup 1) (const_int 4)))
(match_operand:SI 3 "arm_hard_register_operand" ""))])]
- "TARGET_ARM && XVECLEN (operands[0], 0) == 2"
- "stm%?ia\\t%1, {%2, %3}"
+ "TARGET_32BIT && XVECLEN (operands[0], 0) == 2"
+ "stm%(ia%)\\t%1, {%2, %3}"
[(set_attr "predicable" "yes")
(set_attr "type" "store2")]
)
@@ -5751,13 +5870,13 @@
(match_operand:SI 3 "const_int_operand" "")]
"TARGET_EITHER"
"
- if (TARGET_ARM)
+ if (TARGET_32BIT)
{
if (arm_gen_movmemqi (operands))
DONE;
FAIL;
}
- else /* TARGET_THUMB */
+ else /* TARGET_THUMB1 */
{
if ( INTVAL (operands[3]) != 4
|| INTVAL (operands[2]) > 48)
@@ -5785,7 +5904,7 @@
(clobber (match_scratch:SI 4 "=&l"))
(clobber (match_scratch:SI 5 "=&l"))
(clobber (match_scratch:SI 6 "=&l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"* return thumb_output_move_mem_multiple (3, operands);"
[(set_attr "length" "4")
; This isn't entirely accurate... It loads as well, but in terms of
@@ -5804,7 +5923,7 @@
(plus:SI (match_dup 3) (const_int 8)))
(clobber (match_scratch:SI 4 "=&l"))
(clobber (match_scratch:SI 5 "=&l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"* return thumb_output_move_mem_multiple (2, operands);"
[(set_attr "length" "4")
; This isn't entirely accurate... It loads as well, but in terms of
@@ -5838,15 +5957,15 @@
(match_operand:SI 2 "nonmemory_operand" "")])
(label_ref (match_operand 3 "" ""))
(pc)))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"
- if (thumb_cmpneg_operand (operands[2], SImode))
+ if (thumb1_cmpneg_operand (operands[2], SImode))
{
emit_jump_insn (gen_cbranchsi4_scratch (NULL, operands[1], operands[2],
operands[3], operands[0]));
DONE;
}
- if (!thumb_cmp_operand (operands[2], SImode))
+ if (!thumb1_cmp_operand (operands[2], SImode))
operands[2] = force_reg (SImode, operands[2]);
")
@@ -5854,10 +5973,10 @@
[(set (pc) (if_then_else
(match_operator 0 "arm_comparison_operator"
[(match_operand:SI 1 "s_register_operand" "l,*h")
- (match_operand:SI 2 "thumb_cmp_operand" "lI*h,*r")])
+ (match_operand:SI 2 "thumb1_cmp_operand" "lI*h,*r")])
(label_ref (match_operand 3 "" ""))
(pc)))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
output_asm_insn (\"cmp\\t%1, %2\", operands);
@@ -5889,11 +6008,11 @@
[(set (pc) (if_then_else
(match_operator 4 "arm_comparison_operator"
[(match_operand:SI 1 "s_register_operand" "l,0")
- (match_operand:SI 2 "thumb_cmpneg_operand" "L,J")])
+ (match_operand:SI 2 "thumb1_cmpneg_operand" "L,J")])
(label_ref (match_operand 3 "" ""))
(pc)))
(clobber (match_scratch:SI 0 "=l,l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
output_asm_insn (\"add\\t%0, %1, #%n2\", operands);
@@ -5930,7 +6049,7 @@
(pc)))
(set (match_operand:SI 0 "thumb_cbrch_target_operand" "=l,l,*h,*m")
(match_dup 1))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*{
if (which_alternative == 0)
output_asm_insn (\"cmp\t%0, #0\", operands);
@@ -5991,7 +6110,7 @@
(neg:SI (match_operand:SI 2 "s_register_operand" "l"))])
(label_ref (match_operand 3 "" ""))
(pc)))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
output_asm_insn (\"cmn\\t%1, %2\", operands);
switch (get_attr_length (insn))
@@ -6029,7 +6148,7 @@
(label_ref (match_operand 3 "" ""))
(pc)))
(clobber (match_scratch:SI 4 "=l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
{
rtx op[3];
@@ -6073,7 +6192,7 @@
(label_ref (match_operand 3 "" ""))
(pc)))
(clobber (match_scratch:SI 4 "=l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
{
rtx op[3];
@@ -6115,7 +6234,7 @@
(const_int 0)])
(label_ref (match_operand 2 "" ""))
(pc)))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
{
output_asm_insn (\"tst\\t%0, %1\", operands);
@@ -6155,7 +6274,7 @@
(set (match_operand:SI 0 "thumb_cbrch_target_operand" "=l,*?h,*?m,*?m")
(and:SI (match_dup 2) (match_dup 3)))
(clobber (match_scratch:SI 1 "=X,l,&l,&l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
{
if (which_alternative == 0)
@@ -6220,7 +6339,7 @@
(label_ref (match_operand 3 "" ""))
(pc)))
(clobber (match_scratch:SI 0 "=l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
{
output_asm_insn (\"orr\\t%0, %2\", operands);
@@ -6260,7 +6379,7 @@
(set (match_operand:SI 0 "thumb_cbrch_target_operand" "=l,*?h,*?m,*?m")
(ior:SI (match_dup 2) (match_dup 3)))
(clobber (match_scratch:SI 1 "=X,l,&l,&l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
{
if (which_alternative == 0)
@@ -6325,7 +6444,7 @@
(label_ref (match_operand 3 "" ""))
(pc)))
(clobber (match_scratch:SI 0 "=l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
{
output_asm_insn (\"eor\\t%0, %2\", operands);
@@ -6365,7 +6484,7 @@
(set (match_operand:SI 0 "thumb_cbrch_target_operand" "=l,*?h,*?m,*?m")
(xor:SI (match_dup 2) (match_dup 3)))
(clobber (match_scratch:SI 1 "=X,l,&l,&l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
{
if (which_alternative == 0)
@@ -6430,7 +6549,7 @@
(label_ref (match_operand 3 "" ""))
(pc)))
(clobber (match_scratch:SI 0 "=l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
{
output_asm_insn (\"bic\\t%0, %2\", operands);
@@ -6470,7 +6589,7 @@
(set (match_operand:SI 0 "thumb_cbrch_target_operand" "=!l,l,*?h,*?m,*?m")
(and:SI (not:SI (match_dup 3)) (match_dup 2)))
(clobber (match_scratch:SI 1 "=X,l,l,&l,&l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
{
if (which_alternative == 0)
@@ -6537,7 +6656,7 @@
(set (match_operand:SI 0 "thumb_cbrch_target_operand" "=l,*?h,*?m,*?m")
(plus:SI (match_dup 2) (const_int -1)))
(clobber (match_scratch:SI 1 "=X,l,&l,&l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
{
rtx cond[2];
@@ -6644,7 +6763,7 @@
(match_operand:SI 0 "thumb_cbrch_target_operand" "=l,l,*!h,*?h,*?m,*?m")
(plus:SI (match_dup 2) (match_dup 3)))
(clobber (match_scratch:SI 1 "=X,X,X,l,&l,&l"))]
- "TARGET_THUMB
+ "TARGET_THUMB1
&& (GET_CODE (operands[4]) == EQ
|| GET_CODE (operands[4]) == NE
|| GET_CODE (operands[4]) == GE
@@ -6723,7 +6842,7 @@
(label_ref (match_operand 4 "" ""))
(pc)))
(clobber (match_scratch:SI 0 "=X,X,l,l"))]
- "TARGET_THUMB
+ "TARGET_THUMB1
&& (GET_CODE (operands[3]) == EQ
|| GET_CODE (operands[3]) == NE
|| GET_CODE (operands[3]) == GE
@@ -6793,7 +6912,7 @@
(set (match_operand:SI 0 "thumb_cbrch_target_operand" "=l,*?h,*?m,*?m")
(minus:SI (match_dup 2) (match_dup 3)))
(clobber (match_scratch:SI 1 "=X,l,&l,&l"))]
- "TARGET_THUMB
+ "TARGET_THUMB1
&& (GET_CODE (operands[4]) == EQ
|| GET_CODE (operands[4]) == NE
|| GET_CODE (operands[4]) == GE
@@ -6870,7 +6989,7 @@
(const_int 0)])
(label_ref (match_operand 3 "" ""))
(pc)))]
- "TARGET_THUMB
+ "TARGET_THUMB1
&& (GET_CODE (operands[0]) == EQ
|| GET_CODE (operands[0]) == NE
|| GET_CODE (operands[0]) == GE
@@ -6906,7 +7025,7 @@
(define_expand "cmpsi"
[(match_operand:SI 0 "s_register_operand" "")
(match_operand:SI 1 "arm_add_operand" "")]
- "TARGET_ARM"
+ "TARGET_32BIT"
"{
arm_compare_op0 = operands[0];
arm_compare_op1 = operands[1];
@@ -6917,7 +7036,7 @@
(define_expand "cmpsf"
[(match_operand:SF 0 "s_register_operand" "")
(match_operand:SF 1 "arm_float_compare_operand" "")]
- "TARGET_ARM && TARGET_HARD_FLOAT"
+ "TARGET_32BIT && TARGET_HARD_FLOAT"
"
arm_compare_op0 = operands[0];
arm_compare_op1 = operands[1];
@@ -6928,7 +7047,7 @@
(define_expand "cmpdf"
[(match_operand:DF 0 "s_register_operand" "")
(match_operand:DF 1 "arm_float_compare_operand" "")]
- "TARGET_ARM && TARGET_HARD_FLOAT"
+ "TARGET_32BIT && TARGET_HARD_FLOAT"
"
arm_compare_op0 = operands[0];
arm_compare_op1 = operands[1];
@@ -6940,14 +7059,14 @@
[(set (reg:CC CC_REGNUM)
(compare:CC (match_operand:SI 0 "s_register_operand" "r,r")
(match_operand:SI 1 "arm_add_operand" "rI,L")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"@
cmp%?\\t%0, %1
cmn%?\\t%0, #%n1"
[(set_attr "conds" "set")]
)
-(define_insn "*cmpsi_shiftsi"
+(define_insn "*arm_cmpsi_shiftsi"
[(set (reg:CC CC_REGNUM)
(compare:CC (match_operand:SI 0 "s_register_operand" "r")
(match_operator:SI 3 "shift_operator"
@@ -6962,7 +7081,7 @@
(const_string "alu_shift_reg")))]
)
-(define_insn "*cmpsi_shiftsi_swp"
+(define_insn "*arm_cmpsi_shiftsi_swp"
[(set (reg:CC_SWP CC_REGNUM)
(compare:CC_SWP (match_operator:SI 3 "shift_operator"
[(match_operand:SI 1 "s_register_operand" "r")
@@ -6977,7 +7096,7 @@
(const_string "alu_shift_reg")))]
)
-(define_insn "*cmpsi_negshiftsi_si"
+(define_insn "*arm_cmpsi_negshiftsi_si"
[(set (reg:CC_Z CC_REGNUM)
(compare:CC_Z
(neg:SI (match_operator:SI 1 "shift_operator"
@@ -7043,7 +7162,7 @@
(define_insn "*deleted_compare"
[(set (match_operand 0 "cc_register" "") (match_dup 0))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"\\t%@ deleted compare"
[(set_attr "conds" "set")
(set_attr "length" "0")]
@@ -7057,7 +7176,7 @@
(if_then_else (eq (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (EQ, arm_compare_op0, arm_compare_op1);"
)
@@ -7066,7 +7185,7 @@
(if_then_else (ne (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (NE, arm_compare_op0, arm_compare_op1);"
)
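; cmpsi/cmpsf/cmpdf only stash their operands in arm_compare_op0/1;
; each bCC expander then calls arm_gen_compare_reg to emit the real
; comparison in the CC mode appropriate to its condition.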
@@ -7075,7 +7194,7 @@
(if_then_else (gt (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (GT, arm_compare_op0, arm_compare_op1);"
)
@@ -7084,7 +7203,7 @@
(if_then_else (le (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (LE, arm_compare_op0, arm_compare_op1);"
)
@@ -7093,7 +7212,7 @@
(if_then_else (ge (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (GE, arm_compare_op0, arm_compare_op1);"
)
@@ -7102,7 +7221,7 @@
(if_then_else (lt (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (LT, arm_compare_op0, arm_compare_op1);"
)
@@ -7111,7 +7230,7 @@
(if_then_else (gtu (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (GTU, arm_compare_op0, arm_compare_op1);"
)
@@ -7120,7 +7239,7 @@
(if_then_else (leu (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (LEU, arm_compare_op0, arm_compare_op1);"
)
@@ -7129,7 +7248,7 @@
(if_then_else (geu (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (GEU, arm_compare_op0, arm_compare_op1);"
)
@@ -7138,7 +7257,7 @@
(if_then_else (ltu (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (LTU, arm_compare_op0, arm_compare_op1);"
)
@@ -7147,7 +7266,7 @@
(if_then_else (unordered (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"operands[1] = arm_gen_compare_reg (UNORDERED, arm_compare_op0,
arm_compare_op1);"
)
@@ -7157,7 +7276,7 @@
(if_then_else (ordered (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"operands[1] = arm_gen_compare_reg (ORDERED, arm_compare_op0,
arm_compare_op1);"
)
@@ -7167,7 +7286,7 @@
(if_then_else (ungt (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"operands[1] = arm_gen_compare_reg (UNGT, arm_compare_op0, arm_compare_op1);"
)
@@ -7176,7 +7295,7 @@
(if_then_else (unlt (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"operands[1] = arm_gen_compare_reg (UNLT, arm_compare_op0, arm_compare_op1);"
)
@@ -7185,7 +7304,7 @@
(if_then_else (unge (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"operands[1] = arm_gen_compare_reg (UNGE, arm_compare_op0, arm_compare_op1);"
)
@@ -7194,7 +7313,7 @@
(if_then_else (unle (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"operands[1] = arm_gen_compare_reg (UNLE, arm_compare_op0, arm_compare_op1);"
)
@@ -7205,7 +7324,7 @@
(if_then_else (uneq (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"operands[1] = arm_gen_compare_reg (UNEQ, arm_compare_op0, arm_compare_op1);"
)
@@ -7214,7 +7333,7 @@
(if_then_else (ltgt (match_dup 1) (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"operands[1] = arm_gen_compare_reg (LTGT, arm_compare_op0, arm_compare_op1);"
)
@@ -7228,7 +7347,7 @@
(if_then_else (uneq (match_operand 1 "cc_register" "") (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"*
gcc_assert (!arm_ccfsm_state);
@@ -7244,7 +7363,7 @@
(if_then_else (ltgt (match_operand 1 "cc_register" "") (const_int 0))
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"*
gcc_assert (!arm_ccfsm_state);
@@ -7260,7 +7379,7 @@
[(match_operand 2 "cc_register" "") (const_int 0)])
(label_ref (match_operand 0 "" ""))
(pc)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"*
if (arm_ccfsm_state == 1 || arm_ccfsm_state == 2)
{
@@ -7311,7 +7430,7 @@
[(match_operand 2 "cc_register" "") (const_int 0)])
(pc)
(label_ref (match_operand 0 "" ""))))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"*
if (arm_ccfsm_state == 1 || arm_ccfsm_state == 2)
{
@@ -7331,77 +7450,77 @@
(define_expand "seq"
[(set (match_operand:SI 0 "s_register_operand" "")
(eq:SI (match_dup 1) (const_int 0)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (EQ, arm_compare_op0, arm_compare_op1);"
)
(define_expand "sne"
[(set (match_operand:SI 0 "s_register_operand" "")
(ne:SI (match_dup 1) (const_int 0)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (NE, arm_compare_op0, arm_compare_op1);"
)
(define_expand "sgt"
[(set (match_operand:SI 0 "s_register_operand" "")
(gt:SI (match_dup 1) (const_int 0)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (GT, arm_compare_op0, arm_compare_op1);"
)
(define_expand "sle"
[(set (match_operand:SI 0 "s_register_operand" "")
(le:SI (match_dup 1) (const_int 0)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (LE, arm_compare_op0, arm_compare_op1);"
)
(define_expand "sge"
[(set (match_operand:SI 0 "s_register_operand" "")
(ge:SI (match_dup 1) (const_int 0)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (GE, arm_compare_op0, arm_compare_op1);"
)
(define_expand "slt"
[(set (match_operand:SI 0 "s_register_operand" "")
(lt:SI (match_dup 1) (const_int 0)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (LT, arm_compare_op0, arm_compare_op1);"
)
(define_expand "sgtu"
[(set (match_operand:SI 0 "s_register_operand" "")
(gtu:SI (match_dup 1) (const_int 0)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (GTU, arm_compare_op0, arm_compare_op1);"
)
(define_expand "sleu"
[(set (match_operand:SI 0 "s_register_operand" "")
(leu:SI (match_dup 1) (const_int 0)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (LEU, arm_compare_op0, arm_compare_op1);"
)
(define_expand "sgeu"
[(set (match_operand:SI 0 "s_register_operand" "")
(geu:SI (match_dup 1) (const_int 0)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (GEU, arm_compare_op0, arm_compare_op1);"
)
(define_expand "sltu"
[(set (match_operand:SI 0 "s_register_operand" "")
(ltu:SI (match_dup 1) (const_int 0)))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"operands[1] = arm_gen_compare_reg (LTU, arm_compare_op0, arm_compare_op1);"
)
(define_expand "sunordered"
[(set (match_operand:SI 0 "s_register_operand" "")
(unordered:SI (match_dup 1) (const_int 0)))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"operands[1] = arm_gen_compare_reg (UNORDERED, arm_compare_op0,
arm_compare_op1);"
)
@@ -7409,7 +7528,7 @@
(define_expand "sordered"
[(set (match_operand:SI 0 "s_register_operand" "")
(ordered:SI (match_dup 1) (const_int 0)))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"operands[1] = arm_gen_compare_reg (ORDERED, arm_compare_op0,
arm_compare_op1);"
)
@@ -7417,7 +7536,7 @@
(define_expand "sungt"
[(set (match_operand:SI 0 "s_register_operand" "")
(ungt:SI (match_dup 1) (const_int 0)))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"operands[1] = arm_gen_compare_reg (UNGT, arm_compare_op0,
arm_compare_op1);"
)
@@ -7425,7 +7544,7 @@
(define_expand "sunge"
[(set (match_operand:SI 0 "s_register_operand" "")
(unge:SI (match_dup 1) (const_int 0)))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"operands[1] = arm_gen_compare_reg (UNGE, arm_compare_op0,
arm_compare_op1);"
)
@@ -7433,7 +7552,7 @@
(define_expand "sunlt"
[(set (match_operand:SI 0 "s_register_operand" "")
(unlt:SI (match_dup 1) (const_int 0)))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"operands[1] = arm_gen_compare_reg (UNLT, arm_compare_op0,
arm_compare_op1);"
)
@@ -7441,7 +7560,7 @@
(define_expand "sunle"
[(set (match_operand:SI 0 "s_register_operand" "")
(unle:SI (match_dup 1) (const_int 0)))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"operands[1] = arm_gen_compare_reg (UNLE, arm_compare_op0,
arm_compare_op1);"
)
@@ -7452,14 +7571,14 @@
; (define_expand "suneq"
; [(set (match_operand:SI 0 "s_register_operand" "")
; (uneq:SI (match_dup 1) (const_int 0)))]
-; "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+; "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
; "gcc_unreachable ();"
; )
;
; (define_expand "sltgt"
; [(set (match_operand:SI 0 "s_register_operand" "")
; (ltgt:SI (match_dup 1) (const_int 0)))]
-; "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+; "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
; "gcc_unreachable ();"
; )
@@ -7498,7 +7617,7 @@
(match_operator:SI 1 "arm_comparison_operator"
[(match_operand:SI 2 "s_register_operand" "")
(match_operand:SI 3 "reg_or_int_operand" "")]))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"{
rtx op3, scratch, scratch2;
@@ -7507,11 +7626,11 @@
switch (GET_CODE (operands[1]))
{
case EQ:
- emit_insn (gen_cstoresi_eq0_thumb (operands[0], operands[2]));
+ emit_insn (gen_cstoresi_eq0_thumb1 (operands[0], operands[2]));
break;
case NE:
- emit_insn (gen_cstoresi_ne0_thumb (operands[0], operands[2]));
+ emit_insn (gen_cstoresi_ne0_thumb1 (operands[0], operands[2]));
break;
case LE:
@@ -7551,13 +7670,13 @@
case EQ:
scratch = expand_binop (SImode, sub_optab, operands[2], operands[3],
NULL_RTX, 0, OPTAB_WIDEN);
- emit_insn (gen_cstoresi_eq0_thumb (operands[0], scratch));
+ emit_insn (gen_cstoresi_eq0_thumb1 (operands[0], scratch));
break;
case NE:
scratch = expand_binop (SImode, sub_optab, operands[2], operands[3],
NULL_RTX, 0, OPTAB_WIDEN);
- emit_insn (gen_cstoresi_ne0_thumb (operands[0], scratch));
+ emit_insn (gen_cstoresi_ne0_thumb1 (operands[0], scratch));
break;
case LE:
@@ -7567,51 +7686,51 @@
NULL_RTX, 1, OPTAB_WIDEN);
scratch2 = expand_binop (SImode, ashr_optab, op3, GEN_INT (31),
NULL_RTX, 0, OPTAB_WIDEN);
- emit_insn (gen_thumb_addsi3_addgeu (operands[0], scratch, scratch2,
+ emit_insn (gen_thumb1_addsi3_addgeu (operands[0], scratch, scratch2,
op3, operands[2]));
break;
case GE:
op3 = operands[3];
- if (!thumb_cmp_operand (op3, SImode))
+ if (!thumb1_cmp_operand (op3, SImode))
op3 = force_reg (SImode, op3);
scratch = expand_binop (SImode, ashr_optab, operands[2], GEN_INT (31),
NULL_RTX, 0, OPTAB_WIDEN);
scratch2 = expand_binop (SImode, lshr_optab, op3, GEN_INT (31),
NULL_RTX, 1, OPTAB_WIDEN);
- emit_insn (gen_thumb_addsi3_addgeu (operands[0], scratch, scratch2,
+ emit_insn (gen_thumb1_addsi3_addgeu (operands[0], scratch, scratch2,
operands[2], op3));
break;
case LEU:
op3 = force_reg (SImode, operands[3]);
scratch = force_reg (SImode, const0_rtx);
- emit_insn (gen_thumb_addsi3_addgeu (operands[0], scratch, scratch,
+ emit_insn (gen_thumb1_addsi3_addgeu (operands[0], scratch, scratch,
op3, operands[2]));
break;
case GEU:
op3 = operands[3];
- if (!thumb_cmp_operand (op3, SImode))
+ if (!thumb1_cmp_operand (op3, SImode))
op3 = force_reg (SImode, op3);
scratch = force_reg (SImode, const0_rtx);
- emit_insn (gen_thumb_addsi3_addgeu (operands[0], scratch, scratch,
+ emit_insn (gen_thumb1_addsi3_addgeu (operands[0], scratch, scratch,
operands[2], op3));
break;
case LTU:
op3 = operands[3];
- if (!thumb_cmp_operand (op3, SImode))
+ if (!thumb1_cmp_operand (op3, SImode))
op3 = force_reg (SImode, op3);
scratch = gen_reg_rtx (SImode);
- emit_insn (gen_cstoresi_nltu_thumb (scratch, operands[2], op3));
+ emit_insn (gen_cstoresi_nltu_thumb1 (scratch, operands[2], op3));
emit_insn (gen_negsi2 (operands[0], scratch));
break;
case GTU:
op3 = force_reg (SImode, operands[3]);
scratch = gen_reg_rtx (SImode);
- emit_insn (gen_cstoresi_nltu_thumb (scratch, op3, operands[2]));
+ emit_insn (gen_cstoresi_nltu_thumb1 (scratch, op3, operands[2]));
emit_insn (gen_negsi2 (operands[0], scratch));
break;
@@ -7622,65 +7741,65 @@
DONE;
}")
-(define_expand "cstoresi_eq0_thumb"
+(define_expand "cstoresi_eq0_thumb1"
[(parallel
[(set (match_operand:SI 0 "s_register_operand" "")
(eq:SI (match_operand:SI 1 "s_register_operand" "")
(const_int 0)))
(clobber (match_dup:SI 2))])]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"operands[2] = gen_reg_rtx (SImode);"
)
-(define_expand "cstoresi_ne0_thumb"
+(define_expand "cstoresi_ne0_thumb1"
[(parallel
[(set (match_operand:SI 0 "s_register_operand" "")
(ne:SI (match_operand:SI 1 "s_register_operand" "")
(const_int 0)))
(clobber (match_dup:SI 2))])]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"operands[2] = gen_reg_rtx (SImode);"
)
-(define_insn "*cstoresi_eq0_thumb_insn"
+(define_insn "*cstoresi_eq0_thumb1_insn"
[(set (match_operand:SI 0 "s_register_operand" "=&l,l")
(eq:SI (match_operand:SI 1 "s_register_operand" "l,0")
(const_int 0)))
(clobber (match_operand:SI 2 "s_register_operand" "=X,l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"@
neg\\t%0, %1\;adc\\t%0, %0, %1
neg\\t%2, %1\;adc\\t%0, %1, %2"
[(set_attr "length" "4")]
)
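+;; A note on the sequence above: neg (i.e. RSB from zero) sets the
+;; carry flag exactly when its operand is zero, so the following adc of
+;; x and -x leaves just the carry bit, i.e. (x == 0), in the result.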
-(define_insn "*cstoresi_ne0_thumb_insn"
+(define_insn "*cstoresi_ne0_thumb1_insn"
[(set (match_operand:SI 0 "s_register_operand" "=l")
(ne:SI (match_operand:SI 1 "s_register_operand" "0")
(const_int 0)))
(clobber (match_operand:SI 2 "s_register_operand" "=l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"sub\\t%2, %1, #1\;sbc\\t%0, %1, %2"
[(set_attr "length" "4")]
)
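+;; Likewise above: sub %2, %1, #1 borrows (clears carry) only when %1
+;; is zero, so sbc %0, %1, %2 computes x - (x - 1) - !C, which is 1 for
+;; x != 0 and 0 for x == 0.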
-(define_insn "cstoresi_nltu_thumb"
+(define_insn "cstoresi_nltu_thumb1"
[(set (match_operand:SI 0 "s_register_operand" "=l,l")
(neg:SI (gtu:SI (match_operand:SI 1 "s_register_operand" "l,*h")
- (match_operand:SI 2 "thumb_cmp_operand" "lI*h,*r"))))]
- "TARGET_THUMB"
+ (match_operand:SI 2 "thumb1_cmp_operand" "lI*h,*r"))))]
+ "TARGET_THUMB1"
"cmp\\t%1, %2\;sbc\\t%0, %0, %0"
[(set_attr "length" "4")]
)
;; Used as part of the expansion of thumb les sequence.
-(define_insn "thumb_addsi3_addgeu"
+(define_insn "thumb1_addsi3_addgeu"
[(set (match_operand:SI 0 "s_register_operand" "=l")
(plus:SI (plus:SI (match_operand:SI 1 "s_register_operand" "%0")
(match_operand:SI 2 "s_register_operand" "l"))
(geu:SI (match_operand:SI 3 "s_register_operand" "l")
- (match_operand:SI 4 "thumb_cmp_operand" "lI"))))]
- "TARGET_THUMB"
+ (match_operand:SI 4 "thumb1_cmp_operand" "lI"))))]
+ "TARGET_THUMB1"
"cmp\\t%3, %4\;adc\\t%0, %1, %2"
[(set_attr "length" "4")]
)
@@ -7693,7 +7812,7 @@
(if_then_else:SI (match_operand 1 "arm_comparison_operator" "")
(match_operand:SI 2 "arm_not_operand" "")
(match_operand:SI 3 "arm_not_operand" "")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"
{
enum rtx_code code = GET_CODE (operands[1]);
@@ -7712,7 +7831,7 @@
(if_then_else:SF (match_operand 1 "arm_comparison_operator" "")
(match_operand:SF 2 "s_register_operand" "")
(match_operand:SF 3 "nonmemory_operand" "")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"
{
enum rtx_code code = GET_CODE (operands[1]);
@@ -7737,7 +7856,7 @@
(if_then_else:DF (match_operand 1 "arm_comparison_operator" "")
(match_operand:DF 2 "s_register_operand" "")
(match_operand:DF 3 "arm_float_add_operand" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && (TARGET_FPA || TARGET_VFP)"
"
{
enum rtx_code code = GET_CODE (operands[1]);
@@ -7798,7 +7917,7 @@
(define_insn "*arm_jump"
[(set (pc)
(label_ref (match_operand 0 "" "")))]
- "TARGET_ARM"
+ "TARGET_32BIT"
"*
{
if (arm_ccfsm_state == 1 || arm_ccfsm_state == 2)
@@ -7815,7 +7934,7 @@
(define_insn "*thumb_jump"
[(set (pc)
(label_ref (match_operand 0 "" "")))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
if (get_attr_length (insn) == 2)
return \"b\\t%l0\";
@@ -7905,23 +8024,23 @@
(set_attr "type" "call")]
)
-(define_insn "*call_reg_thumb_v5"
+(define_insn "*call_reg_thumb1_v5"
[(call (mem:SI (match_operand:SI 0 "register_operand" "l*r"))
(match_operand 1 "" ""))
(use (match_operand 2 "" ""))
(clobber (reg:SI LR_REGNUM))]
- "TARGET_THUMB && arm_arch5"
+ "TARGET_THUMB1 && arm_arch5"
"blx\\t%0"
[(set_attr "length" "2")
(set_attr "type" "call")]
)
-(define_insn "*call_reg_thumb"
+(define_insn "*call_reg_thumb1"
[(call (mem:SI (match_operand:SI 0 "register_operand" "l*r"))
(match_operand 1 "" ""))
(use (match_operand 2 "" ""))
(clobber (reg:SI LR_REGNUM))]
- "TARGET_THUMB && !arm_arch5"
+ "TARGET_THUMB1 && !arm_arch5"
"*
{
if (!TARGET_CALLER_INTERWORKING)
@@ -7999,25 +8118,25 @@
(set_attr "type" "call")]
)
-(define_insn "*call_value_reg_thumb_v5"
+(define_insn "*call_value_reg_thumb1_v5"
[(set (match_operand 0 "" "")
(call (mem:SI (match_operand:SI 1 "register_operand" "l*r"))
(match_operand 2 "" "")))
(use (match_operand 3 "" ""))
(clobber (reg:SI LR_REGNUM))]
- "TARGET_THUMB && arm_arch5"
+ "TARGET_THUMB1 && arm_arch5"
"blx\\t%1"
[(set_attr "length" "2")
(set_attr "type" "call")]
)
-(define_insn "*call_value_reg_thumb"
+(define_insn "*call_value_reg_thumb1"
[(set (match_operand 0 "" "")
(call (mem:SI (match_operand:SI 1 "register_operand" "l*r"))
(match_operand 2 "" "")))
(use (match_operand 3 "" ""))
(clobber (reg:SI LR_REGNUM))]
- "TARGET_THUMB && !arm_arch5"
+ "TARGET_THUMB1 && !arm_arch5"
"*
{
if (!TARGET_CALLER_INTERWORKING)
@@ -8371,7 +8490,7 @@
(match_operand:SI 2 "const_int_operand" "") ; total range
(match_operand:SI 3 "" "") ; table label
(match_operand:SI 4 "" "")] ; Out of range label
- "TARGET_ARM"
+ "TARGET_32BIT"
"
{
rtx reg;
@@ -8387,15 +8506,28 @@
if (!const_ok_for_arm (INTVAL (operands[2])))
operands[2] = force_reg (SImode, operands[2]);
- emit_jump_insn (gen_casesi_internal (operands[0], operands[2], operands[3],
- operands[4]));
+ if (TARGET_ARM)
+ {
+ emit_jump_insn (gen_arm_casesi_internal (operands[0], operands[2],
+ operands[3], operands[4]));
+ }
+ else if (flag_pic)
+ {
+ emit_jump_insn (gen_thumb2_casesi_internal_pic (operands[0],
+ operands[2], operands[3], operands[4]));
+ }
+ else
+ {
+ emit_jump_insn (gen_thumb2_casesi_internal (operands[0], operands[2],
+ operands[3], operands[4]));
+ }
DONE;
}"
)
;; The USE in this pattern is needed to tell flow analysis that this is
;; a CASESI insn. It has no other purpose.
-(define_insn "casesi_internal"
+(define_insn "arm_casesi_internal"
[(parallel [(set (pc)
(if_then_else
(leu (match_operand:SI 0 "s_register_operand" "r")
@@ -8419,7 +8551,17 @@
[(set (pc)
(match_operand:SI 0 "s_register_operand" ""))]
"TARGET_EITHER"
- ""
+ "
+ /* Thumb-2 doesn't have mov pc, reg. Explicitly set the low bit of the
+ address and use bx. */
+ if (TARGET_THUMB2)
+ {
+ rtx tmp;
+ tmp = gen_reg_rtx (SImode);
+      emit_insn (gen_iorsi3 (tmp, operands[0], GEN_INT (1)));
+ operands[0] = tmp;
+ }
+ "
)
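+;; For illustration: with the jump target in r0, a Thumb-2 core then
+;; executes something like (r3 standing in for the scratch register
+;; allocated above)
+;;     orr   r3, r0, #1   @ force bit 0 so BX stays in Thumb state
+;;     bx    r3
+;; while ARM and Thumb-1 keep the plain mov-to-pc patterns below.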
;; NB Never uses BX.
@@ -8443,10 +8585,10 @@
)
;; NB Never uses BX.
-(define_insn "*thumb_indirect_jump"
+(define_insn "*thumb1_indirect_jump"
[(set (pc)
(match_operand:SI 0 "register_operand" "l*r"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"mov\\tpc, %0"
[(set_attr "conds" "clob")
(set_attr "length" "2")]
@@ -8459,6 +8601,8 @@
[(const_int 0)]
"TARGET_EITHER"
"*
+ if (TARGET_UNIFIED_ASM)
+ return \"nop\";
if (TARGET_ARM)
return \"mov%?\\t%|r0, %|r0\\t%@ nop\";
return \"mov\\tr8, r8\";
@@ -8518,7 +8662,7 @@
(match_op_dup 1 [(match_op_dup 3 [(match_dup 4) (match_dup 5)])
(match_dup 2)]))]
"TARGET_ARM"
- "%i1%?s\\t%0, %2, %4%S3"
+ "%i1%.\\t%0, %2, %4%S3"
[(set_attr "conds" "set")
(set_attr "shift" "4")
(set (attr "type") (if_then_else (match_operand 5 "const_int_operand" "")
@@ -8536,7 +8680,7 @@
(const_int 0)))
(clobber (match_scratch:SI 0 "=r"))]
"TARGET_ARM"
- "%i1%?s\\t%0, %2, %4%S3"
+ "%i1%.\\t%0, %2, %4%S3"
[(set_attr "conds" "set")
(set_attr "shift" "4")
(set (attr "type") (if_then_else (match_operand 5 "const_int_operand" "")
@@ -8571,7 +8715,7 @@
(minus:SI (match_dup 1) (match_op_dup 2 [(match_dup 3)
(match_dup 4)])))]
"TARGET_ARM"
- "sub%?s\\t%0, %1, %3%S2"
+ "sub%.\\t%0, %1, %3%S2"
[(set_attr "conds" "set")
(set_attr "shift" "3")
(set (attr "type") (if_then_else (match_operand 4 "const_int_operand" "")
@@ -8589,7 +8733,7 @@
(const_int 0)))
(clobber (match_scratch:SI 0 "=r"))]
"TARGET_ARM"
- "sub%?s\\t%0, %1, %3%S2"
+ "sub%.\\t%0, %1, %3%S2"
[(set_attr "conds" "set")
(set_attr "shift" "3")
(set (attr "type") (if_then_else (match_operand 4 "const_int_operand" "")
@@ -8731,6 +8875,7 @@
(set_attr "length" "8,12")]
)
+;; ??? Is it worth using these conditional patterns in Thumb-2 mode?
(define_insn "*cmp_ite0"
[(set (match_operand 6 "dominant_cc_register" "")
(compare
@@ -9057,6 +9202,7 @@
(compare:CC_NOOV (and:SI (match_dup 4) (const_int 1))
(const_int 0)))]
"")
+;; ??? The conditional patterns above need checking for Thumb-2 usefulness.
(define_insn "*negscc"
[(set (match_operand:SI 0 "s_register_operand" "=r")
@@ -9146,6 +9292,8 @@
(set_attr "length" "8,8,12")]
)
+;; ??? The patterns below need checking for Thumb-2 usefulness.
+
(define_insn "*ifcompare_plus_move"
[(set (match_operand:SI 0 "s_register_operand" "=r,r")
(if_then_else:SI (match_operator 6 "arm_comparison_operator"
@@ -9738,7 +9886,7 @@
if (val1 == 4 || val2 == 4)
/* Other val must be 8, since we know they are adjacent and neither
is zero. */
- output_asm_insn (\"ldm%?ib\\t%0, {%1, %2}\", ldm);
+ output_asm_insn (\"ldm%(ib%)\\t%0, {%1, %2}\", ldm);
else if (const_ok_for_arm (val1) || const_ok_for_arm (-val1))
{
ldm[0] = ops[0] = operands[4];
@@ -9746,9 +9894,9 @@
ops[2] = GEN_INT (val1);
output_add_immediate (ops);
if (val1 < val2)
- output_asm_insn (\"ldm%?ia\\t%0, {%1, %2}\", ldm);
+ output_asm_insn (\"ldm%(ia%)\\t%0, {%1, %2}\", ldm);
else
- output_asm_insn (\"ldm%?da\\t%0, {%1, %2}\", ldm);
+ output_asm_insn (\"ldm%(da%)\\t%0, {%1, %2}\", ldm);
}
else
{
@@ -9765,16 +9913,16 @@
else if (val1 != 0)
{
if (val1 < val2)
- output_asm_insn (\"ldm%?da\\t%0, {%1, %2}\", ldm);
+ output_asm_insn (\"ldm%(da%)\\t%0, {%1, %2}\", ldm);
else
- output_asm_insn (\"ldm%?ia\\t%0, {%1, %2}\", ldm);
+ output_asm_insn (\"ldm%(ia%)\\t%0, {%1, %2}\", ldm);
}
else
{
if (val1 < val2)
- output_asm_insn (\"ldm%?ia\\t%0, {%1, %2}\", ldm);
+ output_asm_insn (\"ldm%(ia%)\\t%0, {%1, %2}\", ldm);
else
- output_asm_insn (\"ldm%?da\\t%0, {%1, %2}\", ldm);
+ output_asm_insn (\"ldm%(da%)\\t%0, {%1, %2}\", ldm);
}
output_asm_insn (\"%I3%?\\t%0, %1, %2\", arith);
return \"\";
@@ -9913,14 +10061,15 @@
operands[1] = GEN_INT (((unsigned long) INTVAL (operands[1])) >> 24);
"
)
+;; ??? Check the patterns above for Thumb-2 usefulness.
(define_expand "prologue"
[(clobber (const_int 0))]
"TARGET_EITHER"
- "if (TARGET_ARM)
+ "if (TARGET_32BIT)
arm_expand_prologue ();
else
- thumb_expand_prologue ();
+ thumb1_expand_prologue ();
DONE;
"
)
@@ -9931,8 +10080,8 @@
"
if (current_function_calls_eh_return)
emit_insn (gen_prologue_use (gen_rtx_REG (Pmode, 2)));
- if (TARGET_THUMB)
- thumb_expand_epilogue ();
+ if (TARGET_THUMB1)
+ thumb1_expand_epilogue ();
else if (USE_RETURN_INSN (FALSE))
{
emit_jump_insn (gen_return ());
@@ -9954,7 +10103,7 @@
(define_insn "sibcall_epilogue"
[(parallel [(unspec:SI [(reg:SI LR_REGNUM)] UNSPEC_PROLOGUE_USE)
(unspec_volatile [(return)] VUNSPEC_EPILOGUE)])]
- "TARGET_ARM"
+ "TARGET_32BIT"
"*
if (use_return_insn (FALSE, next_nonnote_insn (insn)))
return output_return_instruction (const_true_rtx, FALSE, FALSE);
@@ -9973,9 +10122,9 @@
[(unspec_volatile [(return)] VUNSPEC_EPILOGUE)]
"TARGET_EITHER"
"*
- if (TARGET_ARM)
+ if (TARGET_32BIT)
return arm_output_epilogue (NULL);
- else /* TARGET_THUMB */
+ else /* TARGET_THUMB1 */
return thumb_unexpanded_epilogue ();
"
; Length is absolute worst case
@@ -10015,6 +10164,9 @@
;; some extent with the conditional data operations, so we have to split them
;; up again here.
+;; ??? Need to audit these splitters for Thumb-2. Why isn't normal
+;; conditional execution sufficient?
+
(define_split
[(set (match_operand:SI 0 "s_register_operand" "")
(if_then_else:SI (match_operator 1 "arm_comparison_operator"
@@ -10177,6 +10329,7 @@
[(set_attr "conds" "clob")
(set_attr "length" "12")]
)
+;; ??? The above patterns need auditing for Thumb-2.
;; Push multiple registers to the stack. Registers are in parallel (use ...)
;; expressions. For simplicity, the first register is also in the unspec
@@ -10186,21 +10339,26 @@
[(set (match_operand:BLK 0 "memory_operand" "=m")
(unspec:BLK [(match_operand:SI 1 "s_register_operand" "r")]
UNSPEC_PUSH_MULT))])]
- "TARGET_ARM"
+ "TARGET_32BIT"
"*
{
int num_saves = XVECLEN (operands[2], 0);
/* For the StrongARM at least it is faster to
- use STR to store only a single register. */
- if (num_saves == 1)
+ use STR to store only a single register.
+     In Thumb-2 mode always use push, and the assembler will pick
+     something appropriate.  */
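+  /* For illustration (register choice arbitrary): with three registers
+     to save, this pattern emits stmfd sp!, {r4, r5, lr} in ARM state
+     and push {r4, r5, lr} in Thumb-2.  */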
+ if (num_saves == 1 && TARGET_ARM)
output_asm_insn (\"str\\t%1, [%m0, #-4]!\", operands);
else
{
int i;
char pattern[100];
- strcpy (pattern, \"stmfd\\t%m0!, {%1\");
+ if (TARGET_ARM)
+ strcpy (pattern, \"stmfd\\t%m0!, {%1\");
+ else
+ strcpy (pattern, \"push\\t{%1\");
for (i = 1; i < num_saves; i++)
{
@@ -10234,7 +10392,7 @@
[(set (match_operand:BLK 0 "memory_operand" "=m")
(unspec:BLK [(match_operand:XF 1 "f_register_operand" "f")]
UNSPEC_PUSH_MULT))])]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"*
{
char pattern[100];
@@ -10277,7 +10435,7 @@
(define_insn "consttable_1"
[(unspec_volatile [(match_operand 0 "" "")] VUNSPEC_POOL_1)]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
making_const_table = TRUE;
assemble_integer (operands[0], 1, BITS_PER_WORD, 1);
@@ -10289,7 +10447,7 @@
(define_insn "consttable_2"
[(unspec_volatile [(match_operand 0 "" "")] VUNSPEC_POOL_2)]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"*
making_const_table = TRUE;
assemble_integer (operands[0], 2, BITS_PER_WORD, 1);
@@ -10352,7 +10510,7 @@
(define_expand "tablejump"
[(parallel [(set (pc) (match_operand:SI 0 "register_operand" ""))
(use (label_ref (match_operand 1 "" "")))])]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"
if (flag_pic)
{
@@ -10367,10 +10525,10 @@
)
;; NB never uses BX.
-(define_insn "*thumb_tablejump"
+(define_insn "*thumb1_tablejump"
[(set (pc) (match_operand:SI 0 "register_operand" "l*r"))
(use (label_ref (match_operand 1 "" "")))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"mov\\t%|pc, %0"
[(set_attr "length" "2")]
)
@@ -10380,14 +10538,14 @@
(define_insn "clzsi2"
[(set (match_operand:SI 0 "s_register_operand" "=r")
(clz:SI (match_operand:SI 1 "s_register_operand" "r")))]
- "TARGET_ARM && arm_arch5"
+ "TARGET_32BIT && arm_arch5"
"clz%?\\t%0, %1"
[(set_attr "predicable" "yes")])
(define_expand "ffssi2"
[(set (match_operand:SI 0 "s_register_operand" "")
(ffs:SI (match_operand:SI 1 "s_register_operand" "")))]
- "TARGET_ARM && arm_arch5"
+ "TARGET_32BIT && arm_arch5"
"
{
rtx t1, t2, t3;
@@ -10407,7 +10565,7 @@
(define_expand "ctzsi2"
[(set (match_operand:SI 0 "s_register_operand" "")
(ctz:SI (match_operand:SI 1 "s_register_operand" "")))]
- "TARGET_ARM && arm_arch5"
+ "TARGET_32BIT && arm_arch5"
"
{
rtx t1, t2, t3;
@@ -10430,7 +10588,7 @@
[(prefetch (match_operand:SI 0 "address_operand" "p")
(match_operand:SI 1 "" "")
(match_operand:SI 2 "" ""))]
- "TARGET_ARM && arm_arch5e"
+ "TARGET_32BIT && arm_arch5e"
"pld\\t%a0")
;; General predication pattern
@@ -10439,7 +10597,7 @@
[(match_operator 0 "arm_comparison_operator"
[(match_operand 1 "cc_register" "")
(const_int 0)])]
- "TARGET_ARM"
+ "TARGET_32BIT"
""
)
@@ -10457,7 +10615,7 @@
"TARGET_EITHER"
"
{
- if (TARGET_ARM)
+ if (TARGET_32BIT)
emit_insn (gen_arm_eh_return (operands[0]));
else
emit_insn (gen_thumb_eh_return (operands[0]));
@@ -10485,7 +10643,7 @@
[(unspec_volatile [(match_operand:SI 0 "s_register_operand" "l")]
VUNSPEC_EH_RETURN)
(clobber (match_scratch:SI 1 "=&l"))]
- "TARGET_THUMB"
+ "TARGET_THUMB1"
"#"
"&& reload_completed"
[(const_int 0)]
@@ -10526,4 +10684,6 @@
(include "iwmmxt.md")
;; Load the VFP co-processor patterns
(include "vfp.md")
+;; Thumb-2 patterns
+(include "thumb2.md")
diff --git a/gcc/config/arm/bpabi.S b/gcc/config/arm/bpabi.S
index e492d4b..c9f6d21 100644
--- a/gcc/config/arm/bpabi.S
+++ b/gcc/config/arm/bpabi.S
@@ -1,6 +1,6 @@
/* Miscellaneous BPABI functions.
- Copyright (C) 2003, 2004 Free Software Foundation, Inc.
+ Copyright (C) 2003, 2004, 2007 Free Software Foundation, Inc.
Contributed by CodeSourcery, LLC.
This file is free software; you can redistribute it and/or modify it
@@ -44,7 +44,8 @@
ARM_FUNC_START aeabi_lcmp
subs ip, xxl, yyl
sbcs ip, xxh, yyh
- subeqs ip, xxl, yyl
+ do_it eq
+ COND(sub,s,eq) ip, xxl, yyl
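+	/* Roughly (illustrative): in ARM state do_it expands to nothing
+	   and COND(sub,s,eq) yields the divided-syntax subeqs; in Thumb-2
+	   state do_it eq emits an "it eq" instruction and COND yields the
+	   unified subseq.  */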
mov r0, ip
RET
FUNC_END aeabi_lcmp
@@ -55,12 +56,18 @@ ARM_FUNC_START aeabi_lcmp
ARM_FUNC_START aeabi_ulcmp
cmp xxh, yyh
+ do_it lo
movlo r0, #-1
+ do_it hi
movhi r0, #1
+ do_it ne
RETc(ne)
cmp xxl, yyl
+ do_it lo
movlo r0, #-1
+ do_it hi
movhi r0, #1
+ do_it eq
moveq r0, #0
RET
FUNC_END aeabi_ulcmp
@@ -71,11 +78,16 @@ ARM_FUNC_START aeabi_ulcmp
ARM_FUNC_START aeabi_ldivmod
sub sp, sp, #8
- stmfd sp!, {sp, lr}
+#if defined(__thumb2__)
+ mov ip, sp
+ push {ip, lr}
+#else
+ do_push {sp, lr}
+#endif
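+/* Thumb-2 cannot include SP in a PUSH register list, hence the copy
+   through ip above; ARM's do_push has no such restriction.  */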
bl SYM(__gnu_ldivmod_helper) __PLT__
ldr lr, [sp, #4]
add sp, sp, #8
- ldmfd sp!, {r2, r3}
+ do_pop {r2, r3}
RET
#endif /* L_aeabi_ldivmod */
@@ -84,11 +96,16 @@ ARM_FUNC_START aeabi_ldivmod
ARM_FUNC_START aeabi_uldivmod
sub sp, sp, #8
- stmfd sp!, {sp, lr}
+#if defined(__thumb2__)
+ mov ip, sp
+ push {ip, lr}
+#else
+ do_push {sp, lr}
+#endif
bl SYM(__gnu_uldivmod_helper) __PLT__
ldr lr, [sp, #4]
add sp, sp, #8
- ldmfd sp!, {r2, r3}
+ do_pop {r2, r3}
RET
#endif /* L_aeabi_divmod */
diff --git a/gcc/config/arm/cirrus.md b/gcc/config/arm/cirrus.md
index b857cbb..e7b3015 100644
--- a/gcc/config/arm/cirrus.md
+++ b/gcc/config/arm/cirrus.md
@@ -1,5 +1,5 @@
;; Cirrus EP9312 "Maverick" ARM floating point co-processor description.
-;; Copyright (C) 2003, 2004, 2005 Free Software Foundation, Inc.
+;; Copyright (C) 2003, 2004, 2005, 2007 Free Software Foundation, Inc.
;; Contributed by Red Hat.
;; Written by Aldy Hernandez (aldyh@redhat.com)
@@ -34,7 +34,7 @@
[(set (match_operand:DI 0 "cirrus_fp_register" "=v")
(plus:DI (match_operand:DI 1 "cirrus_fp_register" "v")
(match_operand:DI 2 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfadd64%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -44,7 +44,7 @@
[(set (match_operand:SI 0 "cirrus_fp_register" "=v")
(plus:SI (match_operand:SI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfadd32%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -54,7 +54,7 @@
[(set (match_operand:SF 0 "cirrus_fp_register" "=v")
(plus:SF (match_operand:SF 1 "cirrus_fp_register" "v")
(match_operand:SF 2 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfadds%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -64,7 +64,7 @@
[(set (match_operand:DF 0 "cirrus_fp_register" "=v")
(plus:DF (match_operand:DF 1 "cirrus_fp_register" "v")
(match_operand:DF 2 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfaddd%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -74,7 +74,7 @@
[(set (match_operand:DI 0 "cirrus_fp_register" "=v")
(minus:DI (match_operand:DI 1 "cirrus_fp_register" "v")
(match_operand:DI 2 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfsub64%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -84,7 +84,7 @@
[(set (match_operand:SI 0 "cirrus_fp_register" "=v")
(minus:SI (match_operand:SI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfsub32%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -94,7 +94,7 @@
[(set (match_operand:SF 0 "cirrus_fp_register" "=v")
(minus:SF (match_operand:SF 1 "cirrus_fp_register" "v")
(match_operand:SF 2 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfsubs%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -104,7 +104,7 @@
[(set (match_operand:DF 0 "cirrus_fp_register" "=v")
(minus:DF (match_operand:DF 1 "cirrus_fp_register" "v")
(match_operand:DF 2 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfsubd%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -114,7 +114,7 @@
[(set (match_operand:SI 0 "cirrus_fp_register" "=v")
(mult:SI (match_operand:SI 2 "cirrus_fp_register" "v")
(match_operand:SI 1 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfmul32%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -124,7 +124,7 @@
[(set (match_operand:DI 0 "cirrus_fp_register" "=v")
(mult:DI (match_operand:DI 2 "cirrus_fp_register" "v")
(match_operand:DI 1 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfmul64%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_dmult")
(set_attr "cirrus" "normal")]
@@ -136,7 +136,7 @@
(mult:SI (match_operand:SI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "cirrus_fp_register" "v"))
(match_operand:SI 3 "cirrus_fp_register" "0")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfmac32%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -149,7 +149,7 @@
(match_operand:SI 1 "cirrus_fp_register" "0")
(mult:SI (match_operand:SI 2 "cirrus_fp_register" "v")
(match_operand:SI 3 "cirrus_fp_register" "v"))))]
- "0 && TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "0 && TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfmsc32%?\\t%V0, %V2, %V3"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -159,7 +159,7 @@
[(set (match_operand:SF 0 "cirrus_fp_register" "=v")
(mult:SF (match_operand:SF 1 "cirrus_fp_register" "v")
(match_operand:SF 2 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfmuls%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_farith")
(set_attr "cirrus" "normal")]
@@ -169,7 +169,7 @@
[(set (match_operand:DF 0 "cirrus_fp_register" "=v")
(mult:DF (match_operand:DF 1 "cirrus_fp_register" "v")
(match_operand:DF 2 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfmuld%?\\t%V0, %V1, %V2"
[(set_attr "type" "mav_dmult")
(set_attr "cirrus" "normal")]
@@ -179,7 +179,7 @@
[(set (match_operand:SI 0 "cirrus_fp_register" "=v")
(ashift:SI (match_operand:SI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "cirrus_shift_const" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfsh32%?\\t%V0, %V1, #%s2"
[(set_attr "cirrus" "normal")]
)
@@ -188,7 +188,7 @@
[(set (match_operand:SI 0 "cirrus_fp_register" "=v")
(ashiftrt:SI (match_operand:SI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "cirrus_shift_const" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfsh32%?\\t%V0, %V1, #-%s2"
[(set_attr "cirrus" "normal")]
)
@@ -197,7 +197,7 @@
[(set (match_operand:SI 0 "cirrus_fp_register" "=v")
(ashift:SI (match_operand:SI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "register_operand" "r")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfrshl32%?\\t%V1, %V0, %s2"
[(set_attr "cirrus" "normal")]
)
@@ -206,7 +206,7 @@
[(set (match_operand:DI 0 "cirrus_fp_register" "=v")
(ashift:DI (match_operand:DI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "register_operand" "r")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfrshl64%?\\t%V1, %V0, %s2"
[(set_attr "cirrus" "normal")]
)
@@ -215,7 +215,7 @@
[(set (match_operand:DI 0 "cirrus_fp_register" "=v")
(ashift:DI (match_operand:DI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "cirrus_shift_const" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfsh64%?\\t%V0, %V1, #%s2"
[(set_attr "cirrus" "normal")]
)
@@ -224,7 +224,7 @@
[(set (match_operand:DI 0 "cirrus_fp_register" "=v")
(ashiftrt:DI (match_operand:DI 1 "cirrus_fp_register" "v")
(match_operand:SI 2 "cirrus_shift_const" "")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfsh64%?\\t%V0, %V1, #-%s2"
[(set_attr "cirrus" "normal")]
)
@@ -232,7 +232,7 @@
(define_insn "*cirrus_absdi2"
[(set (match_operand:DI 0 "cirrus_fp_register" "=v")
(abs:DI (match_operand:DI 1 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfabs64%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -242,7 +242,7 @@
[(set (match_operand:DI 0 "cirrus_fp_register" "=v")
(neg:DI (match_operand:DI 1 "cirrus_fp_register" "v")))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfneg64%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -250,7 +250,7 @@
(define_insn "*cirrus_negsi2"
[(set (match_operand:SI 0 "cirrus_fp_register" "=v")
(neg:SI (match_operand:SI 1 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfneg32%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -258,7 +258,7 @@
(define_insn "*cirrus_negsf2"
[(set (match_operand:SF 0 "cirrus_fp_register" "=v")
(neg:SF (match_operand:SF 1 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfnegs%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -266,7 +266,7 @@
(define_insn "*cirrus_negdf2"
[(set (match_operand:DF 0 "cirrus_fp_register" "=v")
(neg:DF (match_operand:DF 1 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfnegd%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -276,7 +276,7 @@
[(set (match_operand:SI 0 "cirrus_fp_register" "=v")
(abs:SI (match_operand:SI 1 "cirrus_fp_register" "v")))
(clobber (reg:CC CC_REGNUM))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0"
"cfabs32%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -284,7 +284,7 @@
(define_insn "*cirrus_abssf2"
[(set (match_operand:SF 0 "cirrus_fp_register" "=v")
(abs:SF (match_operand:SF 1 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfabss%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -292,7 +292,7 @@
(define_insn "*cirrus_absdf2"
[(set (match_operand:DF 0 "cirrus_fp_register" "=v")
(abs:DF (match_operand:DF 1 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfabsd%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -302,7 +302,7 @@
[(set (match_operand:SF 0 "cirrus_fp_register" "=v")
(float:SF (match_operand:SI 1 "s_register_operand" "r")))
(clobber (match_scratch:DF 2 "=v"))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfmv64lr%?\\t%Z2, %1\;cfcvt32s%?\\t%V0, %Y2"
[(set_attr "length" "8")
(set_attr "cirrus" "move")]
@@ -312,7 +312,7 @@
[(set (match_operand:DF 0 "cirrus_fp_register" "=v")
(float:DF (match_operand:SI 1 "s_register_operand" "r")))
(clobber (match_scratch:DF 2 "=v"))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfmv64lr%?\\t%Z2, %1\;cfcvt32d%?\\t%V0, %Y2"
[(set_attr "length" "8")
(set_attr "cirrus" "move")]
@@ -321,14 +321,14 @@
(define_insn "floatdisf2"
[(set (match_operand:SF 0 "cirrus_fp_register" "=v")
(float:SF (match_operand:DI 1 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfcvt64s%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")])
(define_insn "floatdidf2"
[(set (match_operand:DF 0 "cirrus_fp_register" "=v")
(float:DF (match_operand:DI 1 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfcvt64d%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")])
@@ -336,7 +336,7 @@
[(set (match_operand:SI 0 "s_register_operand" "=r")
(fix:SI (fix:SF (match_operand:SF 1 "cirrus_fp_register" "v"))))
(clobber (match_scratch:DF 2 "=v"))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cftruncs32%?\\t%Y2, %V1\;cfmvr64l%?\\t%0, %Z2"
[(set_attr "length" "8")
(set_attr "cirrus" "normal")]
@@ -346,7 +346,7 @@
[(set (match_operand:SI 0 "s_register_operand" "=r")
(fix:SI (fix:DF (match_operand:DF 1 "cirrus_fp_register" "v"))))
(clobber (match_scratch:DF 2 "=v"))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cftruncd32%?\\t%Y2, %V1\;cfmvr64l%?\\t%0, %Z2"
[(set_attr "length" "8")]
)
@@ -355,7 +355,7 @@
[(set (match_operand:SF 0 "cirrus_fp_register" "=v")
(float_truncate:SF
(match_operand:DF 1 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfcvtds%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -363,7 +363,7 @@
(define_insn "*cirrus_extendsfdf2"
[(set (match_operand:DF 0 "cirrus_fp_register" "=v")
(float_extend:DF (match_operand:SF 1 "cirrus_fp_register" "v")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_MAVERICK"
"cfcvtsd%?\\t%V0, %V1"
[(set_attr "cirrus" "normal")]
)
@@ -478,3 +478,111 @@
(set_attr "cirrus" " not, not,not, not, not,normal,double,move,normal,double")]
)
+(define_insn "*cirrus_thumb2_movdi"
+ [(set (match_operand:DI 0 "nonimmediate_di_operand" "=r,r,o<>,v,r,v,m,v")
+ (match_operand:DI 1 "di_operand" "rIK,mi,r,r,v,mi,v,v"))]
+ "TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_MAVERICK"
+ "*
+ {
+ switch (which_alternative)
+ {
+ case 0:
+ case 1:
+ case 2:
+ return (output_move_double (operands));
+
+ case 3: return \"cfmv64lr%?\\t%V0, %Q1\;cfmv64hr%?\\t%V0, %R1\";
+ case 4: return \"cfmvr64l%?\\t%Q0, %V1\;cfmvr64h%?\\t%R0, %V1\";
+
+ case 5: return \"cfldr64%?\\t%V0, %1\";
+ case 6: return \"cfstr64%?\\t%V1, %0\";
+
+ /* Shifting by 0 will just copy %1 into %0. */
+ case 7: return \"cfsh64%?\\t%V0, %V1, #0\";
+
+ default: abort ();
+ }
+ }"
+ [(set_attr "length" " 8, 8, 8, 8, 8, 4, 4, 4")
+ (set_attr "type" " *,load2,store2, *, *, load2,store2, *")
+ (set_attr "pool_range" " *,4096, *, *, *, 1020, *, *")
+ (set_attr "neg_pool_range" " *, 0, *, *, *, 1008, *, *")
+ (set_attr "cirrus" "not, not, not,move,normal,double,double,normal")]
+)
+
+;; Cirrus SI values have been outlawed. Look in arm.h for the comment
+;; on HARD_REGNO_MODE_OK.
+
+(define_insn "*cirrus_thumb2_movsi_insn"
+ [(set (match_operand:SI 0 "general_operand" "=r,r,r,m,*v,r,*v,T,*v")
+ (match_operand:SI 1 "general_operand" "rI,K,mi,r,r,*v,T,*v,*v"))]
+ "TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_MAVERICK && 0
+ && (register_operand (operands[0], SImode)
+ || register_operand (operands[1], SImode))"
+ "@
+ mov%?\\t%0, %1
+ mvn%?\\t%0, #%B1
+ ldr%?\\t%0, %1
+ str%?\\t%1, %0
+ cfmv64lr%?\\t%Z0, %1
+ cfmvr64l%?\\t%0, %Z1
+ cfldr32%?\\t%V0, %1
+ cfstr32%?\\t%V1, %0
+ cfsh32%?\\t%V0, %V1, #0"
+ [(set_attr "type" "*, *, load1,store1, *, *, load1,store1, *")
+ (set_attr "pool_range" "*, *, 4096, *, *, *, 1024, *, *")
+ (set_attr "neg_pool_range" "*, *, 0, *, *, *, 1012, *, *")
+ (set_attr "cirrus" "not,not, not, not,move,normal,normal,normal,normal")]
+)
+
+(define_insn "*thumb2_cirrus_movsf_hard_insn"
+ [(set (match_operand:SF 0 "nonimmediate_operand" "=v,v,v,r,m,r,r,m")
+ (match_operand:SF 1 "general_operand" "v,mE,r,v,v,r,mE,r"))]
+ "TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_MAVERICK
+ && (GET_CODE (operands[0]) != MEM
+ || register_operand (operands[1], SFmode))"
+ "@
+ cfcpys%?\\t%V0, %V1
+ cfldrs%?\\t%V0, %1
+ cfmvsr%?\\t%V0, %1
+ cfmvrs%?\\t%0, %V1
+ cfstrs%?\\t%V1, %0
+ mov%?\\t%0, %1
+ ldr%?\\t%0, %1\\t%@ float
+ str%?\\t%1, %0\\t%@ float"
+ [(set_attr "length" " *, *, *, *, *, 4, 4, 4")
+ (set_attr "type" " *, load1, *, *,store1, *,load1,store1")
+ (set_attr "pool_range" " *, 1020, *, *, *, *,4096, *")
+ (set_attr "neg_pool_range" " *, 1008, *, *, *, *, 0, *")
+ (set_attr "cirrus" "normal,normal,move,normal,normal,not, not, not")]
+)
+
+(define_insn "*thumb2_cirrus_movdf_hard_insn"
+ [(set (match_operand:DF 0 "nonimmediate_operand" "=r,Q,r,m,r,v,v,v,r,m")
+ (match_operand:DF 1 "general_operand" "Q,r,r,r,mF,v,mF,r,v,v"))]
+ "TARGET_THUMB2
+ && TARGET_HARD_FLOAT && TARGET_MAVERICK
+ && (GET_CODE (operands[0]) != MEM
+ || register_operand (operands[1], DFmode))"
+ "*
+ {
+ switch (which_alternative)
+ {
+ case 0: return \"ldm%?ia\\t%m1, %M0\\t%@ double\";
+ case 1: return \"stm%?ia\\t%m0, %M1\\t%@ double\";
+ case 2: case 3: case 4: return output_move_double (operands);
+ case 5: return \"cfcpyd%?\\t%V0, %V1\";
+ case 6: return \"cfldrd%?\\t%V0, %1\";
+ case 7: return \"cfmvdlr\\t%V0, %Q1\;cfmvdhr%?\\t%V0, %R1\";
+ case 8: return \"cfmvrdl%?\\t%Q0, %V1\;cfmvrdh%?\\t%R0, %V1\";
+ case 9: return \"cfstrd%?\\t%V1, %0\";
+ default: abort ();
+ }
+ }"
+ [(set_attr "type" "load1,store2, *,store2,load1, *, load1, *, *,store2")
+ (set_attr "length" " 4, 4, 8, 8, 8, 4, 4, 8, 8, 4")
+ (set_attr "pool_range" " *, *, *, *,4092, *, 1020, *, *, *")
+ (set_attr "neg_pool_range" " *, *, *, *, 0, *, 1008, *, *, *")
+ (set_attr "cirrus" " not, not,not, not, not,normal,double,move,normal,double")]
+)
+
diff --git a/gcc/config/arm/coff.h b/gcc/config/arm/coff.h
index f4ad416..4087d98 100644
--- a/gcc/config/arm/coff.h
+++ b/gcc/config/arm/coff.h
@@ -1,7 +1,7 @@
/* Definitions of target machine for GNU compiler.
For ARM with COFF object format.
- Copyright (C) 1995, 1996, 1997, 1998, 1999, 2000, 2002, 2003, 2004, 2005
- Free Software Foundation, Inc.
+ Copyright (C) 1995, 1996, 1997, 1998, 1999, 2000, 2002, 2003, 2004, 2005,
+ 2007 Free Software Foundation, Inc.
Contributed by Doug Evans (devans@cygnus.com).
This file is part of GCC.
@@ -59,9 +59,10 @@
/* Define this macro if jump tables (for `tablejump' insns) should be
output in the text section, along with the assembler instructions.
Otherwise, the readonly data section is used. */
-/* We put ARM jump tables in the text section, because it makes the code
- more efficient, but for Thumb it's better to put them out of band. */
-#define JUMP_TABLES_IN_TEXT_SECTION (TARGET_ARM)
+/* We put ARM and Thumb-2 jump tables in the text section, because it makes
+ the code more efficient, but for Thumb-1 it's better to put them out of
+ band. */
+#define JUMP_TABLES_IN_TEXT_SECTION (TARGET_32BIT)
#undef READONLY_DATA_SECTION_ASM_OP
#define READONLY_DATA_SECTION_ASM_OP "\t.section .rdata"
diff --git a/gcc/config/arm/constraints.md b/gcc/config/arm/constraints.md
index 790b7de..9a60671 100644
--- a/gcc/config/arm/constraints.md
+++ b/gcc/config/arm/constraints.md
@@ -1,5 +1,5 @@
;; Constraint definitions for ARM and Thumb
-;; Copyright (C) 2006 Free Software Foundation, Inc.
+;; Copyright (C) 2006, 2007 Free Software Foundation, Inc.
;; Contributed by ARM Ltd.
;; This file is part of GCC.
@@ -20,20 +20,21 @@
;; Boston, MA 02110-1301, USA.
;; The following register constraints have been used:
-;; - in ARM state: f, v, w, y, z
+;; - in ARM/Thumb-2 state: f, v, w, y, z
;; - in Thumb state: h, k, b
;; - in both states: l, c
;; In ARM state, 'l' is an alias for 'r'
;; The following normal constraints have been used:
-;; in ARM state: G, H, I, J, K, L, M
-;; in Thumb state: I, J, K, L, M, N, O
+;; in ARM/Thumb-2 state: G, H, I, J, K, L, M
+;; in Thumb-1 state: I, J, K, L, M, N, O
;; The following multi-letter normal constraints have been used:
-;; in ARM state: Da, Db, Dc
+;; in ARM/Thumb-2 state: Da, Db, Dc
;; The following memory constraints have been used:
-;; in ARM state: Q, Uq, Uv, Uy
+;; in ARM/Thumb-2 state: Q, Uv, Uy
+;; in ARM state: Uq
(define_register_constraint "f" "TARGET_ARM ? FPA_REGS : NO_REGS"
@@ -70,99 +71,103 @@
"@internal The condition code register.")
(define_constraint "I"
- "In ARM state a constant that can be used as an immediate value in a Data
- Processing instruction. In Thumb state a constant in the range 0-255."
+ "In ARM/Thumb-2 state a constant that can be used as an immediate value in a
+ Data Processing instruction. In Thumb-1 state a constant in the range
+ 0-255."
(and (match_code "const_int")
- (match_test "TARGET_ARM ? const_ok_for_arm (ival)
+ (match_test "TARGET_32BIT ? const_ok_for_arm (ival)
: ival >= 0 && ival <= 255")))
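+;; For instance, 0xff000000 satisfies 'I' in ARM/Thumb-2 state (it is
+;; an 8-bit value rotated right by 8), whereas 0x101 fits no rotation
+;; and does not.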
(define_constraint "J"
- "In ARM state a constant in the range @minus{}4095-4095. In Thumb state
- a constant in the range @minus{}255-@minus{}1."
+ "In ARM/Thumb-2 state a constant in the range @minus{}4095-4095. In Thumb-1
+ state a constant in the range @minus{}255-@minus{}1."
(and (match_code "const_int")
- (match_test "TARGET_ARM ? (ival >= -4095 && ival <= 4095)
+ (match_test "TARGET_32BIT ? (ival >= -4095 && ival <= 4095)
: (ival >= -255 && ival <= -1)")))
(define_constraint "K"
- "In ARM state a constant that satisfies the @code{I} constraint if inverted.
- In Thumb state a constant that satisfies the @code{I} constraint multiplied
- by any power of 2."
+ "In ARM/Thumb-2 state a constant that satisfies the @code{I} constraint if
+ inverted. In Thumb-1 state a constant that satisfies the @code{I}
+ constraint multiplied by any power of 2."
(and (match_code "const_int")
- (match_test "TARGET_ARM ? const_ok_for_arm (~ival)
+ (match_test "TARGET_32BIT ? const_ok_for_arm (~ival)
: thumb_shiftable_const (ival)")))
(define_constraint "L"
- "In ARM state a constant that satisfies the @code{I} constraint if negated.
- In Thumb state a constant in the range @minus{}7-7."
+ "In ARM/Thumb-2 state a constant that satisfies the @code{I} constraint if
+ negated. In Thumb-1 state a constant in the range @minus{}7-7."
(and (match_code "const_int")
- (match_test "TARGET_ARM ? const_ok_for_arm (-ival)
+ (match_test "TARGET_32BIT ? const_ok_for_arm (-ival)
: (ival >= -7 && ival <= 7)")))
;; The ARM state version is internal...
-;; @internal In ARM state a constant in the range 0-32 or any power of 2.
+;; @internal In ARM/Thumb-2 state a constant in the range 0-32 or any
+;; power of 2.
(define_constraint "M"
- "In Thumb state a constant that is a multiple of 4 in the range 0-1020."
+ "In Thumb-1 state a constant that is a multiple of 4 in the range 0-1020."
(and (match_code "const_int")
- (match_test "TARGET_ARM ? ((ival >= 0 && ival <= 32)
+ (match_test "TARGET_32BIT ? ((ival >= 0 && ival <= 32)
|| ((ival & (ival - 1)) == 0))
: ((ival >= 0 && ival <= 1020) && ((ival & 3) == 0))")))
(define_constraint "N"
- "In Thumb state a constant in the range 0-31."
+ "In ARM/Thumb-2 state a constant suitable for a MOVW instruction.
+ In Thumb-1 state a constant in the range 0-31."
(and (match_code "const_int")
- (match_test "TARGET_THUMB && ival >= 0 && ival <= 31")))
+ (match_test "TARGET_32BIT ? arm_arch_thumb2 && ((ival & 0xffff0000) == 0)
+ : (ival >= 0 && ival <= 31)")))
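+;; For example, on Thumb-2 the constant 0x1234 satisfies 'N' (a single
+;; movw r0, #0x1234 loads it; the register choice here is arbitrary),
+;; while 0x12345 does not, having bits set above bit 15.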
(define_constraint "O"
- "In Thumb state a constant that is a multiple of 4 in the range
+ "In Thumb-1 state a constant that is a multiple of 4 in the range
@minus{}508-508."
(and (match_code "const_int")
- (match_test "TARGET_THUMB && ival >= -508 && ival <= 508
+ (match_test "TARGET_THUMB1 && ival >= -508 && ival <= 508
&& ((ival & 3) == 0)")))
(define_constraint "G"
- "In ARM state a valid FPA immediate constant."
+ "In ARM/Thumb-2 state a valid FPA immediate constant."
(and (match_code "const_double")
- (match_test "TARGET_ARM && arm_const_double_rtx (op)")))
+ (match_test "TARGET_32BIT && arm_const_double_rtx (op)")))
(define_constraint "H"
- "In ARM state a valid FPA immediate constant when negated."
+ "In ARM/Thumb-2 state a valid FPA immediate constant when negated."
(and (match_code "const_double")
- (match_test "TARGET_ARM && neg_const_double_rtx_ok_for_fpa (op)")))
+ (match_test "TARGET_32BIT && neg_const_double_rtx_ok_for_fpa (op)")))
(define_constraint "Da"
"@internal
- In ARM state a const_int, const_double or const_vector that can
+ In ARM/Thumb-2 state a const_int, const_double or const_vector that can
be generated with two Data Processing insns."
(and (match_code "const_double,const_int,const_vector")
- (match_test "TARGET_ARM && arm_const_double_inline_cost (op) == 2")))
+ (match_test "TARGET_32BIT && arm_const_double_inline_cost (op) == 2")))
(define_constraint "Db"
"@internal
- In ARM state a const_int, const_double or const_vector that can
+ In ARM/Thumb-2 state a const_int, const_double or const_vector that can
be generated with three Data Processing insns."
(and (match_code "const_double,const_int,const_vector")
- (match_test "TARGET_ARM && arm_const_double_inline_cost (op) == 3")))
+ (match_test "TARGET_32BIT && arm_const_double_inline_cost (op) == 3")))
(define_constraint "Dc"
"@internal
- In ARM state a const_int, const_double or const_vector that can
+ In ARM/Thumb-2 state a const_int, const_double or const_vector that can
be generated with four Data Processing insns. This pattern is disabled
if optimizing for space or when we have load-delay slots to fill."
(and (match_code "const_double,const_int,const_vector")
- (match_test "TARGET_ARM && arm_const_double_inline_cost (op) == 4
+ (match_test "TARGET_32BIT && arm_const_double_inline_cost (op) == 4
&& !(optimize_size || arm_ld_sched)")))
(define_memory_constraint "Uv"
"@internal
- In ARM state a valid VFP load/store address."
+ In ARM/Thumb-2 state a valid VFP load/store address."
(and (match_code "mem")
- (match_test "TARGET_ARM && arm_coproc_mem_operand (op, FALSE)")))
+ (match_test "TARGET_32BIT && arm_coproc_mem_operand (op, FALSE)")))
(define_memory_constraint "Uy"
"@internal
- In ARM state a valid iWMMX load/store address."
+ In ARM/Thumb-2 state a valid iWMMX load/store address."
(and (match_code "mem")
- (match_test "TARGET_ARM && arm_coproc_mem_operand (op, TRUE)")))
+ (match_test "TARGET_32BIT && arm_coproc_mem_operand (op, TRUE)")))
(define_memory_constraint "Uq"
"@internal
@@ -174,7 +179,7 @@
(define_memory_constraint "Q"
"@internal
- In ARM state an address that is a single base register."
+ In ARM/Thumb-2 state an address that is a single base register."
(and (match_code "mem")
(match_test "REG_P (XEXP (op, 0))")))
diff --git a/gcc/config/arm/elf.h b/gcc/config/arm/elf.h
index c335f42..41bc774 100644
--- a/gcc/config/arm/elf.h
+++ b/gcc/config/arm/elf.h
@@ -1,6 +1,6 @@
/* Definitions of target machine for GNU compiler.
For ARM with ELF obj format.
- Copyright (C) 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2004, 2005
+ Copyright (C) 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2004, 2005, 2007
Free Software Foundation, Inc.
Contributed by Philip Blundell <philb@gnu.org> and
Catherine Moore <clm@cygnus.com>
@@ -96,9 +96,10 @@
/* Define this macro if jump tables (for `tablejump' insns) should be
output in the text section, along with the assembler instructions.
Otherwise, the readonly data section is used. */
-/* We put ARM jump tables in the text section, because it makes the code
- more efficient, but for Thumb it's better to put them out of band. */
-#define JUMP_TABLES_IN_TEXT_SECTION (TARGET_ARM)
+/* We put ARM and Thumb-2 jump tables in the text section, because it makes
+ the code more efficient, but for Thumb-1 it's better to put them out of
+ band. */
+#define JUMP_TABLES_IN_TEXT_SECTION (TARGET_32BIT)
#ifndef LINK_SPEC
#define LINK_SPEC "%{mbig-endian:-EB} %{mlittle-endian:-EL} -X"
diff --git a/gcc/config/arm/fpa.md b/gcc/config/arm/fpa.md
index b801f5a..d821a50 100644
--- a/gcc/config/arm/fpa.md
+++ b/gcc/config/arm/fpa.md
@@ -1,6 +1,6 @@
;;- Machine description for FPA co-processor for ARM cpus.
;; Copyright 1991, 1993, 1994, 1995, 1996, 1996, 1997, 1998, 1999, 2000,
-;; 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc.
+;; 2001, 2002, 2003, 2004, 2005, 2007 Free Software Foundation, Inc.
;; Contributed by Pieter `Tiggr' Schoenmakers (rcpieter@win.tue.nl)
;; and Martin Simmons (@harleqn.co.uk).
;; More major hacks by Richard Earnshaw (rearnsha@arm.com).
@@ -22,6 +22,10 @@
;; the Free Software Foundation, 51 Franklin Street, Fifth Floor,
;; Boston, MA 02110-1301, USA.
+;; Some FPA mnemonics are ambiguous between conditional infixes and
+;; conditional suffixes. All instructions use a conditional infix,
+;; even in unified assembly mode.
+
;; FPA automaton.
(define_automaton "armfp")
@@ -101,7 +105,7 @@
[(set (match_operand:SF 0 "s_register_operand" "=f,f")
(plus:SF (match_operand:SF 1 "s_register_operand" "%f,f")
(match_operand:SF 2 "arm_float_add_operand" "fG,H")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
adf%?s\\t%0, %1, %2
suf%?s\\t%0, %1, #%N2"
@@ -113,7 +117,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=f,f")
(plus:DF (match_operand:DF 1 "s_register_operand" "%f,f")
(match_operand:DF 2 "arm_float_add_operand" "fG,H")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
adf%?d\\t%0, %1, %2
suf%?d\\t%0, %1, #%N2"
@@ -126,7 +130,7 @@
(plus:DF (float_extend:DF
(match_operand:SF 1 "s_register_operand" "f,f"))
(match_operand:DF 2 "arm_float_add_operand" "fG,H")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
adf%?d\\t%0, %1, %2
suf%?d\\t%0, %1, #%N2"
@@ -139,7 +143,7 @@
(plus:DF (match_operand:DF 1 "s_register_operand" "f")
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"adf%?d\\t%0, %1, %2"
[(set_attr "type" "farith")
(set_attr "predicable" "yes")]
@@ -151,7 +155,7 @@
(match_operand:SF 1 "s_register_operand" "f"))
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"adf%?d\\t%0, %1, %2"
[(set_attr "type" "farith")
(set_attr "predicable" "yes")]
@@ -161,7 +165,7 @@
[(set (match_operand:SF 0 "s_register_operand" "=f,f")
(minus:SF (match_operand:SF 1 "arm_float_rhs_operand" "f,G")
(match_operand:SF 2 "arm_float_rhs_operand" "fG,f")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
suf%?s\\t%0, %1, %2
rsf%?s\\t%0, %2, %1"
@@ -172,7 +176,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=f,f")
(minus:DF (match_operand:DF 1 "arm_float_rhs_operand" "f,G")
(match_operand:DF 2 "arm_float_rhs_operand" "fG,f")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
suf%?d\\t%0, %1, %2
rsf%?d\\t%0, %2, %1"
@@ -185,7 +189,7 @@
(minus:DF (float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))
(match_operand:DF 2 "arm_float_rhs_operand" "fG")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"suf%?d\\t%0, %1, %2"
[(set_attr "type" "farith")
(set_attr "predicable" "yes")]
@@ -196,7 +200,7 @@
(minus:DF (match_operand:DF 1 "arm_float_rhs_operand" "f,G")
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f,f"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
suf%?d\\t%0, %1, %2
rsf%?d\\t%0, %2, %1"
@@ -210,7 +214,7 @@
(match_operand:SF 1 "s_register_operand" "f"))
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"suf%?d\\t%0, %1, %2"
[(set_attr "type" "farith")
(set_attr "predicable" "yes")]
@@ -220,7 +224,7 @@
[(set (match_operand:SF 0 "s_register_operand" "=f")
(mult:SF (match_operand:SF 1 "s_register_operand" "f")
(match_operand:SF 2 "arm_float_rhs_operand" "fG")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"fml%?s\\t%0, %1, %2"
[(set_attr "type" "ffmul")
(set_attr "predicable" "yes")]
@@ -230,7 +234,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=f")
(mult:DF (match_operand:DF 1 "s_register_operand" "f")
(match_operand:DF 2 "arm_float_rhs_operand" "fG")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"muf%?d\\t%0, %1, %2"
[(set_attr "type" "fmul")
(set_attr "predicable" "yes")]
@@ -241,7 +245,7 @@
(mult:DF (float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))
(match_operand:DF 2 "arm_float_rhs_operand" "fG")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"muf%?d\\t%0, %1, %2"
[(set_attr "type" "fmul")
(set_attr "predicable" "yes")]
@@ -252,7 +256,7 @@
(mult:DF (match_operand:DF 1 "s_register_operand" "f")
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"muf%?d\\t%0, %1, %2"
[(set_attr "type" "fmul")
(set_attr "predicable" "yes")]
@@ -263,7 +267,7 @@
(mult:DF
(float_extend:DF (match_operand:SF 1 "s_register_operand" "f"))
(float_extend:DF (match_operand:SF 2 "s_register_operand" "f"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"muf%?d\\t%0, %1, %2"
[(set_attr "type" "fmul")
(set_attr "predicable" "yes")]
@@ -275,7 +279,7 @@
[(set (match_operand:SF 0 "s_register_operand" "=f,f")
(div:SF (match_operand:SF 1 "arm_float_rhs_operand" "f,G")
(match_operand:SF 2 "arm_float_rhs_operand" "fG,f")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
fdv%?s\\t%0, %1, %2
frd%?s\\t%0, %2, %1"
@@ -287,7 +291,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=f,f")
(div:DF (match_operand:DF 1 "arm_float_rhs_operand" "f,G")
(match_operand:DF 2 "arm_float_rhs_operand" "fG,f")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
dvf%?d\\t%0, %1, %2
rdf%?d\\t%0, %2, %1"
@@ -300,7 +304,7 @@
(div:DF (float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))
(match_operand:DF 2 "arm_float_rhs_operand" "fG")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"dvf%?d\\t%0, %1, %2"
[(set_attr "type" "fdivd")
(set_attr "predicable" "yes")]
@@ -311,7 +315,7 @@
(div:DF (match_operand:DF 1 "arm_float_rhs_operand" "fG")
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"rdf%?d\\t%0, %2, %1"
[(set_attr "type" "fdivd")
(set_attr "predicable" "yes")]
@@ -323,7 +327,7 @@
(match_operand:SF 1 "s_register_operand" "f"))
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"dvf%?d\\t%0, %1, %2"
[(set_attr "type" "fdivd")
(set_attr "predicable" "yes")]
@@ -333,7 +337,7 @@
[(set (match_operand:SF 0 "s_register_operand" "=f")
(mod:SF (match_operand:SF 1 "s_register_operand" "f")
(match_operand:SF 2 "arm_float_rhs_operand" "fG")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"rmf%?s\\t%0, %1, %2"
[(set_attr "type" "fdivs")
(set_attr "predicable" "yes")]
@@ -343,7 +347,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=f")
(mod:DF (match_operand:DF 1 "s_register_operand" "f")
(match_operand:DF 2 "arm_float_rhs_operand" "fG")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"rmf%?d\\t%0, %1, %2"
[(set_attr "type" "fdivd")
(set_attr "predicable" "yes")]
@@ -354,7 +358,7 @@
(mod:DF (float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))
(match_operand:DF 2 "arm_float_rhs_operand" "fG")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"rmf%?d\\t%0, %1, %2"
[(set_attr "type" "fdivd")
(set_attr "predicable" "yes")]
@@ -365,7 +369,7 @@
(mod:DF (match_operand:DF 1 "s_register_operand" "f")
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"rmf%?d\\t%0, %1, %2"
[(set_attr "type" "fdivd")
(set_attr "predicable" "yes")]
@@ -377,7 +381,7 @@
(match_operand:SF 1 "s_register_operand" "f"))
(float_extend:DF
(match_operand:SF 2 "s_register_operand" "f"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"rmf%?d\\t%0, %1, %2"
[(set_attr "type" "fdivd")
(set_attr "predicable" "yes")]
@@ -386,7 +390,7 @@
(define_insn "*negsf2_fpa"
[(set (match_operand:SF 0 "s_register_operand" "=f")
(neg:SF (match_operand:SF 1 "s_register_operand" "f")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"mnf%?s\\t%0, %1"
[(set_attr "type" "ffarith")
(set_attr "predicable" "yes")]
@@ -395,7 +399,7 @@
(define_insn "*negdf2_fpa"
[(set (match_operand:DF 0 "s_register_operand" "=f")
(neg:DF (match_operand:DF 1 "s_register_operand" "f")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"mnf%?d\\t%0, %1"
[(set_attr "type" "ffarith")
(set_attr "predicable" "yes")]
@@ -405,7 +409,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=f")
(neg:DF (float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"mnf%?d\\t%0, %1"
[(set_attr "type" "ffarith")
(set_attr "predicable" "yes")]
@@ -414,7 +418,7 @@
(define_insn "*abssf2_fpa"
[(set (match_operand:SF 0 "s_register_operand" "=f")
(abs:SF (match_operand:SF 1 "s_register_operand" "f")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"abs%?s\\t%0, %1"
[(set_attr "type" "ffarith")
(set_attr "predicable" "yes")]
@@ -423,7 +427,7 @@
(define_insn "*absdf2_fpa"
[(set (match_operand:DF 0 "s_register_operand" "=f")
(abs:DF (match_operand:DF 1 "s_register_operand" "f")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"abs%?d\\t%0, %1"
[(set_attr "type" "ffarith")
(set_attr "predicable" "yes")]
@@ -433,7 +437,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=f")
(abs:DF (float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"abs%?d\\t%0, %1"
[(set_attr "type" "ffarith")
(set_attr "predicable" "yes")]
@@ -442,7 +446,7 @@
(define_insn "*sqrtsf2_fpa"
[(set (match_operand:SF 0 "s_register_operand" "=f")
(sqrt:SF (match_operand:SF 1 "s_register_operand" "f")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"sqt%?s\\t%0, %1"
[(set_attr "type" "float_em")
(set_attr "predicable" "yes")]
@@ -451,7 +455,7 @@
(define_insn "*sqrtdf2_fpa"
[(set (match_operand:DF 0 "s_register_operand" "=f")
(sqrt:DF (match_operand:DF 1 "s_register_operand" "f")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"sqt%?d\\t%0, %1"
[(set_attr "type" "float_em")
(set_attr "predicable" "yes")]
@@ -461,7 +465,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=f")
(sqrt:DF (float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"sqt%?d\\t%0, %1"
[(set_attr "type" "float_em")
(set_attr "predicable" "yes")]
@@ -470,7 +474,7 @@
(define_insn "*floatsisf2_fpa"
[(set (match_operand:SF 0 "s_register_operand" "=f")
(float:SF (match_operand:SI 1 "s_register_operand" "r")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"flt%?s\\t%0, %1"
[(set_attr "type" "r_2_f")
(set_attr "predicable" "yes")]
@@ -479,7 +483,7 @@
(define_insn "*floatsidf2_fpa"
[(set (match_operand:DF 0 "s_register_operand" "=f")
(float:DF (match_operand:SI 1 "s_register_operand" "r")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"flt%?d\\t%0, %1"
[(set_attr "type" "r_2_f")
(set_attr "predicable" "yes")]
@@ -488,7 +492,7 @@
(define_insn "*fix_truncsfsi2_fpa"
[(set (match_operand:SI 0 "s_register_operand" "=r")
(fix:SI (fix:SF (match_operand:SF 1 "s_register_operand" "f"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"fix%?z\\t%0, %1"
[(set_attr "type" "f_2_r")
(set_attr "predicable" "yes")]
@@ -497,7 +501,7 @@
(define_insn "*fix_truncdfsi2_fpa"
[(set (match_operand:SI 0 "s_register_operand" "=r")
(fix:SI (fix:DF (match_operand:DF 1 "s_register_operand" "f"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"fix%?z\\t%0, %1"
[(set_attr "type" "f_2_r")
(set_attr "predicable" "yes")]
@@ -507,7 +511,7 @@
[(set (match_operand:SF 0 "s_register_operand" "=f")
(float_truncate:SF
(match_operand:DF 1 "s_register_operand" "f")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"mvf%?s\\t%0, %1"
[(set_attr "type" "ffarith")
(set_attr "predicable" "yes")]
@@ -516,7 +520,7 @@
(define_insn "*extendsfdf2_fpa"
[(set (match_operand:DF 0 "s_register_operand" "=f")
(float_extend:DF (match_operand:SF 1 "s_register_operand" "f")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"mvf%?d\\t%0, %1"
[(set_attr "type" "ffarith")
(set_attr "predicable" "yes")]
@@ -561,8 +565,8 @@
switch (which_alternative)
{
default:
- case 0: return \"ldm%?ia\\t%m1, %M0\\t%@ double\";
- case 1: return \"stm%?ia\\t%m0, %M1\\t%@ double\";
+ case 0: return \"ldm%(ia%)\\t%m1, %M0\\t%@ double\";
+ case 1: return \"stm%(ia%)\\t%m0, %M1\\t%@ double\";
case 2: return \"#\";
case 3: case 4: return output_move_double (operands);
case 5: return \"mvf%?d\\t%0, %1\";
@@ -609,11 +613,102 @@
(set_attr "type" "ffarith,f_load,f_store")]
)
+;; stfs/ldfs always use a conditional infix. This works around the
+;; ambiguity between "stf pl s" and "stfp ls".
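+;; ("stfpls" parses either as "stf" + "pl" + "s", a conditional single
+;; store, or as "stf" + "p" + "ls", a packed-decimal store on ls.)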
+(define_insn "*thumb2_movsf_fpa"
+ [(set (match_operand:SF 0 "nonimmediate_operand" "=f,f,f, m,f,r,r,r, m")
+ (match_operand:SF 1 "general_operand" "fG,H,mE,f,r,f,r,mE,r"))]
+ "TARGET_THUMB2
+ && TARGET_HARD_FLOAT && TARGET_FPA
+ && (GET_CODE (operands[0]) != MEM
+ || register_operand (operands[1], SFmode))"
+ "@
+ mvf%?s\\t%0, %1
+ mnf%?s\\t%0, #%N1
+ ldf%?s\\t%0, %1
+ stf%?s\\t%1, %0
+ str%?\\t%1, [%|sp, #-4]!\;ldf%?s\\t%0, [%|sp], #4
+ stf%?s\\t%1, [%|sp, #-4]!\;ldr%?\\t%0, [%|sp], #4
+ mov%?\\t%0, %1 @bar
+ ldr%?\\t%0, %1\\t%@ float
+ str%?\\t%1, %0\\t%@ float"
+ [(set_attr "length" "4,4,4,4,8,8,4,4,4")
+ (set_attr "ce_count" "1,1,1,1,2,2,1,1,1")
+ (set_attr "predicable" "yes")
+ (set_attr "type"
+ "ffarith,ffarith,f_load,f_store,r_mem_f,f_mem_r,*,load1,store1")
+ (set_attr "pool_range" "*,*,1024,*,*,*,*,4096,*")
+ (set_attr "neg_pool_range" "*,*,1012,*,*,*,*,0,*")]
+)
+
+;; Not predicable because we don't know the number of instructions.
+(define_insn "*thumb2_movdf_fpa"
+ [(set (match_operand:DF 0 "nonimmediate_operand"
+ "=r,Q,r,m,r, f, f,f, m,!f,!r")
+ (match_operand:DF 1 "general_operand"
+ "Q, r,r,r,mF,fG,H,mF,f,r, f"))]
+ "TARGET_THUMB2
+ && TARGET_HARD_FLOAT && TARGET_FPA
+ && (GET_CODE (operands[0]) != MEM
+ || register_operand (operands[1], DFmode))"
+ "*
+ {
+ switch (which_alternative)
+ {
+ default:
+ case 0: return \"ldm%(ia%)\\t%m1, %M0\\t%@ double\";
+ case 1: return \"stm%(ia%)\\t%m0, %M1\\t%@ double\";
+ case 2: case 3: case 4: return output_move_double (operands);
+ case 5: return \"mvf%?d\\t%0, %1\";
+ case 6: return \"mnf%?d\\t%0, #%N1\";
+ case 7: return \"ldf%?d\\t%0, %1\";
+ case 8: return \"stf%?d\\t%1, %0\";
+ case 9: return output_mov_double_fpa_from_arm (operands);
+ case 10: return output_mov_double_arm_from_fpa (operands);
+ }
+ }
+ "
+ [(set_attr "length" "4,4,8,8,8,4,4,4,4,8,8")
+ (set_attr "type"
+ "load1,store2,*,store2,load1,ffarith,ffarith,f_load,f_store,r_mem_f,f_mem_r")
+ (set_attr "pool_range" "*,*,*,*,4092,*,*,1024,*,*,*")
+ (set_attr "neg_pool_range" "*,*,*,*,0,*,*,1020,*,*,*")]
+)
+
+;; Saving and restoring the floating point registers in the prologue should
+;; be done in XFmode, even though we don't support that for anything else
+;; (Well, strictly it's 'internal representation', but that's effectively
+;; XFmode).
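+;; (An SFmode or DFmode save would round the wider internal value;
+;; "stf e"/"ldf e" round-trip the register contents exactly.)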
+;; Not predicable because we don't know the number of instructions.
+
+(define_insn "*thumb2_movxf_fpa"
+ [(set (match_operand:XF 0 "nonimmediate_operand" "=f,f,f,m,f,r,r")
+ (match_operand:XF 1 "general_operand" "fG,H,m,f,r,f,r"))]
+ "TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_FPA && reload_completed"
+ "*
+ switch (which_alternative)
+ {
+ default:
+ case 0: return \"mvf%?e\\t%0, %1\";
+ case 1: return \"mnf%?e\\t%0, #%N1\";
+ case 2: return \"ldf%?e\\t%0, %1\";
+ case 3: return \"stf%?e\\t%1, %0\";
+ case 4: return output_mov_long_double_fpa_from_arm (operands);
+ case 5: return output_mov_long_double_arm_from_fpa (operands);
+ case 6: return output_mov_long_double_arm_from_arm (operands);
+ }
+ "
+ [(set_attr "length" "4,4,4,4,8,8,12")
+ (set_attr "type" "ffarith,ffarith,f_load,f_store,r_mem_f,f_mem_r,*")
+ (set_attr "pool_range" "*,*,1024,*,*,*,*")
+ (set_attr "neg_pool_range" "*,*,1004,*,*,*,*")]
+)
+
(define_insn "*cmpsf_fpa"
[(set (reg:CCFP CC_REGNUM)
(compare:CCFP (match_operand:SF 0 "s_register_operand" "f,f")
(match_operand:SF 1 "arm_float_add_operand" "fG,H")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
cmf%?\\t%0, %1
cnf%?\\t%0, #%N1"
@@ -625,7 +720,7 @@
[(set (reg:CCFP CC_REGNUM)
(compare:CCFP (match_operand:DF 0 "s_register_operand" "f,f")
(match_operand:DF 1 "arm_float_add_operand" "fG,H")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
cmf%?\\t%0, %1
cnf%?\\t%0, #%N1"
@@ -638,7 +733,7 @@
(compare:CCFP (float_extend:DF
(match_operand:SF 0 "s_register_operand" "f,f"))
(match_operand:DF 1 "arm_float_add_operand" "fG,H")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
cmf%?\\t%0, %1
cnf%?\\t%0, #%N1"
@@ -651,7 +746,7 @@
(compare:CCFP (match_operand:DF 0 "s_register_operand" "f")
(float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"cmf%?\\t%0, %1"
[(set_attr "conds" "set")
(set_attr "type" "f_2_r")]
@@ -661,7 +756,7 @@
[(set (reg:CCFPE CC_REGNUM)
(compare:CCFPE (match_operand:SF 0 "s_register_operand" "f,f")
(match_operand:SF 1 "arm_float_add_operand" "fG,H")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
cmf%?e\\t%0, %1
cnf%?e\\t%0, #%N1"
@@ -673,7 +768,7 @@
[(set (reg:CCFPE CC_REGNUM)
(compare:CCFPE (match_operand:DF 0 "s_register_operand" "f,f")
(match_operand:DF 1 "arm_float_add_operand" "fG,H")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
cmf%?e\\t%0, %1
cnf%?e\\t%0, #%N1"
@@ -686,7 +781,7 @@
(compare:CCFPE (float_extend:DF
(match_operand:SF 0 "s_register_operand" "f,f"))
(match_operand:DF 1 "arm_float_add_operand" "fG,H")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"@
cmf%?e\\t%0, %1
cnf%?e\\t%0, #%N1"
@@ -699,7 +794,7 @@
(compare:CCFPE (match_operand:DF 0 "s_register_operand" "f")
(float_extend:DF
(match_operand:SF 1 "s_register_operand" "f"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_FPA"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_FPA"
"cmf%?e\\t%0, %1"
[(set_attr "conds" "set")
(set_attr "type" "f_2_r")]
@@ -748,3 +843,48 @@
(set_attr "type" "ffarith")
(set_attr "conds" "use")]
)
+
+(define_insn "*thumb2_movsfcc_fpa"
+ [(set (match_operand:SF 0 "s_register_operand" "=f,f,f,f,f,f,f,f")
+ (if_then_else:SF
+ (match_operator 3 "arm_comparison_operator"
+ [(match_operand 4 "cc_register" "") (const_int 0)])
+ (match_operand:SF 1 "arm_float_add_operand" "0,0,fG,H,fG,fG,H,H")
+ (match_operand:SF 2 "arm_float_add_operand" "fG,H,0,0,fG,H,fG,H")))]
+ "TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_FPA"
+ "@
+ it\\t%D3\;mvf%D3s\\t%0, %2
+ it\\t%D3\;mnf%D3s\\t%0, #%N2
+ it\\t%d3\;mvf%d3s\\t%0, %1
+ it\\t%d3\;mnf%d3s\\t%0, #%N1
+ ite\\t%d3\;mvf%d3s\\t%0, %1\;mvf%D3s\\t%0, %2
+ ite\\t%d3\;mvf%d3s\\t%0, %1\;mnf%D3s\\t%0, #%N2
+ ite\\t%d3\;mnf%d3s\\t%0, #%N1\;mvf%D3s\\t%0, %2
+ ite\\t%d3\;mnf%d3s\\t%0, #%N1\;mnf%D3s\\t%0, #%N2"
+ [(set_attr "length" "6,6,6,6,10,10,10,10")
+ (set_attr "type" "ffarith")
+ (set_attr "conds" "use")]
+)
+
+(define_insn "*thumb2_movdfcc_fpa"
+ [(set (match_operand:DF 0 "s_register_operand" "=f,f,f,f,f,f,f,f")
+ (if_then_else:DF
+ (match_operator 3 "arm_comparison_operator"
+ [(match_operand 4 "cc_register" "") (const_int 0)])
+ (match_operand:DF 1 "arm_float_add_operand" "0,0,fG,H,fG,fG,H,H")
+ (match_operand:DF 2 "arm_float_add_operand" "fG,H,0,0,fG,H,fG,H")))]
+ "TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_FPA"
+ "@
+ it\\t%D3\;mvf%D3d\\t%0, %2
+ it\\t%D3\;mnf%D3d\\t%0, #%N2
+ it\\t%d3\;mvf%d3d\\t%0, %1
+ it\\t%d3\;mnf%d3d\\t%0, #%N1
+ ite\\t%d3\;mvf%d3d\\t%0, %1\;mvf%D3d\\t%0, %2
+ ite\\t%d3\;mvf%d3d\\t%0, %1\;mnf%D3d\\t%0, #%N2
+ ite\\t%d3\;mnf%d3d\\t%0, #%N1\;mvf%D3d\\t%0, %2
+ ite\\t%d3\;mnf%d3d\\t%0, #%N1\;mnf%D3d\\t%0, #%N2"
+ [(set_attr "length" "6,6,6,6,10,10,10,10")
+ (set_attr "type" "ffarith")
+ (set_attr "conds" "use")]
+)
+
diff --git a/gcc/config/arm/ieee754-df.S b/gcc/config/arm/ieee754-df.S
index 0d6bf96..7a428a2 100644
--- a/gcc/config/arm/ieee754-df.S
+++ b/gcc/config/arm/ieee754-df.S
@@ -1,6 +1,6 @@
/* ieee754-df.S double-precision floating point support for ARM
- Copyright (C) 2003, 2004, 2005 Free Software Foundation, Inc.
+ Copyright (C) 2003, 2004, 2005, 2007 Free Software Foundation, Inc.
Contributed by Nicolas Pitre (nico@cam.org)
This file is free software; you can redistribute it and/or modify it
@@ -88,23 +88,26 @@ ARM_FUNC_ALIAS aeabi_dsub subdf3
ARM_FUNC_START adddf3
ARM_FUNC_ALIAS aeabi_dadd adddf3
-1: stmfd sp!, {r4, r5, lr}
+1: do_push {r4, r5, lr}
@ Look for zeroes, equal values, INF, or NAN.
- mov r4, xh, lsl #1
- mov r5, yh, lsl #1
+ shift1 lsl, r4, xh, #1
+ shift1 lsl, r5, yh, #1
teq r4, r5
+ do_it eq
teqeq xl, yl
- orrnes ip, r4, xl
- orrnes ip, r5, yl
- mvnnes ip, r4, asr #21
- mvnnes ip, r5, asr #21
+ do_it ne, ttt
+ COND(orr,s,ne) ip, r4, xl
+ COND(orr,s,ne) ip, r5, yl
+ COND(mvn,s,ne) ip, r4, asr #21
+ COND(mvn,s,ne) ip, r5, asr #21
beq LSYM(Lad_s)
@ Compute exponent difference. Make largest exponent in r4,
@ corresponding arg in xh-xl, and positive exponent difference in r5.
- mov r4, r4, lsr #21
+ shift1 lsr, r4, r4, #21
rsbs r5, r4, r5, lsr #21
+ do_it lt
rsblt r5, r5, #0
ble 1f
add r4, r4, r5
@@ -119,6 +122,7 @@ ARM_FUNC_ALIAS aeabi_dadd adddf3
@ already in xh-xl. We need up to 54 bit to handle proper rounding
@ of 0x1p54 - 1.1.
cmp r5, #54
+ do_it hi
RETLDM "r4, r5" hi
@ Convert mantissa to signed integer.
@@ -127,15 +131,25 @@ ARM_FUNC_ALIAS aeabi_dadd adddf3
mov ip, #0x00100000
orr xh, ip, xh, lsr #12
beq 1f
+#if defined(__thumb2__)
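+ @ Thumb-2 has no rsc, so negate the 64-bit value with negs/sbc:
+ @ negs sets C iff xl was zero; xh - (xh << 1) - !C = -xh - borrow.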
+ negs xl, xl
+ sbc xh, xh, xh, lsl #1
+#else
rsbs xl, xl, #0
rsc xh, xh, #0
+#endif
1:
tst yh, #0x80000000
mov yh, yh, lsl #12
orr yh, ip, yh, lsr #12
beq 1f
+#if defined(__thumb2__)
+ negs yl, yl
+ sbc yh, yh, yh, lsl #1
+#else
rsbs yl, yl, #0
rsc yh, yh, #0
+#endif
1:
@ If exponent == difference, one or both args were denormalized.
@ Since this is not common case, rescale them off line.
@@ -149,27 +163,35 @@ LSYM(Lad_x):
@ Shift yh-yl right per r5, add to xh-xl, keep leftover bits into ip.
rsbs lr, r5, #32
blt 1f
- mov ip, yl, lsl lr
- adds xl, xl, yl, lsr r5
+ shift1 lsl, ip, yl, lr
+ shiftop adds xl xl yl lsr r5 yl
adc xh, xh, #0
- adds xl, xl, yh, lsl lr
- adcs xh, xh, yh, asr r5
+ shiftop adds xl xl yh lsl lr yl
+ shiftop adcs xh xh yh asr r5 yh
b 2f
1: sub r5, r5, #32
add lr, lr, #32
cmp yl, #1
- mov ip, yh, lsl lr
+ shift1 lsl, ip, yh, lr
+ do_it cs
orrcs ip, ip, #2 @ 2 not 1, to allow lsr #1 later
- adds xl, xl, yh, asr r5
+ shiftop adds xl xl yh asr r5 yh
adcs xh, xh, yh, asr #31
2:
@ We now have a result in xh-xl-ip.
@ Keep absolute value in xh-xl-ip, sign in r5 (the n bit was set above)
and r5, xh, #0x80000000
bpl LSYM(Lad_p)
+#if defined(__thumb2__)
+ mov lr, #0
+ negs ip, ip
+ sbcs xl, lr, xl
+ sbc xh, lr, xh
+#else
rsbs ip, ip, #0
rscs xl, xl, #0
rsc xh, xh, #0
+#endif
@ Determine how to normalize the result.
LSYM(Lad_p):
@@ -195,7 +217,8 @@ LSYM(Lad_p):
@ Pack final result together.
LSYM(Lad_e):
cmp ip, #0x80000000
- moveqs ip, xl, lsr #1
+ do_it eq
+ COND(mov,s,eq) ip, xl, lsr #1
adcs xl, xl, #0
adc xh, xh, r4, lsl #20
orr xh, xh, r5
@@ -238,9 +261,11 @@ LSYM(Lad_l):
#else
teq xh, #0
+ do_it eq, t
moveq xh, xl
moveq xl, #0
clz r3, xh
+ do_it eq
addeq r3, r3, #32
sub r3, r3, #11
@@ -256,20 +281,29 @@ LSYM(Lad_l):
@ since a register switch happened above.
add ip, r2, #20
rsb r2, r2, #12
- mov xl, xh, lsl ip
- mov xh, xh, lsr r2
+ shift1 lsl, xl, xh, ip
+ shift1 lsr, xh, xh, r2
b 3f
@ actually shift value left 1 to 20 bits, which might also represent
@ 32 to 52 bits if counting the register switch that happened earlier.
1: add r2, r2, #20
-2: rsble ip, r2, #32
- mov xh, xh, lsl r2
+2: do_it le
+ rsble ip, r2, #32
+ shift1 lsl, xh, xh, r2
+#if defined(__thumb2__)
+ lsr ip, xl, ip
+ itt le
+ orrle xh, xh, ip
+ lslle xl, xl, r2
+#else
orrle xh, xh, xl, lsr ip
movle xl, xl, lsl r2
+#endif
@ adjust exponent accordingly.
3: subs r4, r4, r3
+ do_it ge, tt
addge xh, xh, r4, lsl #20
orrge xh, xh, r5
RETLDM "r4, r5" ge
@@ -285,23 +319,23 @@ LSYM(Lad_l):
@ shift result right of 1 to 20 bits, sign is in r5.
add r4, r4, #20
rsb r2, r4, #32
- mov xl, xl, lsr r4
- orr xl, xl, xh, lsl r2
- orr xh, r5, xh, lsr r4
+ shift1 lsr, xl, xl, r4
+ shiftop orr xl xl xh lsl r2 yh
+ shiftop orr xh r5 xh lsr r4 yh
RETLDM "r4, r5"
@ shift result right of 21 to 31 bits, or left 11 to 1 bits after
@ a register switch from xh to xl.
1: rsb r4, r4, #12
rsb r2, r4, #32
- mov xl, xl, lsr r2
- orr xl, xl, xh, lsl r4
+ shift1 lsr, xl, xl, r2
+ shiftop orr xl xl xh lsl r4 yh
mov xh, r5
RETLDM "r4, r5"
@ Shift value right of 32 to 64 bits, or 0 to 32 bits after a switch
@ from xh to xl.
-2: mov xl, xh, lsr r4
+2: shift1 lsr, xl, xh, r4
mov xh, r5
RETLDM "r4, r5"
@@ -310,6 +344,7 @@ LSYM(Lad_l):
LSYM(Lad_d):
teq r4, #0
eor yh, yh, #0x00100000
+ do_it eq, te
eoreq xh, xh, #0x00100000
addeq r4, r4, #1
subne r5, r5, #1
@@ -318,15 +353,18 @@ LSYM(Lad_d):
LSYM(Lad_s):
mvns ip, r4, asr #21
- mvnnes ip, r5, asr #21
+ do_it ne
+ COND(mvn,s,ne) ip, r5, asr #21
beq LSYM(Lad_i)
teq r4, r5
+ do_it eq
teqeq xl, yl
beq 1f
@ Result is x + 0.0 = x or 0.0 + y = y.
teq r4, #0
+ do_it eq, t
moveq xh, yh
moveq xl, yl
RETLDM "r4, r5"
@@ -334,6 +372,7 @@ LSYM(Lad_s):
1: teq xh, yh
@ Result is x - x = 0.
+ do_it ne, tt
movne xh, #0
movne xl, #0
RETLDM "r4, r5" ne
@@ -343,9 +382,11 @@ LSYM(Lad_s):
bne 2f
movs xl, xl, lsl #1
adcs xh, xh, xh
+ do_it cs
orrcs xh, xh, #0x80000000
RETLDM "r4, r5"
2: adds r4, r4, #(2 << 21)
+ do_it cc, t
addcc xh, xh, #(1 << 20)
RETLDM "r4, r5" cc
and r5, xh, #0x80000000
@@ -365,13 +406,16 @@ LSYM(Lad_o):
@ otherwise return xh-xl (which is INF or -INF)
LSYM(Lad_i):
mvns ip, r4, asr #21
+ do_it ne, te
movne xh, yh
movne xl, yl
- mvneqs ip, r5, asr #21
+ COND(mvn,s,eq) ip, r5, asr #21
+ do_it ne, t
movne yh, xh
movne yl, xl
orrs r4, xl, xh, lsl #12
- orreqs r5, yl, yh, lsl #12
+ do_it eq, te
+ COND(orr,s,eq) r5, yl, yh, lsl #12
teqeq xh, yh
orrne xh, xh, #0x00080000 @ quiet NAN
RETLDM "r4, r5"
@@ -385,9 +429,10 @@ ARM_FUNC_START floatunsidf
ARM_FUNC_ALIAS aeabi_ui2d floatunsidf
teq r0, #0
+ do_it eq, t
moveq r1, #0
RETc(eq)
- stmfd sp!, {r4, r5, lr}
+ do_push {r4, r5, lr}
mov r4, #0x400 @ initial exponent
add r4, r4, #(52-1 - 1)
mov r5, #0 @ sign bit is 0
@@ -404,12 +449,14 @@ ARM_FUNC_START floatsidf
ARM_FUNC_ALIAS aeabi_i2d floatsidf
teq r0, #0
+ do_it eq, t
moveq r1, #0
RETc(eq)
- stmfd sp!, {r4, r5, lr}
+ do_push {r4, r5, lr}
mov r4, #0x400 @ initial exponent
add r4, r4, #(52-1 - 1)
ands r5, r0, #0x80000000 @ sign bit in r5
+ do_it mi
rsbmi r0, r0, #0 @ absolute value
.ifnc xl, r0
mov xl, r0
@@ -427,17 +474,19 @@ ARM_FUNC_ALIAS aeabi_f2d extendsfdf2
mov xh, r2, asr #3 @ stretch exponent
mov xh, xh, rrx @ retrieve sign bit
mov xl, r2, lsl #28 @ retrieve remaining bits
- andnes r3, r2, #0xff000000 @ isolate exponent
+ do_it ne, ttt
+ COND(and,s,ne) r3, r2, #0xff000000 @ isolate exponent
teqne r3, #0xff000000 @ if not 0, check if INF or NAN
eorne xh, xh, #0x38000000 @ fixup exponent otherwise.
RETc(ne) @ and return it.
teq r2, #0 @ if actually 0
+ do_it ne, e
teqne r3, #0xff000000 @ or INF or NAN
RETc(eq) @ we are done already.
@ value was denormalized. We can normalize it now.
- stmfd sp!, {r4, r5, lr}
+ do_push {r4, r5, lr}
mov r4, #0x380 @ setup corresponding exponent
and r5, xh, #0x80000000 @ move sign bit in r5
bic xh, xh, #0x80000000
@@ -451,7 +500,10 @@ ARM_FUNC_ALIAS aeabi_ul2d floatundidf
orrs r2, r0, r1
#if !defined (__VFP_FP__) && !defined(__SOFTFP__)
+ do_it eq, t
mvfeqd f0, #0.0
+#else
+ do_it eq
#endif
RETc(eq)
@@ -460,9 +512,9 @@ ARM_FUNC_ALIAS aeabi_ul2d floatundidf
@ we can return the result in f0 as well as in r0/r1 for backwards
@ compatibility.
adr ip, LSYM(f0_ret)
- stmfd sp!, {r4, r5, ip, lr}
+ do_push {r4, r5, ip, lr}
#else
- stmfd sp!, {r4, r5, lr}
+ do_push {r4, r5, lr}
#endif
mov r5, #0
@@ -473,7 +525,10 @@ ARM_FUNC_ALIAS aeabi_l2d floatdidf
orrs r2, r0, r1
#if !defined (__VFP_FP__) && !defined(__SOFTFP__)
+ do_it eq, t
mvfeqd f0, #0.0
+#else
+ do_it eq
#endif
RETc(eq)
@@ -482,15 +537,20 @@ ARM_FUNC_ALIAS aeabi_l2d floatdidf
@ we can return the result in f0 as well as in r0/r1 for backwards
@ compatibility.
adr ip, LSYM(f0_ret)
- stmfd sp!, {r4, r5, ip, lr}
+ do_push {r4, r5, ip, lr}
#else
- stmfd sp!, {r4, r5, lr}
+ do_push {r4, r5, lr}
#endif
ands r5, ah, #0x80000000 @ sign bit in r5
bpl 2f
+#if defined(__thumb2__)
+ negs al, al
+ sbc ah, ah, ah, lsl #1
+#else
rsbs al, al, #0
rsc ah, ah, #0
+#endif
2:
mov r4, #0x400 @ initial exponent
add r4, r4, #(52-1 - 1)
@@ -508,16 +568,18 @@ ARM_FUNC_ALIAS aeabi_l2d floatdidf
@ The value is too big. Scale it down a bit...
mov r2, #3
movs ip, ip, lsr #3
+ do_it ne
addne r2, r2, #3
movs ip, ip, lsr #3
+ do_it ne
addne r2, r2, #3
add r2, r2, ip, lsr #3
rsb r3, r2, #32
- mov ip, xl, lsl r3
- mov xl, xl, lsr r2
- orr xl, xl, xh, lsl r3
- mov xh, xh, lsr r2
+ shift1 lsl, ip, xl, r3
+ shift1 lsr, xl, xl, r2
+ shiftop orr xl xl xh lsl r3 lr
+ shift1 lsr, xh, xh, r2
add r4, r4, r2
b LSYM(Lad_p)
@@ -526,7 +588,7 @@ ARM_FUNC_ALIAS aeabi_l2d floatdidf
@ Legacy code expects the result to be returned in f0. Copy it
@ there as well.
LSYM(f0_ret):
- stmfd sp!, {r0, r1}
+ do_push {r0, r1}
ldfd f0, [sp], #8
RETLDM
@@ -543,13 +605,14 @@ LSYM(f0_ret):
ARM_FUNC_START muldf3
ARM_FUNC_ALIAS aeabi_dmul muldf3
- stmfd sp!, {r4, r5, r6, lr}
+ do_push {r4, r5, r6, lr}
@ Mask out exponents, trap any zero/denormal/INF/NAN.
mov ip, #0xff
orr ip, ip, #0x700
ands r4, ip, xh, lsr #20
- andnes r5, ip, yh, lsr #20
+ do_it ne, tte
+ COND(and,s,ne) r5, ip, yh, lsr #20
teqne r4, ip
teqne r5, ip
bleq LSYM(Lml_s)
@@ -565,7 +628,8 @@ ARM_FUNC_ALIAS aeabi_dmul muldf3
bic xh, xh, ip, lsl #21
bic yh, yh, ip, lsl #21
orrs r5, xl, xh, lsl #12
- orrnes r5, yl, yh, lsl #12
+ do_it ne
+ COND(orr,s,ne) r5, yl, yh, lsl #12
orr xh, xh, #0x00100000
orr yh, yh, #0x00100000
beq LSYM(Lml_1)
@@ -646,6 +710,7 @@ ARM_FUNC_ALIAS aeabi_dmul muldf3
@ The LSBs in ip are only significant for the final rounding.
@ Fold them into lr.
teq ip, #0
+ do_it ne
orrne lr, lr, #1
@ Adjust result upon the MSB position.
@@ -666,12 +731,14 @@ ARM_FUNC_ALIAS aeabi_dmul muldf3
@ Check exponent range for under/overflow.
subs ip, r4, #(254 - 1)
+ do_it hi
cmphi ip, #0x700
bhi LSYM(Lml_u)
@ Round the result, merge final exponent.
cmp lr, #0x80000000
- moveqs lr, xl, lsr #1
+ do_it eq
+ COND(mov,s,eq) lr, xl, lsr #1
adcs xl, xl, #0
adc xh, xh, r4, lsl #20
RETLDM "r4, r5, r6"
@@ -683,7 +750,8 @@ LSYM(Lml_1):
orr xl, xl, yl
eor xh, xh, yh
subs r4, r4, ip, lsr #1
- rsbgts r5, r4, ip
+ do_it gt, tt
+ COND(rsb,s,gt) r5, r4, ip
orrgt xh, xh, r4, lsl #20
RETLDM "r4, r5, r6" gt
@@ -698,6 +766,7 @@ LSYM(Lml_u):
@ Check if denormalized result is possible, otherwise return signed 0.
cmn r4, #(53 + 1)
+ do_it le, tt
movle xl, #0
bicle xh, xh, #0x7fffffff
RETLDM "r4, r5, r6" le
@@ -712,14 +781,15 @@ LSYM(Lml_u):
@ shift result right of 1 to 20 bits, preserve sign bit, round, etc.
add r4, r4, #20
rsb r5, r4, #32
- mov r3, xl, lsl r5
- mov xl, xl, lsr r4
- orr xl, xl, xh, lsl r5
+ shift1 lsl, r3, xl, r5
+ shift1 lsr, xl, xl, r4
+ shiftop orr xl xl xh lsl r5 r2
and r2, xh, #0x80000000
bic xh, xh, #0x80000000
adds xl, xl, r3, lsr #31
- adc xh, r2, xh, lsr r4
+ shiftop adc xh r2 xh lsr r4 r6
orrs lr, lr, r3, lsl #1
+ do_it eq
biceq xl, xl, r3, lsr #31
RETLDM "r4, r5, r6"
@@ -727,27 +797,29 @@ LSYM(Lml_u):
@ a register switch from xh to xl. Then round.
1: rsb r4, r4, #12
rsb r5, r4, #32
- mov r3, xl, lsl r4
- mov xl, xl, lsr r5
- orr xl, xl, xh, lsl r4
+ shift1 lsl, r3, xl, r4
+ shift1 lsr, xl, xl, r5
+ shiftop orr xl xl xh lsl r4 r2
bic xh, xh, #0x7fffffff
adds xl, xl, r3, lsr #31
adc xh, xh, #0
orrs lr, lr, r3, lsl #1
+ do_it eq
biceq xl, xl, r3, lsr #31
RETLDM "r4, r5, r6"
@ Shift value right of 32 to 64 bits, or 0 to 32 bits after a switch
@ from xh to xl. Leftover bits are in r3-r6-lr for rounding.
2: rsb r5, r4, #32
- orr lr, lr, xl, lsl r5
- mov r3, xl, lsr r4
- orr r3, r3, xh, lsl r5
- mov xl, xh, lsr r4
+ shiftop orr lr lr xl lsl r5 r2
+ shift1 lsr, r3, xl, r4
+ shiftop orr r3 r3 xh lsl r5 r2
+ shift1 lsr, xl, xh, r4
bic xh, xh, #0x7fffffff
- bic xl, xl, xh, lsr r4
+ shiftop bic xl xl xh lsr r4 r2
add xl, xl, r3, lsr #31
orrs lr, lr, r3, lsl #1
+ do_it eq
biceq xl, xl, r3, lsr #31
RETLDM "r4, r5, r6"
@@ -760,15 +832,18 @@ LSYM(Lml_d):
1: movs xl, xl, lsl #1
adc xh, xh, xh
tst xh, #0x00100000
+ do_it eq
subeq r4, r4, #1
beq 1b
orr xh, xh, r6
teq r5, #0
+ do_it ne
movne pc, lr
2: and r6, yh, #0x80000000
3: movs yl, yl, lsl #1
adc yh, yh, yh
tst yh, #0x00100000
+ do_it eq
subeq r5, r5, #1
beq 3b
orr yh, yh, r6
@@ -778,26 +853,29 @@ LSYM(Lml_s):
@ Isolate the INF and NAN cases away
teq r4, ip
and r5, ip, yh, lsr #20
+ do_it ne
teqne r5, ip
beq 1f
@ Here, one or more arguments are either denormalized or zero.
orrs r6, xl, xh, lsl #1
- orrnes r6, yl, yh, lsl #1
+ do_it ne
+ COND(orr,s,ne) r6, yl, yh, lsl #1
bne LSYM(Lml_d)
@ Result is 0, but determine sign anyway.
LSYM(Lml_z):
eor xh, xh, yh
- bic xh, xh, #0x7fffffff
+ and xh, xh, #0x80000000
mov xl, #0
RETLDM "r4, r5, r6"
1: @ One or both args are INF or NAN.
orrs r6, xl, xh, lsl #1
+ do_it eq, te
moveq xl, yl
moveq xh, yh
- orrnes r6, yl, yh, lsl #1
+ COND(orr,s,ne) r6, yl, yh, lsl #1
beq LSYM(Lml_n) @ 0 * INF or INF * 0 -> NAN
teq r4, ip
bne 1f
@@ -806,6 +884,7 @@ LSYM(Lml_z):
1: teq r5, ip
bne LSYM(Lml_i)
orrs r6, yl, yh, lsl #12
+ do_it ne, t
movne xl, yl
movne xh, yh
bne LSYM(Lml_n) @ <anything> * NAN -> NAN
@@ -834,13 +913,14 @@ LSYM(Lml_n):
ARM_FUNC_START divdf3
ARM_FUNC_ALIAS aeabi_ddiv divdf3
- stmfd sp!, {r4, r5, r6, lr}
+ do_push {r4, r5, r6, lr}
@ Mask out exponents, trap any zero/denormal/INF/NAN.
mov ip, #0xff
orr ip, ip, #0x700
ands r4, ip, xh, lsr #20
- andnes r5, ip, yh, lsr #20
+ do_it ne, tte
+ COND(and,s,ne) r5, ip, yh, lsr #20
teqne r4, ip
teqne r5, ip
bleq LSYM(Ldv_s)
@@ -871,6 +951,7 @@ ARM_FUNC_ALIAS aeabi_ddiv divdf3
@ Ensure result will land to known bit position.
@ Apply exponent bias accordingly.
cmp r5, yh
+ do_it eq
cmpeq r6, yl
adc r4, r4, #(255 - 2)
add r4, r4, #0x300
@@ -889,6 +970,7 @@ ARM_FUNC_ALIAS aeabi_ddiv divdf3
@ The actual division loop.
1: subs lr, r6, yl
sbcs lr, r5, yh
+ do_it cs, tt
subcs r6, r6, yl
movcs r5, lr
orrcs xl, xl, ip
@@ -896,6 +978,7 @@ ARM_FUNC_ALIAS aeabi_ddiv divdf3
mov yl, yl, rrx
subs lr, r6, yl
sbcs lr, r5, yh
+ do_it cs, tt
subcs r6, r6, yl
movcs r5, lr
orrcs xl, xl, ip, lsr #1
@@ -903,6 +986,7 @@ ARM_FUNC_ALIAS aeabi_ddiv divdf3
mov yl, yl, rrx
subs lr, r6, yl
sbcs lr, r5, yh
+ do_it cs, tt
subcs r6, r6, yl
movcs r5, lr
orrcs xl, xl, ip, lsr #2
@@ -910,6 +994,7 @@ ARM_FUNC_ALIAS aeabi_ddiv divdf3
mov yl, yl, rrx
subs lr, r6, yl
sbcs lr, r5, yh
+ do_it cs, tt
subcs r6, r6, yl
movcs r5, lr
orrcs xl, xl, ip, lsr #3
@@ -936,18 +1021,21 @@ ARM_FUNC_ALIAS aeabi_ddiv divdf3
2:
@ Be sure result starts in the high word.
tst xh, #0x00100000
+ do_it eq, t
orreq xh, xh, xl
moveq xl, #0
3:
@ Check exponent range for under/overflow.
subs ip, r4, #(254 - 1)
+ do_it hi
cmphi ip, #0x700
bhi LSYM(Lml_u)
@ Round the result, merge final exponent.
subs ip, r5, yh
- subeqs ip, r6, yl
- moveqs ip, xl, lsr #1
+ do_it eq, t
+ COND(sub,s,eq) ip, r6, yl
+ COND(mov,s,eq) ip, xl, lsr #1
adcs xl, xl, #0
adc xh, xh, r4, lsl #20
RETLDM "r4, r5, r6"
@@ -957,7 +1045,8 @@ LSYM(Ldv_1):
and lr, lr, #0x80000000
orr xh, lr, xh, lsr #12
adds r4, r4, ip, lsr #1
- rsbgts r5, r4, ip
+ do_it gt, tt
+ COND(rsb,s,gt) r5, r4, ip
orrgt xh, xh, r4, lsl #20
RETLDM "r4, r5, r6" gt
@@ -976,6 +1065,7 @@ LSYM(Ldv_u):
LSYM(Ldv_s):
and r5, ip, yh, lsr #20
teq r4, ip
+ do_it eq
teqeq r5, ip
beq LSYM(Lml_n) @ INF/NAN / INF/NAN -> NAN
teq r4, ip
@@ -996,7 +1086,8 @@ LSYM(Ldv_s):
b LSYM(Lml_n) @ <anything> / NAN -> NAN
2: @ If both are nonzero, we need to normalize and resume above.
orrs r6, xl, xh, lsl #1
- orrnes r6, yl, yh, lsl #1
+ do_it ne
+ COND(orr,s,ne) r6, yl, yh, lsl #1
bne LSYM(Lml_d)
@ One or both arguments are 0.
orrs r4, xl, xh, lsl #1
@@ -1035,14 +1126,17 @@ ARM_FUNC_ALIAS eqdf2 cmpdf2
mov ip, xh, lsl #1
mvns ip, ip, asr #21
mov ip, yh, lsl #1
- mvnnes ip, ip, asr #21
+ do_it ne
+ COND(mvn,s,ne) ip, ip, asr #21
beq 3f
@ Test for equality.
@ Note that 0.0 is equal to -0.0.
2: orrs ip, xl, xh, lsl #1 @ if x == 0.0 or -0.0
- orreqs ip, yl, yh, lsl #1 @ and y == 0.0 or -0.0
+ do_it eq, e
+ COND(orr,s,eq) ip, yl, yh, lsl #1 @ and y == 0.0 or -0.0
teqne xh, yh @ or xh == yh
+ do_it eq, tt
teqeq xl, yl @ and xl == yl
moveq r0, #0 @ then equal.
RETc(eq)
@@ -1054,10 +1148,13 @@ ARM_FUNC_ALIAS eqdf2 cmpdf2
teq xh, yh
@ Compare values if same sign
+ do_it pl
cmppl xh, yh
+ do_it eq
cmpeq xl, yl
@ Result:
+ do_it cs, e
movcs r0, yh, asr #31
mvncc r0, yh, asr #31
orr r0, r0, #1
@@ -1100,14 +1197,15 @@ ARM_FUNC_ALIAS aeabi_cdcmple aeabi_cdcmpeq
@ The status-returning routines are required to preserve all
@ registers except ip, lr, and cpsr.
-6: stmfd sp!, {r0, lr}
+6: do_push {r0, lr}
ARM_CALL cmpdf2
@ Set the Z flag correctly, and the C flag unconditionally.
- cmp r0, #0
+ cmp r0, #0
@ Clear the C flag if the return value was -1, indicating
@ that the first operand was smaller than the second.
- cmnmi r0, #0
- RETLDM "r0"
+ do_it mi
+ cmnmi r0, #0
+ RETLDM "r0"
FUNC_END aeabi_cdcmple
FUNC_END aeabi_cdcmpeq
@@ -1117,6 +1215,7 @@ ARM_FUNC_START aeabi_dcmpeq
str lr, [sp, #-8]!
ARM_CALL aeabi_cdcmple
+ do_it eq, e
moveq r0, #1 @ Equal to.
movne r0, #0 @ Less than, greater than, or unordered.
RETLDM
@@ -1127,6 +1226,7 @@ ARM_FUNC_START aeabi_dcmplt
str lr, [sp, #-8]!
ARM_CALL aeabi_cdcmple
+ do_it cc, e
movcc r0, #1 @ Less than.
movcs r0, #0 @ Equal to, greater than, or unordered.
RETLDM
@@ -1137,6 +1237,7 @@ ARM_FUNC_START aeabi_dcmple
str lr, [sp, #-8]!
ARM_CALL aeabi_cdcmple
+ do_it ls, e
movls r0, #1 @ Less than or equal to.
movhi r0, #0 @ Greater than or unordered.
RETLDM
@@ -1147,6 +1248,7 @@ ARM_FUNC_START aeabi_dcmpge
str lr, [sp, #-8]!
ARM_CALL aeabi_cdrcmple
+ do_it ls, e
movls r0, #1 @ Operand 2 is less than or equal to operand 1.
movhi r0, #0 @ Operand 2 greater than operand 1, or unordered.
RETLDM
@@ -1157,6 +1259,7 @@ ARM_FUNC_START aeabi_dcmpgt
str lr, [sp, #-8]!
ARM_CALL aeabi_cdrcmple
+ do_it cc, e
movcc r0, #1 @ Operand 2 is less than operand 1.
movcs r0, #0 @ Operand 2 is greater than or equal to operand 1,
@ or they are unordered.
@@ -1211,7 +1314,8 @@ ARM_FUNC_ALIAS aeabi_d2iz fixdfsi
orr r3, r3, #0x80000000
orr r3, r3, xl, lsr #21
tst xh, #0x80000000 @ the sign bit
- mov r0, r3, lsr r2
+ shift1 lsr, r0, r3, r2
+ do_it ne
rsbne r0, r0, #0
RET
@@ -1221,6 +1325,7 @@ ARM_FUNC_ALIAS aeabi_d2iz fixdfsi
2: orrs xl, xl, xh, lsl #12
bne 4f @ x is NAN.
3: ands r0, xh, #0x80000000 @ the sign bit
+ do_it eq
moveq r0, #0x7fffffff @ maximum signed positive si
RET
@@ -1251,7 +1356,7 @@ ARM_FUNC_ALIAS aeabi_d2uiz fixunsdfsi
mov r3, xh, lsl #11
orr r3, r3, #0x80000000
orr r3, r3, xl, lsr #21
- mov r0, r3, lsr r2
+ shift1 lsr, r0, r3, r2
RET
1: mov r0, #0
@@ -1278,8 +1383,9 @@ ARM_FUNC_ALIAS aeabi_d2f truncdfsf2
@ check exponent range.
mov r2, xh, lsl #1
subs r3, r2, #((1023 - 127) << 21)
- subcss ip, r3, #(1 << 21)
- rsbcss ip, ip, #(254 << 21)
+ do_it cs, t
+ COND(sub,s,cs) ip, r3, #(1 << 21)
+ COND(rsb,s,cs) ip, ip, #(254 << 21)
bls 2f @ value is out of range
1: @ shift and round mantissa
@@ -1288,6 +1394,7 @@ ARM_FUNC_ALIAS aeabi_d2f truncdfsf2
orr xl, ip, xl, lsr #29
cmp r2, #0x80000000
adc r0, xl, r3, lsl #2
+ do_it eq
biceq r0, r0, #1
RET
@@ -1297,6 +1404,7 @@ ARM_FUNC_ALIAS aeabi_d2f truncdfsf2
@ check if denormalized value is possible
adds r2, r3, #(23 << 21)
+ do_it lt, t
andlt r0, xh, #0x80000000 @ too small, return signed 0.
RETc(lt)
@@ -1305,13 +1413,18 @@ ARM_FUNC_ALIAS aeabi_d2f truncdfsf2
mov r2, r2, lsr #21
rsb r2, r2, #24
rsb ip, r2, #32
+#if defined(__thumb2__)
+ lsls r3, xl, ip
+#else
movs r3, xl, lsl ip
- mov xl, xl, lsr r2
+#endif
+ shift1 lsr, xl, xl, r2
+ do_it ne
orrne xl, xl, #1 @ fold r3 for rounding considerations.
mov r3, xh, lsl #11
mov r3, r3, lsr #11
- orr xl, xl, r3, lsl ip
- mov r3, r3, lsr r2
+ shiftop orr xl xl r3 lsl ip ip
+ shift1 lsr, r3, r3, r2
mov r3, r3, lsl #1
b 1b
@@ -1319,6 +1432,7 @@ ARM_FUNC_ALIAS aeabi_d2f truncdfsf2
mvns r3, r2, asr #21
bne 5f @ simple overflow
orrs r3, xl, xh, lsl #12
+ do_it ne, tt
movne r0, #0x7f000000
orrne r0, r0, #0x00c00000
RETc(ne) @ return NAN
diff --git a/gcc/config/arm/ieee754-sf.S b/gcc/config/arm/ieee754-sf.S
index f74f458..e36b4a4 100644
--- a/gcc/config/arm/ieee754-sf.S
+++ b/gcc/config/arm/ieee754-sf.S
@@ -1,6 +1,6 @@
/* ieee754-sf.S single-precision floating point support for ARM
- Copyright (C) 2003, 2004, 2005 Free Software Foundation, Inc.
+ Copyright (C) 2003, 2004, 2005, 2007 Free Software Foundation, Inc.
Contributed by Nicolas Pitre (nico@cam.org)
This file is free software; you can redistribute it and/or modify it
@@ -71,36 +71,42 @@ ARM_FUNC_ALIAS aeabi_fadd addsf3
1: @ Look for zeroes, equal values, INF, or NAN.
movs r2, r0, lsl #1
- movnes r3, r1, lsl #1
+ do_it ne, ttt
+ COND(mov,s,ne) r3, r1, lsl #1
teqne r2, r3
- mvnnes ip, r2, asr #24
- mvnnes ip, r3, asr #24
+ COND(mvn,s,ne) ip, r2, asr #24
+ COND(mvn,s,ne) ip, r3, asr #24
beq LSYM(Lad_s)
@ Compute exponent difference. Make largest exponent in r2,
@ corresponding arg in r0, and positive exponent difference in r3.
mov r2, r2, lsr #24
rsbs r3, r2, r3, lsr #24
+ do_it gt, ttt
addgt r2, r2, r3
eorgt r1, r0, r1
eorgt r0, r1, r0
eorgt r1, r0, r1
+ do_it lt
rsblt r3, r3, #0
@ If exponent difference is too large, return largest argument
@ already in r0. We need up to 25 bit to handle proper rounding
@ of 0x1p25 - 1.1.
cmp r3, #25
+ do_it hi
RETc(hi)
@ Convert mantissa to signed integer.
tst r0, #0x80000000
orr r0, r0, #0x00800000
bic r0, r0, #0xff000000
+ do_it ne
rsbne r0, r0, #0
tst r1, #0x80000000
orr r1, r1, #0x00800000
bic r1, r1, #0xff000000
+ do_it ne
rsbne r1, r1, #0
@ If exponent == difference, one or both args were denormalized.
@@ -114,15 +120,20 @@ LSYM(Lad_x):
@ Shift and add second arg to first arg in r0.
@ Keep leftover bits into r1.
- adds r0, r0, r1, asr r3
+ shiftop adds r0 r0 r1 asr r3 ip
rsb r3, r3, #32
- mov r1, r1, lsl r3
+ shift1 lsl, r1, r1, r3
@ Keep absolute value in r0-r1, sign in r3 (the n bit was set above)
and r3, r0, #0x80000000
bpl LSYM(Lad_p)
+#if defined(__thumb2__)
+ negs r1, r1
+ sbc r0, r0, r0, lsl #1
+#else
rsbs r1, r1, #0
rsc r0, r0, #0
+#endif
@ Determine how to normalize the result.
LSYM(Lad_p):
@@ -147,6 +158,7 @@ LSYM(Lad_p):
LSYM(Lad_e):
cmp r1, #0x80000000
adc r0, r0, r2, lsl #23
+ do_it eq
biceq r0, r0, #1
orr r0, r0, r3
RET
@@ -185,16 +197,23 @@ LSYM(Lad_l):
clz ip, r0
sub ip, ip, #8
subs r2, r2, ip
- mov r0, r0, lsl ip
+ shift1 lsl, r0, r0, ip
#endif
@ Final result with sign
@ If exponent negative, denormalize result.
+ do_it ge, et
addge r0, r0, r2, lsl #23
rsblt r2, r2, #0
orrge r0, r0, r3
+#if defined(__thumb2__)
+ do_it lt, t
+ lsrlt r0, r0, r2
+ orrlt r0, r3, r0
+#else
orrlt r0, r3, r0, lsr r2
+#endif
RET
@ Fixup and adjust bit position for denormalized arguments.
@@ -202,6 +221,7 @@ LSYM(Lad_l):
LSYM(Lad_d):
teq r2, #0
eor r1, r1, #0x00800000
+ do_it eq, te
eoreq r0, r0, #0x00800000
addeq r2, r2, #1
subne r3, r3, #1
@@ -211,7 +231,8 @@ LSYM(Lad_s):
mov r3, r1, lsl #1
mvns ip, r2, asr #24
- mvnnes ip, r3, asr #24
+ do_it ne
+ COND(mvn,s,ne) ip, r3, asr #24
beq LSYM(Lad_i)
teq r2, r3
@@ -219,12 +240,14 @@ LSYM(Lad_s):
@ Result is x + 0.0 = x or 0.0 + y = y.
teq r2, #0
+ do_it eq
moveq r0, r1
RET
1: teq r0, r1
@ Result is x - x = 0.
+ do_it ne, t
movne r0, #0
RETc(ne)
@@ -232,9 +255,11 @@ LSYM(Lad_s):
tst r2, #0xff000000
bne 2f
movs r0, r0, lsl #1
+ do_it cs
orrcs r0, r0, #0x80000000
RET
2: adds r2, r2, #(2 << 24)
+ do_it cc, t
addcc r0, r0, #(1 << 23)
RETc(cc)
and r3, r0, #0x80000000
@@ -253,11 +278,13 @@ LSYM(Lad_o):
@ otherwise return r0 (which is INF or -INF)
LSYM(Lad_i):
mvns r2, r2, asr #24
+ do_it ne, et
movne r0, r1
- mvneqs r3, r3, asr #24
+ COND(mvn,s,eq) r3, r3, asr #24
movne r1, r0
movs r2, r0, lsl #9
- moveqs r3, r1, lsl #9
+ do_it eq, te
+ COND(mov,s,eq) r3, r1, lsl #9
teqeq r0, r1
orrne r0, r0, #0x00400000 @ quiet NAN
RET
@@ -278,9 +305,11 @@ ARM_FUNC_START floatsisf
ARM_FUNC_ALIAS aeabi_i2f floatsisf
ands r3, r0, #0x80000000
+ do_it mi
rsbmi r0, r0, #0
1: movs ip, r0
+ do_it eq
RETc(eq)
@ Add initial exponent to sign
@@ -302,7 +331,10 @@ ARM_FUNC_ALIAS aeabi_ul2f floatundisf
orrs r2, r0, r1
#if !defined (__VFP_FP__) && !defined(__SOFTFP__)
+ do_it eq, t
mvfeqs f0, #0.0
+#else
+ do_it eq
#endif
RETc(eq)
@@ -314,14 +346,22 @@ ARM_FUNC_ALIAS aeabi_l2f floatdisf
orrs r2, r0, r1
#if !defined (__VFP_FP__) && !defined(__SOFTFP__)
+ do_it eq, t
mvfeqs f0, #0.0
+#else
+ do_it eq
#endif
RETc(eq)
ands r3, ah, #0x80000000 @ sign bit in r3
bpl 1f
+#if defined(__thumb2__)
+ negs al, al
+ sbc ah, ah, ah, lsl #1
+#else
rsbs al, al, #0
rsc ah, ah, #0
+#endif
1:
#if !defined (__VFP_FP__) && !defined(__SOFTFP__)
@ For hard FPA code we want to return via the tail below so that
@@ -332,12 +372,14 @@ ARM_FUNC_ALIAS aeabi_l2f floatdisf
#endif
movs ip, ah
+ do_it eq, tt
moveq ip, al
moveq ah, al
moveq al, #0
@ Add initial exponent to sign
orr r3, r3, #((127 + 23 + 32) << 23)
+ do_it eq
subeq r3, r3, #(32 << 23)
2: sub r3, r3, #(1 << 23)
@@ -345,15 +387,19 @@ ARM_FUNC_ALIAS aeabi_l2f floatdisf
mov r2, #23
cmp ip, #(1 << 16)
+ do_it hs, t
movhs ip, ip, lsr #16
subhs r2, r2, #16
cmp ip, #(1 << 8)
+ do_it hs, t
movhs ip, ip, lsr #8
subhs r2, r2, #8
cmp ip, #(1 << 4)
+ do_it hs, t
movhs ip, ip, lsr #4
subhs r2, r2, #4
cmp ip, #(1 << 2)
+ do_it hs, e
subhs r2, r2, #2
sublo r2, r2, ip, lsr #1
subs r2, r2, ip, lsr #3
@@ -368,19 +414,21 @@ ARM_FUNC_ALIAS aeabi_l2f floatdisf
sub r3, r3, r2, lsl #23
blt 3f
- add r3, r3, ah, lsl r2
- mov ip, al, lsl r2
+ shiftop add r3 r3 ah lsl r2 ip
+ shift1 lsl, ip, al, r2
rsb r2, r2, #32
cmp ip, #0x80000000
- adc r0, r3, al, lsr r2
+ shiftop adc r0 r3 al lsr r2 r2
+ do_it eq
biceq r0, r0, #1
RET
3: add r2, r2, #32
- mov ip, ah, lsl r2
+ shift1 lsl, ip, ah, r2
rsb r2, r2, #32
orrs al, al, ip, lsl #1
- adc r0, r3, ah, lsr r2
+ shiftop adc r0 r3 ah lsr r2 r2
+ do_it eq
biceq r0, r0, ip, lsr #31
RET
@@ -408,7 +456,8 @@ ARM_FUNC_ALIAS aeabi_fmul mulsf3
@ Mask out exponents, trap any zero/denormal/INF/NAN.
mov ip, #0xff
ands r2, ip, r0, lsr #23
- andnes r3, ip, r1, lsr #23
+ do_it ne, tt
+ COND(and,s,ne) r3, ip, r1, lsr #23
teqne r2, ip
teqne r3, ip
beq LSYM(Lml_s)
@@ -424,7 +473,8 @@ LSYM(Lml_x):
@ If power of two, branch to a separate path.
@ Make up for final alignment.
movs r0, r0, lsl #9
- movnes r1, r1, lsl #9
+ do_it ne
+ COND(mov,s,ne) r1, r1, lsl #9
beq LSYM(Lml_1)
mov r3, #0x08000000
orr r0, r3, r0, lsr #5
@@ -436,7 +486,7 @@ LSYM(Lml_x):
and r3, ip, #0x80000000
@ Well, no way to make it shorter without the umull instruction.
- stmfd sp!, {r3, r4, r5}
+ do_push {r3, r4, r5}
mov r4, r0, lsr #16
mov r5, r1, lsr #16
bic r0, r0, r4, lsl #16
@@ -447,7 +497,7 @@ LSYM(Lml_x):
mla r0, r4, r1, r0
adds r3, r3, r0, lsl #16
adc r1, ip, r0, lsr #16
- ldmfd sp!, {r0, r4, r5}
+ do_pop {r0, r4, r5}
#else
@@ -461,6 +511,7 @@ LSYM(Lml_x):
@ Adjust result upon the MSB position.
cmp r1, #(1 << 23)
+ do_it cc, tt
movcc r1, r1, lsl #1
orrcc r1, r1, r3, lsr #31
movcc r3, r3, lsl #1
@@ -476,6 +527,7 @@ LSYM(Lml_x):
@ Round the result, merge final exponent.
cmp r3, #0x80000000
adc r0, r0, r2, lsl #23
+ do_it eq
biceq r0, r0, #1
RET
@@ -483,11 +535,13 @@ LSYM(Lml_x):
LSYM(Lml_1):
teq r0, #0
and ip, ip, #0x80000000
+ do_it eq
moveq r1, r1, lsl #9
orr r0, ip, r0, lsr #9
orr r0, r0, r1, lsr #9
subs r2, r2, #127
- rsbgts r3, r2, #255
+ do_it gt, tt
+ COND(rsb,s,gt) r3, r2, #255
orrgt r0, r0, r2, lsl #23
RETc(gt)
@@ -502,18 +556,20 @@ LSYM(Lml_u):
@ Check if denormalized result is possible, otherwise return signed 0.
cmn r2, #(24 + 1)
+ do_it le, t
bicle r0, r0, #0x7fffffff
RETc(le)
@ Shift value right, round, etc.
rsb r2, r2, #0
movs r1, r0, lsl #1
- mov r1, r1, lsr r2
+ shift1 lsr, r1, r1, r2
rsb r2, r2, #32
- mov ip, r0, lsl r2
+ shift1 lsl, ip, r0, r2
movs r0, r1, rrx
adc r0, r0, #0
orrs r3, r3, ip, lsl #1
+ do_it eq
biceq r0, r0, ip, lsr #31
RET
@@ -522,14 +578,16 @@ LSYM(Lml_u):
LSYM(Lml_d):
teq r2, #0
and ip, r0, #0x80000000
-1: moveq r0, r0, lsl #1
+1: do_it eq, tt
+ moveq r0, r0, lsl #1
tsteq r0, #0x00800000
subeq r2, r2, #1
beq 1b
orr r0, r0, ip
teq r3, #0
and ip, r1, #0x80000000
-2: moveq r1, r1, lsl #1
+2: do_it eq, tt
+ moveq r1, r1, lsl #1
tsteq r1, #0x00800000
subeq r3, r3, #1
beq 2b
@@ -540,12 +598,14 @@ LSYM(Lml_s):
@ Isolate the INF and NAN cases away
and r3, ip, r1, lsr #23
teq r2, ip
+ do_it ne
teqne r3, ip
beq 1f
@ Here, one or more arguments are either denormalized or zero.
bics ip, r0, #0x80000000
- bicnes ip, r1, #0x80000000
+ do_it ne
+ COND(bic,s,ne) ip, r1, #0x80000000
bne LSYM(Lml_d)
@ Result is 0, but determine sign anyway.
@@ -556,6 +616,7 @@ LSYM(Lml_z):
1: @ One or both args are INF or NAN.
teq r0, #0x0
+ do_it ne, ett
teqne r0, #0x80000000
moveq r0, r1
teqne r1, #0x0
@@ -568,6 +629,7 @@ LSYM(Lml_z):
1: teq r3, ip
bne LSYM(Lml_i)
movs r3, r1, lsl #9
+ do_it ne
movne r0, r1
bne LSYM(Lml_n) @ <anything> * NAN -> NAN
@@ -597,7 +659,8 @@ ARM_FUNC_ALIAS aeabi_fdiv divsf3
@ Mask out exponents, trap any zero/denormal/INF/NAN.
mov ip, #0xff
ands r2, ip, r0, lsr #23
- andnes r3, ip, r1, lsr #23
+ do_it ne, tt
+ COND(and,s,ne) r3, ip, r1, lsr #23
teqne r2, ip
teqne r3, ip
beq LSYM(Ldv_s)
@@ -624,25 +687,31 @@ LSYM(Ldv_x):
@ Ensure result will land to known bit position.
@ Apply exponent bias accordingly.
cmp r3, r1
+ do_it cc
movcc r3, r3, lsl #1
adc r2, r2, #(127 - 2)
@ The actual division loop.
mov ip, #0x00800000
1: cmp r3, r1
+ do_it cs, t
subcs r3, r3, r1
orrcs r0, r0, ip
cmp r3, r1, lsr #1
+ do_it cs, t
subcs r3, r3, r1, lsr #1
orrcs r0, r0, ip, lsr #1
cmp r3, r1, lsr #2
+ do_it cs, t
subcs r3, r3, r1, lsr #2
orrcs r0, r0, ip, lsr #2
cmp r3, r1, lsr #3
+ do_it cs, t
subcs r3, r3, r1, lsr #3
orrcs r0, r0, ip, lsr #3
movs r3, r3, lsl #4
- movnes ip, ip, lsr #4
+ do_it ne
+ COND(mov,s,ne) ip, ip, lsr #4
bne 1b
@ Check exponent for under/overflow.
@@ -652,6 +721,7 @@ LSYM(Ldv_x):
@ Round the result, merge final exponent.
cmp r3, r1
adc r0, r0, r2, lsl #23
+ do_it eq
biceq r0, r0, #1
RET
@@ -660,7 +730,8 @@ LSYM(Ldv_1):
and ip, ip, #0x80000000
orr r0, ip, r0, lsr #9
adds r2, r2, #127
- rsbgts r3, r2, #255
+ do_it gt, tt
+ COND(rsb,s,gt) r3, r2, #255
orrgt r0, r0, r2, lsl #23
RETc(gt)
@@ -674,14 +745,16 @@ LSYM(Ldv_1):
LSYM(Ldv_d):
teq r2, #0
and ip, r0, #0x80000000
-1: moveq r0, r0, lsl #1
+1: do_it eq, tt
+ moveq r0, r0, lsl #1
tsteq r0, #0x00800000
subeq r2, r2, #1
beq 1b
orr r0, r0, ip
teq r3, #0
and ip, r1, #0x80000000
-2: moveq r1, r1, lsl #1
+2: do_it eq, tt
+ moveq r1, r1, lsl #1
tsteq r1, #0x00800000
subeq r3, r3, #1
beq 2b
@@ -707,7 +780,8 @@ LSYM(Ldv_s):
b LSYM(Lml_n) @ <anything> / NAN -> NAN
2: @ If both are nonzero, we need to normalize and resume above.
bics ip, r0, #0x80000000
- bicnes ip, r1, #0x80000000
+ do_it ne
+ COND(bic,s,ne) ip, r1, #0x80000000
bne LSYM(Ldv_d)
@ One or both arguments are zero.
bics r2, r0, #0x80000000
@@ -759,18 +833,24 @@ ARM_FUNC_ALIAS eqsf2 cmpsf2
mov r2, r0, lsl #1
mov r3, r1, lsl #1
mvns ip, r2, asr #24
- mvnnes ip, r3, asr #24
+ do_it ne
+ COND(mvn,s,ne) ip, r3, asr #24
beq 3f
@ Compare values.
@ Note that 0.0 is equal to -0.0.
2: orrs ip, r2, r3, lsr #1 @ test if both are 0, clear C flag
+ do_it ne
teqne r0, r1 @ if not 0 compare sign
- subpls r0, r2, r3 @ if same sign compare values, set r0
+ do_it pl
+ COND(sub,s,pl) r0, r2, r3 @ if same sign compare values, set r0
@ Result:
+ do_it hi
movhi r0, r1, asr #31
+ do_it lo
mvnlo r0, r1, asr #31
+ do_it ne
orrne r0, r0, #1
RET
@@ -806,14 +886,15 @@ ARM_FUNC_ALIAS aeabi_cfcmple aeabi_cfcmpeq
@ The status-returning routines are required to preserve all
@ registers except ip, lr, and cpsr.
-6: stmfd sp!, {r0, r1, r2, r3, lr}
+6: do_push {r0, r1, r2, r3, lr}
ARM_CALL cmpsf2
@ Set the Z flag correctly, and the C flag unconditionally.
- cmp r0, #0
+ cmp r0, #0
@ Clear the C flag if the return value was -1, indicating
@ that the first operand was smaller than the second.
- cmnmi r0, #0
- RETLDM "r0, r1, r2, r3"
+ do_it mi
+ cmnmi r0, #0
+ RETLDM "r0, r1, r2, r3"
FUNC_END aeabi_cfcmple
FUNC_END aeabi_cfcmpeq
@@ -823,6 +904,7 @@ ARM_FUNC_START aeabi_fcmpeq
str lr, [sp, #-8]!
ARM_CALL aeabi_cfcmple
+ do_it eq, e
moveq r0, #1 @ Equal to.
movne r0, #0 @ Less than, greater than, or unordered.
RETLDM
@@ -833,6 +915,7 @@ ARM_FUNC_START aeabi_fcmplt
str lr, [sp, #-8]!
ARM_CALL aeabi_cfcmple
+ do_it cc, e
movcc r0, #1 @ Less than.
movcs r0, #0 @ Equal to, greater than, or unordered.
RETLDM
@@ -843,6 +926,7 @@ ARM_FUNC_START aeabi_fcmple
str lr, [sp, #-8]!
ARM_CALL aeabi_cfcmple
+ do_it ls, e
movls r0, #1 @ Less than or equal to.
movhi r0, #0 @ Greater than or unordered.
RETLDM
@@ -853,6 +937,7 @@ ARM_FUNC_START aeabi_fcmpge
str lr, [sp, #-8]!
ARM_CALL aeabi_cfrcmple
+ do_it ls, e
movls r0, #1 @ Operand 2 is less than or equal to operand 1.
movhi r0, #0 @ Operand 2 greater than operand 1, or unordered.
RETLDM
@@ -863,6 +948,7 @@ ARM_FUNC_START aeabi_fcmpgt
str lr, [sp, #-8]!
ARM_CALL aeabi_cfrcmple
+ do_it cc, e
movcc r0, #1 @ Operand 2 is less than operand 1.
movcs r0, #0 @ Operand 2 is greater than or equal to operand 1,
@ or they are unordered.
@@ -914,7 +1000,8 @@ ARM_FUNC_ALIAS aeabi_f2iz fixsfsi
mov r3, r0, lsl #8
orr r3, r3, #0x80000000
tst r0, #0x80000000 @ the sign bit
- mov r0, r3, lsr r2
+ shift1 lsr, r0, r3, r2
+ do_it ne
rsbne r0, r0, #0
RET
@@ -926,6 +1013,7 @@ ARM_FUNC_ALIAS aeabi_f2iz fixsfsi
movs r2, r0, lsl #9
bne 4f @ r0 is NAN.
3: ands r0, r0, #0x80000000 @ the sign bit
+ do_it eq
moveq r0, #0x7fffffff @ the maximum signed positive si
RET
@@ -954,7 +1042,7 @@ ARM_FUNC_ALIAS aeabi_f2uiz fixunssfsi
@ scale the value
mov r3, r0, lsl #8
orr r3, r3, #0x80000000
- mov r0, r3, lsr r2
+ shift1 lsr, r0, r3, r2
RET
1: mov r0, #0
diff --git a/gcc/config/arm/iwmmxt.md b/gcc/config/arm/iwmmxt.md
index 9ae55a7..10b915d 100644
--- a/gcc/config/arm/iwmmxt.md
+++ b/gcc/config/arm/iwmmxt.md
@@ -1,5 +1,6 @@
+;; ??? This file needs auditing for thumb2
;; Patterns for the Intel Wireless MMX technology architecture.
-;; Copyright (C) 2003, 2004, 2005 Free Software Foundation, Inc.
+;; Copyright (C) 2003, 2004, 2005, 2007 Free Software Foundation, Inc.
;; Contributed by Red Hat.
;; This file is part of GCC.
diff --git a/gcc/config/arm/lib1funcs.asm b/gcc/config/arm/lib1funcs.asm
index 93c0df8..f0cf5db 100644
--- a/gcc/config/arm/lib1funcs.asm
+++ b/gcc/config/arm/lib1funcs.asm
@@ -1,7 +1,7 @@
@ libgcc routines for ARM cpu.
@ Division routines, written by Richard Earnshaw, (rearnsha@armltd.co.uk)
-/* Copyright 1995, 1996, 1998, 1999, 2000, 2003, 2004, 2005
+/* Copyright 1995, 1996, 1998, 1999, 2000, 2003, 2004, 2005, 2007
Free Software Foundation, Inc.
This file is free software; you can redistribute it and/or modify it
@@ -69,31 +69,30 @@ Boston, MA 02110-1301, USA. */
/* Function end macros. Variants for interworking. */
-@ This selects the minimum architecture level required.
-#define __ARM_ARCH__ 3
-
#if defined(__ARM_ARCH_3M__) || defined(__ARM_ARCH_4__) \
|| defined(__ARM_ARCH_4T__)
/* We use __ARM_ARCH__ set to 4 here, but in reality it's any processor with
long multiply instructions. That includes v3M. */
-# undef __ARM_ARCH__
# define __ARM_ARCH__ 4
#endif
#if defined(__ARM_ARCH_5__) || defined(__ARM_ARCH_5T__) \
|| defined(__ARM_ARCH_5E__) || defined(__ARM_ARCH_5TE__) \
|| defined(__ARM_ARCH_5TEJ__)
-# undef __ARM_ARCH__
# define __ARM_ARCH__ 5
#endif
#if defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) \
|| defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_6Z__) \
- || defined(__ARM_ARCH_6ZK__)
-# undef __ARM_ARCH__
+ || defined(__ARM_ARCH_6ZK__) || defined(__ARM_ARCH_6T2__)
# define __ARM_ARCH__ 6
#endif
+#if defined(__ARM_ARCH_7__) || defined(__ARM_ARCH_7A__) \
+ || defined(__ARM_ARCH_7R__) || defined(__ARM_ARCH_7M__)
+# define __ARM_ARCH__ 7
+#endif
+
#ifndef __ARM_ARCH__
#error Unable to determine architecture.
#endif
@@ -193,7 +192,11 @@ LSYM(Lend_fde):
.ifc "\regs",""
ldr\cond lr, [sp], #8
.else
+# if defined(__thumb2__)
+ pop\cond {\regs, lr}
+# else
ldm\cond\dirn sp!, {\regs, lr}
+# endif
.endif
.ifnc "\unwind", ""
/* Mark LR as restored. */
@@ -201,14 +204,51 @@ LSYM(Lend_fde):
.endif
bx\cond lr
#else
+ /* Caller is responsible for providing IT instruction. */
.ifc "\regs",""
ldr\cond pc, [sp], #8
.else
- ldm\cond\dirn sp!, {\regs, pc}
+# if defined(__thumb2__)
+ pop\cond {\regs, pc}
+# else
+ ldm\cond\dirn sp!, {\regs, pc}
+# endif
.endif
#endif
.endm
+/* The Unified assembly syntax allows the same code to be assembled for both
+ ARM and Thumb-2. However this is only supported by recent gas, so define
+ a set of macros to allow ARM code on older assemblers. */
+#if defined(__thumb2__)
+.macro do_it cond, suffix=""
+ it\suffix \cond
+.endm
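+/* For example, "do_it ne, ttt" emits "itttt ne", predicating the next
+   four instructions; the ARM version below expands to nothing.  */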
+.macro shift1 op, arg0, arg1, arg2
+ \op \arg0, \arg1, \arg2
+.endm
+#define do_push push
+#define do_pop pop
+#define COND(op1, op2, cond) op1 ## op2 ## cond
+/* Perform an arithmetic operation with a variable shift operand. This
+ requires two instructions and a scratch register on Thumb-2. */
+.macro shiftop name, dest, src1, src2, shiftop, shiftreg, tmp
+ \shiftop \tmp, \src2, \shiftreg
+ \name \dest, \src1, \tmp
+.endm
+#else
+.macro do_it cond, suffix=""
+.endm
+.macro shift1 op, arg0, arg1, arg2
+ mov \arg0, \arg1, \op \arg2
+.endm
+#define do_push stmfd sp!,
+#define do_pop ldmfd sp!,
+#define COND(op1, op2, cond) op1 ## cond ## op2
+.macro shiftop name, dest, src1, src2, shiftop, shiftreg, tmp
+ \name \dest, \src1, \src2, \shiftop \shiftreg
+.endm
+#endif
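+/* For example,
+       shiftop orr xl xl xh lsl r2 ip
+   expands to "orr xl, xl, xh, lsl r2" for ARM, but for Thumb-2 (which
+   cannot use a register-specified shift in an ALU operand) to
+   "lsl ip, xh, r2" followed by "orr xl, xl, ip".  */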
.macro ARM_LDIV0 name
str lr, [sp, #-8]!
@@ -260,11 +300,17 @@ SYM (\name):
#ifdef __thumb__
#define THUMB_FUNC .thumb_func
#define THUMB_CODE .force_thumb
+# if defined(__thumb2__)
+#define THUMB_SYNTAX .syntax divided
+# else
+#define THUMB_SYNTAX
+# endif
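+/* ".syntax divided" selects the traditional, separate ARM/Thumb assembly
+   syntax; for Thumb-2 the ARM_FUNC_START macro below switches back to
+   ".syntax unified".  */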
#else
#define THUMB_FUNC
#define THUMB_CODE
+#define THUMB_SYNTAX
#endif
-
+
.macro FUNC_START name
.text
.globl SYM (__\name)
@@ -272,13 +318,27 @@ SYM (\name):
.align 0
THUMB_CODE
THUMB_FUNC
+ THUMB_SYNTAX
SYM (__\name):
.endm
/* Special function that will always be coded in ARM assembly, even if
in Thumb-only compilation. */
-#if defined(__INTERWORKING_STUBS__)
+#if defined(__thumb2__)
+
+/* For Thumb-2 we build everything in thumb mode. */
+.macro ARM_FUNC_START name
+ FUNC_START \name
+ .syntax unified
+.endm
+#define EQUIV .thumb_set
+.macro ARM_CALL name
+ bl __\name
+.endm
+
+#elif defined(__INTERWORKING_STUBS__)
+
.macro ARM_FUNC_START name
FUNC_START \name
bx pc
@@ -294,7 +354,9 @@ _L__\name:
.macro ARM_CALL name
bl _L__\name
.endm
-#else
+
+#else /* !(__INTERWORKING_STUBS__ || __thumb2__) */
+
.macro ARM_FUNC_START name
.text
.globl SYM (__\name)
@@ -307,6 +369,7 @@ SYM (__\name):
.macro ARM_CALL name
bl __\name
.endm
+
#endif
.macro FUNC_ALIAS new old
@@ -1183,6 +1246,10 @@ LSYM(Lover12):
#endif /* L_call_via_rX */
+/* Don't bother with the old interworking routines for Thumb-2. */
+/* ??? Maybe only omit these on v7m. */
+#ifndef __thumb2__
+
#if defined L_interwork_call_via_rX
/* These labels & instructions are used by the Arm/Thumb interworking code,
@@ -1307,6 +1374,7 @@ LSYM(Lchange_\register):
SIZE (_interwork_call_via_lr)
#endif /* L_interwork_call_via_rX */
+#endif /* !__thumb2__ */
#endif /* Arch supports thumb. */
#ifndef __symbian__
diff --git a/gcc/config/arm/libunwind.S b/gcc/config/arm/libunwind.S
index fd66724..ef3f89a 100644
--- a/gcc/config/arm/libunwind.S
+++ b/gcc/config/arm/libunwind.S
@@ -1,5 +1,5 @@
/* Support functions for the unwinder.
- Copyright (C) 2003, 2004, 2005 Free Software Foundation, Inc.
+ Copyright (C) 2003, 2004, 2005, 2007 Free Software Foundation, Inc.
Contributed by Paul Brook
This file is free software; you can redistribute it and/or modify it
@@ -49,7 +49,14 @@ ARM_FUNC_START restore_core_regs
this. */
add r1, r0, #52
ldmia r1, {r3, r4, r5} /* {sp, lr, pc}. */
-#ifdef __INTERWORKING__
+#if defined(__thumb2__)
+ /* Thumb-2 doesn't allow sp in a load-multiple instruction, so push
+ the target address onto the target stack. This is safe as
+ we're always returning to somewhere further up the call stack. */
+ mov ip, r3
+ mov lr, r4
+ str r5, [ip, #-4]!
+#elif defined(__INTERWORKING__)
/* Restore pc into ip. */
mov r2, r5
stmfd sp!, {r2, r3, r4}
@@ -58,8 +65,12 @@ ARM_FUNC_START restore_core_regs
#endif
/* Don't bother restoring ip. */
ldmia r0, {r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, sl, fp}
+#if defined(__thumb2__)
+ /* Pop the return address off the target stack. */
+ mov sp, ip
+ pop {pc}
+#elif defined(__INTERWORKING__)
/* Pop the three registers we pushed earlier. */
-#ifdef __INTERWORKING__
ldmfd sp, {ip, sp, lr}
bx ip
#else
@@ -114,7 +125,13 @@ ARM_FUNC_START gnu_Unwind_Save_VFP_D_16_to_31
ARM_FUNC_START \name
/* Create a phase2_vrs structure. */
/* Split reg push in two to ensure the correct value for sp. */
+#if defined(__thumb2__)
+ mov ip, sp
+ push {lr} /* PC is ignored. */
+ push {ip, lr} /* Push original SP and LR. */
+#else
stmfd sp!, {sp, lr, pc}
+#endif
stmfd sp!, {r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, sl, fp, ip}
/* Demand-save flags, plus an extra word for alignment. */
@@ -123,7 +140,7 @@ ARM_FUNC_START gnu_Unwind_Save_VFP_D_16_to_31
/* Point r1 at the block. Pass r[0..nargs) unchanged. */
add r\nargs, sp, #4
-#if defined(__thumb__)
+#if defined(__thumb__) && !defined(__thumb2__)
/* Switch back to thumb mode to avoid interworking hassle. */
adr ip, .L1_\name
orr ip, ip, #1
diff --git a/gcc/config/arm/predicates.md b/gcc/config/arm/predicates.md
index 4a08204..06d8371 100644
--- a/gcc/config/arm/predicates.md
+++ b/gcc/config/arm/predicates.md
@@ -1,5 +1,5 @@
;; Predicate definitions for ARM and Thumb
-;; Copyright (C) 2004 Free Software Foundation, Inc.
+;; Copyright (C) 2004, 2007 Free Software Foundation, Inc.
;; Contributed by ARM Ltd.
;; This file is part of GCC.
@@ -39,6 +39,16 @@
return REGNO (op) < FIRST_PSEUDO_REGISTER;
})
+;; A low register.
+(define_predicate "low_register_operand"
+ (and (match_code "reg")
+ (match_test "REGNO (op) <= LAST_LO_REGNUM")))
+
+;; A low register or const_int.
+(define_predicate "low_reg_or_int_operand"
+ (ior (match_code "const_int")
+ (match_operand 0 "low_register_operand")))
+
;; Any core register, or any pseudo. */
(define_predicate "arm_general_register_operand"
(match_code "reg,subreg")
@@ -174,6 +184,10 @@
(match_code "ashift,ashiftrt,lshiftrt,rotatert"))
(match_test "mode == GET_MODE (op)")))
+;; True for operators that have 16-bit thumb variants. */
+(define_special_predicate "thumb_16bit_operator"
+ (match_code "plus,minus,and,ior,xor"))
+
;; True for EQ & NE
(define_special_predicate "equality_operator"
(match_code "eq,ne"))
@@ -399,13 +413,13 @@
;; Thumb predicates
;;
-(define_predicate "thumb_cmp_operand"
+(define_predicate "thumb1_cmp_operand"
(ior (and (match_code "reg,subreg")
(match_operand 0 "s_register_operand"))
(and (match_code "const_int")
(match_test "((unsigned HOST_WIDE_INT) INTVAL (op)) < 256"))))
-(define_predicate "thumb_cmpneg_operand"
+(define_predicate "thumb1_cmpneg_operand"
(and (match_code "const_int")
(match_test "INTVAL (op) < 0 && INTVAL (op) > -256")))
diff --git a/gcc/config/arm/t-arm b/gcc/config/arm/t-arm
index 9fcd187..1727407 100644
--- a/gcc/config/arm/t-arm
+++ b/gcc/config/arm/t-arm
@@ -10,7 +10,8 @@ MD_INCLUDES= $(srcdir)/config/arm/arm-tune.md \
$(srcdir)/config/arm/cirrus.md \
$(srcdir)/config/arm/fpa.md \
$(srcdir)/config/arm/iwmmxt.md \
- $(srcdir)/config/arm/vfp.md
+ $(srcdir)/config/arm/vfp.md \
+ $(srcdir)/config/arm/thumb2.md
s-config s-conditions s-flags s-codes s-constants s-emit s-recog s-preds \
s-opinit s-extract s-peep s-attr s-attrtab s-output: $(MD_INCLUDES)
diff --git a/gcc/config/arm/t-arm-elf b/gcc/config/arm/t-arm-elf
index bee4051..b423bbb 100644
--- a/gcc/config/arm/t-arm-elf
+++ b/gcc/config/arm/t-arm-elf
@@ -11,6 +11,16 @@ MULTILIB_DIRNAMES = arm thumb
MULTILIB_EXCEPTIONS =
MULTILIB_MATCHES =
+#MULTILIB_OPTIONS += march=armv7
+#MULTILIB_DIRNAMES += thumb2
+#MULTILIB_EXCEPTIONS += march=armv7* marm/*march=armv7*
+#MULTILIB_MATCHES += march?armv7=march?armv7-a
+#MULTILIB_MATCHES += march?armv7=march?armv7-r
+#MULTILIB_MATCHES += march?armv7=march?armv7-m
+#MULTILIB_MATCHES += march?armv7=mcpu?cortex-a8
+#MULTILIB_MATCHES += march?armv7=mcpu?cortex-r4
+#MULTILIB_MATCHES += march?armv7=mcpu?cortex-m3
+
# MULTILIB_OPTIONS += mcpu=ep9312
# MULTILIB_DIRNAMES += ep9312
# MULTILIB_EXCEPTIONS += *mthumb/*mcpu=ep9312*
diff --git a/gcc/config/arm/thumb2.md b/gcc/config/arm/thumb2.md
new file mode 100644
index 0000000..7406b74
--- /dev/null
+++ b/gcc/config/arm/thumb2.md
@@ -0,0 +1,1188 @@
+;; ARM Thumb-2 Machine Description
+;; Copyright (C) 2007 Free Software Foundation, Inc.
+;; Written by CodeSourcery, LLC.
+;;
+;; This file is part of GCC.
+;;
+;; GCC is free software; you can redistribute it and/or modify it
+;; under the terms of the GNU General Public License as published by
+;; the Free Software Foundation; either version 2, or (at your option)
+;; any later version.
+;;
+;; GCC is distributed in the hope that it will be useful, but
+;; WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+;; General Public License for more details.
+;;
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING. If not, write to the Free
+;; Software Foundation, 59 Temple Place - Suite 330, Boston, MA
+;; 02111-1307, USA. */
+
+;; Note: Thumb-2 is the variant of the Thumb architecture that adds
+;; 32-bit encodings of [almost all of] the Arm instruction set.
+;; Some old documents refer to the relatively minor interworking
+;; changes made in armv5t as "thumb2". These are considered part
+;; of the 16-bit Thumb-1 instruction set.
+
+(define_insn "*thumb2_incscc"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r")
+ (plus:SI (match_operator:SI 2 "arm_comparison_operator"
+ [(match_operand:CC 3 "cc_register" "") (const_int 0)])
+ (match_operand:SI 1 "s_register_operand" "0,?r")))]
+ "TARGET_THUMB2"
+ "@
+ it\\t%d2\;add%d2\\t%0, %1, #1
+ ite\\t%D2\;mov%D2\\t%0, %1\;add%d2\\t%0, %1, #1"
+ [(set_attr "conds" "use")
+ (set_attr "length" "6,10")]
+)
+
+(define_insn "*thumb2_decscc"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r")
+ (minus:SI (match_operand:SI 1 "s_register_operand" "0,?r")
+ (match_operator:SI 2 "arm_comparison_operator"
+ [(match_operand 3 "cc_register" "") (const_int 0)])))]
+ "TARGET_THUMB2"
+ "@
+ it\\t%d2\;sub%d2\\t%0, %1, #1
+ ite\\t%D2\;mov%D2\\t%0, %1\;sub%d2\\t%0, %1, #1"
+ [(set_attr "conds" "use")
+ (set_attr "length" "6,10")]
+)
+
+;; Thumb-2 only allows shift by constant on data processing instructions
+(define_insn "*thumb_andsi_not_shiftsi_si"
+ [(set (match_operand:SI 0 "s_register_operand" "=r")
+ (and:SI (not:SI (match_operator:SI 4 "shift_operator"
+ [(match_operand:SI 2 "s_register_operand" "r")
+ (match_operand:SI 3 "const_int_operand" "M")]))
+ (match_operand:SI 1 "s_register_operand" "r")))]
+ "TARGET_ARM"
+ "bic%?\\t%0, %1, %2%S4"
+ [(set_attr "predicable" "yes")
+ (set_attr "shift" "2")
+ (set_attr "type" "alu_shift")]
+)
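A hedged C sketch of the source shape this pattern picks up (names are illustrative): an AND with the complement of a constant-shifted value folds into a single bic with a shifter operand.

    #include <stdint.h>

    uint32_t clear_shifted(uint32_t r1, uint32_t r2)
    {
      return r1 & ~(r2 << 4);   /* bic r0, r1, r2, lsl #4 */
    }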
+
+(define_insn "*thumb2_smaxsi3"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r,r")
+ (smax:SI (match_operand:SI 1 "s_register_operand" "0,r,?r")
+ (match_operand:SI 2 "arm_rhs_operand" "rI,0,rI")))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2"
+ "@
+ cmp\\t%1, %2\;it\\tlt\;movlt\\t%0, %2
+ cmp\\t%1, %2\;it\\tge\;movge\\t%0, %1
+ cmp\\t%1, %2\;ite\\tge\;movge\\t%0, %1\;movlt\\t%0, %2"
+ [(set_attr "conds" "clob")
+ (set_attr "length" "10,10,14")]
+)
+
+(define_insn "*thumb2_sminsi3"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r,r")
+ (smin:SI (match_operand:SI 1 "s_register_operand" "0,r,?r")
+ (match_operand:SI 2 "arm_rhs_operand" "rI,0,rI")))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2"
+ "@
+ cmp\\t%1, %2\;it\\tge\;movge\\t%0, %2
+ cmp\\t%1, %2\;it\\tlt\;movlt\\t%0, %1
+ cmp\\t%1, %2\;ite\\tlt\;movlt\\t%0, %1\;movge\\t%0, %2"
+ [(set_attr "conds" "clob")
+ (set_attr "length" "10,10,14")]
+)
+
+(define_insn "*thumb32_umaxsi3"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r,r")
+ (umax:SI (match_operand:SI 1 "s_register_operand" "0,r,?r")
+ (match_operand:SI 2 "arm_rhs_operand" "rI,0,rI")))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2"
+ "@
+ cmp\\t%1, %2\;it\\tcc\;movcc\\t%0, %2
+ cmp\\t%1, %2\;it\\tcs\;movcs\\t%0, %1
+ cmp\\t%1, %2\;ite\\tcs\;movcs\\t%0, %1\;movcc\\t%0, %2"
+ [(set_attr "conds" "clob")
+ (set_attr "length" "10,10,14")]
+)
+
+(define_insn "*thumb2_uminsi3"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r,r")
+ (umin:SI (match_operand:SI 1 "s_register_operand" "0,r,?r")
+ (match_operand:SI 2 "arm_rhs_operand" "rI,0,rI")))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2"
+ "@
+ cmp\\t%1, %2\;it\\tcs\;movcs\\t%0, %2
+ cmp\\t%1, %2\;it\\tcc\;movcc\\t%0, %1
+ cmp\\t%1, %2\;ite\\tcc\;movcc\\t%0, %1\;movcs\\t%0, %2"
+ [(set_attr "conds" "clob")
+ (set_attr "length" "10,10,14")]
+)
+
+(define_insn "*thumb2_notsi_shiftsi"
+ [(set (match_operand:SI 0 "s_register_operand" "=r")
+ (not:SI (match_operator:SI 3 "shift_operator"
+ [(match_operand:SI 1 "s_register_operand" "r")
+ (match_operand:SI 2 "const_int_operand" "M")])))]
+ "TARGET_THUMB2"
+ "mvn%?\\t%0, %1%S3"
+ [(set_attr "predicable" "yes")
+ (set_attr "shift" "1")
+ (set_attr "type" "alu_shift")]
+)
+
+(define_insn "*thumb2_notsi_shiftsi_compare0"
+ [(set (reg:CC_NOOV CC_REGNUM)
+ (compare:CC_NOOV (not:SI (match_operator:SI 3 "shift_operator"
+ [(match_operand:SI 1 "s_register_operand" "r")
+ (match_operand:SI 2 "const_int_operand" "M")]))
+ (const_int 0)))
+ (set (match_operand:SI 0 "s_register_operand" "=r")
+ (not:SI (match_op_dup 3 [(match_dup 1) (match_dup 2)])))]
+ "TARGET_THUMB2"
+ "mvn%.\\t%0, %1%S3"
+ [(set_attr "conds" "set")
+ (set_attr "shift" "1")
+ (set_attr "type" "alu_shift")]
+)
+
+(define_insn "*thumb2_not_shiftsi_compare0_scratch"
+ [(set (reg:CC_NOOV CC_REGNUM)
+ (compare:CC_NOOV (not:SI (match_operator:SI 3 "shift_operator"
+ [(match_operand:SI 1 "s_register_operand" "r")
+ (match_operand:SI 2 "const_int_operand" "M")]))
+ (const_int 0)))
+ (clobber (match_scratch:SI 0 "=r"))]
+ "TARGET_THUMB2"
+ "mvn%.\\t%0, %1%S3"
+ [(set_attr "conds" "set")
+ (set_attr "shift" "1")
+ (set_attr "type" "alu_shift")]
+)
+
+;; Thumb-2 does not have rsc, so use a clever trick with shifter operands.
+(define_insn "*thumb2_negdi2"
+ [(set (match_operand:DI 0 "s_register_operand" "=&r,r")
+ (neg:DI (match_operand:DI 1 "s_register_operand" "?r,0")))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2"
+ "negs\\t%Q0, %Q1\;sbc\\t%R0, %R1, %R1, lsl #1"
+ [(set_attr "conds" "clob")
+ (set_attr "length" "8")]
+)
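The trick can be checked in plain C. A minimal sketch, assuming 32-bit registers and the ARM borrow convention (C set when no borrow): negs produces the low word and C = (lo == 0); sbc rd, rn, rn, lsl #1 computes rn - 2*rn - !C = -rn - !C, exactly the high word of the 64-bit negation.

    #include <assert.h>
    #include <stdint.h>

    static uint64_t neg64(uint32_t lo, uint32_t hi)
    {
      uint32_t nlo = 0u - lo;                       /* negs */
      uint32_t carry = (lo == 0);                   /* C flag after negs */
      uint32_t nhi = hi - (hi << 1) - (1u - carry); /* sbc rd, rn, rn, lsl #1 */
      return ((uint64_t)nhi << 32) | nlo;
    }

    int main(void)
    {
      int64_t x = -12345678901LL;
      assert(neg64((uint32_t)x, (uint32_t)(x >> 32)) == (uint64_t)-x);
      return 0;
    }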
+
+(define_insn "*thumb2_abssi2"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,&r")
+ (abs:SI (match_operand:SI 1 "s_register_operand" "0,r")))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2"
+ "@
+ cmp\\t%0, #0\;it\\tlt\;rsblt\\t%0, %0, #0
+ eor%?\\t%0, %1, %1, asr #31\;sub%?\\t%0, %0, %1, asr #31"
+ [(set_attr "conds" "clob,*")
+ (set_attr "shift" "1")
+ ;; predicable can't be set based on the variant, so left as no
+ (set_attr "length" "10,8")]
+)
+
+(define_insn "*thumb2_neg_abssi2"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,&r")
+ (neg:SI (abs:SI (match_operand:SI 1 "s_register_operand" "0,r"))))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2"
+ "@
+ cmp\\t%0, #0\;it\\tgt\;rsbgt\\t%0, %0, #0
+ eor%?\\t%0, %1, %1, asr #31\;rsb%?\\t%0, %0, %1, asr #31"
+ [(set_attr "conds" "clob,*")
+ (set_attr "shift" "1")
+ ;; predicable can't be set based on the variant, so left as no
+ (set_attr "length" "10,8")]
+)
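The second alternative of both patterns is the classic branchless absolute value. A short C sketch (assumes arithmetic right shift of negative ints, as on ARM):

    #include <assert.h>

    static int abs_eor(int x)
    {
      int m = x >> 31;       /* asr #31: 0 or -1 */
      return (x ^ m) - m;    /* eor then sub, as in the template */
    }

    int main(void)
    {
      assert(abs_eor(-7) == 7 && abs_eor(7) == 7 && abs_eor(0) == 0);
      return 0;
    }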
+
+(define_insn "*thumb2_movdi"
+ [(set (match_operand:DI 0 "nonimmediate_di_operand" "=r, r, r, r, m")
+ (match_operand:DI 1 "di_operand" "rDa,Db,Dc,mi,r"))]
+ "TARGET_THUMB2
+ && !(TARGET_HARD_FLOAT && (TARGET_MAVERICK || TARGET_VFP))
+ && !TARGET_IWMMXT"
+ "*
+ switch (which_alternative)
+ {
+ case 0:
+ case 1:
+ case 2:
+ return \"#\";
+ default:
+ return output_move_double (operands);
+ }
+ "
+ [(set_attr "length" "8,12,16,8,8")
+ (set_attr "type" "*,*,*,load2,store2")
+ (set_attr "pool_range" "*,*,*,4096,*")
+ (set_attr "neg_pool_range" "*,*,*,0,*")]
+)
+
+(define_insn "*thumb2_movsi_insn"
+ [(set (match_operand:SI 0 "nonimmediate_operand" "=r,r,r,r, m")
+ (match_operand:SI 1 "general_operand" "rI,K,N,mi,r"))]
+ "TARGET_THUMB2 && ! TARGET_IWMMXT
+ && !(TARGET_HARD_FLOAT && TARGET_VFP)
+ && ( register_operand (operands[0], SImode)
+ || register_operand (operands[1], SImode))"
+ "@
+ mov%?\\t%0, %1
+ mvn%?\\t%0, #%B1
+ movw%?\\t%0, %1
+ ldr%?\\t%0, %1
+ str%?\\t%1, %0"
+ [(set_attr "type" "*,*,*,load1,store1")
+ (set_attr "predicable" "yes")
+ (set_attr "pool_range" "*,*,*,4096,*")
+ (set_attr "neg_pool_range" "*,*,*,0,*")]
+)
+
+;; ??? We can probably do better with thumb2
+(define_insn "pic_load_addr_thumb2"
+ [(set (match_operand:SI 0 "s_register_operand" "=r")
+ (unspec:SI [(match_operand:SI 1 "" "mX")] UNSPEC_PIC_SYM))]
+ "TARGET_THUMB2 && flag_pic"
+ "ldr%?\\t%0, %1"
+ [(set_attr "type" "load1")
+ (set_attr "pool_range" "4096")
+ (set_attr "neg_pool_range" "0")]
+)
+
+;; Set reg to the address of this instruction plus four. The low two
+;; bits of the PC are always read as zero, so ensure the instruction is
+;; word aligned.
+(define_insn "pic_load_dot_plus_four"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (unspec:SI [(const (plus:SI (pc) (const_int 4)))]
+ UNSPEC_PIC_BASE))
+ (use (match_operand 1 "" ""))]
+ "TARGET_THUMB2"
+ "*
+ assemble_align(BITS_PER_WORD);
+ (*targetm.asm_out.internal_label) (asm_out_file, \"LPIC\",
+ INTVAL (operands[1]));
+ /* Use adr because some buggy versions of gas assemble add r8, pc, #0
+ as add.w r8, pc, #0 rather than addw r8, pc, #0. */
+ asm_fprintf (asm_out_file, \"\\tadr\\t%r, %LLPIC%d + 4\\n\",
+ REGNO(operands[0]), (int)INTVAL (operands[1]));
+ return \"\";
+ "
+ [(set_attr "length" "6")]
+)
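The Thumb PC semantics behind the comment, as a hedged C sketch: a Thumb-2 instruction reading the PC sees its own address plus 4 with the low two bits forced to zero, hence both the alignment call and the "+ 4" on the label.

    #include <stdint.h>

    static uint32_t thumb_pc_read(uint32_t insn_addr)
    {
      return (insn_addr + 4) & ~UINT32_C(3);
    }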
+
+;; Thumb-2 always has load/store halfword instructions, so we can avoid a lot
+;; of the messiness associated with the ARM patterns.
+(define_insn "*thumb2_movhi_insn"
+ [(set (match_operand:HI 0 "nonimmediate_operand" "=r,r,m,r")
+ (match_operand:HI 1 "general_operand" "rI,n,r,m"))]
+ "TARGET_THUMB2"
+ "@
+ mov%?\\t%0, %1\\t%@ movhi
+ movw%?\\t%0, %L1\\t%@ movhi
+ str%(h%)\\t%1, %0\\t%@ movhi
+ ldr%(h%)\\t%0, %1\\t%@ movhi"
+ [(set_attr "type" "*,*,store1,load1")
+ (set_attr "predicable" "yes")
+ (set_attr "pool_range" "*,*,*,4096")
+ (set_attr "neg_pool_range" "*,*,*,250")]
+)
+
+(define_insn "*thumb2_movsf_soft_insn"
+ [(set (match_operand:SF 0 "nonimmediate_operand" "=r,r,m")
+ (match_operand:SF 1 "general_operand" "r,mE,r"))]
+ "TARGET_THUMB2
+ && TARGET_SOFT_FLOAT
+ && (GET_CODE (operands[0]) != MEM
+ || register_operand (operands[1], SFmode))"
+ "@
+ mov%?\\t%0, %1
+ ldr%?\\t%0, %1\\t%@ float
+ str%?\\t%1, %0\\t%@ float"
+ [(set_attr "predicable" "yes")
+ (set_attr "type" "*,load1,store1")
+ (set_attr "pool_range" "*,4096,*")
+ (set_attr "neg_pool_range" "*,0,*")]
+)
+
+(define_insn "*thumb2_movdf_soft_insn"
+ [(set (match_operand:DF 0 "nonimmediate_soft_df_operand" "=r,r,r,r,m")
+ (match_operand:DF 1 "soft_df_operand" "rDa,Db,Dc,mF,r"))]
+ "TARGET_THUMB2 && TARGET_SOFT_FLOAT
+ && ( register_operand (operands[0], DFmode)
+ || register_operand (operands[1], DFmode))"
+ "*
+ switch (which_alternative)
+ {
+ case 0:
+ case 1:
+ case 2:
+ return \"#\";
+ default:
+ return output_move_double (operands);
+ }
+ "
+ [(set_attr "length" "8,12,16,8,8")
+ (set_attr "type" "*,*,*,load2,store2")
+ (set_attr "pool_range" "1020")
+ (set_attr "neg_pool_range" "0")]
+)
+
+(define_insn "*thumb2_cmpsi_shiftsi"
+ [(set (reg:CC CC_REGNUM)
+ (compare:CC (match_operand:SI 0 "s_register_operand" "r")
+ (match_operator:SI 3 "shift_operator"
+ [(match_operand:SI 1 "s_register_operand" "r")
+ (match_operand:SI 2 "const_int_operand" "M")])))]
+ "TARGET_THUMB2"
+ "cmp%?\\t%0, %1%S3"
+ [(set_attr "conds" "set")
+ (set_attr "shift" "1")
+ (set_attr "type" "alu_shift")]
+)
+
+(define_insn "*thumb2_cmpsi_shiftsi_swp"
+ [(set (reg:CC_SWP CC_REGNUM)
+ (compare:CC_SWP (match_operator:SI 3 "shift_operator"
+ [(match_operand:SI 1 "s_register_operand" "r")
+ (match_operand:SI 2 "const_int_operand" "M")])
+ (match_operand:SI 0 "s_register_operand" "r")))]
+ "TARGET_THUMB2"
+ "cmp%?\\t%0, %1%S3"
+ [(set_attr "conds" "set")
+ (set_attr "shift" "1")
+ (set_attr "type" "alu_shift")]
+)
+
+(define_insn "*thumb2_cmpsi_neg_shiftsi"
+ [(set (reg:CC CC_REGNUM)
+ (compare:CC (match_operand:SI 0 "s_register_operand" "r")
+ (neg:SI (match_operator:SI 3 "shift_operator"
+ [(match_operand:SI 1 "s_register_operand" "r")
+ (match_operand:SI 2 "const_int_operand" "M")]))))]
+ "TARGET_THUMB2"
+ "cmn%?\\t%0, %1%S3"
+ [(set_attr "conds" "set")
+ (set_attr "shift" "1")
+ (set_attr "type" "alu_shift")]
+)
+
+(define_insn "*thumb2_mov_scc"
+ [(set (match_operand:SI 0 "s_register_operand" "=r")
+ (match_operator:SI 1 "arm_comparison_operator"
+ [(match_operand 2 "cc_register" "") (const_int 0)]))]
+ "TARGET_THUMB2"
+ "ite\\t%D1\;mov%D1\\t%0, #0\;mov%d1\\t%0, #1"
+ [(set_attr "conds" "use")
+ (set_attr "length" "10")]
+)
+
+(define_insn "*thumb2_mov_negscc"
+ [(set (match_operand:SI 0 "s_register_operand" "=r")
+ (neg:SI (match_operator:SI 1 "arm_comparison_operator"
+ [(match_operand 2 "cc_register" "") (const_int 0)])))]
+ "TARGET_THUMB2"
+ "ite\\t%D1\;mov%D1\\t%0, #0\;mvn%d1\\t%0, #0"
+ [(set_attr "conds" "use")
+ (set_attr "length" "10")]
+)
+
+(define_insn "*thumb2_mov_notscc"
+ [(set (match_operand:SI 0 "s_register_operand" "=r")
+ (not:SI (match_operator:SI 1 "arm_comparison_operator"
+ [(match_operand 2 "cc_register" "") (const_int 0)])))]
+ "TARGET_THUMB2"
+ "ite\\t%D1\;mov%D1\\t%0, #0\;mvn%d1\\t%0, #1"
+ [(set_attr "conds" "use")
+ (set_attr "length" "10")]
+)
+
+(define_insn "*thumb2_movsicc_insn"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r,r,r,r,r,r,r")
+ (if_then_else:SI
+ (match_operator 3 "arm_comparison_operator"
+ [(match_operand 4 "cc_register" "") (const_int 0)])
+ (match_operand:SI 1 "arm_not_operand" "0,0,rI,K,rI,rI,K,K")
+ (match_operand:SI 2 "arm_not_operand" "rI,K,0,0,rI,K,rI,K")))]
+ "TARGET_THUMB2"
+ "@
+ it\\t%D3\;mov%D3\\t%0, %2
+ it\\t%D3\;mvn%D3\\t%0, #%B2
+ it\\t%d3\;mov%d3\\t%0, %1
+ it\\t%d3\;mvn%d3\\t%0, #%B1
+ ite\\t%d3\;mov%d3\\t%0, %1\;mov%D3\\t%0, %2
+ ite\\t%d3\;mov%d3\\t%0, %1\;mvn%D3\\t%0, #%B2
+ ite\\t%d3\;mvn%d3\\t%0, #%B1\;mov%D3\\t%0, %2
+ ite\\t%d3\;mvn%d3\\t%0, #%B1\;mvn%D3\\t%0, #%B2"
+ [(set_attr "length" "6,6,6,6,10,10,10,10")
+ (set_attr "conds" "use")]
+)
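These templates are Thumb-2's conditional select: it predicates one following instruction and ite a then/else pair, which is why the lengths are 6 bytes (2-byte it plus one 4-byte mov) or 10 bytes (2 + 4 + 4). A hedged C shape that if-conversion can bring here:

    /* Illustration only: a flag-based select that may become
       "ite %d3; mov%d3 r0, a; mov%D3 r0, b".  */
    int cond_select(int c, int a, int b)
    {
      return c ? a : b;
    }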
+
+(define_insn "*thumb2_movsfcc_soft_insn"
+ [(set (match_operand:SF 0 "s_register_operand" "=r,r")
+ (if_then_else:SF (match_operator 3 "arm_comparison_operator"
+ [(match_operand 4 "cc_register" "") (const_int 0)])
+ (match_operand:SF 1 "s_register_operand" "0,r")
+ (match_operand:SF 2 "s_register_operand" "r,0")))]
+ "TARGET_THUMB2 && TARGET_SOFT_FLOAT"
+ "@
+ it\\t%D3\;mov%D3\\t%0, %2
+ it\\t%d3\;mov%d3\\t%0, %1"
+ [(set_attr "length" "6,6")
+ (set_attr "conds" "use")]
+)
+
+(define_insn "*call_reg_thumb2"
+ [(call (mem:SI (match_operand:SI 0 "s_register_operand" "r"))
+ (match_operand 1 "" ""))
+ (use (match_operand 2 "" ""))
+ (clobber (reg:SI LR_REGNUM))]
+ "TARGET_THUMB2"
+ "blx%?\\t%0"
+ [(set_attr "type" "call")]
+)
+
+(define_insn "*call_value_reg_thumb2"
+ [(set (match_operand 0 "" "")
+ (call (mem:SI (match_operand:SI 1 "register_operand" "l*r"))
+ (match_operand 2 "" "")))
+ (use (match_operand 3 "" ""))
+ (clobber (reg:SI LR_REGNUM))]
+ "TARGET_THUMB2"
+ "blx\\t%1"
+ [(set_attr "type" "call")]
+)
+
+(define_insn "*thumb2_indirect_jump"
+ [(set (pc)
+ (match_operand:SI 0 "register_operand" "l*r"))]
+ "TARGET_THUMB2"
+ "bx\\t%0"
+ [(set_attr "conds" "clob")]
+)
+;; Don't define thumb2_load_indirect_jump because we can't guarantee label
+;; addresses will have the thumb bit set correctly.
+
+
+;; Patterns to allow combination of arithmetic, cond code and shifts
+
+(define_insn "*thumb2_arith_shiftsi"
+ [(set (match_operand:SI 0 "s_register_operand" "=r")
+ (match_operator:SI 1 "shiftable_operator"
+ [(match_operator:SI 3 "shift_operator"
+ [(match_operand:SI 4 "s_register_operand" "r")
+ (match_operand:SI 5 "const_int_operand" "M")])
+ (match_operand:SI 2 "s_register_operand" "r")]))]
+ "TARGET_THUMB2"
+ "%i1%?\\t%0, %2, %4%S3"
+ [(set_attr "predicable" "yes")
+ (set_attr "shift" "4")
+ (set_attr "type" "alu_shift")]
+)
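A hedged C sketch of what this pattern folds (names are illustrative): an arithmetic operation whose second input is a constant-shifted register becomes one instruction with a shifter operand.

    #include <stdint.h>

    uint32_t scaled_add(uint32_t base, uint32_t idx)
    {
      return base + (idx << 2);   /* add r0, base, idx, lsl #2 */
    }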
+
+;; ??? What does this splitter do? Copied from the ARM version
+(define_split
+ [(set (match_operand:SI 0 "s_register_operand" "")
+ (match_operator:SI 1 "shiftable_operator"
+ [(match_operator:SI 2 "shiftable_operator"
+ [(match_operator:SI 3 "shift_operator"
+ [(match_operand:SI 4 "s_register_operand" "")
+ (match_operand:SI 5 "const_int_operand" "")])
+ (match_operand:SI 6 "s_register_operand" "")])
+ (match_operand:SI 7 "arm_rhs_operand" "")]))
+ (clobber (match_operand:SI 8 "s_register_operand" ""))]
+ "TARGET_32BIT"
+ [(set (match_dup 8)
+ (match_op_dup 2 [(match_op_dup 3 [(match_dup 4) (match_dup 5)])
+ (match_dup 6)]))
+ (set (match_dup 0)
+ (match_op_dup 1 [(match_dup 8) (match_dup 7)]))]
+ "")
+
+(define_insn "*thumb2_arith_shiftsi_compare0"
+ [(set (reg:CC_NOOV CC_REGNUM)
+ (compare:CC_NOOV (match_operator:SI 1 "shiftable_operator"
+ [(match_operator:SI 3 "shift_operator"
+ [(match_operand:SI 4 "s_register_operand" "r")
+ (match_operand:SI 5 "const_int_operand" "M")])
+ (match_operand:SI 2 "s_register_operand" "r")])
+ (const_int 0)))
+ (set (match_operand:SI 0 "s_register_operand" "=r")
+ (match_op_dup 1 [(match_op_dup 3 [(match_dup 4) (match_dup 5)])
+ (match_dup 2)]))]
+ "TARGET_32BIT"
+ "%i1%.\\t%0, %2, %4%S3"
+ [(set_attr "conds" "set")
+ (set_attr "shift" "4")
+ (set_attr "type" "alu_shift")]
+)
+
+(define_insn "*thumb2_arith_shiftsi_compare0_scratch"
+ [(set (reg:CC_NOOV CC_REGNUM)
+ (compare:CC_NOOV (match_operator:SI 1 "shiftable_operator"
+ [(match_operator:SI 3 "shift_operator"
+ [(match_operand:SI 4 "s_register_operand" "r")
+ (match_operand:SI 5 "const_int_operand" "M")])
+ (match_operand:SI 2 "s_register_operand" "r")])
+ (const_int 0)))
+ (clobber (match_scratch:SI 0 "=r"))]
+ "TARGET_THUMB2"
+ "%i1%.\\t%0, %2, %4%S3"
+ [(set_attr "conds" "set")
+ (set_attr "shift" "4")
+ (set_attr "type" "alu_shift")]
+)
+
+(define_insn "*thumb2_sub_shiftsi"
+ [(set (match_operand:SI 0 "s_register_operand" "=r")
+ (minus:SI (match_operand:SI 1 "s_register_operand" "r")
+ (match_operator:SI 2 "shift_operator"
+ [(match_operand:SI 3 "s_register_operand" "r")
+ (match_operand:SI 4 "const_int_operand" "M")])))]
+ "TARGET_THUMB2"
+ "sub%?\\t%0, %1, %3%S2"
+ [(set_attr "predicable" "yes")
+ (set_attr "shift" "3")
+ (set_attr "type" "alu_shift")]
+)
+
+(define_insn "*thumb2_sub_shiftsi_compare0"
+ [(set (reg:CC_NOOV CC_REGNUM)
+ (compare:CC_NOOV
+ (minus:SI (match_operand:SI 1 "s_register_operand" "r")
+ (match_operator:SI 2 "shift_operator"
+ [(match_operand:SI 3 "s_register_operand" "r")
+ (match_operand:SI 4 "const_int_operand" "M")]))
+ (const_int 0)))
+ (set (match_operand:SI 0 "s_register_operand" "=r")
+ (minus:SI (match_dup 1) (match_op_dup 2 [(match_dup 3)
+ (match_dup 4)])))]
+ "TARGET_THUMB2"
+ "sub%.\\t%0, %1, %3%S2"
+ [(set_attr "conds" "set")
+ (set_attr "shift" "3")
+ (set_attr "type" "alu_shift")]
+)
+
+(define_insn "*thumb2_sub_shiftsi_compare0_scratch"
+ [(set (reg:CC_NOOV CC_REGNUM)
+ (compare:CC_NOOV
+ (minus:SI (match_operand:SI 1 "s_register_operand" "r")
+ (match_operator:SI 2 "shift_operator"
+ [(match_operand:SI 3 "s_register_operand" "r")
+ (match_operand:SI 4 "const_int_operand" "M")]))
+ (const_int 0)))
+ (clobber (match_scratch:SI 0 "=r"))]
+ "TARGET_THUMB2"
+ "sub%.\\t%0, %1, %3%S2"
+ [(set_attr "conds" "set")
+ (set_attr "shift" "3")
+ (set_attr "type" "alu_shift")]
+)
+
+(define_insn "*thumb2_and_scc"
+ [(set (match_operand:SI 0 "s_register_operand" "=r")
+ (and:SI (match_operator:SI 1 "arm_comparison_operator"
+ [(match_operand 3 "cc_register" "") (const_int 0)])
+ (match_operand:SI 2 "s_register_operand" "r")))]
+ "TARGET_THUMB2"
+ "ite\\t%D1\;mov%D1\\t%0, #0\;and%d1\\t%0, %2, #1"
+ [(set_attr "conds" "use")
+ (set_attr "length" "10")]
+)
+
+(define_insn "*thumb2_ior_scc"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r")
+ (ior:SI (match_operator:SI 2 "arm_comparison_operator"
+ [(match_operand 3 "cc_register" "") (const_int 0)])
+ (match_operand:SI 1 "s_register_operand" "0,?r")))]
+ "TARGET_THUMB2"
+ "@
+ it\\t%d2\;orr%d2\\t%0, %1, #1
+ ite\\t%D2\;mov%D2\\t%0, %1\;orr%d2\\t%0, %1, #1"
+ [(set_attr "conds" "use")
+ (set_attr "length" "6,10")]
+)
+
+(define_insn "*thumb2_compare_scc"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r")
+ (match_operator:SI 1 "arm_comparison_operator"
+ [(match_operand:SI 2 "s_register_operand" "r,r")
+ (match_operand:SI 3 "arm_add_operand" "rI,L")]))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2"
+ "*
+ if (operands[3] == const0_rtx)
+ {
+ if (GET_CODE (operands[1]) == LT)
+ return \"lsr\\t%0, %2, #31\";
+
+ if (GET_CODE (operands[1]) == GE)
+ return \"mvn\\t%0, %2\;lsr\\t%0, %0, #31\";
+
+ if (GET_CODE (operands[1]) == EQ)
+ return \"rsbs\\t%0, %2, #1\;it\\tcc\;movcc\\t%0, #0\";
+ }
+
+ if (GET_CODE (operands[1]) == NE)
+ {
+ if (which_alternative == 1)
+ return \"adds\\t%0, %2, #%n3\;it\\tne\;movne\\t%0, #1\";
+ return \"subs\\t%0, %2, %3\;it\\tne\;movne\\t%0, #1\";
+ }
+ if (which_alternative == 1)
+ output_asm_insn (\"cmn\\t%2, #%n3\", operands);
+ else
+ output_asm_insn (\"cmp\\t%2, %3\", operands);
+ return \"ite\\t%D1\;mov%D1\\t%0, #0\;mov%d1\\t%0, #1\";
+ "
+ [(set_attr "conds" "clob")
+ (set_attr "length" "14")]
+)
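The LT and GE special cases avoid the compare-and-IT sequence entirely by shifting the sign bit. A minimal C check of both identities (casts keep the shifts well defined):

    #include <assert.h>
    #include <stdint.h>

    static uint32_t scc_lt0(int32_t x) { return (uint32_t)x >> 31; }  /* lsr #31 */
    static uint32_t scc_ge0(int32_t x) { return (uint32_t)~x >> 31; } /* mvn; lsr */

    int main(void)
    {
      assert(scc_lt0(-1) == 1 && scc_lt0(0) == 0);
      assert(scc_ge0(-1) == 0 && scc_ge0(5) == 1);
      return 0;
    }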
+
+(define_insn "*thumb2_cond_move"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r,r")
+ (if_then_else:SI (match_operator 3 "equality_operator"
+ [(match_operator 4 "arm_comparison_operator"
+ [(match_operand 5 "cc_register" "") (const_int 0)])
+ (const_int 0)])
+ (match_operand:SI 1 "arm_rhs_operand" "0,rI,?rI")
+ (match_operand:SI 2 "arm_rhs_operand" "rI,0,rI")))]
+ "TARGET_THUMB2"
+ "*
+ if (GET_CODE (operands[3]) == NE)
+ {
+ if (which_alternative != 1)
+ output_asm_insn (\"it\\t%D4\;mov%D4\\t%0, %2\", operands);
+ if (which_alternative != 0)
+ output_asm_insn (\"it\\t%d4\;mov%d4\\t%0, %1\", operands);
+ return \"\";
+ }
+ switch (which_alternative)
+ {
+ case 0:
+ output_asm_insn (\"it\\t%d4\", operands);
+ break;
+ case 1:
+ output_asm_insn (\"it\\t%D4\", operands);
+ break;
+ case 2:
+ output_asm_insn (\"ite\\t%D4\", operands);
+ break;
+ default:
+ abort();
+ }
+ if (which_alternative != 0)
+ output_asm_insn (\"mov%D4\\t%0, %1\", operands);
+ if (which_alternative != 1)
+ output_asm_insn (\"mov%d4\\t%0, %2\", operands);
+ return \"\";
+ "
+ [(set_attr "conds" "use")
+ (set_attr "length" "6,6,10")]
+)
+
+(define_insn "*thumb2_cond_arith"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r")
+ (match_operator:SI 5 "shiftable_operator"
+ [(match_operator:SI 4 "arm_comparison_operator"
+ [(match_operand:SI 2 "s_register_operand" "r,r")
+ (match_operand:SI 3 "arm_rhs_operand" "rI,rI")])
+ (match_operand:SI 1 "s_register_operand" "0,?r")]))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2"
+ "*
+ if (GET_CODE (operands[4]) == LT && operands[3] == const0_rtx)
+ return \"%i5\\t%0, %1, %2, lsr #31\";
+
+ output_asm_insn (\"cmp\\t%2, %3\", operands);
+ if (GET_CODE (operands[5]) == AND)
+ {
+ output_asm_insn (\"ite\\t%D4\", operands);
+ output_asm_insn (\"mov%D4\\t%0, #0\", operands);
+ }
+ else if (GET_CODE (operands[5]) == MINUS)
+ {
+ output_asm_insn (\"ite\\t%D4\", operands);
+ output_asm_insn (\"rsb%D4\\t%0, %1, #0\", operands);
+ }
+ else if (which_alternative != 0)
+ {
+ output_asm_insn (\"ite\\t%D4\", operands);
+ output_asm_insn (\"mov%D4\\t%0, %1\", operands);
+ }
+ else
+ output_asm_insn (\"it\\t%d4\", operands);
+ return \"%i5%d4\\t%0, %1, #1\";
+ "
+ [(set_attr "conds" "clob")
+ (set_attr "length" "14")]
+)
+
+(define_insn "*thumb2_cond_sub"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r")
+ (minus:SI (match_operand:SI 1 "s_register_operand" "0,?r")
+ (match_operator:SI 4 "arm_comparison_operator"
+ [(match_operand:SI 2 "s_register_operand" "r,r")
+ (match_operand:SI 3 "arm_rhs_operand" "rI,rI")])))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2"
+ "*
+ output_asm_insn (\"cmp\\t%2, %3\", operands);
+ if (which_alternative != 0)
+ {
+ output_asm_insn (\"ite\\t%D4\", operands);
+ output_asm_insn (\"mov%D4\\t%0, %1\", operands);
+ }
+ else
+ output_asm_insn (\"it\\t%d4\", operands);
+ return \"sub%d4\\t%0, %1, #1\";
+ "
+ [(set_attr "conds" "clob")
+ (set_attr "length" "10,14")]
+)
+
+(define_insn "*thumb2_negscc"
+ [(set (match_operand:SI 0 "s_register_operand" "=r")
+ (neg:SI (match_operator 3 "arm_comparison_operator"
+ [(match_operand:SI 1 "s_register_operand" "r")
+ (match_operand:SI 2 "arm_rhs_operand" "rI")])))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2"
+ "*
+ if (GET_CODE (operands[3]) == LT && operands[2] == const0_rtx)
+ return \"asr\\t%0, %1, #31\";
+
+ if (GET_CODE (operands[3]) == NE)
+ return \"subs\\t%0, %1, %2\;it\\tne\;mvnne\\t%0, #0\";
+
+ if (GET_CODE (operands[3]) == GT)
+ return \"subs\\t%0, %1, %2\;it\\tne\;mvnne\\t%0, %0, asr #31\";
+
+ output_asm_insn (\"cmp\\t%1, %2\", operands);
+ output_asm_insn (\"ite\\t%D3\", operands);
+ output_asm_insn (\"mov%D3\\t%0, #0\", operands);
+ return \"mvn%d3\\t%0, #0\";
+ "
+ [(set_attr "conds" "clob")
+ (set_attr "length" "14")]
+)
+
+(define_insn "*thumb2_movcond"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r,r")
+ (if_then_else:SI
+ (match_operator 5 "arm_comparison_operator"
+ [(match_operand:SI 3 "s_register_operand" "r,r,r")
+ (match_operand:SI 4 "arm_add_operand" "rIL,rIL,rIL")])
+ (match_operand:SI 1 "arm_rhs_operand" "0,rI,?rI")
+ (match_operand:SI 2 "arm_rhs_operand" "rI,0,rI")))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2"
+ "*
+ if (GET_CODE (operands[5]) == LT
+ && (operands[4] == const0_rtx))
+ {
+ if (which_alternative != 1 && GET_CODE (operands[1]) == REG)
+ {
+ if (operands[2] == const0_rtx)
+ return \"and\\t%0, %1, %3, asr #31\";
+ return \"ands\\t%0, %1, %3, asr #32\;it\\tcc\;movcc\\t%0, %2\";
+ }
+ else if (which_alternative != 0 && GET_CODE (operands[2]) == REG)
+ {
+ if (operands[1] == const0_rtx)
+ return \"bic\\t%0, %2, %3, asr #31\";
+ return \"bics\\t%0, %2, %3, asr #32\;it\\tcs\;movcs\\t%0, %1\";
+ }
+ /* The only case that falls through to here is when both ops 1 & 2
+ are constants. */
+ }
+
+ if (GET_CODE (operands[5]) == GE
+ && (operands[4] == const0_rtx))
+ {
+ if (which_alternative != 1 && GET_CODE (operands[1]) == REG)
+ {
+ if (operands[2] == const0_rtx)
+ return \"bic\\t%0, %1, %3, asr #31\";
+ return \"bics\\t%0, %1, %3, asr #32\;it\\tcs\;movcs\\t%0, %2\";
+ }
+ else if (which_alternative != 0 && GET_CODE (operands[2]) == REG)
+ {
+ if (operands[1] == const0_rtx)
+ return \"and\\t%0, %2, %3, asr #31\";
+ return \"ands\\t%0, %2, %3, asr #32\;it\tcc\;movcc\\t%0, %1\";
+ }
+ /* The only case that falls through to here is when both ops 1 & 2
+ are constants. */
+ }
+ if (GET_CODE (operands[4]) == CONST_INT
+ && !const_ok_for_arm (INTVAL (operands[4])))
+ output_asm_insn (\"cmn\\t%3, #%n4\", operands);
+ else
+ output_asm_insn (\"cmp\\t%3, %4\", operands);
+ switch (which_alternative)
+ {
+ case 0:
+ output_asm_insn (\"it\\t%D5\", operands);
+ break;
+ case 1:
+ output_asm_insn (\"it\\t%d5\", operands);
+ break;
+ case 2:
+ output_asm_insn (\"ite\\t%d5\", operands);
+ break;
+ default:
+ abort();
+ }
+ if (which_alternative != 0)
+ output_asm_insn (\"mov%d5\\t%0, %1\", operands);
+ if (which_alternative != 1)
+ output_asm_insn (\"mov%D5\\t%0, %2\", operands);
+ return \"\";
+ "
+ [(set_attr "conds" "clob")
+ (set_attr "length" "10,10,14")]
+)
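The and/bic shortcuts in the LT/GE branches are sign-mask selects. A small C verification sketch (assumes arithmetic right shift of negative ints):

    #include <assert.h>
    #include <stdint.h>

    static int32_t sel_lt0(int32_t x, int32_t a)
    {
      return a & (x >> 31);    /* and r0, a, x, asr #31: x < 0 ? a : 0 */
    }
    static int32_t sel_ge0(int32_t x, int32_t a)
    {
      return a & ~(x >> 31);   /* bic r0, a, x, asr #31: x >= 0 ? a : 0 */
    }

    int main(void)
    {
      assert(sel_lt0(-3, 7) == 7 && sel_lt0(3, 7) == 0);
      assert(sel_ge0(-3, 7) == 0 && sel_ge0(3, 7) == 7);
      return 0;
    }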
+
+;; Zero and sign extension instructions.
+
+(define_insn "*thumb2_zero_extendsidi2"
+ [(set (match_operand:DI 0 "s_register_operand" "=r")
+ (zero_extend:DI (match_operand:SI 1 "s_register_operand" "r")))]
+ "TARGET_THUMB2"
+ "*
+ /* ??? Output both instructions unconditionally, otherwise the conditional
+ execution insn counter gets confused.
+ if (REGNO (operands[1])
+ != REGNO (operands[0]) + (WORDS_BIG_ENDIAN ? 1 : 0)) */
+ output_asm_insn (\"mov%?\\t%Q0, %1\", operands);
+ return \"mov%?\\t%R0, #0\";
+ "
+ [(set_attr "length" "8")
+ (set_attr "ce_count" "2")
+ (set_attr "predicable" "yes")]
+)
+
+(define_insn "*thumb2_zero_extendqidi2"
+ [(set (match_operand:DI 0 "s_register_operand" "=r,r")
+ (zero_extend:DI (match_operand:QI 1 "nonimmediate_operand" "r,m")))]
+ "TARGET_THUMB2"
+ "@
+ and%?\\t%Q0, %1, #255\;mov%?\\t%R0, #0
+ ldr%(b%)\\t%Q0, %1\;mov%?\\t%R0, #0"
+ [(set_attr "length" "8")
+ (set_attr "ce_count" "2")
+ (set_attr "predicable" "yes")
+ (set_attr "type" "*,load_byte")
+ (set_attr "pool_range" "*,4092")
+ (set_attr "neg_pool_range" "*,250")]
+)
+
+(define_insn "*thumb2_extendsidi2"
+ [(set (match_operand:DI 0 "s_register_operand" "=r")
+ (sign_extend:DI (match_operand:SI 1 "s_register_operand" "r")))]
+ "TARGET_THUMB2"
+ "*
+ /* ??? Output both instructions unconditionally, otherwise the conditional
+ execution insn counter gets confused.
+ if (REGNO (operands[1])
+ != REGNO (operands[0]) + (WORDS_BIG_ENDIAN ? 1 : 0)) */
+ output_asm_insn (\"mov%?\\t%Q0, %1\", operands);
+ return \"asr%?\\t%R0, %Q0, #31\";
+ "
+ [(set_attr "length" "8")
+ (set_attr "ce_count" "2")
+ (set_attr "shift" "1")
+ (set_attr "predicable" "yes")]
+)
+
+;; All supported Thumb-2 implementations are at least armv6, so only that
+;; case is provided.
+(define_insn "*thumb2_extendqisi_v6"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r")
+ (sign_extend:SI (match_operand:QI 1 "nonimmediate_operand" "r,m")))]
+ "TARGET_THUMB2 && arm_arch6"
+ "@
+ sxtb%?\\t%0, %1
+ ldr%(sb%)\\t%0, %1"
+ [(set_attr "type" "alu_shift,load_byte")
+ (set_attr "predicable" "yes")
+ (set_attr "pool_range" "*,4096")
+ (set_attr "neg_pool_range" "*,250")]
+)
+
+(define_insn "*thumb2_zero_extendhisi2_v6"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r")
+ (zero_extend:SI (match_operand:HI 1 "nonimmediate_operand" "r,m")))]
+ "TARGET_THUMB2 && arm_arch6"
+ "@
+ uxth%?\\t%0, %1
+ ldr%(h%)\\t%0, %1"
+ [(set_attr "type" "alu_shift,load_byte")
+ (set_attr "predicable" "yes")
+ (set_attr "pool_range" "*,4096")
+ (set_attr "neg_pool_range" "*,250")]
+)
+
+(define_insn "*thumb2_zero_extendqisi2_v6"
+ [(set (match_operand:SI 0 "s_register_operand" "=r,r")
+ (zero_extend:SI (match_operand:QI 1 "nonimmediate_operand" "r,m")))]
+ "TARGET_THUMB2 && arm_arch6"
+ "@
+ uxtb%(%)\\t%0, %1
+ ldr%(b%)\\t%0, %1\\t%@ zero_extendqisi2"
+ [(set_attr "type" "alu_shift,load_byte")
+ (set_attr "predicable" "yes")
+ (set_attr "pool_range" "*,4096")
+ (set_attr "neg_pool_range" "*,250")]
+)
+
+(define_insn "thumb2_casesi_internal"
+ [(parallel [(set (pc)
+ (if_then_else
+ (leu (match_operand:SI 0 "s_register_operand" "r")
+ (match_operand:SI 1 "arm_rhs_operand" "rI"))
+ (mem:SI (plus:SI (mult:SI (match_dup 0) (const_int 4))
+ (label_ref (match_operand 2 "" ""))))
+ (label_ref (match_operand 3 "" ""))))
+ (clobber (reg:CC CC_REGNUM))
+ (clobber (match_scratch:SI 4 "=r"))
+ (use (label_ref (match_dup 2)))])]
+ "TARGET_THUMB2 && !flag_pic"
+ "* return thumb2_output_casesi(operands);"
+ [(set_attr "conds" "clob")
+ (set_attr "length" "16")]
+)
+
+(define_insn "thumb2_casesi_internal_pic"
+ [(parallel [(set (pc)
+ (if_then_else
+ (leu (match_operand:SI 0 "s_register_operand" "r")
+ (match_operand:SI 1 "arm_rhs_operand" "rI"))
+ (mem:SI (plus:SI (mult:SI (match_dup 0) (const_int 4))
+ (label_ref (match_operand 2 "" ""))))
+ (label_ref (match_operand 3 "" ""))))
+ (clobber (reg:CC CC_REGNUM))
+ (clobber (match_scratch:SI 4 "=r"))
+ (clobber (match_scratch:SI 5 "=r"))
+ (use (label_ref (match_dup 2)))])]
+ "TARGET_THUMB2 && flag_pic"
+ "* return thumb2_output_casesi(operands);"
+ [(set_attr "conds" "clob")
+ (set_attr "length" "20")]
+)
+
+(define_insn_and_split "thumb2_eh_return"
+ [(unspec_volatile [(match_operand:SI 0 "s_register_operand" "r")]
+ VUNSPEC_EH_RETURN)
+ (clobber (match_scratch:SI 1 "=&r"))]
+ "TARGET_THUMB2"
+ "#"
+ "&& reload_completed"
+ [(const_int 0)]
+ "
+ {
+ thumb_set_return_address (operands[0], operands[1]);
+ DONE;
+ }"
+)
+
+;; Peepholes and insns for 16-bit flag clobbering instructions.
+;; The conditional forms of these instructions do not clobber CC.
+;; However, by the time peepholes are run it is probably too late to do
+;; anything useful with this information.
+(define_peephole2
+ [(set (match_operand:SI 0 "low_register_operand" "")
+ (match_operator:SI 3 "thumb_16bit_operator"
+ [(match_operand:SI 1 "low_register_operand" "")
+ (match_operand:SI 2 "low_register_operand" "")]))]
+ "TARGET_THUMB2 && rtx_equal_p(operands[0], operands[1])
+ && peep2_regno_dead_p(0, CC_REGNUM)"
+ [(parallel
+ [(set (match_dup 0)
+ (match_op_dup 3
+ [(match_dup 1)
+ (match_dup 2)]))
+ (clobber (reg:CC CC_REGNUM))])]
+ ""
+)
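The point of the peephole: the 16-bit encodings of these operations always set the flags, so they may only replace the flag-preserving 32-bit forms where the condition codes are provably dead. A hedged sketch of qualifying source:

    /* Illustration only: with CC dead afterwards, this addition can
       shrink to the 2-byte flag-setting "adds r0, r0, r1".  */
    int accumulate(int r0, int r1)
    {
      return r0 + r1;
    }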
+
+(define_insn "*thumb2_alusi3_short"
+ [(set (match_operand:SI 0 "s_register_operand" "=l")
+ (match_operator:SI 3 "thumb_16bit_operator"
+ [(match_operand:SI 1 "s_register_operand" "0")
+ (match_operand:SI 2 "s_register_operand" "l")]))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2 && reload_completed"
+ "%I3%!\\t%0, %1, %2"
+ [(set_attr "predicable" "yes")
+ (set_attr "length" "2")]
+)
+
+;; Similarly for 16-bit shift instructions
+;; There is no 16-bit rotate by immediate instruction.
+(define_peephole2
+ [(set (match_operand:SI 0 "low_register_operand" "")
+ (match_operator:SI 3 "shift_operator"
+ [(match_operand:SI 1 "low_register_operand" "")
+ (match_operand:SI 2 "low_reg_or_int_operand" "")]))]
+ "TARGET_THUMB2
+ && peep2_regno_dead_p(0, CC_REGNUM)
+ && ((GET_CODE(operands[3]) != ROTATE && GET_CODE(operands[3]) != ROTATERT)
+ || REG_P(operands[2]))"
+ [(parallel
+ [(set (match_dup 0)
+ (match_op_dup 3
+ [(match_dup 1)
+ (match_dup 2)]))
+ (clobber (reg:CC CC_REGNUM))])]
+ ""
+)
+
+(define_insn "*thumb2_shiftsi3_short"
+ [(set (match_operand:SI 0 "low_register_operand" "=l")
+ (match_operator:SI 3 "shift_operator"
+ [(match_operand:SI 1 "low_register_operand" "l")
+ (match_operand:SI 2 "low_reg_or_int_operand" "lM")]))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2 && reload_completed
+ && ((GET_CODE(operands[3]) != ROTATE && GET_CODE(operands[3]) != ROTATERT)
+ || REG_P(operands[2]))"
+ "* return arm_output_shift(operands, 2);"
+ [(set_attr "predicable" "yes")
+ (set_attr "shift" "1")
+ (set_attr "length" "2")
+ (set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "")
+ (const_string "alu_shift")
+ (const_string "alu_shift_reg")))]
+)
+
+;; 16-bit load immediate
+(define_peephole2
+ [(set (match_operand:SI 0 "low_register_operand" "")
+ (match_operand:SI 1 "const_int_operand" ""))]
+ "TARGET_THUMB2
+ && peep2_regno_dead_p(0, CC_REGNUM)
+ && (unsigned HOST_WIDE_INT) INTVAL(operands[1]) < 256"
+ [(parallel
+ [(set (match_dup 0)
+ (match_dup 1))
+ (clobber (reg:CC CC_REGNUM))])]
+ ""
+)
+
+(define_insn "*thumb2_movsi_shortim"
+ [(set (match_operand:SI 0 "low_register_operand" "=l")
+ (match_operand:SI 1 "const_int_operand" "I"))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2 && reload_completed"
+ "mov%!\t%0, %1"
+ [(set_attr "predicable" "yes")
+ (set_attr "length" "2")]
+)
+
+;; 16-bit add/sub immediate
+(define_peephole2
+ [(set (match_operand:SI 0 "low_register_operand" "")
+ (plus:SI (match_operand:SI 1 "low_register_operand" "")
+ (match_operand:SI 2 "const_int_operand" "")))]
+ "TARGET_THUMB2
+ && peep2_regno_dead_p(0, CC_REGNUM)
+ && ((rtx_equal_p(operands[0], operands[1])
+ && INTVAL(operands[2]) > -256 && INTVAL(operands[2]) < 256)
+ || (INTVAL(operands[2]) > -8 && INTVAL(operands[2]) < 8))"
+ [(parallel
+ [(set (match_dup 0)
+ (plus:SI (match_dup 1)
+ (match_dup 2)))
+ (clobber (reg:CC CC_REGNUM))])]
+ ""
+)
+
+(define_insn "*thumb2_addsi_shortim"
+ [(set (match_operand:SI 0 "low_register_operand" "=l")
+ (plus:SI (match_operand:SI 1 "low_register_operand" "l")
+ (match_operand:SI 2 "const_int_operand" "IL")))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2 && reload_completed"
+ "*
+ HOST_WIDE_INT val;
+
+ val = INTVAL(operands[2]);
+ /* We prefer eg. subs rn, rn, #1 over adds rn, rn, #0xffffffff. */
+ if (val < 0 && const_ok_for_arm(ARM_SIGN_EXTEND (-val)))
+ return \"sub%!\\t%0, %1, #%n2\";
+ else
+ return \"add%!\\t%0, %1, %2\";
+ "
+ [(set_attr "predicable" "yes")
+ (set_attr "length" "2")]
+)
+
+(define_insn "divsi3"
+ [(set (match_operand:SI 0 "s_register_operand" "=r")
+ (div:SI (match_operand:SI 1 "s_register_operand" "r")
+ (match_operand:SI 2 "s_register_operand" "r")))]
+ "TARGET_THUMB2 && arm_arch_hwdiv"
+ "sdiv%?\t%0, %1, %2"
+ [(set_attr "predicable" "yes")]
+)
+
+(define_insn "udivsi3"
+ [(set (match_operand:SI 0 "s_register_operand" "=r")
+ (udiv:SI (match_operand:SI 1 "s_register_operand" "r")
+ (match_operand:SI 2 "s_register_operand" "r")))]
+ "TARGET_THUMB2 && arm_arch_hwdiv"
+ "udiv%?\t%0, %1, %2"
+ [(set_attr "predicable" "yes")]
+)
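With arm_arch_hwdiv set, ordinary C division can now compile to a single sdiv/udiv instead of a call into the __aeabi_idiv/__aeabi_uidiv runtime routines. A hedged sketch:

    int      squot(int a, int b)           { return a / b; }  /* sdiv */
    unsigned uquot(unsigned a, unsigned b) { return a / b; }  /* udiv */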
+
+(define_insn "*thumb2_cbz"
+ [(set (pc) (if_then_else
+ (eq (match_operand:SI 0 "s_register_operand" "l,?r")
+ (const_int 0))
+ (label_ref (match_operand 1 "" ""))
+ (pc)))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2"
+ "*
+ if (get_attr_length (insn) == 2 && which_alternative == 0)
+ return \"cbz\\t%0, %l1\";
+ else
+ return \"cmp\\t%0, #0\;beq\\t%l1\";
+ "
+ [(set (attr "length")
+ (if_then_else
+ (and (ge (minus (match_dup 1) (pc)) (const_int 2))
+ (le (minus (match_dup 1) (pc)) (const_int 128)))
+ (const_int 2)
+ (const_int 8)))]
+)
+
+(define_insn "*thumb2_cbnz"
+ [(set (pc) (if_then_else
+ (ne (match_operand:SI 0 "s_register_operand" "l,?r")
+ (const_int 0))
+ (label_ref (match_operand 1 "" ""))
+ (pc)))
+ (clobber (reg:CC CC_REGNUM))]
+ "TARGET_THUMB2"
+ "*
+ if (get_attr_length (insn) == 2 && which_alternative == 0)
+ return \"cbnz\\t%0, %l1\";
+ else
+ return \"cmp\\t%0, #0\;bne\\t%l1\";
+ "
+ [(set (attr "length")
+ (if_then_else
+ (and (ge (minus (match_dup 1) (pc)) (const_int 2))
+ (le (minus (match_dup 1) (pc)) (const_int 128)))
+ (const_int 2)
+ (const_int 8)))]
+)
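cbz/cbnz only branch forward, reach at most 126 bytes, take a low register, and cannot themselves be predicated, which is why the templates keep the cmp-plus-conditional-branch fallback and a length attribute that models both forms. A hedged C shape that can use the short form:

    /* Illustration only: a short forward branch on (non-)zero. */
    int deref_or_zero(int *p)
    {
      if (p == 0)      /* cbz r0, .Lout when the target is in range */
        return 0;
      return *p;
    }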
diff --git a/gcc/config/arm/vfp.md b/gcc/config/arm/vfp.md
index 6d4d017..7d5a7dc 100644
--- a/gcc/config/arm/vfp.md
+++ b/gcc/config/arm/vfp.md
@@ -1,5 +1,5 @@
;; ARM VFP coprocessor Machine Description
-;; Copyright (C) 2003, 2005 Free Software Foundation, Inc.
+;; Copyright (C) 2003, 2005, 2006, 2007 Free Software Foundation, Inc.
;; Written by CodeSourcery, LLC.
;;
;; This file is part of GCC.
@@ -121,25 +121,77 @@
;; ??? For now do not allow loading constants into vfp regs. This causes
;; problems because small constants get converted into adds.
(define_insn "*arm_movsi_vfp"
- [(set (match_operand:SI 0 "nonimmediate_operand" "=r,r,r ,m,*w,r,*w,*w, *Uv")
- (match_operand:SI 1 "general_operand" "rI,K,mi,r,r,*w,*w,*Uvi,*w"))]
+ [(set (match_operand:SI 0 "nonimmediate_operand" "=r,r,r,r ,m,*w,r,*w,*w, *Uv")
+ (match_operand:SI 1 "general_operand" "rI,K,N,mi,r,r,*w,*w,*Uvi,*w"))]
"TARGET_ARM && TARGET_VFP && TARGET_HARD_FLOAT
&& ( s_register_operand (operands[0], SImode)
|| s_register_operand (operands[1], SImode))"
- "@
- mov%?\\t%0, %1
- mvn%?\\t%0, #%B1
- ldr%?\\t%0, %1
- str%?\\t%1, %0
- fmsr%?\\t%0, %1\\t%@ int
- fmrs%?\\t%0, %1\\t%@ int
- fcpys%?\\t%0, %1\\t%@ int
- flds%?\\t%0, %1\\t%@ int
- fsts%?\\t%1, %0\\t%@ int"
+ "*
+ switch (which_alternative)
+ {
+ case 0:
+ return \"mov%?\\t%0, %1\";
+ case 1:
+ return \"mvn%?\\t%0, #%B1\";
+ case 2:
+ return \"movw%?\\t%0, %1\";
+ case 3:
+ return \"ldr%?\\t%0, %1\";
+ case 4:
+ return \"str%?\\t%1, %0\";
+ case 5:
+ return \"fmsr%?\\t%0, %1\\t%@ int\";
+ case 6:
+ return \"fmrs%?\\t%0, %1\\t%@ int\";
+ case 7:
+ return \"fcpys%?\\t%0, %1\\t%@ int\";
+ case 8: case 9:
+ return output_move_vfp (operands);
+ default:
+ gcc_unreachable ();
+ }
+ "
+ [(set_attr "predicable" "yes")
+ (set_attr "type" "*,*,*,load1,store1,r_2_f,f_2_r,ffarith,f_loads,f_stores")
+ (set_attr "pool_range" "*,*,*,4096,*,*,*,*,1020,*")
+ (set_attr "neg_pool_range" "*,*,*,4084,*,*,*,*,1008,*")]
+)
+
+(define_insn "*thumb2_movsi_vfp"
+ [(set (match_operand:SI 0 "nonimmediate_operand" "=r,r,r,r,m,*w,r,*w,*w, *Uv")
+ (match_operand:SI 1 "general_operand" "rI,K,N,mi,r,r,*w,*w,*Uvi,*w"))]
+ "TARGET_THUMB2 && TARGET_VFP && TARGET_HARD_FLOAT
+ && ( s_register_operand (operands[0], SImode)
+ || s_register_operand (operands[1], SImode))"
+ "*
+ switch (which_alternative)
+ {
+ case 0:
+ return \"mov%?\\t%0, %1\";
+ case 1:
+ return \"mvn%?\\t%0, #%B1\";
+ case 2:
+ return \"movw%?\\t%0, %1\";
+ case 3:
+ return \"ldr%?\\t%0, %1\";
+ case 4:
+ return \"str%?\\t%1, %0\";
+ case 5:
+ return \"fmsr%?\\t%0, %1\\t%@ int\";
+ case 6:
+ return \"fmrs%?\\t%0, %1\\t%@ int\";
+ case 7:
+ return \"fcpys%?\\t%0, %1\\t%@ int\";
+ case 8: case 9:
+ return output_move_vfp (operands);
+ default:
+ gcc_unreachable ();
+ }
+ "
[(set_attr "predicable" "yes")
- (set_attr "type" "*,*,load1,store1,r_2_f,f_2_r,ffarith,f_loads,f_stores")
- (set_attr "pool_range" "*,*,4096,*,*,*,*,1020,*")
- (set_attr "neg_pool_range" "*,*,4084,*,*,*,*,1008,*")]
+ (set_attr "type" "*,*,*,load1,store1,r_2_f,f_2_r,ffarith,f_load,f_store")
+ (set_attr "pool_range" "*,*,*,4096,*,*,*,*,1020,*")
+ (set_attr "neg_pool_range" "*,*,*, 0,*,*,*,*,1008,*")]
)
@@ -165,10 +217,8 @@
return \"fmrrd%?\\t%Q0, %R0, %P1\\t%@ int\";
case 5:
return \"fcpyd%?\\t%P0, %P1\\t%@ int\";
- case 6:
- return \"fldd%?\\t%P0, %1\\t%@ int\";
- case 7:
- return \"fstd%?\\t%P1, %0\\t%@ int\";
+ case 6: case 7:
+ return output_move_vfp (operands);
default:
gcc_unreachable ();
}
@@ -179,6 +229,33 @@
(set_attr "neg_pool_range" "*,1008,*,*,*,*,1008,*")]
)
+(define_insn "*thumb2_movdi_vfp"
+ [(set (match_operand:DI 0 "nonimmediate_di_operand" "=r, r,m,w,r,w,w, Uv")
+ (match_operand:DI 1 "di_operand" "rIK,mi,r,r,w,w,Uvi,w"))]
+ "TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_VFP"
+ "*
+ switch (which_alternative)
+ {
+ case 0: case 1: case 2:
+ return (output_move_double (operands));
+ case 3:
+ return \"fmdrr%?\\t%P0, %Q1, %R1\\t%@ int\";
+ case 4:
+ return \"fmrrd%?\\t%Q0, %R0, %P1\\t%@ int\";
+ case 5:
+ return \"fcpyd%?\\t%P0, %P1\\t%@ int\";
+ case 6: case 7:
+ return output_move_vfp (operands);
+ default:
+ gcc_unreachable ();
+ }
+ "
+ [(set_attr "type" "*,load2,store2,r_2_f,f_2_r,ffarith,f_load,f_store")
+ (set_attr "length" "8,8,8,4,4,4,4,4")
+ (set_attr "pool_range" "*,4096,*,*,*,*,1020,*")
+ (set_attr "neg_pool_range" "*, 0,*,*,*,*,1008,*")]
+)
+
;; SFmode moves
;; Disparage the w<->r cases because reloading an invalid address is
@@ -190,21 +267,66 @@
"TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP
&& ( s_register_operand (operands[0], SFmode)
|| s_register_operand (operands[1], SFmode))"
- "@
- fmsr%?\\t%0, %1
- fmrs%?\\t%0, %1
- flds%?\\t%0, %1
- fsts%?\\t%1, %0
- ldr%?\\t%0, %1\\t%@ float
- str%?\\t%1, %0\\t%@ float
- fcpys%?\\t%0, %1
- mov%?\\t%0, %1\\t%@ float"
+ "*
+ switch (which_alternative)
+ {
+ case 0:
+ return \"fmsr%?\\t%0, %1\";
+ case 1:
+ return \"fmrs%?\\t%0, %1\";
+ case 2: case 3:
+ return output_move_vfp (operands);
+ case 4:
+ return \"ldr%?\\t%0, %1\\t%@ float\";
+ case 5:
+ return \"str%?\\t%1, %0\\t%@ float\";
+ case 6:
+ return \"fcpys%?\\t%0, %1\";
+ case 7:
+ return \"mov%?\\t%0, %1\\t%@ float\";
+ default:
+ gcc_unreachable ();
+ }
+ "
[(set_attr "predicable" "yes")
(set_attr "type" "r_2_f,f_2_r,ffarith,*,f_loads,f_stores,load1,store1")
(set_attr "pool_range" "*,*,1020,*,4096,*,*,*")
(set_attr "neg_pool_range" "*,*,1008,*,4080,*,*,*")]
)
+(define_insn "*thumb2_movsf_vfp"
+ [(set (match_operand:SF 0 "nonimmediate_operand" "=w,?r,w ,Uv,r ,m,w,r")
+ (match_operand:SF 1 "general_operand" " ?r,w,UvE,w, mE,r,w,r"))]
+ "TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_VFP
+ && ( s_register_operand (operands[0], SFmode)
+ || s_register_operand (operands[1], SFmode))"
+ "*
+ switch (which_alternative)
+ {
+ case 0:
+ return \"fmsr%?\\t%0, %1\";
+ case 1:
+ return \"fmrs%?\\t%0, %1\";
+ case 2: case 3:
+ return output_move_vfp (operands);
+ case 4:
+ return \"ldr%?\\t%0, %1\\t%@ float\";
+ case 5:
+ return \"str%?\\t%1, %0\\t%@ float\";
+ case 6:
+ return \"fcpys%?\\t%0, %1\";
+ case 7:
+ return \"mov%?\\t%0, %1\\t%@ float\";
+ default:
+ gcc_unreachable ();
+ }
+ "
+ [(set_attr "predicable" "yes")
+ (set_attr "type" "r_2_f,f_2_r,ffarith,*,f_load,f_store,load1,store1")
+ (set_attr "pool_range" "*,*,1020,*,4092,*,*,*")
+ (set_attr "neg_pool_range" "*,*,1008,*,0,*,*,*")]
+)
+
;; DFmode moves
@@ -224,10 +346,8 @@
return \"fmrrd%?\\t%Q0, %R0, %P1\";
case 2: case 3:
return output_move_double (operands);
- case 4:
- return \"fldd%?\\t%P0, %1\";
- case 5:
- return \"fstd%?\\t%P1, %0\";
+ case 4: case 5:
+ return output_move_vfp (operands);
case 6:
return \"fcpyd%?\\t%P0, %P1\";
case 7:
@@ -243,6 +363,35 @@
(set_attr "neg_pool_range" "*,*,1008,*,1008,*,*,*")]
)
+(define_insn "*thumb2_movdf_vfp"
+ [(set (match_operand:DF 0 "nonimmediate_soft_df_operand" "=w,?r,r, m,w ,Uv,w,r")
+ (match_operand:DF 1 "soft_df_operand" " ?r,w,mF,r,UvF,w, w,r"))]
+ "TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_VFP"
+ "*
+ {
+ switch (which_alternative)
+ {
+ case 0:
+ return \"fmdrr%?\\t%P0, %Q1, %R1\";
+ case 1:
+ return \"fmrrd%?\\t%Q0, %R0, %P1\";
+ case 2: case 3: case 7:
+ return output_move_double (operands);
+ case 4: case 5:
+ return output_move_vfp (operands);
+ case 6:
+ return \"fcpyd%?\\t%P0, %P1\";
+ default:
+ gcc_unreachable ();
+ }
+ }
+ "
+ [(set_attr "type" "r_2_f,f_2_r,ffarith,*,load2,store2,f_load,f_store")
+ (set_attr "length" "4,4,8,8,4,4,4,8")
+ (set_attr "pool_range" "*,*,4096,*,1020,*,*,*")
+ (set_attr "neg_pool_range" "*,*,0,*,1008,*,*,*")]
+)
+
;; Conditional move patterns
@@ -269,6 +418,29 @@
(set_attr "type" "ffarith,ffarith,ffarith,r_2_f,r_2_f,r_2_f,f_2_r,f_2_r,f_2_r")]
)
+(define_insn "*thumb2_movsfcc_vfp"
+ [(set (match_operand:SF 0 "s_register_operand" "=w,w,w,w,w,w,?r,?r,?r")
+ (if_then_else:SF
+ (match_operator 3 "arm_comparison_operator"
+ [(match_operand 4 "cc_register" "") (const_int 0)])
+ (match_operand:SF 1 "s_register_operand" "0,w,w,0,?r,?r,0,w,w")
+ (match_operand:SF 2 "s_register_operand" "w,0,w,?r,0,?r,w,0,w")))]
+ "TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_VFP"
+ "@
+ it\\t%D3\;fcpys%D3\\t%0, %2
+ it\\t%d3\;fcpys%d3\\t%0, %1
+ ite\\t%D3\;fcpys%D3\\t%0, %2\;fcpys%d3\\t%0, %1
+ it\\t%D3\;fmsr%D3\\t%0, %2
+ it\\t%d3\;fmsr%d3\\t%0, %1
+ ite\\t%D3\;fmsr%D3\\t%0, %2\;fmsr%d3\\t%0, %1
+ it\\t%D3\;fmrs%D3\\t%0, %2
+ it\\t%d3\;fmrs%d3\\t%0, %1
+ ite\\t%D3\;fmrs%D3\\t%0, %2\;fmrs%d3\\t%0, %1"
+ [(set_attr "conds" "use")
+ (set_attr "length" "6,6,10,6,6,10,6,6,10")
+ (set_attr "type" "ffarith,ffarith,ffarith,r_2_f,r_2_f,r_2_f,f_2_r,f_2_r,f_2_r")]
+)
+
(define_insn "*movdfcc_vfp"
[(set (match_operand:DF 0 "s_register_operand" "=w,w,w,w,w,w,?r,?r,?r")
(if_then_else:DF
@@ -292,13 +464,36 @@
(set_attr "type" "ffarith,ffarith,ffarith,r_2_f,r_2_f,r_2_f,f_2_r,f_2_r,f_2_r")]
)
+(define_insn "*thumb2_movdfcc_vfp"
+ [(set (match_operand:DF 0 "s_register_operand" "=w,w,w,w,w,w,?r,?r,?r")
+ (if_then_else:DF
+ (match_operator 3 "arm_comparison_operator"
+ [(match_operand 4 "cc_register" "") (const_int 0)])
+ (match_operand:DF 1 "s_register_operand" "0,w,w,0,?r,?r,0,w,w")
+ (match_operand:DF 2 "s_register_operand" "w,0,w,?r,0,?r,w,0,w")))]
+ "TARGET_THUMB2 && TARGET_HARD_FLOAT && TARGET_VFP"
+ "@
+ it\\t%D3\;fcpyd%D3\\t%P0, %P2
+ it\\t%d3\;fcpyd%d3\\t%P0, %P1
+ ite\\t%D3\;fcpyd%D3\\t%P0, %P2\;fcpyd%d3\\t%P0, %P1
+ it\\t%D3\;fmdrr%D3\\t%P0, %Q2, %R2
+ it\\t%d3\;fmdrr%d3\\t%P0, %Q1, %R1
+ ite\\t%D3\;fmdrr%D3\\t%P0, %Q2, %R2\;fmdrr%d3\\t%P0, %Q1, %R1
+ it\\t%D3\;fmrrd%D3\\t%Q0, %R0, %P2
+ it\\t%d3\;fmrrd%d3\\t%Q0, %R0, %P1
+ ite\\t%D3\;fmrrd%D3\\t%Q0, %R0, %P2\;fmrrd%d3\\t%Q0, %R0, %P1"
+ [(set_attr "conds" "use")
+ (set_attr "length" "6,6,10,6,6,10,6,6,10")
+ (set_attr "type" "ffarith,ffarith,ffarith,r_2_f,r_2_f,r_2_f,f_2_r,f_2_r,f_2_r")]
+)
+
;; Sign manipulation functions
(define_insn "*abssf2_vfp"
[(set (match_operand:SF 0 "s_register_operand" "=w")
(abs:SF (match_operand:SF 1 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fabss%?\\t%0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "ffarith")]
@@ -307,7 +502,7 @@
(define_insn "*absdf2_vfp"
[(set (match_operand:DF 0 "s_register_operand" "=w")
(abs:DF (match_operand:DF 1 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fabsd%?\\t%P0, %P1"
[(set_attr "predicable" "yes")
(set_attr "type" "ffarith")]
@@ -316,7 +511,7 @@
(define_insn "*negsf2_vfp"
[(set (match_operand:SF 0 "s_register_operand" "=w,?r")
(neg:SF (match_operand:SF 1 "s_register_operand" "w,r")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"@
fnegs%?\\t%0, %1
eor%?\\t%0, %1, #-2147483648"
@@ -327,12 +522,12 @@
(define_insn_and_split "*negdf2_vfp"
[(set (match_operand:DF 0 "s_register_operand" "=w,?r,?r")
(neg:DF (match_operand:DF 1 "s_register_operand" "w,0,r")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"@
fnegd%?\\t%P0, %P1
#
#"
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP && reload_completed
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP && reload_completed
&& arm_general_register_operand (operands[0], DFmode)"
[(set (match_dup 0) (match_dup 1))]
"
@@ -377,7 +572,7 @@
[(set (match_operand:SF 0 "s_register_operand" "=w")
(plus:SF (match_operand:SF 1 "s_register_operand" "w")
(match_operand:SF 2 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fadds%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@@ -387,7 +582,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=w")
(plus:DF (match_operand:DF 1 "s_register_operand" "w")
(match_operand:DF 2 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"faddd%?\\t%P0, %P1, %P2"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@@ -398,7 +593,7 @@
[(set (match_operand:SF 0 "s_register_operand" "=w")
(minus:SF (match_operand:SF 1 "s_register_operand" "w")
(match_operand:SF 2 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fsubs%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@@ -408,7 +603,7 @@
[(set (match_operand:DF 0 "s_register_operand" "=w")
(minus:DF (match_operand:DF 1 "s_register_operand" "w")
(match_operand:DF 2 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fsubd%?\\t%P0, %P1, %P2"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@@ -421,7 +616,7 @@
[(set (match_operand:SF 0 "s_register_operand" "+w")
(div:SF (match_operand:SF 1 "s_register_operand" "w")
(match_operand:SF 2 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fdivs%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
(set_attr "type" "fdivs")]
@@ -431,7 +626,7 @@
[(set (match_operand:DF 0 "s_register_operand" "+w")
(div:DF (match_operand:DF 1 "s_register_operand" "w")
(match_operand:DF 2 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fdivd%?\\t%P0, %P1, %P2"
[(set_attr "predicable" "yes")
(set_attr "type" "fdivd")]
@@ -444,7 +639,7 @@
[(set (match_operand:SF 0 "s_register_operand" "+w")
(mult:SF (match_operand:SF 1 "s_register_operand" "w")
(match_operand:SF 2 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fmuls%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@@ -454,7 +649,7 @@
[(set (match_operand:DF 0 "s_register_operand" "+w")
(mult:DF (match_operand:DF 1 "s_register_operand" "w")
(match_operand:DF 2 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fmuld%?\\t%P0, %P1, %P2"
[(set_attr "predicable" "yes")
(set_attr "type" "fmul")]
@@ -465,7 +660,7 @@
[(set (match_operand:SF 0 "s_register_operand" "+w")
(mult:SF (neg:SF (match_operand:SF 1 "s_register_operand" "w"))
(match_operand:SF 2 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fnmuls%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@@ -475,7 +670,7 @@
[(set (match_operand:DF 0 "s_register_operand" "+w")
(mult:DF (neg:DF (match_operand:DF 1 "s_register_operand" "w"))
(match_operand:DF 2 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fnmuld%?\\t%P0, %P1, %P2"
[(set_attr "predicable" "yes")
(set_attr "type" "fmul")]
@@ -490,7 +685,7 @@
(plus:SF (mult:SF (match_operand:SF 2 "s_register_operand" "w")
(match_operand:SF 3 "s_register_operand" "w"))
(match_operand:SF 1 "s_register_operand" "0")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fmacs%?\\t%0, %2, %3"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@@ -501,7 +696,7 @@
(plus:DF (mult:DF (match_operand:DF 2 "s_register_operand" "w")
(match_operand:DF 3 "s_register_operand" "w"))
(match_operand:DF 1 "s_register_operand" "0")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fmacd%?\\t%P0, %P2, %P3"
[(set_attr "predicable" "yes")
(set_attr "type" "fmul")]
@@ -513,7 +708,7 @@
(minus:SF (mult:SF (match_operand:SF 2 "s_register_operand" "w")
(match_operand:SF 3 "s_register_operand" "w"))
(match_operand:SF 1 "s_register_operand" "0")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fmscs%?\\t%0, %2, %3"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@@ -524,7 +719,7 @@
(minus:DF (mult:DF (match_operand:DF 2 "s_register_operand" "w")
(match_operand:DF 3 "s_register_operand" "w"))
(match_operand:DF 1 "s_register_operand" "0")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fmscd%?\\t%P0, %P2, %P3"
[(set_attr "predicable" "yes")
(set_attr "type" "fmul")]
@@ -536,7 +731,7 @@
(minus:SF (match_operand:SF 1 "s_register_operand" "0")
(mult:SF (match_operand:SF 2 "s_register_operand" "w")
(match_operand:SF 3 "s_register_operand" "w"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fnmacs%?\\t%0, %2, %3"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@@ -547,7 +742,7 @@
(minus:DF (match_operand:DF 1 "s_register_operand" "0")
(mult:DF (match_operand:DF 2 "s_register_operand" "w")
(match_operand:DF 3 "s_register_operand" "w"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fnmacd%?\\t%P0, %P2, %P3"
[(set_attr "predicable" "yes")
(set_attr "type" "fmul")]
@@ -561,7 +756,7 @@
(neg:SF (match_operand:SF 2 "s_register_operand" "w"))
(match_operand:SF 3 "s_register_operand" "w"))
(match_operand:SF 1 "s_register_operand" "0")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fnmscs%?\\t%0, %2, %3"
[(set_attr "predicable" "yes")
(set_attr "type" "farith")]
@@ -573,7 +768,7 @@
(neg:DF (match_operand:DF 2 "s_register_operand" "w"))
(match_operand:DF 3 "s_register_operand" "w"))
(match_operand:DF 1 "s_register_operand" "0")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fnmscd%?\\t%P0, %P2, %P3"
[(set_attr "predicable" "yes")
(set_attr "type" "fmul")]
@@ -585,7 +780,7 @@
(define_insn "*extendsfdf2_vfp"
[(set (match_operand:DF 0 "s_register_operand" "=w")
(float_extend:DF (match_operand:SF 1 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fcvtds%?\\t%P0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@@ -594,7 +789,7 @@
(define_insn "*truncdfsf2_vfp"
[(set (match_operand:SF 0 "s_register_operand" "=w")
(float_truncate:SF (match_operand:DF 1 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fcvtsd%?\\t%0, %P1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@@ -603,7 +798,7 @@
(define_insn "*truncsisf2_vfp"
[(set (match_operand:SI 0 "s_register_operand" "=w")
(fix:SI (fix:SF (match_operand:SF 1 "s_register_operand" "w"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"ftosizs%?\\t%0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@@ -612,7 +807,7 @@
(define_insn "*truncsidf2_vfp"
[(set (match_operand:SI 0 "s_register_operand" "=w")
(fix:SI (fix:DF (match_operand:DF 1 "s_register_operand" "w"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"ftosizd%?\\t%0, %P1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@@ -622,7 +817,7 @@
(define_insn "fixuns_truncsfsi2"
[(set (match_operand:SI 0 "s_register_operand" "=w")
(unsigned_fix:SI (fix:SF (match_operand:SF 1 "s_register_operand" "w"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"ftouizs%?\\t%0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@@ -631,7 +826,7 @@
(define_insn "fixuns_truncdfsi2"
[(set (match_operand:SI 0 "s_register_operand" "=w")
(unsigned_fix:SI (fix:DF (match_operand:DF 1 "s_register_operand" "w"))))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"ftouizd%?\\t%0, %P1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@@ -641,7 +836,7 @@
(define_insn "*floatsisf2_vfp"
[(set (match_operand:SF 0 "s_register_operand" "=w")
(float:SF (match_operand:SI 1 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fsitos%?\\t%0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@@ -650,7 +845,7 @@
(define_insn "*floatsidf2_vfp"
[(set (match_operand:DF 0 "s_register_operand" "=w")
(float:DF (match_operand:SI 1 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fsitod%?\\t%P0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@@ -660,7 +855,7 @@
(define_insn "floatunssisf2"
[(set (match_operand:SF 0 "s_register_operand" "=w")
(unsigned_float:SF (match_operand:SI 1 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fuitos%?\\t%0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@@ -669,7 +864,7 @@
(define_insn "floatunssidf2"
[(set (match_operand:DF 0 "s_register_operand" "=w")
(unsigned_float:DF (match_operand:SI 1 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fuitod%?\\t%P0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "f_cvt")]
@@ -681,7 +876,7 @@
(define_insn "*sqrtsf2_vfp"
[(set (match_operand:SF 0 "s_register_operand" "=w")
(sqrt:SF (match_operand:SF 1 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fsqrts%?\\t%0, %1"
[(set_attr "predicable" "yes")
(set_attr "type" "fdivs")]
@@ -690,7 +885,7 @@
(define_insn "*sqrtdf2_vfp"
[(set (match_operand:DF 0 "s_register_operand" "=w")
(sqrt:DF (match_operand:DF 1 "s_register_operand" "w")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fsqrtd%?\\t%P0, %P1"
[(set_attr "predicable" "yes")
(set_attr "type" "fdivd")]
@@ -702,7 +897,7 @@
(define_insn "*movcc_vfp"
[(set (reg CC_REGNUM)
(reg VFPCC_REGNUM))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"fmstat%?"
[(set_attr "conds" "set")
(set_attr "type" "f_flag")]
@@ -712,9 +907,9 @@
[(set (reg:CCFP CC_REGNUM)
(compare:CCFP (match_operand:SF 0 "s_register_operand" "w")
(match_operand:SF 1 "vfp_compare_operand" "wG")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"#"
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
[(set (reg:CCFP VFPCC_REGNUM)
(compare:CCFP (match_dup 0)
(match_dup 1)))
@@ -727,9 +922,9 @@
[(set (reg:CCFPE CC_REGNUM)
(compare:CCFPE (match_operand:SF 0 "s_register_operand" "w")
(match_operand:SF 1 "vfp_compare_operand" "wG")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"#"
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
[(set (reg:CCFPE VFPCC_REGNUM)
(compare:CCFPE (match_dup 0)
(match_dup 1)))
@@ -742,9 +937,9 @@
[(set (reg:CCFP CC_REGNUM)
(compare:CCFP (match_operand:DF 0 "s_register_operand" "w")
(match_operand:DF 1 "vfp_compare_operand" "wG")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"#"
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
[(set (reg:CCFP VFPCC_REGNUM)
(compare:CCFP (match_dup 0)
(match_dup 1)))
@@ -757,9 +952,9 @@
[(set (reg:CCFPE CC_REGNUM)
(compare:CCFPE (match_operand:DF 0 "s_register_operand" "w")
(match_operand:DF 1 "vfp_compare_operand" "wG")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"#"
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
[(set (reg:CCFPE VFPCC_REGNUM)
(compare:CCFPE (match_dup 0)
(match_dup 1)))
@@ -775,7 +970,7 @@
[(set (reg:CCFP VFPCC_REGNUM)
(compare:CCFP (match_operand:SF 0 "s_register_operand" "w,w")
(match_operand:SF 1 "vfp_compare_operand" "w,G")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"@
fcmps%?\\t%0, %1
fcmpzs%?\\t%0"
@@ -787,7 +982,7 @@
[(set (reg:CCFPE VFPCC_REGNUM)
(compare:CCFPE (match_operand:SF 0 "s_register_operand" "w,w")
(match_operand:SF 1 "vfp_compare_operand" "w,G")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"@
fcmpes%?\\t%0, %1
fcmpezs%?\\t%0"
@@ -799,7 +994,7 @@
[(set (reg:CCFP VFPCC_REGNUM)
(compare:CCFP (match_operand:DF 0 "s_register_operand" "w,w")
(match_operand:DF 1 "vfp_compare_operand" "w,G")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"@
fcmpd%?\\t%P0, %P1
fcmpzd%?\\t%P0"
@@ -811,7 +1006,7 @@
[(set (reg:CCFPE VFPCC_REGNUM)
(compare:CCFPE (match_operand:DF 0 "s_register_operand" "w,w")
(match_operand:DF 1 "vfp_compare_operand" "w,G")))]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"@
fcmped%?\\t%P0, %P1
fcmpezd%?\\t%P0"
@@ -827,7 +1022,7 @@
[(set (match_operand:BLK 0 "memory_operand" "=m")
(unspec:BLK [(match_operand:DF 1 "s_register_operand" "w")]
UNSPEC_PUSH_MULT))])]
- "TARGET_ARM && TARGET_HARD_FLOAT && TARGET_VFP"
+ "TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP"
"* return vfp_output_fstmd (operands);"
[(set_attr "type" "f_stored")]
)
diff --git a/gcc/doc/extend.texi b/gcc/doc/extend.texi
index 7e9aa11..3d05ad4 100644
--- a/gcc/doc/extend.texi
+++ b/gcc/doc/extend.texi
@@ -1,5 +1,5 @@
@c Copyright (C) 1988, 1989, 1992, 1993, 1994, 1996, 1998, 1999, 2000,
-@c 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
+@c 2001, 2002, 2003, 2004, 2005, 2006, 2007 Free Software Foundation, Inc.
@c This is part of the GCC manual.
@c For copying conditions, see the file gcc.texi.
@@ -1965,6 +1965,9 @@ void f () __attribute__ ((interrupt ("IRQ")));
Permissible values for this parameter are: IRQ, FIQ, SWI, ABORT and UNDEF@.
+On ARMv7-M the interrupt type is ignored, and the attribute means that the
+function may be called with a word-aligned stack pointer.
+
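For reference, a minimal sketch of the attribute described above in use. This is illustrative only and not part of the patch; the handler name and body are hypothetical.

    /* An IRQ handler using GCC's ARM "interrupt" attribute.  On ARMv7-M
       the "IRQ" argument is ignored; the attribute only tells the
       compiler that the function may be entered with a word-aligned
       (not necessarily doubleword-aligned) stack pointer.  */
    void uart_irq_handler (void) __attribute__ ((interrupt ("IRQ")));

    void
    uart_irq_handler (void)
    {
      /* Acknowledge and service the interrupt source here.  */
    }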
@item interrupt_handler
@cindex interrupt handler functions on the Blackfin, m68k, H8/300 and SH processors
Use this attribute on the Blackfin, m68k, H8/300, H8/300H, H8S, and SH to
diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 2101192..0b1ab49 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -1,5 +1,6 @@
@c Copyright (C) 1988, 1989, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999,
-@c 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
+@c 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007
+@c Free Software Foundation, Inc.
@c This is part of the GCC manual.
@c For copying conditions, see the file gcc.texi.
@@ -7605,8 +7606,9 @@ assembly code. Permissible names are: @samp{arm2}, @samp{arm250},
@samp{arm10tdmi}, @samp{arm1020t}, @samp{arm1026ej-s},
@samp{arm10e}, @samp{arm1020e}, @samp{arm1022e},
@samp{arm1136j-s}, @samp{arm1136jf-s}, @samp{mpcore}, @samp{mpcorenovfp},
-@samp{arm1176jz-s}, @samp{arm1176jzf-s}, @samp{xscale}, @samp{iwmmxt},
-@samp{ep9312}.
+@samp{arm1156t2-s}, @samp{arm1176jz-s}, @samp{arm1176jzf-s},
+@samp{cortex-a8}, @samp{cortex-r4}, @samp{cortex-m3},
+@samp{xscale}, @samp{iwmmxt}, @samp{ep9312}.
@itemx -mtune=@var{name}
@opindex mtune
@@ -7627,7 +7629,8 @@ assembly code. This option can be used in conjunction with or instead
of the @option{-mcpu=} option. Permissible names are: @samp{armv2},
@samp{armv2a}, @samp{armv3}, @samp{armv3m}, @samp{armv4}, @samp{armv4t},
@samp{armv5}, @samp{armv5t}, @samp{armv5te}, @samp{armv6}, @samp{armv6j},
-@samp{iwmmxt}, @samp{ep9312}.
+@samp{armv6t2}, @samp{armv6z}, @samp{armv6zk}, @samp{armv7}, @samp{armv7-a},
+@samp{armv7-r}, @samp{armv7-m}, @samp{iwmmxt}, @samp{ep9312}.
@item -mfpu=@var{name}
@itemx -mfpe=@var{number}
@@ -7745,8 +7748,11 @@ and has length @code{((pc[-3]) & 0xff000000)}.
@item -mthumb
@opindex mthumb
-Generate code for the 16-bit Thumb instruction set. The default is to
+Generate code for the Thumb instruction set. The default is to
use the 32-bit ARM instruction set.
+This option automatically enables either 16-bit Thumb-1 or
+mixed 16/32-bit Thumb-2 instructions based on the @option{-mcpu=@var{name}}
+and @option{-march=@var{name}} options.
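A hedged illustration of the behaviour added here: the same @option{-mthumb} flag yields Thumb-1 or Thumb-2 code depending on the selected core. The driver name and CPU choices below are assumptions for the example, not taken from this patch.

    /* thumb_demo.c -- compile once per target to compare output:

         arm-none-eabi-gcc -mthumb -mcpu=arm7tdmi  -O2 -S thumb_demo.c   (Thumb-1)
         arm-none-eabi-gcc -mthumb -mcpu=cortex-m3 -O2 -S thumb_demo.c   (Thumb-2)

       The source is unchanged; only -mcpu (or -march) decides which
       Thumb variant -mthumb enables.  */
    int
    triple (int x)
    {
      return 3 * x;
    }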
@item -mtpcs-frame
@opindex mtpcs-frame