While in some cases deriving an AT&T-style suffix from an Intel syntax
memory operand size specifier is necessary, in many cases this is not
only pointless, but has led to the introduction of various workarounds:
Excessive use of IgnoreSize and NoRex64 as well as the ToDword and
ToQword attributes. Suppress suffix derivation when we can clearly tell
that the memory operand's size isn't going to be needed to infer the
possible need for the low byte/word opcode bit or an operand size prefix
(0x66 or REX.W).
As a result ToDword and ToQword can be dropped entirely, and a fair
number of IgnoreSize and NoRex64 uses can be removed as well. Note that
IgnoreSize needs to remain on legacy encoded SIMD insns with GPR
operand, to avoid emitting an operand size prefix in 16-bit mode. (Since
16-bit code using SIMD insns isn't well tested, clone an existing
testcase just enough to cover a few insns which are potentially
problematic but are being touched here.)
Note that while folding the VCVT{,T}S{S,D}2SI templates, VCVT{,T}SH2SI
isn't included there. This is to fulfill the request of not allowing L
and Q suffixes there, despite the inconsistency with VCVT{,T}S{S,D}2SI.
|
|
Many of the vector conversion insns come with X/Y/Z suffixed forms, for
disambiguation purposes in AT&T syntax. All of these groups follow
certain patterns. Introduce "xy" and "xyz" templates to reduce
redundancy.
To facilitate using a uniform name for both AVX and AVX512, further
introduce a means to purge a previously defined template: A standalone
<name> will be recognized to have this effect.
Note that in the course of the conversion VFPCLASSPH is properly split
to separate AT&T and Intel syntax forms, matching VFPCLASSP{S,D} and
yielding the intended "ambiguous operand size" diagnostic in Intel mode.
|
|
Many of the vector integer insns come in byte/word element pairs. Most
of these pairs follow certain encoding patterns. Introduce a "bw"
template to reduce redundancy.
Note that in the course of the conversion
- the AVX VPEXTRW template which is not being touched needs to remain
ahead of the new "combined" ones, as (a) this should be tried first
when matching insns against templates and (b) its Load attribute
requires it to be first,
- this adds a benign/meaningless IgnoreSize attribute to the memory form
of PEXTRB; it didn't seem worth avoiding this.
|
|
The AVX2 gather ones are nicely grouped - do the same for the various
AVX512 scatter/gather ones. On the moved lines also convert EVex=<n> to
EVex<N>.
|
|
Many of the vector integer insns come in dword/qword element pairs. Most
of these pairs follow certain encoding patterns. Introduce a "dq"
template to reduce redundancy.
Note that in the course of the conversion
- a few otherwise untouched templates are moved, so they end up next to
their siblings,
- drop an unhelpful Cpu64 from the GPR form of VPBROADCASTQ, matching
what we already have for KMOVQ - the diagnostic is better this way for
insns with multiple forms (i.e. the same Cpu64 attributes on {,V}MOVQ,
{,V}PEXTRQ, and {,V}PINSRQ are useful to keep),
- this adds benign/meaningless IgnoreSize attributes to the GPR forms of
KMOVD and VPBROADCASTD; it didn't seem worth avoiding this.
|
|
The vast majority of vector FP insns come in single/double pairs. Many
pairs follow certain encoding patterns. Introduce an "sd" template to
reduce redundancy. Similarly, to further cover similarities between
AVX512F and AVX512-FP16, introduce an "sdh" template.
For element-size Disp8 shift generalize i386-gen's broadcast size
determination, allowing Disp8MemShift to be specified without an operand
in the affected templates. While doing the adjustment also
eliminate an unhelpful (lost information) diagnostic combined with a use
after free in what is now get_element_size().
Note that in the course of the conversion
- the AVX512F form of VMOVUPD has a stray (leftover) Load attribute
dropped,
- VMOVSH has a benign IgnoreSize added (the attribute is still strictly
necessary for VMOVSD, and necessary for VMOVSS as long as we permit
strange combinations like "-march=i286+avx"),
- VFPCLASSPH is properly split to separate AT&T and Intel syntax forms,
matching VFPCLASSP{S,D}.
|
|
It is unclear to me why the corresponding MOV (no Q suffix) can be
issued without REX.W, but MOVQ has to have that prefix (bit). Add
NoRex64 and in exchange drop Size64.
|
|
The non-SSE2AVX form of the SIMD variant of the instruction needlessly
has the (still multi-purpose) IgnoreSize attribute. All other similar
SSE2 insns use NoRex64 instead. Make this consistent, noting that the
SSE2AVX form can't have the same change made - there the memory operand
doesn't at the same time permit RegXMM (which the logic uses when deciding
whether a Q suffix is okay outside of 64-bit mode).
|
|
While the other three variants each differ in attributes and hence can't
be folded, these two pairs actually can be (and were previously
overlooked). This effectively matches their AVX512VL counterparts, which
are also expressed as a single template.
|
|
While using the x/y/z suffix isn't necessary in this case, it is still
odd that these forms don't support broadcast (unlike their AVX512F /
AVX512DQ counterparts). The lack thereof can e.g. make macro-ized
programming more difficult.
|
|
One more place where pre-existing templates should have been taken as a
basis: In Intel syntax we want to consistently issue an "ambiguous
operand size" error when a size-less memory operand is specified for an
insn where register use alone isn't sufficient for disambiguation.
|
|
Just like all Size64 insns are marked Cpu64, all Size32 insns ought to
be marked Cpu386.
|
|
First of all rename the meanwhile misleading Opcode_SIMD_FloatD, as it
has also been used for KMOV* and BNDMOV. Then simplify the condition
selecting which form of "reversing" to use - except for the MOV to/from
control/debug/test registers all extended opcode space insns use bit 0
(rather than bit 1) to indicate the direction (from/to memory) of an
operation. With that, D can simply be set on the first of the two
templates, while the other can be dropped.
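As a rough illustration of that direction-bit rule (the opcode values are
the architectural ones; the helper is a made-up sketch, not gas code):

  /* Base opcode space uses bit 1 as the direction bit, e.g. MOV
     r/m,reg is 0x88 while MOV reg,r/m is 0x8A.  Extended opcode
     spaces use bit 0, e.g. MOVUPS is 0x0F10 as a load and 0x0F11
     as a store.  */
  static unsigned int toggle_direction (unsigned int opcode, int extended)
  {
    return opcode ^ (extended ? 1 : 2);
  }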
|
|
By mistake it was permitted to be used from the very introduction of XOP
support.
|
|
Without it in 16-bit mode a pointless operand size prefix would be
emitted.
|
|
It's entirely unclear why some of the KeyLocker insns had NoRex64 on
them - there's nothing here which could cause emission of REX.W (except
of course a user-specified "rex.w", which we ought to honor anyway).
|
|
A standalone (without SAE) StaticRounding attribute is meaningless, and
indeed all other similar insns have ATTSyntax there instead. I can only
assume this was some strange copy-and-paste mistake.
|
|
I clearly screwed up in 6ff00b5e12e7 ("x86/Intel: correct permitted
operand sizes for AVX512 scatter/gather") giving all AVX512F scatter
insns Dword element size. Update testcases (also their gather parts),
utilizing that there previously were two identical lines each (for no
apparent reason).
|
|
Both forms were missing VexW0 (thus allowing Evex.W=1 to be encoded by
suitable means, which would cause #UD). The memory operand form further
was using the wrong Masking value, thus allowing zeroing-masking to be
encoded for the store form (which would again cause #UD).
|
|
This saves quite a number of shift instructions: The "operands" field
can now be retrieved by just masking (no shift), and extracting the
"extension_opcode" field now only requires a (signed) right shift, with
no prerequisite left shift. (Of course there may be architectures where, in a
cross build, there might be no difference at all, e.g. when there are
suitable bitfield extraction insns.)
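A minimal sketch of the idea, with invented field widths (the real fields
live in struct insn_template):

  #define OPERANDS_BITS  4          /* invented width */
  #define EXTOP_SHIFT    (32 - 4)   /* extension_opcode in the top 4 bits */

  static unsigned int get_operands (int packed)
  {
    return packed & ((1 << OPERANDS_BITS) - 1);  /* mask, no shift */
  }

  static int get_extension_opcode (int packed)
  {
    /* One arithmetic (signed) right shift, with no positioning left
       shift needed first, since the field sits at the top end.  */
    return packed >> EXTOP_SHIFT;
  }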
|
|
This once again allows reducing redundancy in (and size of) the opcode
table.
Don't go as far as also making D work on the two 5-operand XOP insns:
This would significantly complicate the code, as there the first
(immediate) operand would need special treatment in several places.
Note that the .s suffix isn't being given any effect here, as it is
deprecated. Neither the {load} nor the {store} pseudo prefix makes sense
here either, as the respective operands are inputs (loads) only anyway,
regardless of order. Hence there is (as before) no way for the
programmer to request the alternative encoding to be used for register-
only insns.
Note further that it is always the first original template which is
retained (and altered), to make sure the same encoding as before is
used for register-only insns. This has the slightly odd (but pre-
existing) effect of XOP register-only insns having XOP.W clear, but FMA4
ones having VEX.W set.
|
|
The only case where 64-bit code uses non-sign-extended (can also be
considered zero-extended) displacements is when an address size override
is in place for a memory operand (i.e. particularly excluding
displacements of direct branches, which - if at all - are controlled by
operand size, and then are still sign-extended, just from 16 bits).
Hence the distinction in templates is unnecessary, allowing code to be
simplified in a number of places. The only place where logic becomes
more complicated is when signed-ness of relocations is determined in
output_disp().
The other caveat is that Disp64 cannot be specified anymore in an insn
template at the same time as Disp32. Unlike for non-64-bit mode,
templates don't specify displacements for both possible addressing
modes; the necessary adjustment to the expected ones has already been
done in match_template() anyway (but of course the logic there needs
tweaking now). Hence the single template so far doing so is split.
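A worked illustration of the extension rule described above (the helper
name is made up; the behavior is the architectural one):

  #include <stdint.h>

  /* In 64-bit mode a 32-bit displacement is sign-extended, except
     under a 0x67 address size override, where the whole address
     computation is 32-bit and the displacement is effectively
     zero-extended.  */
  static uint64_t effective_disp (int32_t disp, int addr32_override)
  {
    return addr32_override ? (uint64_t) (uint32_t) disp   /* zero-extend */
                           : (uint64_t) (int64_t) disp;   /* sign-extend */
  }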
|
|
Presumably this being there was a result of taking CALL as a reference
when adding the RTM insns. But with No_qSuf the attribute has no effect.
|
|
As a preparatory step to allowing proper non-operand forms of specifying
embedded rounding / SAE, convert the internal representation to non-
operand form. While retaining properties (and in a few cases perhaps
providing more meaningful diagnostics), this means doing away with a few
hundred standalone templates, thus - as a nice side effect - reducing
memory consumption / cache occupancy.
|
|
This also was mistakenly flagged as Evex.128.
|
|
These were mistakenly flagged as Evex.128. Getting the LLIG status right
for insns allowing for SAE is a prereq for planned further work.
|
|
Like VFPCLASSP{S,D} it has only a single operand allowing multiple
sizes, hence there are no pairs of operands to check for consistent
size.
|
|
By slightly relaxing the checking in operand_type_register_match() we
can fold the vector shift insns with an XMM source as well. While
strictly speaking an overlap in just one size (see the code comment) is
not enough (both operands could have multiple sizes with just a single
common one), this is good enough for all templates we have, or which
could sensibly / usefully appear (within the scope of the present
operand matching model).
Tightening this a little would be possible, but would require broadcast
related information to be passed into the function.
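Schematically, with size sets as bitmasks (names invented; this is not
the actual operand_type_register_match() code):

  #define SZ_XMM (1u << 0)
  #define SZ_YMM (1u << 1)
  #define SZ_ZMM (1u << 2)

  /* The relaxed check: accept when the permitted size sets of the two
     operands intersect at all.  As noted above, this could in principle
     wrongly accept operands overlapping in only one of several sizes,
     but no such template exists.  */
  static int sizes_overlap (unsigned int a, unsigned int b)
  {
    return (a & b) != 0;
  }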
|
|
They have only a single operand allowing multiple sizes, hence there are
no pairs of operands to check for consistent size.
|
|
Like for AVX512VL we can make the handling of operand sizes a little
more flexible to allow reducing the number of templates we have.
|
|
This was only rudimentary support anyway; none of the sub-architecture
specific insns were ever supported.
|
|
To avoid issues like that addressed by 6e3e5c9e4181 ("x86: extend SSE
check to PCLMULQDQ, AES, and GFNI insns"), base the check on opcode
attributes and operand types.
|
|
With the introduction of CpuPOPCNT the NoAVX attribute has become
meaningless for POPCNT.
|
|
As already indicated in a remark when introducing these templates, the
"commutative" attribute is ignored for legacy encoding templates. Hence
it is possible to shorten a number of templates by specifying C directly
rather than through a template parameter. I think this helps readability
a bit.
|
|
The operand ordering portion of the mnemonics repeats, causing a flurry
of almost identical templates. Abstract this out.
|
|
The result of running etc/update-copyright.py --this-year, fixing all
the files whose mode is changed by the script, plus a build with
--enable-maintainer-mode --enable-cgen-maint=yes, then checking
out */po/*.pot which we don't update frequently.
The copy of cgen used had commit d1dd5fcc38ead reverted, as that commit
breaks building of the bpf opcodes files.
|
|
Intel AVX512 FP16 instructions use maps 3, 5 and 6. Maps 5 and 6 use 3 bits
in the EVEX.mmm field (0b101, 0b110). Map 5 is for instructions that were FP32
in map 1 (0Fxx). Map 6 is for instructions that were FP32 in map 2 (0F38xx).
There are some exceptions to this rule. Some things in map 1 (0Fxx) with imm8
operands predated our current conventions; those instructions moved to map 3.
FP32 instructions in map 3 (0F3Axx) found new opcodes in map 3 for FP16,
because map 3 is very sparsely populated. Most of the FP16 instructions
share opcodes and prefix (EVEX.pp) bits with the related FP32 operations.
Intel AVX512 FP16 instructions have new displacement scaling rules; please
refer to the public Software Developer's Manual for detailed information.
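The map assignment described above amounts to the following (an
illustrative decoder sketch; the mmm values are the architectural ones):

  /* EVEX.mmm selects the opcode map.  */
  static const char *evex_map (unsigned int mmm)
  {
    switch (mmm & 7)
      {
      case 1: return "map 1 (0F)";
      case 2: return "map 2 (0F38)";
      case 3: return "map 3 (0F3A)";
      case 5: return "map 5 (FP16 counterparts of map 1)";  /* 0b101 */
      case 6: return "map 6 (FP16 counterparts of map 2)";  /* 0b110 */
      default: return "reserved";
      }
  }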
gas/
2021-08-05 Igor Tsimbalist <igor.v.tsimbalist@intel.com>
H.J. Lu <hongjiu.lu@intel.com>
Wei Xiao <wei3.xiao@intel.com>
Lili Cui <lili.cui@intel.com>
* config/tc-i386.c (struct Broadcast_Operation): Adjust comment.
(cpu_arch): Add .avx512_fp16.
(cpu_noarch): Add noavx512_fp16.
(pte): Add evexmap5 and evexmap6.
(build_evex_prefix): Handle EVEXMAP5 and EVEXMAP6.
(check_VecOperations): Handle {1to32}.
(check_VecOperands): Handle CheckRegNumb.
(check_word_reg): Handle Toqword.
(i386_error): Add invalid_dest_and_src_register_set.
(match_template): Handle invalid_dest_and_src_register_set.
* doc/c-i386.texi: Document avx512_fp16, noavx512_fp16.
opcodes/
2021-08-05 Igor Tsimbalist <igor.v.tsimbalist@intel.com>
H.J. Lu <hongjiu.lu@intel.com>
Wei Xiao <wei3.xiao@intel.com>
Lili Cui <lili.cui@intel.com>
* i386-dis.c (EXwScalarS): New.
(EXxh): Ditto.
(EXxhc): Ditto.
(EXxmmqh): Ditto.
(EXxmmqdh): Ditto.
(EXEvexXwb): Ditto.
(DistinctDest_Fixup): Ditto.
(enum): Add xh_mode, evex_half_bcst_xmmqh_mode, evex_half_bcst_xmmqdh_mode
and w_swap_mode.
(enum): Add PREFIX_EVEX_0F3A08_W_0, PREFIX_EVEX_0F3A0A_W_0,
PREFIX_EVEX_0F3A26, PREFIX_EVEX_0F3A27, PREFIX_EVEX_0F3A56,
PREFIX_EVEX_0F3A57, PREFIX_EVEX_0F3A66, PREFIX_EVEX_0F3A67,
PREFIX_EVEX_0F3AC2, PREFIX_EVEX_MAP5_10, PREFIX_EVEX_MAP5_11,
PREFIX_EVEX_MAP5_1D, PREFIX_EVEX_MAP5_2A, PREFIX_EVEX_MAP5_2C,
PREFIX_EVEX_MAP5_2D, PREFIX_EVEX_MAP5_2E, PREFIX_EVEX_MAP5_2F,
PREFIX_EVEX_MAP5_51, PREFIX_EVEX_MAP5_58, PREFIX_EVEX_MAP5_59,
PREFIX_EVEX_MAP5_5A_W_0, PREFIX_EVEX_MAP5_5A_W_1,
PREFIX_EVEX_MAP5_5B_W_0, PREFIX_EVEX_MAP5_5B_W_1,
PREFIX_EVEX_MAP5_5C, PREFIX_EVEX_MAP5_5D, PREFIX_EVEX_MAP5_5E,
PREFIX_EVEX_MAP5_5F, PREFIX_EVEX_MAP5_78, PREFIX_EVEX_MAP5_79,
PREFIX_EVEX_MAP5_7A, PREFIX_EVEX_MAP5_7B, PREFIX_EVEX_MAP5_7C,
PREFIX_EVEX_MAP5_7D_W_0, PREFIX_EVEX_MAP6_13, PREFIX_EVEX_MAP6_56,
PREFIX_EVEX_MAP6_57, PREFIX_EVEX_MAP6_D6, PREFIX_EVEX_MAP6_D7
(enum): Add EVEX_MAP5 and EVEX_MAP6.
(enum): Add EVEX_W_MAP5_5A, EVEX_W_MAP5_5B,
EVEX_W_MAP5_78_P_0, EVEX_W_MAP5_78_P_2, EVEX_W_MAP5_79_P_0,
EVEX_W_MAP5_79_P_2, EVEX_W_MAP5_7A_P_2, EVEX_W_MAP5_7A_P_3,
EVEX_W_MAP5_7B_P_2, EVEX_W_MAP5_7C_P_0, EVEX_W_MAP5_7C_P_2,
EVEX_W_MAP5_7D, EVEX_W_MAP6_13_P_0, EVEX_W_MAP6_13_P_2.
(get_valid_dis386): Properly handle new instructions.
(intel_operand_size): Handle new modes.
(OP_E_memory): Ditto.
(OP_EX): Ditto.
* i386-dis-evex.h: Updated for AVX512_FP16.
* i386-dis-evex-mod.h: Updated for AVX512_FP16.
* i386-dis-evex-prefix.h: Updated for AVX512_FP16.
* i386-dis-evex-reg.h : Updated for AVX512_FP16.
* i386-dis-evex-w.h : Updated for AVX512_FP16.
* i386-gen.c (cpu_flag_init): Add CPU_AVX512_FP16_FLAGS,
and CPU_ANY_AVX512_FP16_FLAGS. Update CPU_ANY_AVX512F_FLAGS
and CPU_ANY_AVX512BW_FLAGS.
(cpu_flags): Add CpuAVX512_FP16.
(opcode_modifiers): Add DistinctDest.
* i386-opc.h (enum): (AVX512_FP16): New.
(i386_opcode_modifier): Add reqdistinctreg.
(i386_cpu_flags): Add cpuavx512_fp16.
(EVEXMAP5): Defined as a macro.
(EVEXMAP6): Ditto.
* i386-opc.tbl: Add Intel AVX512_FP16 instructions.
* i386-init.h: Regenerated.
* i386-tbl.h: Ditto.
|
|
Also change the x86 disassembler to disassemble 0xf1 as int1, instead of
icebp.
gas/
PR gas/28088
* testsuite/gas/i386/opcode.s: Add int1.
* testsuite/gas/i386/x86-64-opcode.s: Add int1, int3 and int.
* testsuite/gas/i386/opcode-intel.d: Updated.
* testsuite/gas/i386/opcode-suffix.d: Likewise.
* testsuite/gas/i386/opcode.d: Likewise.
* testsuite/gas/i386/x86-64-opcode.d: Likewise.
opcodes/
PR gas/28088
* i386-dis.c (dis386): Replace icebp with int1.
* i386-opc.tbl: Add int1.
* i386-tbl.h: Regenerate.
|
|
Over the years I've seen a number of instances where people used
lea (%reg1), %reg2
or
lea symbol, %reg
despite the same thing being expressible via MOV. Since additionally
LEA often has restrictions on the ports it can be issued to, while
MOV typically gets dealt with simply by register renaming, transform to
MOV when possible (without growing opcode size and without altering
involved relocation types).
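The two degenerate cases can be sketched like this (hypothetical
structure and helper names, not the actual tc-i386.c logic):

  #include <stdbool.h>

  struct mem_op { bool base, index, disp; };  /* which parts are present */

  static bool lea_can_become_mov (const struct mem_op *m)
  {
    if (m->base && !m->index && !m->disp)
      return true;   /* lea (%reg1), %reg2  ->  mov %reg1, %reg2 */
    if (!m->base && !m->index)
      return true;   /* lea symbol, %reg    ->  mov $symbol, %reg */
    return false;
  }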
Note that for Mach-O the new 64-bit testcases would fail (for
BFD_RELOC_X86_64_32S not having a representation), and hence get skipped
there.
|
|
st(1) ... st(7) will never be looked up in the hash table, so there's no
point inserting the entries. It's also not really necessary to do a 2nd
hash lookup after parsing the register number, nor is there a real
reason for having both st and st(0) entries. Plus we can easily do away
with the need for st to be first in the table.
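A sketch of the resulting approach (hypothetical names; only "st" stays
in the hash table):

  #include <strings.h>

  struct reg_entry { const char *name; unsigned int num; };
  /* st_regs[0] ... st_regs[7] stand for st(0) ... st(7).  */
  extern const struct reg_entry st_regs[8];

  static const struct reg_entry *parse_st (const char *s, const char **end)
  {
    if (strncasecmp (s, "st", 2) != 0)
      return NULL;
    s += 2;
    if (s[0] == '(' && s[1] >= '0' && s[1] <= '7' && s[2] == ')')
      {
        *end = s + 3;
        return &st_regs[s[1] - '0'];   /* no second hash lookup */
      }
    *end = s;
    return &st_regs[0];                /* bare "st" means st(0) */
  }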
|
|
For a long time there hasn't been a need anymore to keep together all
templates with identical mnemonics. Move the MOVQ and MOVABS ones next
to their MOV counterparts. Move the string forms of CMPSD and MOVSD next
to their CMPS / MOVS counterparts. Re-arrange what so far was the SSE3
section.
This makes it reasonably obvious that MONITOR/MWAIT aren't suitable to
cover by CpuSSE3, but adjusting this is left for another time.
|
|
In commit 79dec6b7baa2 ("x86-64: optimize certain commutative
VEX-encoded insns") I missed the fact that there being subtraction
involved here doesn't matter, as absolute differences get summed up.
|
|
This way not only the overall (source) table size shrinks by quite a
bit and the risk of related templates going out of sync with one another
gets lowered, but also (dis)similarities between neighboring templates
become easier to spot.
Note that for certain SSE2AVX templates this results in benign attribute
changes:
- LDMXCSR and STMXCSR: NoAVX gets set,
- MOVMSKPS, PMOVMSKB, PEXTR{B,W} (register destination), and PINSR{B,W}
(register source): IgnoreSize and NoRex64 get set,
- CVT{DQ,PS}2PD, CVTSD2SS, MOVMSKPD, MOVDDUP, PMOV{S,Z}X{BW,WD,DQ}, and
ROUNDSD: NoRex64 gets set,
- CVTSS2SD, INSERTPS, PEXTRW (memory destination), PINSRW (memory
source), and PMOV{S,Z}X{BD,WQ,BQ}: IgnoreSize gets set.
Similarly the "normal" (non-SSE2AVX)
- non-64-bit CVTS{I,S}2SD forms get NoRex64 set,
- CMP{EQ,ORD,NEQ,UNORD}{P,S}{S,D} forms get C set,
all again in a benign way.
The remaining differences in the generated table are due to re-ordering
of entries in the course of being folded into templates.
|
|
Just like is already done for VEX/XOP/EVEX encoded insns, record the
encoding space information in the respective opcode modifier field. Do
this again without changing the source table, but rather by deriving the
values from their existing source representation.
|
|
While all of MMX, SSE, and SSE2 are included in "generic64", they can be
individually disabled. There are two MOVQ forms lacking respective
attributes. While the MMX one would get refused anyway (due to MMX
registers not recognized with .nommx), the assembler did happily accept
the SSE2 form. Add respective CPU settings to both, paralleling what the
MOVD counterparts have.
|
|
For INVLPGB the operand count was wrong (besides %edx there's also %ecx
which is an input to the insn). In this case I see little sense in
retaining the bogus 2-operand template. Plus swapping of the operands
wasn't properly suppressed for Intel syntax.
For PVALIDATE, RMPADJUST, and RMPUPDATE bogus single operand templates
were specified. These get retained, as the address operand is the only
one really needed to expressed non-default address size, but only for
compatibility reasons. Proper multi-operand insn get introduced and the
testcases get adjusted / extended accordingly.
While at it also drop the redundant definition of __amd64__ - we already
have x86_64 defined (or not) to distinguish 64-bit and non-64-bit cases.
|
|
In the majority of cases we can easily determine the length from the
encoding, irrespective of whether a prefix is specified there as well.
We further don't even need to record the value in the table entries, as
it's easy enough to determine it (without any guesswork, unless an insn
with major opcode 00 appeared that requires a 2nd opcode byte to be
specified explicitly) when installing the chosen template for further
processing.
Should an encoding appear which
- has a major opcode byte of 66, F3, or F2,
- requires a 2nd opcode byte to be specified explicitly,
- doesn't have a mandatory prefix
we'd need to convert all templates presently encoding a mandatory prefix
this way to the Prefix_0X<nn> model to eliminate the respective guessing
i386-gen does.
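The length derivation, and the guessing mentioned above, amount to
something like this sketch (not the actual i386-gen code):

  /* A multi-byte base opcode simply has nonzero upper bytes.  */
  static unsigned int opcode_length (unsigned int base_opcode)
  {
    return base_opcode > 0xffffff ? 4
         : base_opcode > 0xffff   ? 3
         : base_opcode > 0xff     ? 2 : 1;
  }

  /* The guess: a leading 0x66/0xF3/0xF2 byte of a multi-byte opcode
     is taken to be a mandatory prefix.  */
  static unsigned int guess_prefix (unsigned int base_opcode)
  {
    unsigned int len = opcode_length (base_opcode);
    unsigned int top = base_opcode >> (8 * (len - 1));
    return (len > 1 && (top == 0x66 || top == 0xf3 || top == 0xf2))
           ? top : 0;
  }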
|
|
Just like is already done for legacy encoded insns, record the mandatory
prefix information in the respective opcode modifier field. Do this
without changing the source table, but rather by deriving the values from
their existing source representation.
|
|
This is in preparation of opcode_length going away as a field in the
templates. Identify pseudo prefixes by a base opcode of zero instead:
No real prefix has an opcode of zero. This at the same time allows
dropping a curious special case from i386-gen.
Since most attributes are identical for all pseudo prefixes, take the
opportunity and also template them.
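Identification then reduces to a check along these lines (a sketch with
made-up field names):

  struct template_sketch { unsigned int base_opcode; int is_prefix; };

  /* No real prefix encodes as opcode 0x00, so a prefix-classified
     template with a zero base opcode must be a pseudo prefix such as
     {load} or {disp32}.  */
  static int is_pseudo_prefix (const struct template_sketch *t)
  {
    return t->is_prefix && t->base_opcode == 0;
  }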
|
|
In preparation to use PREFIX_0X<nn> attributes also in VEX/XOP/EVEX
encoding templates, renumber the pseudo-enumerators such that their
values can then also be used directly in the respective prefix bit
fields.
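For reference, the renumbering aligns the enumerator values with the
architectural pp encodings (a sketch; enumerator names as in the commit,
values per the SDM):

  enum mandatory_prefix
  {
    PREFIX_NONE = 0,
    PREFIX_0X66 = 1,   /* pp = 01 */
    PREFIX_0XF3 = 2,   /* pp = 10 */
    PREFIX_0XF2 = 3    /* pp = 11 */
  };

  /* A template's mandatory prefix value can thus be copied into the
     VEX/XOP/EVEX pp bit field directly, without translation.  */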
|