author     Richard Sandiford <richard.sandiford@arm.com>  2019-11-08 08:32:19 +0000
committer  Richard Sandiford <rsandifo@gcc.gnu.org>  2019-11-08 08:32:19 +0000
commit     09eb042a8a8ee16e8f23085a175be25c8ef68820
tree       3642ab9cc004670e3d3e8263db46861bfc11363d /gcc/doc
parent     47cc2d4917c7cb351e561dba5768deaa2d42bf8b
Generalise gather and scatter optabs
The gather and scatter optabs required the vector offset to be
the integer equivalent of the vector mode being loaded or stored.
This patch generalises them so that the two vectors can have different
element sizes, although they still need to have the same number of
elements.
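The relaxed requirement can be pictured with a scalar C model (a hypothetical emulation written for this note, not GCC code): a gather whose offset vector has 32-bit elements but whose data vector has 8-bit elements — different element sizes, same element count.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Scalar model of gather_load<m><n> after this change: dst and
   offsets have the same number of elements, but their element sizes
   may differ (8-bit data gathered through 32-bit offsets).
   Hypothetical helper, not a GCC API.  */
static void
gather_u8_s32 (uint8_t *dst, const uint8_t *base,
               const int32_t *offsets, size_t nelts)
{
  for (size_t i = 0; i < nelts; i++)
    dst[i] = base[offsets[i]];  /* load from base + offsets[i] */
}
```

Before the patch, the offset vector had to be the integer equivalent of the loaded mode, so the 32-bit-offset/8-bit-data pairing above was not expressible.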
One consequence of this is that it's possible (if unlikely)
for two IFN_GATHER_LOADs to have the same arguments but different
return types. E.g. the same scalar base and vector of 32-bit offsets
could be used to load 8-bit elements and to load 16-bit elements.
From just looking at the arguments, we could wrongly deduce that
they're equivalent.
I know we saw this happen at one point with IFN_WHILE_ULT,
and we dealt with it there by passing a zero of the return type
as an extra argument. Doing the same here also makes the load
and store functions have the same argument assignment.
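A toy C model (not GCC's actual hashing code; the struct and field names here are invented for illustration) shows the hazard: keying a gather call on its arguments alone makes an 8-bit and a 16-bit load from the same base and offsets compare equal, while including a stand-in for the zero-of-return-type argument keeps them distinct.

```c
#include <assert.h>

/* Invented model of an IFN_GATHER_LOAD call's identity.
   result_elt_bits stands in for the extra zero-of-return-type
   argument that this patch adds.  */
struct gather_key
{
  const void *base;
  const void *offsets;
  int scale;
  unsigned result_elt_bits;
};

/* Comparing arguments only: the pre-patch hazard.  */
static int
args_only_equal (const struct gather_key *a, const struct gather_key *b)
{
  return a->base == b->base && a->offsets == b->offsets
         && a->scale == b->scale;
}

/* Also comparing the typed-zero stand-in: the post-patch behaviour.  */
static int
full_equal (const struct gather_key *a, const struct gather_key *b)
{
  return args_only_equal (a, b) && a->result_elt_bits == b->result_elt_bits;
}
```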
For now this patch should be a no-op, but later SVE patches take
advantage of the new flexibility.
2019-11-08 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* optabs.def (gather_load_optab, mask_gather_load_optab)
(scatter_store_optab, mask_scatter_store_optab): Turn into
conversion optabs, with the offset mode given explicitly.
* doc/md.texi: Update accordingly.
* config/aarch64/aarch64-sve-builtins-base.cc
(svld1_gather_impl::expand): Likewise.
(svst1_scatter_impl::expand): Likewise.
* internal-fn.c (gather_load_direct, scatter_store_direct): Likewise.
(expand_scatter_store_optab_fn): Likewise.
(direct_gather_load_optab_supported_p): Likewise.
(direct_scatter_store_optab_supported_p): Likewise.
(expand_gather_load_optab_fn): Likewise. Expect the mask argument
to be argument 4.
(internal_fn_mask_index): Return 4 for IFN_MASK_GATHER_LOAD.
(internal_gather_scatter_fn_supported_p): Replace the offset sign
argument with the offset vector type. Require the two vector
types to have the same number of elements but allow their element
sizes to be different. Treat the optabs as conversion optabs.
* internal-fn.h (internal_gather_scatter_fn_supported_p): Update
prototype accordingly.
* optabs-query.c (supports_at_least_one_mode_p): Replace with...
(supports_vec_convert_optab_p): ...this new function.
(supports_vec_gather_load_p): Update accordingly.
(supports_vec_scatter_store_p): Likewise.
* tree-vectorizer.h (vect_gather_scatter_fn_p): Take a vec_info.
Replace the offset sign and bits parameters with a scalar type tree.
* tree-vect-data-refs.c (vect_gather_scatter_fn_p): Likewise.
Pass back the offset vector type instead of the scalar element type.
Allow the offset to be wider than the memory elements. Search for
an offset type that the target supports, stopping once we've
reached the maximum of the element size and pointer size.
Update call to internal_gather_scatter_fn_supported_p.
(vect_check_gather_scatter): Update calls accordingly.
When testing a new scale before knowing the final offset type,
check whether the scale is supported for any signed or unsigned
offset type. Check whether the target supports the source and
target types of a conversion before deciding whether to look
through the conversion. Record the chosen offset_vectype.
* tree-vect-patterns.c (vect_get_gather_scatter_offset_type): Delete.
(vect_recog_gather_scatter_pattern): Get the scalar offset type
directly from the gs_info's offset_vectype instead. Pass a zero
of the result type to IFN_GATHER_LOAD and IFN_MASK_GATHER_LOAD.
* tree-vect-stmts.c (check_load_store_masking): Update call to
internal_gather_scatter_fn_supported_p, passing the offset vector
type recorded in the gs_info.
(vect_truncate_gather_scatter_offset): Update call to
vect_check_gather_scatter, leaving it to search for a valid
offset vector type.
(vect_use_strided_gather_scatters_p): Convert the offset to the
element type of the gs_info's offset_vectype.
(vect_get_gather_scatter_ops): Get the offset vector type directly
from the gs_info.
(vect_get_strided_load_store_ops): Likewise.
(vectorizable_load): Pass a zero of the result type to IFN_GATHER_LOAD
and IFN_MASK_GATHER_LOAD.
* config/aarch64/aarch64-sve.md (gather_load<mode>): Rename to...
(gather_load<mode><v_int_equiv>): ...this.
(mask_gather_load<mode>): Rename to...
(mask_gather_load<mode><v_int_equiv>): ...this.
(scatter_store<mode>): Rename to...
(scatter_store<mode><v_int_equiv>): ...this.
(mask_scatter_store<mode>): Rename to...
(mask_scatter_store<mode><v_int_equiv>): ...this.
From-SVN: r277949
Diffstat (limited to 'gcc/doc')
 gcc/doc/md.texi | 34 +-
 1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/gcc/doc/md.texi b/gcc/doc/md.texi
index 19d6893..87bbeb4 100644
--- a/gcc/doc/md.texi
+++ b/gcc/doc/md.texi
@@ -4959,12 +4959,12 @@ for (j = 0; j < GET_MODE_NUNITS (@var{n}); j++)
 This pattern is not allowed to @code{FAIL}.
 
-@cindex @code{gather_load@var{m}} instruction pattern
-@item @samp{gather_load@var{m}}
+@cindex @code{gather_load@var{m}@var{n}} instruction pattern
+@item @samp{gather_load@var{m}@var{n}}
 Load several separate memory locations into a vector of mode @var{m}.
-Operand 1 is a scalar base address and operand 2 is a vector of
-offsets from that base.  Operand 0 is a destination vector with the
-same number of elements as the offset.  For each element index @var{i}:
+Operand 1 is a scalar base address and operand 2 is a vector of mode @var{n}
+containing offsets from that base.  Operand 0 is a destination vector with
+the same number of elements as @var{n}.  For each element index @var{i}:
 
 @itemize @bullet
 @item
@@ -4981,20 +4981,20 @@ load the value at that address into element @var{i} of operand 0.
 The value of operand 3 does not matter if the offsets are already
 address width.
 
-@cindex @code{mask_gather_load@var{m}} instruction pattern
-@item @samp{mask_gather_load@var{m}}
-Like @samp{gather_load@var{m}}, but takes an extra mask operand as
+@cindex @code{mask_gather_load@var{m}@var{n}} instruction pattern
+@item @samp{mask_gather_load@var{m}@var{n}}
+Like @samp{gather_load@var{m}@var{n}}, but takes an extra mask operand as
 operand 5.  Bit @var{i} of the mask is set if element @var{i}
 of the result should be loaded from memory and clear if element @var{i}
 of the result should be set to zero.
 
-@cindex @code{scatter_store@var{m}} instruction pattern
-@item @samp{scatter_store@var{m}}
+@cindex @code{scatter_store@var{m}@var{n}} instruction pattern
+@item @samp{scatter_store@var{m}@var{n}}
 Store a vector of mode @var{m} into several distinct memory locations.
-Operand 0 is a scalar base address and operand 1 is a vector of offsets
-from that base.  Operand 4 is the vector of values that should be stored,
-which has the same number of elements as the offset.  For each element
-index @var{i}:
+Operand 0 is a scalar base address and operand 1 is a vector of mode
+@var{n} containing offsets from that base.  Operand 4 is the vector of
+values that should be stored, which has the same number of elements as
+@var{n}.  For each element index @var{i}:
 
 @itemize @bullet
 @item
@@ -5011,9 +5011,9 @@ store element @var{i} of operand 4 to that address.
 The value of operand 2 does not matter if the offsets are already
 address width.
 
-@cindex @code{mask_scatter_store@var{m}} instruction pattern
-@item @samp{mask_scatter_store@var{m}}
-Like @samp{scatter_store@var{m}}, but takes an extra mask operand as
+@cindex @code{mask_scatter_store@var{m}@var{n}} instruction pattern
+@item @samp{mask_scatter_store@var{m}@var{n}}
+Like @samp{scatter_store@var{m}@var{n}}, but takes an extra mask operand as
 operand 5.  Bit @var{i} of the mask is set if element @var{i}
 of the result should be stored to memory.
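The address computation documented above — extend the offset if it is narrower than an address, multiply by the scale operand, add to the base — can be sketched per element in scalar C. This is my reading of the documented semantics, written as an illustrative helper rather than anything from GCC itself; it returns the byte displacement from the base so that the sign/zero-extension choice is visible.

```c
#include <assert.h>
#include <stdint.h>

/* Per-element byte displacement for a gather/scatter with 32-bit
   offsets and a 64-bit address space: extend the offset as the
   extension-flag operand directs, then multiply by the scale operand.
   The flag is irrelevant once offsets are already address width.
   Illustrative sketch, not GCC code.  */
static int64_t
gather_byte_offset (uint32_t offset, int sign_extend_p, int scale)
{
  int64_t extended = sign_extend_p ? (int64_t) (int32_t) offset
                                   : (int64_t) offset;
  return extended * scale;
}
```

With a non-negative offset the two extensions agree, which is why the documentation says the flag's value does not matter for address-width offsets.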