Diffstat (limited to 'llvm/docs')
-rw-r--r--  llvm/docs/ProgrammersManual.rst  190
-rw-r--r--  llvm/docs/ReleaseNotes.md          6
-rw-r--r--  llvm/docs/YamlIO.rst             136
3 files changed, 169 insertions(+), 163 deletions(-)
diff --git a/llvm/docs/ProgrammersManual.rst b/llvm/docs/ProgrammersManual.rst
index 68490c8..9ddeebd 100644
--- a/llvm/docs/ProgrammersManual.rst
+++ b/llvm/docs/ProgrammersManual.rst
@@ -932,7 +932,7 @@ In some contexts, certain types of errors are known to be benign. For example,
when walking an archive, some clients may be happy to skip over badly formatted
object files rather than terminating the walk immediately. Skipping badly
formatted objects could be achieved using an elaborate handler method, but the
-Error.h header provides two utilities that make this idiom much cleaner: the
+``Error.h`` header provides two utilities that make this idiom much cleaner: the
type inspection method, ``isA``, and the ``consumeError`` function:
.. code-block:: c++
@@ -1073,7 +1073,7 @@ relatively natural use of C++ iterator/loop idioms.
.. _function_apis:
More information on Error and its related utilities can be found in the
-Error.h header file.
+``Error.h`` header file.
Passing functions and other callable objects
--------------------------------------------
@@ -1224,7 +1224,7 @@ Then you can run your pass like this:
Of course, in practice, you should only set ``DEBUG_TYPE`` at the top of a file,
to specify the debug type for the entire module. Be careful that you only do
-this after including Debug.h and not around any #include of headers. Also, you
+this after including ``Debug.h`` and not around any ``#include`` of headers. Also, you
should use names more meaningful than "foo" and "bar", because there is no
system in place to ensure that names do not conflict. If two different modules
use the same string, they will all be turned on when the name is specified.
@@ -1579,18 +1579,18 @@ llvm/ADT/SmallVector.h
``SmallVector<Type, N>`` is a simple class that looks and smells just like
``vector<Type>``: it supports efficient iteration, lays out elements in memory
order (so you can do pointer arithmetic between elements), supports efficient
-push_back/pop_back operations, supports efficient random access to its elements,
+``push_back``/``pop_back`` operations, supports efficient random access to its elements,
etc.
-The main advantage of SmallVector is that it allocates space for some number of
-elements (N) **in the object itself**. Because of this, if the SmallVector is
+The main advantage of ``SmallVector`` is that it allocates space for some number of
+elements (N) **in the object itself**. Because of this, if the ``SmallVector`` is
dynamically smaller than N, no malloc is performed. This can be a big win in
cases where the malloc/free call is far more expensive than the code that
fiddles around with the elements.
This is good for vectors that are "usually small" (e.g. the number of
predecessors/successors of a block is usually less than 8). On the other hand,
-this makes the size of the SmallVector itself large, so you don't want to
+this makes the size of the ``SmallVector`` itself large, so you don't want to
allocate lots of them (doing so will waste a lot of space). As such,
SmallVectors are most useful when on the stack.
@@ -1600,21 +1600,21 @@ omitting the ``N``). This will choose a default number of
inlined elements reasonable for allocation on the stack (for example, trying
to keep ``sizeof(SmallVector<T>)`` around 64 bytes).
-SmallVector also provides a nice portable and efficient replacement for
+``SmallVector`` also provides a nice portable and efficient replacement for
``alloca``.
-SmallVector has grown a few other minor advantages over std::vector, causing
+``SmallVector`` has grown a few other minor advantages over ``std::vector``, causing
``SmallVector<Type, 0>`` to be preferred over ``std::vector<Type>``.
-#. std::vector is exception-safe, and some implementations have pessimizations
- that copy elements when SmallVector would move them.
+#. ``std::vector`` is exception-safe, and some implementations have pessimizations
+ that copy elements when ``SmallVector`` would move them.
-#. SmallVector understands ``std::is_trivially_copyable<Type>`` and uses realloc aggressively.
+#. ``SmallVector`` understands ``std::is_trivially_copyable<Type>`` and uses realloc aggressively.
-#. Many LLVM APIs take a SmallVectorImpl as an out parameter (see the note
+#. Many LLVM APIs take a ``SmallVectorImpl`` as an out parameter (see the note
below).
-#. SmallVector with N equal to 0 is smaller than std::vector on 64-bit
+#. ``SmallVector`` with N equal to 0 is smaller than ``std::vector`` on 64-bit
platforms, since it uses ``unsigned`` (instead of ``void*``) for its size
and capacity.
@@ -1698,11 +1698,11 @@ non-ordered manner.
^^^^^^^^
``std::vector<T>`` is well loved and respected. However, ``SmallVector<T, 0>``
-is often a better option due to the advantages listed above. std::vector is
+is often a better option due to the advantages listed above. ``std::vector`` is
still useful when you need to store more than ``UINT32_MAX`` elements or when
interfacing with code that expects vectors :).
-One worthwhile note about std::vector: avoid code like this:
+One worthwhile note about ``std::vector``: avoid code like this:
.. code-block:: c++
@@ -1749,10 +1749,10 @@ extremely high constant factor, particularly for small data types.
``std::list`` also only supports bidirectional iteration, not random access
iteration.
-In exchange for this high cost, std::list supports efficient access to both ends
+In exchange for this high cost, ``std::list`` supports efficient access to both ends
of the list (like ``std::deque``, but unlike ``std::vector`` or
``SmallVector``). In addition, the iterator invalidation characteristics of
-std::list are stronger than that of a vector class: inserting or removing an
+``std::list`` are stronger than those of a vector class: inserting or removing an
element into the list does not invalidate iterator or pointers to other elements
in the list.
@@ -1895,7 +1895,7 @@ Note that it is generally preferred to *not* pass strings around as ``const
char*``'s. These have a number of problems, including the fact that they
cannot represent embedded nul ("\0") characters, and do not have a length
available efficiently. The general replacement for '``const char*``' is
-StringRef.
+``StringRef``.
For more information on choosing string containers for APIs, please see
:ref:`Passing Strings <string_apis>`.
@@ -1905,41 +1905,41 @@ For more information on choosing string containers for APIs, please see
llvm/ADT/StringRef.h
^^^^^^^^^^^^^^^^^^^^
-The StringRef class is a simple value class that contains a pointer to a
+The ``StringRef`` class is a simple value class that contains a pointer to a
character and a length, and is quite related to the :ref:`ArrayRef
<dss_arrayref>` class (but specialized for arrays of characters). Because
-StringRef carries a length with it, it safely handles strings with embedded nul
+``StringRef`` carries a length with it, it safely handles strings with embedded nul
characters in it, getting the length does not require a strlen call, and it even
has very convenient APIs for slicing and dicing the character range that it
represents.
-StringRef is ideal for passing simple strings around that are known to be live,
-either because they are C string literals, std::string, a C array, or a
-SmallVector. Each of these cases has an efficient implicit conversion to
-StringRef, which doesn't result in a dynamic strlen being executed.
+``StringRef`` is ideal for passing simple strings around that are known to be live,
+either because they are C string literals, ``std::string``, a C array, or a
+``SmallVector``. Each of these cases has an efficient implicit conversion to
+``StringRef``, which doesn't result in a dynamic ``strlen`` being executed.
-StringRef has a few major limitations which make more powerful string containers
+``StringRef`` has a few major limitations which make more powerful string containers
useful:
-#. You cannot directly convert a StringRef to a 'const char*' because there is
- no way to add a trailing nul (unlike the .c_str() method on various stronger
+#. You cannot directly convert a ``StringRef`` to a ``const char*`` because there is
+ no way to add a trailing nul (unlike the ``.c_str()`` method on various stronger
classes).
-#. StringRef doesn't own or keep alive the underlying string bytes.
+#. ``StringRef`` doesn't own or keep alive the underlying string bytes.
As such it can easily lead to dangling pointers, and is not suitable for
- embedding in datastructures in most cases (instead, use an std::string or
+ embedding in data structures in most cases (instead, use an ``std::string`` or
something like that).
-#. For the same reason, StringRef cannot be used as the return value of a
- method if the method "computes" the result string. Instead, use std::string.
+#. For the same reason, ``StringRef`` cannot be used as the return value of a
+ method if the method "computes" the result string. Instead, use ``std::string``.
-#. StringRef's do not allow you to mutate the pointed-to string bytes and it
+#. ``StringRef``'s do not allow you to mutate the pointed-to string bytes and it
doesn't allow you to insert or remove bytes from the range. For editing
operations like this, it interoperates with the :ref:`Twine <dss_twine>`
class.
Because of its strengths and limitations, it is very common for a function to
-take a StringRef and for a method on an object to return a StringRef that points
+take a ``StringRef`` and for a method on an object to return a ``StringRef`` that points
into some string that it owns.
.. _dss_twine:
@@ -1979,25 +1979,25 @@ behavior and will probably crash:
const Twine &Tmp = X + "." + Twine(i);
foo(Tmp);
-... because the temporaries are destroyed before the call. That said, Twine's
-are much more efficient than intermediate std::string temporaries, and they work
-really well with StringRef. Just be aware of their limitations.
+... because the temporaries are destroyed before the call. That said, ``Twine``'s
+are much more efficient than intermediate ``std::string`` temporaries, and they work
+really well with ``StringRef``. Just be aware of their limitations.
.. _dss_smallstring:
llvm/ADT/SmallString.h
^^^^^^^^^^^^^^^^^^^^^^
-SmallString is a subclass of :ref:`SmallVector <dss_smallvector>` that adds some
-convenience APIs like += that takes StringRef's. SmallString avoids allocating
+``SmallString`` is a subclass of :ref:`SmallVector <dss_smallvector>` that adds some
+convenience APIs like ``+=`` that take ``StringRef``'s. ``SmallString`` avoids allocating
memory in the case when the preallocated space is enough to hold its data, and
it calls back to general heap allocation when required. Since it owns its data,
it is very safe to use and supports full mutation of the string.
-Like SmallVector's, the big downside to SmallString is their sizeof. While they
+Like ``SmallVector``'s, the big downside to ``SmallString`` is their sizeof. While they
are optimized for small strings, they themselves are not particularly small.
This means that they work great for temporary scratch buffers on the stack, but
-should not generally be put into the heap: it is very rare to see a SmallString
+should not generally be put into the heap: it is very rare to see a ``SmallString``
as the member of a frequently-allocated heap data structure or returned
by-value.
@@ -2006,18 +2006,18 @@ by-value.
std::string
^^^^^^^^^^^
-The standard C++ std::string class is a very general class that (like
-SmallString) owns its underlying data. sizeof(std::string) is very reasonable
+The standard C++ ``std::string`` class is a very general class that (like
+``SmallString``) owns its underlying data. ``sizeof(std::string)`` is very reasonable
so it can be embedded into heap data structures and returned by-value. On the
-other hand, std::string is highly inefficient for inline editing (e.g.
+other hand, ``std::string`` is highly inefficient for inline editing (e.g.
concatenating a bunch of stuff together) and because it is provided by the
standard library, its performance characteristics depend a lot of the host
standard library (e.g. libc++ and MSVC provide a highly optimized string class,
GCC contains a really slow implementation).
-The major disadvantage of std::string is that almost every operation that makes
+The major disadvantage of ``std::string`` is that almost every operation that makes
them larger can allocate memory, which is slow. As such, it is better to use
-SmallVector or Twine as a scratch buffer, but then use std::string to persist
+``SmallVector`` or ``Twine`` as a scratch buffer, but then use ``std::string`` to persist
the result.
.. _ds_set:
@@ -2035,8 +2035,8 @@ A sorted 'vector'
^^^^^^^^^^^^^^^^^
If you intend to insert a lot of elements, then do a lot of queries, a great
-approach is to use an std::vector (or other sequential container) with
-std::sort+std::unique to remove duplicates. This approach works really well if
+approach is to use an ``std::vector`` (or other sequential container) with
+``std::sort``+``std::unique`` to remove duplicates. This approach works really well if
your usage pattern has these two distinct phases (insert then query), and can be
coupled with a good choice of :ref:`sequential container <ds_sequential>`.
@@ -2102,11 +2102,11 @@ copy-construction, which :ref:`SmallSet <dss_smallset>` and :ref:`SmallPtrSet
llvm/ADT/DenseSet.h
^^^^^^^^^^^^^^^^^^^
-DenseSet is a simple quadratically probed hash table. It excels at supporting
+``DenseSet`` is a simple quadratically probed hash table. It excels at supporting
small values: it uses a single allocation to hold all of the pairs that are
-currently inserted in the set. DenseSet is a great way to unique small values
+currently inserted in the set. ``DenseSet`` is a great way to unique small values
that are not simple pointers (use :ref:`SmallPtrSet <dss_smallptrset>` for
-pointers). Note that DenseSet has the same requirements for the value type that
+pointers). Note that ``DenseSet`` has the same requirements for the value type that
:ref:`DenseMap <dss_densemap>` has.
.. _dss_sparseset:
@@ -2128,12 +2128,12 @@ data structures.
llvm/ADT/SparseMultiSet.h
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-SparseMultiSet adds multiset behavior to SparseSet, while retaining SparseSet's
-desirable attributes. Like SparseSet, it typically uses a lot of memory, but
+``SparseMultiSet`` adds multiset behavior to ``SparseSet``, while retaining ``SparseSet``'s
+desirable attributes. Like ``SparseSet``, it typically uses a lot of memory, but
provides operations that are almost as fast as a vector. Typical keys are
physical registers, virtual registers, or numbered basic blocks.
-SparseMultiSet is useful for algorithms that need very fast
+``SparseMultiSet`` is useful for algorithms that need very fast
clear/find/insert/erase of the entire collection, and iteration over sets of
elements sharing a key. It is often a more efficient choice than using composite
data structures (e.g. vector-of-vectors, map-of-vectors). It is not intended for
@@ -2144,10 +2144,10 @@ building composite data structures.
llvm/ADT/FoldingSet.h
^^^^^^^^^^^^^^^^^^^^^
-FoldingSet is an aggregate class that is really good at uniquing
+``FoldingSet`` is an aggregate class that is really good at uniquing
expensive-to-create or polymorphic objects. It is a combination of a chained
hash table with intrusive links (uniqued objects are required to inherit from
-FoldingSetNode) that uses :ref:`SmallVector <dss_smallvector>` as part of its ID
+``FoldingSetNode``) that uses :ref:`SmallVector <dss_smallvector>` as part of its ID
process.
Consider a case where you want to implement a "getOrCreateFoo" method for a
@@ -2157,14 +2157,14 @@ operands), but we don't want to 'new' a node, then try inserting it into a set
only to find out it already exists, at which point we would have to delete it
and return the node that already exists.
-To support this style of client, FoldingSet perform a query with a
-FoldingSetNodeID (which wraps SmallVector) that can be used to describe the
+To support this style of client, ``FoldingSet`` performs a query with a
+``FoldingSetNodeID`` (which wraps ``SmallVector``) that can be used to describe the
element that we want to query for. The query either returns the element
matching the ID or it returns an opaque ID that indicates where insertion should
take place. Construction of the ID usually does not require heap traffic.
-Because FoldingSet uses intrusive links, it can support polymorphic objects in
-the set (for example, you can have SDNode instances mixed with LoadSDNodes).
+Because ``FoldingSet`` uses intrusive links, it can support polymorphic objects in
+the set (for example, you can have ``SDNode`` instances mixed with ``LoadSDNodes``).
Because the elements are individually allocated, pointers to the elements are
stable: inserting or removing elements does not invalidate any pointers to other
elements.
@@ -2175,7 +2175,7 @@ elements.
^^^^^
``std::set`` is a reasonable all-around set class, which is decent at many
-things but great at nothing. std::set allocates memory for each element
+things but great at nothing. ``std::set`` allocates memory for each element
inserted (thus it is very malloc intensive) and typically stores three pointers
per element in the set (thus adding a large amount of per-element space
overhead). It offers guaranteed log(n) performance, which is not particularly
@@ -2183,12 +2183,12 @@ fast from a complexity standpoint (particularly if the elements of the set are
expensive to compare, like strings), and has extremely high constant factors for
lookup, insertion and removal.
-The advantages of std::set are that its iterators are stable (deleting or
+The advantages of ``std::set`` are that its iterators are stable (deleting or
inserting an element from the set does not affect iterators or pointers to other
elements) and that iteration over the set is guaranteed to be in sorted order.
If the elements in the set are large, then the relative overhead of the pointers
and malloc traffic is not a big deal, but if the elements of the set are small,
-std::set is almost never a good choice.
+``std::set`` is almost never a good choice.
.. _dss_setvector:
@@ -2242,11 +2242,11 @@ produces a lot of malloc traffic. It should be avoided.
llvm/ADT/ImmutableSet.h
^^^^^^^^^^^^^^^^^^^^^^^
-ImmutableSet is an immutable (functional) set implementation based on an AVL
+``ImmutableSet`` is an immutable (functional) set implementation based on an AVL
tree. Adding or removing elements is done through a Factory object and results
-in the creation of a new ImmutableSet object. If an ImmutableSet already exists
+in the creation of a new ``ImmutableSet`` object. If an ``ImmutableSet`` already exists
with the given contents, then the existing one is returned; equality is compared
-with a FoldingSetNodeID. The time and space complexity of add or remove
+with a ``FoldingSetNodeID``. The time and space complexity of add or remove
operations is logarithmic in the size of the original set.
There is no method for returning an element of the set, you can only check for
@@ -2257,11 +2257,11 @@ membership.
Other Set-Like Container Options
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-The STL provides several other options, such as std::multiset and
-std::unordered_set. We never use containers like unordered_set because
+The STL provides several other options, such as ``std::multiset`` and
+``std::unordered_set``. We never use containers like ``unordered_set`` because
they are generally very expensive (each insertion requires a malloc).
-std::multiset is useful if you're not interested in elimination of duplicates,
+``std::multiset`` is useful if you're not interested in elimination of duplicates,
but has all the drawbacks of :ref:`std::set <dss_set>`. A sorted vector
(where you don't delete duplicate entries) or some other approach is almost
always better.
@@ -2282,7 +2282,7 @@ A sorted 'vector'
If your usage pattern follows a strict insert-then-query approach, you can
trivially use the same approach as :ref:`sorted vectors for set-like containers
<dss_sortedvectorset>`. The only difference is that your query function (which
-uses std::lower_bound to get efficient log(n) lookup) should only compare the
+uses ``std::lower_bound`` to get efficient log(n) lookup) should only compare the
key, not both the key and value. This yields the same advantages as sorted
vectors for sets.
@@ -2293,11 +2293,11 @@ llvm/ADT/StringMap.h
Strings are commonly used as keys in maps, and they are difficult to support
efficiently: they are variable length, inefficient to hash and compare when
-long, expensive to copy, etc. StringMap is a specialized container designed to
+long, expensive to copy, etc. ``StringMap`` is a specialized container designed to
cope with these issues. It supports mapping an arbitrary range of bytes to an
arbitrary other object.
-The StringMap implementation uses a quadratically-probed hash table, where the
+The ``StringMap`` implementation uses a quadratically-probed hash table, where the
buckets store a pointer to the heap allocated entries (and some other stuff).
The entries in the map must be heap allocated because the strings are variable
length. The string data (key) and the element object (value) are stored in the
@@ -2305,26 +2305,26 @@ same allocation with the string data immediately after the element object.
This container guarantees the "``(char*)(&Value+1)``" points to the key string
for a value.
-The StringMap is very fast for several reasons: quadratic probing is very cache
+The ``StringMap`` is very fast for several reasons: quadratic probing is very cache
efficient for lookups, the hash value of strings in buckets is not recomputed
-when looking up an element, StringMap rarely has to touch the memory for
+when looking up an element, ``StringMap`` rarely has to touch the memory for
unrelated objects when looking up a value (even when hash collisions happen),
hash table growth does not recompute the hash values for strings already in the
table, and each pair in the map is store in a single allocation (the string data
is stored in the same allocation as the Value of a pair).
-StringMap also provides query methods that take byte ranges, so it only ever
+``StringMap`` also provides query methods that take byte ranges, so it only ever
copies a string if a value is inserted into the table.
-StringMap iteration order, however, is not guaranteed to be deterministic, so
-any uses which require that should instead use a std::map.
+``StringMap`` iteration order, however, is not guaranteed to be deterministic, so
+any uses which require that should instead use a ``std::map``.
.. _dss_indexmap:
llvm/ADT/IndexedMap.h
^^^^^^^^^^^^^^^^^^^^^
-IndexedMap is a specialized container for mapping small dense integers (or
+``IndexedMap`` is a specialized container for mapping small dense integers (or
values that can be mapped to small dense integers) to some other type. It is
internally implemented as a vector with a mapping function that maps the keys
to the dense integer range.
@@ -2338,27 +2338,27 @@ virtual register ID).
llvm/ADT/DenseMap.h
^^^^^^^^^^^^^^^^^^^
-DenseMap is a simple quadratically probed hash table. It excels at supporting
+``DenseMap`` is a simple quadratically probed hash table. It excels at supporting
small keys and values: it uses a single allocation to hold all of the pairs
-that are currently inserted in the map. DenseMap is a great way to map
+that are currently inserted in the map. ``DenseMap`` is a great way to map
pointers to pointers, or map other small types to each other.
-There are several aspects of DenseMap that you should be aware of, however.
-The iterators in a DenseMap are invalidated whenever an insertion occurs,
-unlike map. Also, because DenseMap allocates space for a large number of
+There are several aspects of ``DenseMap`` that you should be aware of, however.
+The iterators in a ``DenseMap`` are invalidated whenever an insertion occurs,
+unlike ``map``. Also, because ``DenseMap`` allocates space for a large number of
key/value pairs (it starts with 64 by default), it will waste a lot of space if
your keys or values are large. Finally, you must implement a partial
-specialization of DenseMapInfo for the key that you want, if it isn't already
-supported. This is required to tell DenseMap about two special marker values
+specialization of ``DenseMapInfo`` for the key that you want, if it isn't already
+supported. This is required to tell ``DenseMap`` about two special marker values
(which can never be inserted into the map) that it needs internally.
-DenseMap's find_as() method supports lookup operations using an alternate key
+``DenseMap``'s ``find_as()`` method supports lookup operations using an alternate key
type. This is useful in cases where the normal key type is expensive to
-construct, but cheap to compare against. The DenseMapInfo is responsible for
+construct, but cheap to compare against. The ``DenseMapInfo`` is responsible for
defining the appropriate comparison and hashing methods for each alternate key
type used.
-DenseMap.h also contains a SmallDenseMap variant, that similar to
+``DenseMap.h`` also contains a ``SmallDenseMap`` variant that, similar to
:ref:`SmallVector <dss_smallvector>` performs no heap allocation until the
number of elements in the template parameter N are exceeded.
@@ -2404,12 +2404,12 @@ further additions.
<map>
^^^^^
-std::map has similar characteristics to :ref:`std::set <dss_set>`: it uses a
+``std::map`` has similar characteristics to :ref:`std::set <dss_set>`: it uses a
single allocation per pair inserted into the map, it offers log(n) lookup with
an extremely large constant factor, imposes a space penalty of 3 pointers per
pair in the map, etc.
-std::map is most useful when your keys or values are very large, if you need to
+``std::map`` is most useful when your keys or values are very large, if you need to
iterate over the collection in sorted order, or if you need stable iterators
into the map (i.e. they don't get invalidated if an insertion or deletion of
another element takes place).
@@ -2419,7 +2419,7 @@ another element takes place).
llvm/ADT/MapVector.h
^^^^^^^^^^^^^^^^^^^^
-``MapVector<KeyT,ValueT>`` provides a subset of the DenseMap interface. The
+``MapVector<KeyT,ValueT>`` provides a subset of the ``DenseMap`` interface. The
main difference is that the iteration order is guaranteed to be the insertion
order, making it an easy (but somewhat expensive) solution for non-deterministic
iteration over maps of pointers.
@@ -2463,12 +2463,12 @@ operations is logarithmic in the size of the original map.
Other Map-Like Container Options
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-The STL provides several other options, such as std::multimap and
-std::unordered_map. We never use containers like unordered_map because
+The STL provides several other options, such as ``std::multimap`` and
+``std::unordered_map``. We never use containers like ``unordered_map`` because
they are generally very expensive (each insertion requires a malloc).
-std::multimap is useful if you want to map a key to multiple values, but has all
-the drawbacks of std::map. A sorted vector or some other approach is almost
+``std::multimap`` is useful if you want to map a key to multiple values, but has all
+the drawbacks of ``std::map``. A sorted vector or some other approach is almost
always better.
.. _ds_bit:
diff --git a/llvm/docs/ReleaseNotes.md b/llvm/docs/ReleaseNotes.md
index 48d2ef1..021f321 100644
--- a/llvm/docs/ReleaseNotes.md
+++ b/llvm/docs/ReleaseNotes.md
@@ -68,6 +68,12 @@ Changes to TableGen
Changes to Interprocedural Optimizations
----------------------------------------
+Changes to Vectorizers
+----------------------------------------
+
+* Added initial support for copyable elements in SLP, which models copyable
+ elements as `add <element>, 0`, i.e. it uses identity constants for missing lanes.
+
Changes to the AArch64 Backend
------------------------------
diff --git a/llvm/docs/YamlIO.rst b/llvm/docs/YamlIO.rst
index 7137c56..420adb8 100644
--- a/llvm/docs/YamlIO.rst
+++ b/llvm/docs/YamlIO.rst
@@ -92,7 +92,7 @@ corresponding denormalization step.
YAML I/O uses a non-invasive, traits based design. YAML I/O defines some
abstract base templates. You specialize those templates on your data types.
For instance, if you have an enumerated type FooBar you could specialize
-ScalarEnumerationTraits on that type and define the enumeration() method:
+ScalarEnumerationTraits on that type and define the ``enumeration()`` method:
.. code-block:: c++
@@ -113,7 +113,7 @@ values and the YAML string representation is only in one place.
This assures that the code for writing and parsing of YAML stays in sync.
To specify a YAML mappings, you define a specialization on
-llvm::yaml::MappingTraits.
+``llvm::yaml::MappingTraits``.
If your native data structure happens to be a struct that is already normalized,
then the specialization is simple. For example:
@@ -131,9 +131,9 @@ then the specialization is simple. For example:
};
-A YAML sequence is automatically inferred if you data type has begin()/end()
-iterators and a push_back() method. Therefore any of the STL containers
-(such as std::vector<>) will automatically translate to YAML sequences.
+A YAML sequence is automatically inferred if your data type has ``begin()``/``end()``
+iterators and a ``push_back()`` method. Therefore any of the STL containers
+(such as ``std::vector<>``) will automatically translate to YAML sequences.
Once you have defined specializations for your data types, you can
programmatically use YAML I/O to write a YAML document:
@@ -195,8 +195,8 @@ Error Handling
==============
When parsing a YAML document, if the input does not match your schema (as
-expressed in your XxxTraits<> specializations). YAML I/O
-will print out an error message and your Input object's error() method will
+expressed in your ``XxxTraits<>`` specializations), YAML I/O
+will print out an error message and your Input object's ``error()`` method will
return true. For instance the following document:
.. code-block:: yaml
@@ -265,8 +265,8 @@ operators to and from the base type. For example:
LLVM_YAML_STRONG_TYPEDEF(uint32_t, MyBarFlags)
This generates two classes MyFooFlags and MyBarFlags which you can use in your
-native data structures instead of uint32_t. They are implicitly
-converted to and from uint32_t. The point of creating these unique types
+native data structures instead of ``uint32_t``. They are implicitly
+converted to and from ``uint32_t``. The point of creating these unique types
is that you can now specify traits on them to get different YAML conversions.
Hex types
@@ -280,15 +280,15 @@ format used by the built-in integer types:
* Hex16
* Hex8
-You can use llvm::yaml::Hex32 instead of uint32_t and the only different will
+You can use ``llvm::yaml::Hex32`` instead of ``uint32_t`` and the only difference will
be that when YAML I/O writes out that type it will be formatted in hexadecimal.
ScalarEnumerationTraits
-----------------------
YAML I/O supports translating between in-memory enumerations and a set of string
-values in YAML documents. This is done by specializing ScalarEnumerationTraits<>
-on your enumeration type and define an enumeration() method.
+values in YAML documents. This is done by specializing ``ScalarEnumerationTraits<>``
+on your enumeration type and defining an ``enumeration()`` method.
For instance, suppose you had an enumeration of CPUs and a struct with it as
a field:
@@ -333,9 +333,9 @@ as a field type:
};
When reading YAML, if the string found does not match any of the strings
-specified by enumCase() methods, an error is automatically generated.
+specified by ``enumCase()`` methods, an error is automatically generated.
When writing YAML, if the value being written does not match any of the values
-specified by the enumCase() methods, a runtime assertion is triggered.
+specified by the ``enumCase()`` methods, a runtime assertion is triggered.
BitValue
@@ -442,10 +442,10 @@ Sometimes for readability a scalar needs to be formatted in a custom way. For
instance your internal data structure may use an integer for time (seconds since
some epoch), but in YAML it would be much nicer to express that integer in
some time format (e.g. 4-May-2012 10:30pm). YAML I/O has a way to support
-custom formatting and parsing of scalar types by specializing ScalarTraits<> on
+custom formatting and parsing of scalar types by specializing ``ScalarTraits<>`` on
your data type. When writing, YAML I/O will provide the native type and
-your specialization must create a temporary llvm::StringRef. When reading,
-YAML I/O will provide an llvm::StringRef of scalar and your specialization
+your specialization must create a temporary ``llvm::StringRef``. When reading,
+YAML I/O will provide an ``llvm::StringRef`` of the scalar and your specialization
must convert that to your native data type. An outline of a custom scalar type
looks like:
@@ -482,15 +482,15 @@ literal block notation, just like the example shown below:
Second line
The YAML I/O library provides support for translating between YAML block scalars
-and specific C++ types by allowing you to specialize BlockScalarTraits<> on
+and specific C++ types by allowing you to specialize ``BlockScalarTraits<>`` on
your data type. The library doesn't provide any built-in support for block
-scalar I/O for types like std::string and llvm::StringRef as they are already
+scalar I/O for types like ``std::string`` and ``llvm::StringRef`` as they are already
supported by YAML I/O and use the ordinary scalar notation by default.
BlockScalarTraits specializations are very similar to the
ScalarTraits specialization - YAML I/O will provide the native type and your
-specialization must create a temporary llvm::StringRef when writing, and
-it will also provide an llvm::StringRef that has the value of that block scalar
+specialization must create a temporary ``llvm::StringRef`` when writing, and
+it will also provide an ``llvm::StringRef`` that has the value of that block scalar
and your specialization must convert that to your native data type when reading.
An example of a custom type with an appropriate specialization of
BlockScalarTraits is shown below:
@@ -524,7 +524,7 @@ Mappings
========
To be translated to or from a YAML mapping for your type T you must specialize
-llvm::yaml::MappingTraits on T and implement the "void mapping(IO &io, T&)"
+``llvm::yaml::MappingTraits`` on ``T`` and implement the ``void mapping(IO &io, T&)``
method. If your native data structures use pointers to a class everywhere,
you can specialize on the class pointer. Examples:
@@ -585,7 +585,7 @@ No Normalization
The ``mapping()`` method is responsible, if needed, for normalizing and
denormalizing. In a simple case where the native data structure requires no
-normalization, the mapping method just uses mapOptional() or mapRequired() to
+normalization, the mapping method just uses ``mapOptional()`` or ``mapRequired()`` to
bind the struct's fields to YAML key names. For example:
.. code-block:: c++
@@ -605,11 +605,11 @@ bind the struct's fields to YAML key names. For example:
Normalization
----------------
-When [de]normalization is required, the mapping() method needs a way to access
+When [de]normalization is required, the ``mapping()`` method needs a way to access
normalized values as fields. To help with this, there is
-a template MappingNormalization<> which you can then use to automatically
+a template ``MappingNormalization<>`` which you can then use to automatically
do the normalization and denormalization. The template is used to create
-a local variable in your mapping() method which contains the normalized keys.
+a local variable in your ``mapping()`` method which contains the normalized keys.
Suppose you have native data type
Polar which specifies a position in polar coordinates (distance, angle):
@@ -629,7 +629,7 @@ is, you want the yaml to look like:
x: 10.3
y: -4.7
-You can support this by defining a MappingTraits that normalizes the polar
+You can support this by defining a ``MappingTraits`` that normalizes the polar
coordinates to x,y coordinates when writing YAML and denormalizes x,y
coordinates into polar when reading YAML.
@@ -667,47 +667,47 @@ coordinates into polar when reading YAML.
};
When writing YAML, the local variable "keys" will be a stack allocated
-instance of NormalizedPolar, constructed from the supplied polar object which
-initializes it x and y fields. The mapRequired() methods then write out the x
+instance of ``NormalizedPolar``, constructed from the supplied polar object which
+initializes its x and y fields. The ``mapRequired()`` methods then write out the x
and y values as key/value pairs.
When reading YAML, the local variable "keys" will be a stack allocated instance
-of NormalizedPolar, constructed by the empty constructor. The mapRequired
+of ``NormalizedPolar``, constructed by the default constructor. The ``mapRequired()``
methods will find the matching key in the YAML document and fill in the x and y
-fields of the NormalizedPolar object keys. At the end of the mapping() method
-when the local keys variable goes out of scope, the denormalize() method will
+fields of the ``NormalizedPolar`` object keys. At the end of the ``mapping()`` method
+when the local keys variable goes out of scope, the ``denormalize()`` method will
automatically be called to convert the read values back to polar coordinates,
-and then assigned back to the second parameter to mapping().
+and then assigned back to the second parameter to ``mapping()``.
In some cases, the normalized class may be a subclass of the native type and
-could be returned by the denormalize() method, except that the temporary
+could be returned by the ``denormalize()`` method, except that the temporary
normalized instance is stack allocated. In these cases, the utility template
-MappingNormalizationHeap<> can be used instead. It just like
-MappingNormalization<> except that it heap allocates the normalized object
-when reading YAML. It never destroys the normalized object. The denormalize()
+``MappingNormalizationHeap<>`` can be used instead. It is just like
+``MappingNormalization<>`` except that it heap allocates the normalized object
+when reading YAML. It never destroys the normalized object. The ``denormalize()``
method can then return ``this``.
Default values
--------------
-Within a mapping() method, calls to io.mapRequired() mean that that key is
+Within a ``mapping()`` method, calls to ``io.mapRequired()`` mean that the key is
required to exist when parsing YAML documents, otherwise YAML I/O will issue an
error.
+On the other hand, keys registered with ``io.mapOptional()`` are not required to
+On the other hand, keys registered with ``io.mapOptional()`` are allowed to not
exist in the YAML document being read. So what value is put in the field
for those optional keys?
There are two steps to how those optional fields are filled in. First, the
-second parameter to the mapping() method is a reference to a native class. That
+second parameter to the ``mapping()`` method is a reference to a native class. That
native class must have a default constructor. Whatever value the default
constructor initially sets for an optional field will be that field's value.
-Second, the mapOptional() method has an optional third parameter. If provided
-it is the value that mapOptional() should set that field to if the YAML document
+Second, the ``mapOptional()`` method has an optional third parameter. If provided
+it is the value that ``mapOptional()`` should set that field to if the YAML document
does not have that key.
There is one important difference between those two ways (default constructor
-and third parameter to mapOptional). When YAML I/O generates a YAML document,
-if the mapOptional() third parameter is used, if the actual value being written
+and third parameter to ``mapOptional()``). When YAML I/O generates a YAML document,
+if the ``mapOptional()`` third parameter is used and the actual value being written
is the same as (using ==) the default value, then that key/value is not written.
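As a sketch (the struct and field names are hypothetical): if a mapping registers ``io.mapOptional("flags", info.flags, 0)`` and ``info.flags`` happens to equal 0 when writing, the generated document contains only the remaining keys, for example:

```yaml
name: inst1
```

Reading that document back then leaves ``flags`` set to the supplied default of 0.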
@@ -715,14 +715,14 @@ Order of Keys
--------------
When writing out a YAML document, the keys are written in the order that the
-calls to mapRequired()/mapOptional() are made in the mapping() method. This
+calls to ``mapRequired()``/``mapOptional()`` are made in the ``mapping()`` method. This
gives you a chance to write the fields in an order that a human reader of
the YAML document would find natural. This may be different from the order
of the fields in the native class.
When reading in a YAML document, the keys in the document can be in any order,
-but they are processed in the order that the calls to mapRequired()/mapOptional()
-are made in the mapping() method. That enables some interesting
+but they are processed in the order that the calls to ``mapRequired()``/``mapOptional()``
+are made in the ``mapping()`` method. That enables some interesting
functionality. For instance, if the first field bound is the cpu and the second
field bound is flags, and the flags are cpu specific, you can programmatically
switch how the flags are converted to and from YAML based on the cpu.
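A hypothetical document of that shape might look like:

```yaml
cpu: x86_64
flags: [ sse2, avx ]
```

Because ``cpu`` is bound before ``flags``, its parsed value is already available when the ``flags`` key is converted.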
@@ -761,7 +761,7 @@ model. Recently, we added support to YAML I/O for checking/setting the optional
tag on a map. Using this functionality it is even possible to support different
mappings, as long as they are convertible.
-To check a tag, inside your mapping() method you can use io.mapTag() to specify
+To check a tag, inside your ``mapping()`` method you can use ``io.mapTag()`` to specify
what the tag should be. This will also add that tag when writing YAML.
Validation
@@ -834,7 +834,7 @@ Sequence
========
To be translated to or from a YAML sequence for your type T you must specialize
-llvm::yaml::SequenceTraits on T and implement two methods:
+``llvm::yaml::SequenceTraits`` on ``T`` and implement two methods:
``size_t size(IO &io, T&)`` and
``T::value_type& element(IO &io, T&, size_t indx)``. For example:
@@ -846,10 +846,10 @@ llvm::yaml::SequenceTraits on T and implement two methods:
static MySeqEl &element(IO &io, MySeq &list, size_t index) { ... }
};
-The size() method returns how many elements are currently in your sequence.
-The element() method returns a reference to the i'th element in the sequence.
-When parsing YAML, the element() method may be called with an index one bigger
-than the current size. Your element() method should allocate space for one
+The ``size()`` method returns how many elements are currently in your sequence.
+The ``element()`` method returns a reference to the i'th element in the sequence.
+When parsing YAML, the ``element()`` method may be called with an index one bigger
+than the current size. Your ``element()`` method should allocate space for one
more element (using the default constructor if the element is a C++ object) and return
a reference to that newly allocated space.
@@ -881,10 +881,10 @@ configuration.
Utility Macros
--------------
-Since a common source of sequences is std::vector<>, YAML I/O provides macros:
-LLVM_YAML_IS_SEQUENCE_VECTOR() and LLVM_YAML_IS_FLOW_SEQUENCE_VECTOR() which
-can be used to easily specify SequenceTraits<> on a std::vector type. YAML
-I/O does not partial specialize SequenceTraits on std::vector<> because that
+Since a common source of sequences is ``std::vector<>``, YAML I/O provides macros:
+``LLVM_YAML_IS_SEQUENCE_VECTOR()`` and ``LLVM_YAML_IS_FLOW_SEQUENCE_VECTOR()`` which
+can be used to easily specify ``SequenceTraits<>`` on a ``std::vector`` type. YAML
+I/O does not partially specialize ``SequenceTraits`` on ``std::vector<>`` because that
would force all vectors to be sequences. An example use of the macros:
.. code-block:: c++
@@ -906,7 +906,7 @@ have need for multiple documents. The top level node in their YAML schema
will be a mapping or sequence. For those cases, the following is not needed.
But for cases where you do want multiple documents, you can specify a
trait for your document list type. The trait has the same methods as
-SequenceTraits but is named DocumentListTraits. For example:
+``SequenceTraits`` but is named ``DocumentListTraits``. For example:
.. code-block:: c++
@@ -919,7 +919,7 @@ SequenceTraits but is named DocumentListTraits. For example:
User Context Data
=================
-When an llvm::yaml::Input or llvm::yaml::Output object is created their
+When an ``llvm::yaml::Input`` or ``llvm::yaml::Output`` object is created, its
constructor takes an optional "context" parameter. This is a pointer to
whatever state information you might need.
@@ -927,8 +927,8 @@ For instance, in a previous example we showed how the conversion type for a
flags field could be determined at runtime based on the value of another field
in the mapping. But what if an inner mapping needs to know some field value
of an outer mapping? That is where the "context" parameter comes in. You
-can set values in the context in the outer map's mapping() method and
-retrieve those values in the inner map's mapping() method.
+can set values in the context in the outer map's ``mapping()`` method and
+retrieve those values in the inner map's ``mapping()`` method.
The context value is just a ``void*``. All your traits which use the context
and operate on your native data types, need to agree what the context value
@@ -939,9 +939,9 @@ traits use to shared context sensitive information.
Output
======
-The llvm::yaml::Output class is used to generate a YAML document from your
+The ``llvm::yaml::Output`` class is used to generate a YAML document from your
in-memory data structures, using traits defined on your data types.
-To instantiate an Output object you need an llvm::raw_ostream, an optional
+To instantiate an Output object you need an ``llvm::raw_ostream``, an optional
context pointer and an optional wrapping column:
.. code-block:: c++
@@ -957,7 +957,7 @@ streaming as YAML is a mapping, scalar, or sequence, then Output assumes you
are generating one document and wraps the mapping output
with "``---``" and trailing "``...``".
-The WrapColumn parameter will cause the flow mappings and sequences to
+The ``WrapColumn`` parameter will cause the flow mappings and sequences to
line-wrap when they go over the supplied column. Pass 0 to completely
suppress the wrapping.
@@ -980,7 +980,7 @@ The above could produce output like:
...
On the other hand, if the top level data structure you are streaming as YAML
-has a DocumentListTraits specialization, then Output walks through each element
+has a ``DocumentListTraits`` specialization, then Output walks through each element
of your DocumentList and generates a "---" before the start of each element
and ends with a "...".
@@ -1008,9 +1008,9 @@ The above could produce output like:
Input
=====
-The llvm::yaml::Input class is used to parse YAML document(s) into your native
+The ``llvm::yaml::Input`` class is used to parse YAML document(s) into your native
data structures. To instantiate an Input
-object you need a StringRef to the entire YAML file, and optionally a context
+object you need a ``StringRef`` to the entire YAML file, and optionally a context
pointer:
.. code-block:: c++
@@ -1024,7 +1024,7 @@ the document(s). If you expect there might be multiple YAML documents in
one file, you'll need to specialize DocumentListTraits on a list of your
document type and stream in that document list type. Otherwise you can
just stream in the document type. Also, you can check if there were
-any syntax errors in the YAML be calling the error() method on the Input
+any syntax errors in the YAML by calling the ``error()`` method on the Input
object. For example:
.. code-block:: c++