Diffstat (limited to 'llvm/docs')
-rw-r--r-- llvm/docs/AMDGPUUsage.rst | 45
-rw-r--r-- llvm/docs/CodingStandards.rst | 36
-rw-r--r-- llvm/docs/DirectX/RootSignatures.rst | 245
-rw-r--r-- llvm/docs/DirectXUsage.rst | 1
-rw-r--r-- llvm/docs/GettingStarted.rst | 2
-rw-r--r-- llvm/docs/LangRef.rst | 38
-rw-r--r-- llvm/docs/ProgrammersManual.rst | 190
-rw-r--r-- llvm/docs/ReleaseNotes.md | 207
-rw-r--r-- llvm/docs/TestingGuide.rst | 4
-rw-r--r-- llvm/docs/YamlIO.rst | 214
10 files changed, 533 insertions, 449 deletions
diff --git a/llvm/docs/AMDGPUUsage.rst b/llvm/docs/AMDGPUUsage.rst index 1935763..d13f95b 100644 --- a/llvm/docs/AMDGPUUsage.rst +++ b/llvm/docs/AMDGPUUsage.rst @@ -677,7 +677,7 @@ the device used to execute the code match the features enabled when generating the code. A mismatch of features may result in incorrect execution, or a reduction in performance. -The target features supported by each processor is listed in +The target features supported by each processor are listed in :ref:`amdgpu-processors`. Target features are controlled by exactly one of the following Clang @@ -783,7 +783,7 @@ description. The AMDGPU target specific information is: Is an AMDGPU processor or alternative processor name specified in :ref:`amdgpu-processor-table`. The non-canonical form target ID allows both the primary processor and alternative processor names. The canonical form - target ID only allow the primary processor name. + target ID only allows the primary processor name. **target-feature** Is a target feature name specified in :ref:`amdgpu-target-features-table` that @@ -793,7 +793,7 @@ description. The AMDGPU target specific information is: ``--offload-arch``. Each target feature must appear at most once in a target ID. The non-canonical form target ID allows the target features to be specified in any order. The canonical form target ID requires the target - features to be specified in alphabetic order. + features to be specified in alphabetical order. .. _amdgpu-target-id-v2-v3: @@ -886,7 +886,7 @@ supported for the ``amdgcn`` target. setup (see :ref:`amdgpu-amdhsa-kernel-prolog-m0`). To convert between a private or group address space address (termed a segment - address) and a flat address the base address of the corresponding aperture + address) and a flat address, the base address of the corresponding aperture can be used. 
For GFX7-GFX8 these are available in the :ref:`amdgpu-amdhsa-hsa-aql-queue` the address of which can be obtained with Queue Ptr SGPR (see :ref:`amdgpu-amdhsa-initial-kernel-execution-state`). For @@ -1186,7 +1186,7 @@ The AMDGPU backend implements the following LLVM IR intrinsics. :ref:`llvm.stackrestore.p5 <int_stackrestore>` Implemented, must use the alloca address space. :ref:`llvm.get.fpmode.i32 <int_get_fpmode>` The natural floating-point mode type is i32. This - implemented by extracting relevant bits out of the MODE + is implemented by extracting relevant bits out of the MODE register with s_getreg_b32. The first 10 bits are the core floating-point mode. Bits 12:18 are the exception mask. On gfx9+, bit 23 is FP16_OVFL. Bitfields not @@ -1266,14 +1266,14 @@ The AMDGPU backend implements the following LLVM IR intrinsics. llvm.amdgcn.permlane16 Provides direct access to v_permlane16_b32. Performs arbitrary gather-style operation within a row (16 contiguous lanes) of the second input operand. - The third and fourth inputs must be scalar values. these are combined into + The third and fourth inputs must be scalar values. These are combined into a single 64-bit value representing lane selects used to swizzle within each row. Currently implemented for i16, i32, float, half, bfloat, <2 x i16>, <2 x half>, <2 x bfloat>, i64, double, pointers, multiples of the 32-bit vectors. llvm.amdgcn.permlanex16 Provides direct access to v_permlanex16_b32. Performs arbitrary gather-style operation across two rows of the second input operand (each row is 16 contiguous - lanes). The third and fourth inputs must be scalar values. these are combined + lanes). The third and fourth inputs must be scalar values. These are combined into a single 64-bit value representing lane selects used to swizzle within each row. Currently implemented for i16, i32, float, half, bfloat, <2 x i16>, <2 x half>, <2 x bfloat>, i64, double, pointers, multiples of the 32-bit vectors. 
@@ -1285,31 +1285,31 @@ The AMDGPU backend implements the following LLVM IR intrinsics. 32-bit vectors. llvm.amdgcn.udot2 Provides direct access to v_dot2_u32_u16 across targets which - support such instructions. This performs unsigned dot product + support such instructions. This performs an unsigned dot product with two v2i16 operands, summed with the third i32 operand. The i1 fourth operand is used to clamp the output. llvm.amdgcn.udot4 Provides direct access to v_dot4_u32_u8 across targets which - support such instructions. This performs unsigned dot product + support such instructions. This performs an unsigned dot product with two i32 operands (holding a vector of 4 8bit values), summed with the third i32 operand. The i1 fourth operand is used to clamp the output. llvm.amdgcn.udot8 Provides direct access to v_dot8_u32_u4 across targets which - support such instructions. This performs unsigned dot product + support such instructions. This performs an unsigned dot product with two i32 operands (holding a vector of 8 4bit values), summed with the third i32 operand. The i1 fourth operand is used to clamp the output. llvm.amdgcn.sdot2 Provides direct access to v_dot2_i32_i16 across targets which - support such instructions. This performs signed dot product + support such instructions. This performs a signed dot product with two v2i16 operands, summed with the third i32 operand. The i1 fourth operand is used to clamp the output. When applicable (e.g. no clamping), this is lowered into v_dot2c_i32_i16 for targets which support it. llvm.amdgcn.sdot4 Provides direct access to v_dot4_i32_i8 across targets which - support such instructions. This performs signed dot product + support such instructions. This performs a signed dot product with two i32 operands (holding a vector of 4 8bit values), summed with the third i32 operand. The i1 fourth operand is used to clamp the output. @@ -1321,7 +1321,7 @@ The AMDGPU backend implements the following LLVM IR intrinsics. 
of this instruction for gfx11 targets. llvm.amdgcn.sdot8 Provides direct access to v_dot8_u32_u4 across targets which - support such instructions. This performs signed dot product + support such instructions. This performs a signed dot product with two i32 operands (holding a vector of 8 4bit values), summed with the third i32 operand. The i1 fourth operand is used to clamp the output. @@ -1401,7 +1401,7 @@ The AMDGPU backend implements the following LLVM IR intrinsics. llvm.amdgcn.atomic.cond.sub.u32 Provides direct access to flat_atomic_cond_sub_u32, global_atomic_cond_sub_u32 and ds_cond_sub_u32 based on address space on gfx12 targets. This - performs subtraction only if the memory value is greater than or + performs a subtraction only if the memory value is greater than or equal to the data value. llvm.amdgcn.s.barrier.signal.isfirst Provides access to the s_barrier_signal_first instruction; @@ -1646,7 +1646,7 @@ The AMDGPU backend supports the following LLVM IR attributes. llvm.amdgcn.queue.ptr intrinsic. Note that unlike the other ABI hint attributes, the queue pointer may be required in situations where the intrinsic call does not directly appear in the program. Some subtargets - require the queue pointer for to handle some addrspacecasts, as well + require the queue pointer to handle some addrspacecasts, as well as the llvm.amdgcn.is.shared, llvm.amdgcn.is.private, llvm.trap, and llvm.debug intrinsics. 
@@ -1947,7 +1947,7 @@ The following describes all emitted function resource usage symbols: callees, contains an indirect call ===================================== ========= ========================================= =============================================================================== -Futhermore, three symbols are additionally emitted describing the compilation +Furthermore, three symbols are additionally emitted describing the compilation unit's worst case (i.e, maxima) ``num_vgpr``, ``num_agpr``, and ``numbered_sgpr`` which may be referenced and used by the aforementioned symbolic expressions. These three symbols are ``amdgcn.max_num_vgpr``, @@ -6358,10 +6358,13 @@ also have to wait on all global memory operations, which is unnecessary. :doc:`Memory Model Relaxation Annotations <MemoryModelRelaxationAnnotations>` can be used as an optimization hint for fences to solve this problem. -The AMDGPU backend recognizes the following tags on fences: +The AMDGPU backend recognizes the following tags on fences to control which address +space a fence can synchronize: -- ``amdgpu-as:local`` - fence only the local address space -- ``amdgpu-as:global``- fence only the global address space +- ``amdgpu-synchronize-as:local`` - for the local address space +- ``amdgpu-synchronize-as:global`` - for the global address space + +Multiple tags can be used at the same time to synchronize with more than one address space. .. note:: @@ -17948,7 +17951,7 @@ set architecture (ISA) version of the assembly program. "AMD" and *arch* should always be equal to "AMDGPU". By default, the assembler will derive the ISA version, *vendor*, and *arch* -from the value of the -mcpu option that is passed to the assembler. +from the value of the ``-mcpu`` option that is passed to the assembler. .. _amdgpu-amdhsa-assembler-directive-amdgpu_hsa_kernel: @@ -17972,7 +17975,7 @@ default value for all keys is 0, with the following exceptions: - *amd_kernel_code_version_minor* defaults to 2. 
- *amd_machine_kind* defaults to 1. - *amd_machine_version_major*, *machine_version_minor*, and - *amd_machine_version_stepping* are derived from the value of the -mcpu option + *amd_machine_version_stepping* are derived from the value of the ``-mcpu`` option that is passed to the assembler. - *kernel_code_entry_byte_offset* defaults to 256. - *wavefront_size* defaults 6 for all targets before GFX10. For GFX10 onwards diff --git a/llvm/docs/CodingStandards.rst b/llvm/docs/CodingStandards.rst index 732227b..2dc3d77 100644 --- a/llvm/docs/CodingStandards.rst +++ b/llvm/docs/CodingStandards.rst @@ -1594,20 +1594,25 @@ Restrict Visibility ^^^^^^^^^^^^^^^^^^^ Functions and variables should have the most restricted visibility possible. + For class members, that means using appropriate ``private``, ``protected``, or -``public`` keyword to restrict their access. For non-member functions, variables, -and classes, that means restricting visibility to a single ``.cpp`` file if it's -not referenced outside that file. +``public`` keyword to restrict their access. + +For non-member functions, variables, and classes, that means restricting +visibility to a single ``.cpp`` file if it is not referenced outside that file. Visibility of file-scope non-member variables and functions can be restricted to the current translation unit by using either the ``static`` keyword or an anonymous -namespace. Anonymous namespaces are a great language feature that tells the C++ +namespace. + +Anonymous namespaces are a great language feature that tells the C++ compiler that the contents of the namespace are only visible within the current translation unit, allowing more aggressive optimization and eliminating the -possibility of symbol name collisions. Anonymous namespaces are to C++ as -``static`` is to C functions and global variables. While ``static`` is available -in C++, anonymous namespaces are more general: they can make entire classes -private to a file. 
+possibility of symbol name collisions. + +Anonymous namespaces are to C++ as ``static`` is to C functions and global +variables. While ``static`` is available in C++, anonymous namespaces are more +general: they can make entire classes private to a file. The problem with anonymous namespaces is that they naturally want to encourage indentation of their body, and they reduce locality of reference: if you see a @@ -1653,10 +1658,17 @@ Avoid putting declarations other than classes into anonymous namespaces: } // namespace -When you are looking at "``runHelper``" in the middle of a large C++ file, -you have no immediate way to tell if this function is local to the file. In -contrast, when the function is marked static, you don't need to cross-reference -faraway places in the file to tell that the function is local. +When you are looking at ``runHelper`` in the middle of a large C++ file, +you have no immediate way to tell if this function is local to the file. + +In contrast, when the function is marked static, you don't need to cross-reference +faraway places in the file to tell that the function is local: + +.. code-block:: c++ + + static void runHelper() { + ... + } Don't Use Braces on Simple Single-Statement Bodies of if/else/loop Statements ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ diff --git a/llvm/docs/DirectX/RootSignatures.rst b/llvm/docs/DirectX/RootSignatures.rst new file mode 100644 index 0000000..e328b4a --- /dev/null +++ b/llvm/docs/DirectX/RootSignatures.rst @@ -0,0 +1,245 @@ +=============== +Root Signatures +=============== + +.. contents:: + :local: + +.. toctree:: + :hidden: + +Overview +======== + +A root signature is used to describe what resources a shader needs access to +and how they're organized and bound in the pipeline. The DirectX Container +(DXContainer) contains a root signature part (RTS0), which stores this +information in a binary format. 
To assist with the construction of, and +interaction with, a root signature, it is represented as metadata +(``dx.rootsignatures``) in the LLVM IR. The metadata can then be converted to +its binary form, as defined in +`llvm/include/llvm/Frontend/HLSL/RootSignatureMetadata.h +<https://github.com/llvm/llvm-project/blob/main/llvm/include/llvm/Frontend/HLSL/RootSignatureMetadata.h>`_. +This document serves as a reference for users interfacing with the metadata +representation of a root signature. + +Metadata Representation +======================= + +Consider the reference root signature below; the following sections describe the +metadata representation of this root signature and the corresponding operands. + +.. code-block:: HLSL + + RootFlags(ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT), + RootConstants(b0, space = 1, num32Constants = 3), + CBV(b1, flags = 0), + StaticSampler( + filter = FILTER_MIN_MAG_POINT_MIP_LINEAR, + addressU = TEXTURE_ADDRESS_BORDER, + ), + DescriptorTable( + visibility = VISIBILITY_ALL, + SRV(t0, flags = DATA_STATIC_WHILE_SET_AT_EXECUTE), + UAV( + numDescriptors = 5, u1, space = 10, offset = 5, + flags = DATA_VOLATILE + ) + ) + +.. note:: + + A root signature does not necessarily have a unique metadata representation. + Further, a malformed root signature can be represented in the metadata format + (e.g. mixing Sampler and non-Sampler descriptor ranges), and so it is the + user's responsibility to verify that it is a well-formed root signature. + +Named Root Signature Table +========================== + +.. code-block:: LLVM + + !dx.rootsignatures = !{!0} + +A named metadata node, ``dx.rootsignatures``, is used to identify the root +signature table. The table itself is a list of references to function/root +signature pairs. + +Function/Root Signature Pair +============================ + +..
code-block:: LLVM + + !1 = !{ptr @main, !2, i32 2 } + +The function/root signature pair associates a function (the first operand) with a +reference to a root signature (the second operand). The root signature version +(the third operand) follows; it is used for validation logic and determines the +binary format. + +Root Signature +============== + +.. code-block:: LLVM + + !2 = !{ !3, !4, !5, !6, !7 } + +The root signature itself simply consists of a list of references to its root +signature elements. + +Root Signature Element +====================== + +A root signature element is identified by the first operand, which is a string. +The following root signature elements are defined: + +================= ====================== +Identifier String Root Signature Element +================= ====================== +"RootFlags" Root Flags +"RootConstants" Root Constants +"RootCBV" Root Descriptor +"RootSRV" Root Descriptor +"RootUAV" Root Descriptor +"StaticSampler" Static Sampler +"DescriptorTable" Descriptor Table +================= ====================== + +Below is listed the representation for each type of root signature element. + +Root Flags +========== + +.. code-block:: LLVM + + !3 = !{ !"RootFlags", i32 1 } + +======================= ==== +Description Type +======================= ==== +`Root Signature Flags`_ i32 +======================= ==== + +.. _Root Signature Flags: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ne-d3d12-d3d12_root_signature_flags + +Root Constants +============== + +.. code-block:: LLVM + + !4 = !{ !"RootConstants", i32 0, i32 1, i32 2, i32 3 } + +==================== ==== +Description Type +==================== ==== +`Shader Visibility`_ i32 +Shader Register i32 +Register Space i32 +Number 32-bit Values i32 +==================== ==== + +..
_Shader Visibility: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ne-d3d12-d3d12_shader_visibility + +Root Descriptor +=============== + +As noted in the table above, the first operand will denote the type of +root descriptor. + +.. code-block:: LLVM + + !5 = !{ !"RootCBV", i32 0, i32 1, i32 0, i32 0 } + +======================== ==== +Description Type +======================== ==== +`Shader Visibility`_ i32 +Shader Register i32 +Register Space i32 +`Root Descriptor Flags`_ i32 +======================== ==== + +.. _Root Descriptor Flags: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ne-d3d12-d3d12_root_descriptor_flags + +Static Sampler +============== + +.. code-block:: LLVM + + !6 = !{ !"StaticSampler", i32 1, i32 4, ... }; remaining operands omitted for space + +==================== ===== +Description Type +==================== ===== +`Filter`_ i32 +`AddressU`_ i32 +`AddressV`_ i32 +`AddressW`_ i32 +MipLODBias float +MaxAnisotropy i32 +`ComparisonFunc`_ i32 +`BorderColor`_ i32 +MinLOD float +MaxLOD float +ShaderRegister i32 +RegisterSpace i32 +`Shader Visibility`_ i32 +==================== ===== + +.. _Filter: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ne-d3d12-d3d12_filter +.. _AddressU: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ne-d3d12-d3d12_texture_address_mode +.. _AddressV: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ne-d3d12-d3d12_texture_address_mode +.. _AddressW: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ne-d3d12-d3d12_texture_address_mode +.. _ComparisonFunc: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ne-d3d12-d3d12_comparison_func +.. _BorderColor: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ne-d3d12-d3d12_static_border_color + +Descriptor Table +================ + +A descriptor table consists of a visibility operand; the remaining operands are a +list of references to its descriptor ranges. + +..
note:: + + The term Descriptor Table Clause is synonymous with Descriptor Range when + referencing the implementation details. + +.. code-block:: LLVM + + !7 = !{ !"DescriptorTable", i32 0, !8, !9 } + +========================= ================ +Description Type +========================= ================ +`Shader Visibility`_ i32 +Descriptor Range Elements Descriptor Range +========================= ================ + + +Descriptor Range +================ + +Similar to a root descriptor, the first operand will denote the type of +descriptor range. It is one of the following types: + +- "CBV" +- "SRV" +- "UAV" +- "Sampler" + +.. code-block:: LLVM + + !8 = !{ !"SRV", i32 1, i32 0, i32 0, i32 -1, i32 4 } + !9 = !{ !"UAV", i32 5, i32 1, i32 10, i32 5, i32 2 } + +============================== ==== +Description Type +============================== ==== +Number of Descriptors in Range i32 +Shader Register i32 +Register Space i32 +`Offset`_ i32 +`Descriptor Range Flags`_ i32 +============================== ==== + +.. _Offset: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ns-d3d12-d3d12_descriptor_range +..
_Descriptor Range Flags: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ne-d3d12-d3d12_descriptor_range_flags diff --git a/llvm/docs/DirectXUsage.rst b/llvm/docs/DirectXUsage.rst index 4d8f49b..1d964e6 100644 --- a/llvm/docs/DirectXUsage.rst +++ b/llvm/docs/DirectXUsage.rst @@ -17,6 +17,7 @@ User Guide for the DirectX Target DirectX/DXILArchitecture DirectX/DXILOpTableGenDesign DirectX/DXILResources + DirectX/RootSignatures Introduction ============ diff --git a/llvm/docs/GettingStarted.rst b/llvm/docs/GettingStarted.rst index 3036dae..e4dbb64b 100644 --- a/llvm/docs/GettingStarted.rst +++ b/llvm/docs/GettingStarted.rst @@ -240,8 +240,10 @@ Linux x86\ :sup:`1` GCC, Clang Linux amd64 GCC, Clang Linux ARM GCC, Clang Linux AArch64 GCC, Clang +Linux LoongArch GCC, Clang Linux Mips GCC, Clang Linux PowerPC GCC, Clang +Linux RISC-V GCC, Clang Linux SystemZ GCC, Clang Solaris V9 (Ultrasparc) GCC DragonFlyBSD amd64 GCC, Clang diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst index 822e761..bac13cc 100644 --- a/llvm/docs/LangRef.rst +++ b/llvm/docs/LangRef.rst @@ -280,9 +280,9 @@ linkage: linkage are linked together, the two global arrays are appended together. This is the LLVM, typesafe, equivalent of having the system linker append together "sections" with identical names when - .o files are linked. + ``.o`` files are linked. - Unfortunately this doesn't correspond to any feature in .o files, so it + Unfortunately this doesn't correspond to any feature in ``.o`` files, so it can only be used for variables like ``llvm.global_ctors`` which llvm interprets specially. @@ -371,7 +371,7 @@ added in the future: This calling convention supports `tail call optimization <CodeGenerator.html#tail-call-optimization>`_ but requires - both the caller and callee are using it. + both the caller and callee to use it. 
"``cc 11``" - The HiPE calling convention This calling convention has been implemented specifically for use by the `High-Performance Erlang @@ -447,7 +447,7 @@ added in the future: R11. R11 can be used as a scratch register. Furthermore it also preserves all floating-point registers (XMMs/YMMs). - - On AArch64 the callee preserve all general purpose registers, except + - On AArch64 the callee preserves all general purpose registers, except X0-X8 and X16-X18. Furthermore it also preserves lower 128 bits of V8-V31 SIMD floating point registers. Not allowed with ``nest``. @@ -890,7 +890,7 @@ Syntax:: [gc] [prefix Constant] [prologue Constant] [personality Constant] (!name !N)* { ... } -The argument list is a comma separated sequence of arguments where each +The argument list is a comma-separated sequence of arguments where each argument is of the following form: Syntax:: @@ -1011,7 +1011,7 @@ some can only be checked when producing an object file: IFuncs ------- -IFuncs, like as aliases, don't create any new data or func. They are just a new +IFuncs, like aliases, don't create any new data or func. They are just a new symbol that is resolved at runtime by calling a resolver function. On ELF platforms, IFuncs are resolved by the dynamic linker at load time. On @@ -1211,7 +1211,7 @@ Currently, only the following parameter attributes are defined: the callee (for a return value). ``noext`` This indicates to the code generator that the parameter or return - value has the high bits undefined, as for a struct in register, and + value has the high bits undefined, as for a struct in a register, and therefore does not need to be sign or zero extended. This is the same as default behavior and is only actually used (by some targets) to validate that one of the attributes is always present. @@ -1252,7 +1252,7 @@ Currently, only the following parameter attributes are defined: on the stack. This implies the pointer is dereferenceable up to the storage size of the type. 
- It is not generally permissible to introduce a write to an + It is not generally permissible to introduce a write to a ``byref`` pointer. The pointer may have any address space and may be read only. @@ -1393,7 +1393,7 @@ Currently, only the following parameter attributes are defined: storage for any other object accessible to the caller. ``captures(...)`` - This attributes restrict the ways in which the callee may capture the + This attribute restricts the ways in which the callee may capture the pointer. This is not a valid attribute for return values. This attribute applies only to the particular copy of the pointer passed in this argument. @@ -1615,7 +1615,7 @@ Currently, only the following parameter attributes are defined: assigning this parameter or return value to a stack slot during calling convention lowering. The enforcement of the specified alignment is target-dependent, as target-specific calling convention rules may override - this value. This attribute serves the purpose of carrying language specific + this value. This attribute serves the purpose of carrying language-specific alignment information that is not mapped to base types in the backend (for example, over-alignment specification through language attributes). @@ -1993,7 +1993,7 @@ For example: ``cold`` This attribute indicates that this function is rarely called. When computing edge weights, basic blocks post-dominated by a cold - function call are also considered to be cold; and, thus, given low + function call are also considered to be cold and, thus, given a low weight. .. _attr_convergent: @@ -3356,6 +3356,19 @@ behavior is undefined: - the size of all allocated objects must be non-negative and not exceed the largest signed integer that fits into the index type. +Allocated objects that are created with operations recognized by LLVM (such as +:ref:`alloca <i_alloca>`, heap allocation functions marked as such, and global +variables) may *not* change their size. 
(``realloc``-style operations do not +change the size of an existing allocated object; instead, they create a new +allocated object. Even if the object is at the same location as the old one, old +pointers cannot be used to access this new object.) However, allocated objects +can also be created by means not recognized by LLVM, e.g. by directly calling +``mmap``. Those allocated objects are allowed to grow to the right (i.e., +keeping the same base address, but increasing their size) while maintaining the +validity of existing pointers, as long as they always satisfy the properties +described above. Currently, allocated objects are not permitted to grow to the +left or to shrink, nor can they have holes. + .. _objectlifetime: Object Lifetime @@ -11928,6 +11941,9 @@ if the ``getelementptr`` has any non-zero indices, the following rules apply: :ref:`based <pointeraliasing>` on. This means that it points into that allocated object, or to its end. Note that the object does not have to be live anymore; being in-bounds of a deallocated object is sufficient. + If the allocated object can grow, then the relevant size for being *in + bounds* is the maximal size the object could have while satisfying the + allocated object rules, not its current size. * During the successive addition of offsets to the address, the resulting pointer must remain *in bounds* of the allocated object at each step. diff --git a/llvm/docs/ProgrammersManual.rst b/llvm/docs/ProgrammersManual.rst index 68490c8..9ddeebd 100644 --- a/llvm/docs/ProgrammersManual.rst +++ b/llvm/docs/ProgrammersManual.rst @@ -932,7 +932,7 @@ In some contexts, certain types of errors are known to be benign. For example, when walking an archive, some clients may be happy to skip over badly formatted object files rather than terminating the walk immediately. 
Skipping badly formatted objects could be achieved using an elaborate handler method, but the -Error.h header provides two utilities that make this idiom much cleaner: the +``Error.h`` header provides two utilities that make this idiom much cleaner: the type inspection method, ``isA``, and the ``consumeError`` function: .. code-block:: c++ @@ -1073,7 +1073,7 @@ relatively natural use of C++ iterator/loop idioms. .. _function_apis: More information on Error and its related utilities can be found in the -Error.h header file. +``Error.h`` header file. Passing functions and other callable objects -------------------------------------------- @@ -1224,7 +1224,7 @@ Then you can run your pass like this: Of course, in practice, you should only set ``DEBUG_TYPE`` at the top of a file, to specify the debug type for the entire module. Be careful that you only do -this after including Debug.h and not around any #include of headers. Also, you +this after including ``Debug.h`` and not around any #include of headers. Also, you should use names more meaningful than "foo" and "bar", because there is no system in place to ensure that names do not conflict. If two different modules use the same string, they will all be turned on when the name is specified. @@ -1579,18 +1579,18 @@ llvm/ADT/SmallVector.h ``SmallVector<Type, N>`` is a simple class that looks and smells just like ``vector<Type>``: it supports efficient iteration, lays out elements in memory order (so you can do pointer arithmetic between elements), supports efficient -push_back/pop_back operations, supports efficient random access to its elements, +``push_back``/``pop_back`` operations, supports efficient random access to its elements, etc. -The main advantage of SmallVector is that it allocates space for some number of -elements (N) **in the object itself**. Because of this, if the SmallVector is +The main advantage of ``SmallVector`` is that it allocates space for some number of +elements (N) **in the object itself**. 
Because of this, if the ``SmallVector`` is dynamically smaller than N, no malloc is performed. This can be a big win in cases where the malloc/free call is far more expensive than the code that fiddles around with the elements. This is good for vectors that are "usually small" (e.g. the number of predecessors/successors of a block is usually less than 8). On the other hand, -this makes the size of the SmallVector itself large, so you don't want to +this makes the size of the ``SmallVector`` itself large, so you don't want to allocate lots of them (doing so will waste a lot of space). As such, SmallVectors are most useful when on the stack. @@ -1600,21 +1600,21 @@ omitting the ``N``). This will choose a default number of inlined elements reasonable for allocation on the stack (for example, trying to keep ``sizeof(SmallVector<T>)`` around 64 bytes). -SmallVector also provides a nice portable and efficient replacement for +``SmallVector`` also provides a nice portable and efficient replacement for ``alloca``. -SmallVector has grown a few other minor advantages over std::vector, causing +``SmallVector`` has grown a few other minor advantages over ``std::vector``, causing ``SmallVector<Type, 0>`` to be preferred over ``std::vector<Type>``. -#. std::vector is exception-safe, and some implementations have pessimizations - that copy elements when SmallVector would move them. +#. ``std::vector`` is exception-safe, and some implementations have pessimizations + that copy elements when ``SmallVector`` would move them. -#. SmallVector understands ``std::is_trivially_copyable<Type>`` and uses realloc aggressively. +#. ``SmallVector`` understands ``std::is_trivially_copyable<Type>`` and uses realloc aggressively. -#. Many LLVM APIs take a SmallVectorImpl as an out parameter (see the note +#. Many LLVM APIs take a ``SmallVectorImpl`` as an out parameter (see the note below). -#. SmallVector with N equal to 0 is smaller than std::vector on 64-bit +#. 
``SmallVector`` with N equal to 0 is smaller than ``std::vector`` on 64-bit platforms, since it uses ``unsigned`` (instead of ``void*``) for its size and capacity. @@ -1698,11 +1698,11 @@ non-ordered manner. ^^^^^^^^ ``std::vector<T>`` is well loved and respected. However, ``SmallVector<T, 0>`` -is often a better option due to the advantages listed above. std::vector is +is often a better option due to the advantages listed above. ``std::vector`` is still useful when you need to store more than ``UINT32_MAX`` elements or when interfacing with code that expects vectors :). -One worthwhile note about std::vector: avoid code like this: +One worthwhile note about ``std::vector``: avoid code like this: .. code-block:: c++ @@ -1749,10 +1749,10 @@ extremely high constant factor, particularly for small data types. ``std::list`` also only supports bidirectional iteration, not random access iteration. -In exchange for this high cost, std::list supports efficient access to both ends +In exchange for this high cost, ``std::list`` supports efficient access to both ends of the list (like ``std::deque``, but unlike ``std::vector`` or ``SmallVector``). In addition, the iterator invalidation characteristics of -std::list are stronger than that of a vector class: inserting or removing an +``std::list`` are stronger than that of a vector class: inserting or removing an element into the list does not invalidate iterator or pointers to other elements in the list. @@ -1895,7 +1895,7 @@ Note that it is generally preferred to *not* pass strings around as ``const char*``'s. These have a number of problems, including the fact that they cannot represent embedded nul ("\0") characters, and do not have a length available efficiently. The general replacement for '``const char*``' is -StringRef. +``StringRef``. For more information on choosing string containers for APIs, please see :ref:`Passing Strings <string_apis>`. 
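The earlier note about ``std::vector`` ("avoid code like this") can be sketched with standard containers only. This is a hedged illustration, not text from the manual: ``useOnce`` and the loop bound are hypothetical stand-ins, and the point is that hoisting the vector out of the loop reuses its capacity instead of reallocating every iteration.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical consumer of the per-iteration vector, for illustration only.
static std::size_t useOnce(const std::vector<int> &V) { return V.size(); }

// Anti-pattern: constructing the vector inside the loop forces a fresh
// heap allocation (and deallocation) on every iteration.
std::size_t sumSlow(int N) {
  std::size_t Total = 0;
  for (int I = 0; I < N; ++I) {
    std::vector<int> V(I + 1, 0); // allocates every iteration
    Total += useOnce(V);
  }
  return Total;
}

// Preferred: hoist the vector out of the loop so its capacity is reused
// across iterations instead of being freed and reallocated.
std::size_t sumFast(int N) {
  std::size_t Total = 0;
  std::vector<int> V;
  for (int I = 0; I < N; ++I) {
    V.assign(I + 1, 0); // reuses existing capacity when large enough
    Total += useOnce(V);
  }
  return Total;
}
```

Both versions compute the same result; only the allocation behavior differs, which is exactly the distinction the note is making.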
@@ -1905,41 +1905,41 @@ For more information on choosing string containers for APIs, please see llvm/ADT/StringRef.h ^^^^^^^^^^^^^^^^^^^^ -The StringRef class is a simple value class that contains a pointer to a +The ``StringRef`` class is a simple value class that contains a pointer to a character and a length, and is quite related to the :ref:`ArrayRef <dss_arrayref>` class (but specialized for arrays of characters). Because -StringRef carries a length with it, it safely handles strings with embedded nul +``StringRef`` carries a length with it, it safely handles strings with embedded nul characters in it, getting the length does not require a strlen call, and it even has very convenient APIs for slicing and dicing the character range that it represents. -StringRef is ideal for passing simple strings around that are known to be live, -either because they are C string literals, std::string, a C array, or a -SmallVector. Each of these cases has an efficient implicit conversion to -StringRef, which doesn't result in a dynamic strlen being executed. +``StringRef`` is ideal for passing simple strings around that are known to be live, +either because they are C string literals, ``std::string``, a C array, or a +``SmallVector``. Each of these cases has an efficient implicit conversion to +``StringRef``, which doesn't result in a dynamic ``strlen`` being executed. -StringRef has a few major limitations which make more powerful string containers +``StringRef`` has a few major limitations which make more powerful string containers useful: -#. You cannot directly convert a StringRef to a 'const char*' because there is - no way to add a trailing nul (unlike the .c_str() method on various stronger +#. You cannot directly convert a ``StringRef`` to a 'const char*' because there is + no way to add a trailing nul (unlike the ``.c_str()`` method on various stronger classes). -#. StringRef doesn't own or keep alive the underlying string bytes. +#. 
``StringRef`` doesn't own or keep alive the underlying string bytes. As such it can easily lead to dangling pointers, and is not suitable for - embedding in datastructures in most cases (instead, use an std::string or + embedding in data structures in most cases (instead, use an ``std::string`` or something like that). -#. For the same reason, StringRef cannot be used as the return value of a - method if the method "computes" the result string. Instead, use std::string. +#. For the same reason, ``StringRef`` cannot be used as the return value of a + method if the method "computes" the result string. Instead, use ``std::string``. -#. StringRef's do not allow you to mutate the pointed-to string bytes and it +#. ``StringRef``'s do not allow you to mutate the pointed-to string bytes and it doesn't allow you to insert or remove bytes from the range. For editing operations like this, it interoperates with the :ref:`Twine <dss_twine>` class. Because of its strengths and limitations, it is very common for a function to -take a StringRef and for a method on an object to return a StringRef that points +take a ``StringRef`` and for a method on an object to return a ``StringRef`` that points into some string that it owns. .. _dss_twine: @@ -1979,25 +1979,25 @@ behavior and will probably crash: const Twine &Tmp = X + "." + Twine(i); foo(Tmp); -... because the temporaries are destroyed before the call. That said, Twine's -are much more efficient than intermediate std::string temporaries, and they work -really well with StringRef. Just be aware of their limitations. +... because the temporaries are destroyed before the call. That said, ``Twine``'s +are much more efficient than intermediate ``std::string`` temporaries, and they work +really well with ``StringRef``. Just be aware of their limitations. ..
_dss_smallstring: llvm/ADT/SmallString.h ^^^^^^^^^^^^^^^^^^^^^^ -SmallString is a subclass of :ref:`SmallVector <dss_smallvector>` that adds some -convenience APIs like += that takes StringRef's. SmallString avoids allocating +``SmallString`` is a subclass of :ref:`SmallVector <dss_smallvector>` that adds some +convenience APIs like ``+=`` that take ``StringRef``'s. ``SmallString`` avoids allocating memory in the case when the preallocated space is enough to hold its data, and it calls back to general heap allocation when required. Since it owns its data, it is very safe to use and supports full mutation of the string. -Like SmallVector's, the big downside to SmallString is their sizeof. While they +Like ``SmallVector``'s, the big downside to ``SmallString`` is their sizeof. While they are optimized for small strings, they themselves are not particularly small. This means that they work great for temporary scratch buffers on the stack, but -should not generally be put into the heap: it is very rare to see a SmallString +should not generally be put into the heap: it is very rare to see a ``SmallString`` as the member of a frequently-allocated heap data structure or returned by-value. @@ -2006,18 +2006,18 @@ by-value. std::string ^^^^^^^^^^^ -The standard C++ std::string class is a very general class that (like -SmallString) owns its underlying data. sizeof(std::string) is very reasonable +The standard C++ ``std::string`` class is a very general class that (like +``SmallString``) owns its underlying data. ``sizeof(std::string)`` is very reasonable so it can be embedded into heap data structures and returned by-value. On the -other hand, std::string is highly inefficient for inline editing (e.g. +other hand, ``std::string`` is highly inefficient for inline editing (e.g. concatenating a bunch of stuff together) and because it is provided by the standard library, its performance characteristics depend a lot on the host standard library (e.g.
libc++ and MSVC provide a highly optimized string class, GCC contains a really slow implementation). -The major disadvantage of std::string is that almost every operation that makes +The major disadvantage of ``std::string`` is that almost every operation that makes them larger can allocate memory, which is slow. As such, it is better to use -SmallVector or Twine as a scratch buffer, but then use std::string to persist +``SmallVector`` or ``Twine`` as a scratch buffer, but then use ``std::string`` to persist the result. .. _ds_set: @@ -2035,8 +2035,8 @@ A sorted 'vector' ^^^^^^^^^^^^^^^^^ If you intend to insert a lot of elements, then do a lot of queries, a great -approach is to use an std::vector (or other sequential container) with -std::sort+std::unique to remove duplicates. This approach works really well if +approach is to use an ``std::vector`` (or other sequential container) with +``std::sort``+``std::unique`` to remove duplicates. This approach works really well if your usage pattern has these two distinct phases (insert then query), and can be coupled with a good choice of :ref:`sequential container <ds_sequential>`. @@ -2102,11 +2102,11 @@ copy-construction, which :ref:`SmallSet <dss_smallset>` and :ref:`SmallPtrSet llvm/ADT/DenseSet.h ^^^^^^^^^^^^^^^^^^^ -DenseSet is a simple quadratically probed hash table. It excels at supporting +``DenseSet`` is a simple quadratically probed hash table. It excels at supporting small values: it uses a single allocation to hold all of the pairs that are -currently inserted in the set. DenseSet is a great way to unique small values +currently inserted in the set. ``DenseSet`` is a great way to unique small values that are not simple pointers (use :ref:`SmallPtrSet <dss_smallptrset>` for -pointers). Note that DenseSet has the same requirements for the value type that +pointers). Note that ``DenseSet`` has the same requirements for the value type that :ref:`DenseMap <dss_densemap>` has. .. 
_dss_sparseset: @@ -2128,12 +2128,12 @@ data structures. llvm/ADT/SparseMultiSet.h ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -SparseMultiSet adds multiset behavior to SparseSet, while retaining SparseSet's -desirable attributes. Like SparseSet, it typically uses a lot of memory, but +``SparseMultiSet`` adds multiset behavior to ``SparseSet``, while retaining ``SparseSet``'s +desirable attributes. Like ``SparseSet``, it typically uses a lot of memory, but provides operations that are almost as fast as a vector. Typical keys are physical registers, virtual registers, or numbered basic blocks. -SparseMultiSet is useful for algorithms that need very fast +``SparseMultiSet`` is useful for algorithms that need very fast clear/find/insert/erase of the entire collection, and iteration over sets of elements sharing a key. It is often a more efficient choice than using composite data structures (e.g. vector-of-vectors, map-of-vectors). It is not intended for @@ -2144,10 +2144,10 @@ building composite data structures. llvm/ADT/FoldingSet.h ^^^^^^^^^^^^^^^^^^^^^ -FoldingSet is an aggregate class that is really good at uniquing +``FoldingSet`` is an aggregate class that is really good at uniquing expensive-to-create or polymorphic objects. It is a combination of a chained hash table with intrusive links (uniqued objects are required to inherit from -FoldingSetNode) that uses :ref:`SmallVector <dss_smallvector>` as part of its ID +``FoldingSetNode``) that uses :ref:`SmallVector <dss_smallvector>` as part of its ID process. Consider a case where you want to implement a "getOrCreateFoo" method for a @@ -2157,14 +2157,14 @@ operands), but we don't want to 'new' a node, then try inserting it into a set only to find out it already exists, at which point we would have to delete it and return the node that already exists. 
-To support this style of client, FoldingSet perform a query with a -FoldingSetNodeID (which wraps SmallVector) that can be used to describe the +To support this style of client, ``FoldingSet`` performs a query with a +``FoldingSetNodeID`` (which wraps ``SmallVector``) that can be used to describe the element that we want to query for. The query either returns the element matching the ID or it returns an opaque ID that indicates where insertion should take place. Construction of the ID usually does not require heap traffic. -Because FoldingSet uses intrusive links, it can support polymorphic objects in -the set (for example, you can have SDNode instances mixed with LoadSDNodes). +Because ``FoldingSet`` uses intrusive links, it can support polymorphic objects in +the set (for example, you can have ``SDNode`` instances mixed with ``LoadSDNodes``). Because the elements are individually allocated, pointers to the elements are stable: inserting or removing elements does not invalidate any pointers to other elements. @@ -2175,7 +2175,7 @@ elements. ^^^^^ ``std::set`` is a reasonable all-around set class, which is decent at many -things but great at nothing. std::set allocates memory for each element +things but great at nothing. ``std::set`` allocates memory for each element inserted (thus it is very malloc intensive) and typically stores three pointers per element in the set (thus adding a large amount of per-element space overhead). It offers guaranteed log(n) performance, which is not particularly
-The advantages of std::set are that its iterators are stable (deleting or +The advantages of ``std::set`` are that its iterators are stable (deleting or inserting an element from the set does not affect iterators or pointers to other elements) and that iteration over the set is guaranteed to be in sorted order. If the elements in the set are large, then the relative overhead of the pointers and malloc traffic is not a big deal, but if the elements of the set are small, -std::set is almost never a good choice. +``std::set`` is almost never a good choice. .. _dss_setvector: @@ -2242,11 +2242,11 @@ produces a lot of malloc traffic. It should be avoided. llvm/ADT/ImmutableSet.h ^^^^^^^^^^^^^^^^^^^^^^^ -ImmutableSet is an immutable (functional) set implementation based on an AVL +``ImmutableSet`` is an immutable (functional) set implementation based on an AVL tree. Adding or removing elements is done through a Factory object and results -in the creation of a new ImmutableSet object. If an ImmutableSet already exists +in the creation of a new ``ImmutableSet`` object. If an ``ImmutableSet`` already exists with the given contents, then the existing one is returned; equality is compared -with a FoldingSetNodeID. The time and space complexity of add or remove +with a ``FoldingSetNodeID``. The time and space complexity of add or remove operations is logarithmic in the size of the original set. There is no method for returning an element of the set, you can only check for @@ -2257,11 +2257,11 @@ membership. Other Set-Like Container Options ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -The STL provides several other options, such as std::multiset and -std::unordered_set. We never use containers like unordered_set because +The STL provides several other options, such as ``std::multiset`` and +``std::unordered_set``. We never use containers like ``unordered_set`` because they are generally very expensive (each insertion requires a malloc). 
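The sorted-vector alternative that this manual repeatedly recommends over ``std::set`` and ``std::map`` can be sketched with only standard containers. This is an illustrative sketch, not LLVM API: ``makeSortedSet``, ``contains``, ``Entry``, and ``lookup`` are hypothetical names. The set case sorts and deduplicates once after the insert phase; the map case stores key/value pairs and queries with a ``std::lower_bound`` comparator that inspects only the key.

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// Set-like use: append freely during the insert phase, then sort and
// deduplicate once before any queries.
std::vector<int> makeSortedSet(std::vector<int> Values) {
  std::sort(Values.begin(), Values.end());
  Values.erase(std::unique(Values.begin(), Values.end()), Values.end());
  return Values;
}

// Query phase: O(log n) membership test on the sorted, unique vector.
bool contains(const std::vector<int> &Set, int V) {
  return std::binary_search(Set.begin(), Set.end(), V);
}

// Map-like use: the same idea with key/value pairs sorted by key.
using Entry = std::pair<int, std::string>;

// The comparator looks only at the key, never the value, giving the
// O(log n) lookup described for sorted-vector maps.
const std::string *lookup(const std::vector<Entry> &Map, int Key) {
  auto It = std::lower_bound(
      Map.begin(), Map.end(), Key,
      [](const Entry &E, int K) { return E.first < K; });
  if (It != Map.end() && It->first == Key)
    return &It->second;
  return nullptr;
}
```

This pattern pays for itself only when usage has the two distinct phases (insert, then query) that the surrounding text describes; interleaving inserts with queries would force repeated re-sorting.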
-std::multiset is useful if you're not interested in elimination of duplicates, +``std::multiset`` is useful if you're not interested in elimination of duplicates, but has all the drawbacks of :ref:`std::set <dss_set>`. A sorted vector (where you don't delete duplicate entries) or some other approach is almost always better. @@ -2282,7 +2282,7 @@ A sorted 'vector' If your usage pattern follows a strict insert-then-query approach, you can trivially use the same approach as :ref:`sorted vectors for set-like containers <dss_sortedvectorset>`. The only difference is that your query function (which -uses std::lower_bound to get efficient log(n) lookup) should only compare the +uses ``std::lower_bound`` to get efficient log(n) lookup) should only compare the key, not both the key and value. This yields the same advantages as sorted vectors for sets. @@ -2293,11 +2293,11 @@ llvm/ADT/StringMap.h Strings are commonly used as keys in maps, and they are difficult to support efficiently: they are variable length, inefficient to hash and compare when -long, expensive to copy, etc. StringMap is a specialized container designed to +long, expensive to copy, etc. ``StringMap`` is a specialized container designed to cope with these issues. It supports mapping an arbitrary range of bytes to an arbitrary other object. -The StringMap implementation uses a quadratically-probed hash table, where the +The ``StringMap`` implementation uses a quadratically-probed hash table, where the buckets store a pointer to the heap allocated entries (and some other stuff). The entries in the map must be heap allocated because the strings are variable length. The string data (key) and the element object (value) are stored in the @@ -2305,26 +2305,26 @@ same allocation with the string data immediately after the element object. This container guarantees the "``(char*)(&Value+1)``" points to the key string for a value. 
-The StringMap is very fast for several reasons: quadratic probing is very cache +The ``StringMap`` is very fast for several reasons: quadratic probing is very cache efficient for lookups, the hash value of strings in buckets is not recomputed -when looking up an element, StringMap rarely has to touch the memory for +when looking up an element, ``StringMap`` rarely has to touch the memory for unrelated objects when looking up a value (even when hash collisions happen), hash table growth does not recompute the hash values for strings already in the table, and each pair in the map is stored in a single allocation (the string data is stored in the same allocation as the Value of a pair). -StringMap also provides query methods that take byte ranges, so it only ever +``StringMap`` also provides query methods that take byte ranges, so it only ever copies a string if a value is inserted into the table. -StringMap iteration order, however, is not guaranteed to be deterministic, so -any uses which require that should instead use a std::map. +``StringMap`` iteration order, however, is not guaranteed to be deterministic, so +any uses which require that should instead use a ``std::map``. .. _dss_indexmap: llvm/ADT/IndexedMap.h ^^^^^^^^^^^^^^^^^^^^^ -IndexedMap is a specialized container for mapping small dense integers (or +``IndexedMap`` is a specialized container for mapping small dense integers (or values that can be mapped to small dense integers) to some other type. It is internally implemented as a vector with a mapping function that maps the keys to the dense integer range. @@ -2338,27 +2338,27 @@ virtual register ID). llvm/ADT/DenseMap.h ^^^^^^^^^^^^^^^^^^^ -DenseMap is a simple quadratically probed hash table. It excels at supporting +``DenseMap`` is a simple quadratically probed hash table. It excels at supporting small keys and values: it uses a single allocation to hold all of the pairs -that are currently inserted in the map.
DenseMap is a great way to map +that are currently inserted in the map. ``DenseMap`` is a great way to map pointers to pointers, or map other small types to each other. -There are several aspects of DenseMap that you should be aware of, however. -The iterators in a DenseMap are invalidated whenever an insertion occurs, -unlike map. Also, because DenseMap allocates space for a large number of +There are several aspects of ``DenseMap`` that you should be aware of, however. +The iterators in a ``DenseMap`` are invalidated whenever an insertion occurs, +unlike ``map``. Also, because ``DenseMap`` allocates space for a large number of key/value pairs (it starts with 64 by default), it will waste a lot of space if your keys or values are large. Finally, you must implement a partial -specialization of DenseMapInfo for the key that you want, if it isn't already -supported. This is required to tell DenseMap about two special marker values +specialization of ``DenseMapInfo`` for the key that you want, if it isn't already +supported. This is required to tell ``DenseMap`` about two special marker values (which can never be inserted into the map) that it needs internally. -DenseMap's find_as() method supports lookup operations using an alternate key +``DenseMap``'s ``find_as()`` method supports lookup operations using an alternate key type. This is useful in cases where the normal key type is expensive to -construct, but cheap to compare against. The DenseMapInfo is responsible for +construct, but cheap to compare against. The ``DenseMapInfo`` is responsible for defining the appropriate comparison and hashing methods for each alternate key type used. -DenseMap.h also contains a SmallDenseMap variant, that similar to +``DenseMap.h`` also contains a ``SmallDenseMap`` variant that, similar to :ref:`SmallVector <dss_smallvector>`, performs no heap allocation until the number of elements exceeds the template parameter N. @@ -2404,12 +2404,12 @@ further additions.
<map> ^^^^^ -std::map has similar characteristics to :ref:`std::set <dss_set>`: it uses a +``std::map`` has similar characteristics to :ref:`std::set <dss_set>`: it uses a single allocation per pair inserted into the map, it offers log(n) lookup with an extremely large constant factor, imposes a space penalty of 3 pointers per pair in the map, etc. -std::map is most useful when your keys or values are very large, if you need to +``std::map`` is most useful when your keys or values are very large, if you need to iterate over the collection in sorted order, or if you need stable iterators into the map (i.e. they don't get invalidated if an insertion or deletion of another element takes place). @@ -2419,7 +2419,7 @@ another element takes place). llvm/ADT/MapVector.h ^^^^^^^^^^^^^^^^^^^^ -``MapVector<KeyT,ValueT>`` provides a subset of the DenseMap interface. The +``MapVector<KeyT,ValueT>`` provides a subset of the ``DenseMap`` interface. The main difference is that the iteration order is guaranteed to be the insertion order, making it an easy (but somewhat expensive) solution for non-deterministic iteration over maps of pointers. @@ -2463,12 +2463,12 @@ operations is logarithmic in the size of the original map. Other Map-Like Container Options ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -The STL provides several other options, such as std::multimap and -std::unordered_map. We never use containers like unordered_map because +The STL provides several other options, such as ``std::multimap`` and +``std::unordered_map``. We never use containers like ``unordered_map`` because they are generally very expensive (each insertion requires a malloc). -std::multimap is useful if you want to map a key to multiple values, but has all -the drawbacks of std::map. A sorted vector or some other approach is almost +``std::multimap`` is useful if you want to map a key to multiple values, but has all +the drawbacks of ``std::map``. A sorted vector or some other approach is almost always better. .. 
_ds_bit: diff --git a/llvm/docs/ReleaseNotes.md b/llvm/docs/ReleaseNotes.md index bb1f88e..021f321 100644 --- a/llvm/docs/ReleaseNotes.md +++ b/llvm/docs/ReleaseNotes.md @@ -56,37 +56,9 @@ Makes programs 10x faster by doing Special New Thing. Changes to the LLVM IR ---------------------- -* It is no longer permitted to inspect the uses of ConstantData. Use - count APIs will behave as if they have no uses (i.e. use_empty() is - always true). - -* The `nocapture` attribute has been replaced by `captures(none)`. -* The constant expression variants of the following instructions have been - removed: - - * `mul` - -* Updated semantics of `llvm.type.checked.load.relative` to match that of - `llvm.load.relative`. -* Inline asm calls no longer accept ``label`` arguments. Use ``callbr`` instead. - -* Updated semantics of the `callbr` instruction to clarify that its - 'indirect labels' are not expected to be reached by indirect (as in - register-controlled) branch instructions, and therefore are not - guaranteed to start with a `bti` or `endbr64` instruction, where - those exist. - Changes to LLVM infrastructure ------------------------------ -* Removed support for target intrinsics being defined in the target directories - themselves (i.e., the `TargetIntrinsicInfo` class). -* Fix Microsoft demangling of string literals to be stricter - (#GH129970)) -* Added the support for ``fmaximum`` and ``fminimum`` in ``atomicrmw`` instruction. The - comparison is expected to match the behavior of ``llvm.maximum.*`` and - ``llvm.minimum.*`` respectively. - Changes to building LLVM ------------------------ @@ -96,34 +68,18 @@ Changes to TableGen Changes to Interprocedural Optimizations ---------------------------------------- +Changes to Vectorizers +---------------------------------------- + +* Added initial support for copyable elements in SLP, which models copyable + elements as add <element>, 0, i.e. uses identity constants for missing lanes. 
+ Changes to the AArch64 Backend ------------------------------ -* Added the `execute-only` target feature, which indicates that the generated - program code doesn't contain any inline data, and there are no data accesses - to code sections. On ELF targets this property is indicated by the - `SHF_AARCH64_PURECODE` section flag. - ([#125687](https://github.com/llvm/llvm-project/pull/125687), - [#132196](https://github.com/llvm/llvm-project/pull/132196), - [#133084](https://github.com/llvm/llvm-project/pull/133084)) - Changes to the AMDGPU Backend ----------------------------- -* Enabled the - [FWD_PROGRESS bit](https://llvm.org/docs/AMDGPUUsage.html#code-object-v3-kernel-descriptor) - for all GFX ISAs greater or equal to 10, for the AMDHSA OS. - -* Bump the default `.amdhsa_code_object_version` to 6. ROCm 6.3 is required to run any program compiled with COV6. - -* Add a new `amdgcn.load.to.lds` intrinsic that wraps the existing global.load.lds -intrinsic and has the same semantics. This intrinsic allows using buffer fat pointers -(`ptr addrspace(7)`) as arguments, allowing loads to LDS from these pointers to be -represented in the IR without needing to use buffer resource intrinsics directly. -This intrinsic is exposed to Clang as `__builtin_amdgcn_load_to_lds`, though -buffer fat pointers are not yet enabled in Clang. Migration to this intrinsic is -optional, and there are no plans to deprecate `amdgcn.global.load.lds`. - Changes to the ARM Backend -------------------------- @@ -136,106 +92,27 @@ Changes to the DirectX Backend Changes to the Hexagon Backend ------------------------------ -* The default Hexagon architecture version in ELF object files produced by - the tools such as llvm-mc is changed to v68. This version will be set if - the user does not provide the CPU version in the command line. - Changes to the LoongArch Backend -------------------------------- -* Changing the default code model from `small` to `medium` for 64-bit. 
-* Added inline asm support for the `q` constraint. -* Added the `32s` target feature for LA32S ISA extensions. -* Added codegen support for atomic-ops (`cmpxchg`, `max`, `min`, `umax`, `umin`) on LA32. -* Added codegen support for the ILP32D calling convention. -* Added several codegen and vectorization optimizations. - Changes to the MIPS Backend --------------------------- -* `-mcpu=i6400` and `-mcpu=i6500` were added. - Changes to the PowerPC Backend ------------------------------ Changes to the RISC-V Backend ----------------------------- -* Adds experimental assembler support for the Qualcomm uC 'Xqcilb` (Long Branch) - extension. -* Adds experimental assembler support for the Qualcomm uC 'Xqcili` (Load Large Immediate) - extension. -* Adds experimental assembler support for the Qualcomm uC 'Xqcilia` (Large Immediate Arithmetic) - extension. -* Adds experimental assembler support for the Qualcomm uC 'Xqcibm` (Bit Manipulation) - extension. -* Adds experimental assembler support for the Qualcomm uC 'Xqcibi` (Branch Immediate) - extension. -* Adds experimental assembler and code generation support for the Qualcomm - 'Xqccmp' extension, which is a frame-pointer convention compatible version of - Zcmp. -* Added non-quadratic ``log-vrgather`` cost model for ``vrgather.vv`` instruction -* Adds experimental assembler support for the Qualcomm uC 'Xqcisim` (Simulation Hint) - extension. -* Adds experimental assembler support for the Qualcomm uC 'Xqcisync` (Sync Delay) - extension. -* Adds experimental assembler support for the Qualcomm uC 'Xqciio` (External Input Output) - extension. -* Adds assembler support for the 'Zilsd` (Load/Store Pair Instructions) - extension. -* Adds assembler support for the 'Zclsd` (Compressed Load/Store Pair Instructions) - extension. -* Adds experimental assembler support for Zvqdotq. 
-* Adds Support for Qualcomm's `qci-nest` and `qci-nonest` interrupt types, which - use instructions from `Xqciint` to save and restore some GPRs during interrupt - handlers. -* When the experimental extension `Xqcili` is enabled, `qc.e.li` and `qc.li` may - now be used to materialize immediates. -* Adds assembler support for ``.option exact``, which disables automatic compression, - and branch and linker relaxation. This can be disabled with ``.option noexact``, - which is also the default. -* `-mcpu=xiangshan-kunminghu` was added. -* `-mcpu=andes-n45` and `-mcpu=andes-nx45` were added. -* `-mcpu=andes-a45` and `-mcpu=andes-ax45` were added. -* Adds support for the 'Ziccamoc` (Main Memory Supports Atomics in Zacas) extension, which was introduced as an optional extension of the RISC-V Profiles specification. -* Adds experimental assembler support for SiFive CLIC CSRs, under the names - `Zsfmclic` for the M-mode registers and `Zsfsclic` for the S-mode registers. -* Adds Support for SiFive CLIC interrupt attributes, which automate writing CLIC - interrupt handlers without using inline assembly. -* Adds assembler support for the Andes `XAndesperf` (Andes Performance extension). -* `-mcpu=sifive-p870` was added. -* Adds assembler support for the Andes `XAndesvpackfph` (Andes Vector Packed FP16 extension). -* Adds assembler support for the Andes `XAndesvdot` (Andes Vector Dot Product extension). -* Adds assembler support for the standard `Q` (Quad-Precision Floating Point) - extension. -* Adds experimental assembler support for the SiFive Xsfmm* Attached Matrix - Extensions. -* `-mcpu=andes-a25` and `-mcpu=andes-ax25` were added. -* The `Shlcofideleg` extension was added. -* `-mcpu=sifive-x390` was added. -* `-mtune=andes-45-series` was added. -* Adds assembler support for the Andes `XAndesvbfhcvt` (Andes Vector BFLOAT16 Conversion extension). -* `-mcpu=andes-ax45mpv` was added. 
-* Removed -mattr=+no-rvc-hints that could be used to disable parsing and generation of RVC hints. -* Adds assembler support for the Andes `XAndesvsintload` (Andes Vector INT4 Load extension). -* Adds assembler support for the Andes `XAndesbfhcvt` (Andes Scalar BFLOAT16 Conversion extension). - Changes to the WebAssembly Backend ---------------------------------- Changes to the Windows Target ----------------------------- -* `fp128` is now passed indirectly, meaning it uses the same calling convention - as `i128`. - Changes to the X86 Backend -------------------------- -* `fp128` will now use `*f128` libcalls on 32-bit GNU targets as well. -* On x86-32, `fp128` and `i128` are now passed with the expected 16-byte stack - alignment. - Changes to the OCaml bindings ----------------------------- @@ -245,25 +122,6 @@ Changes to the Python bindings Changes to the C API -------------------- -* The following functions for creating constant expressions have been removed, - because the underlying constant expressions are no longer supported. Instead, - an instruction should be created using the `LLVMBuildXYZ` APIs, which will - constant fold the operands if possible and create an instruction otherwise: - - * `LLVMConstMul` - * `LLVMConstNUWMul` - * `LLVMConstNSWMul` - -* Added `LLVMConstDataArray` and `LLVMGetRawDataValues` to allow creating and - reading `ConstantDataArray` values without needing extra `LLVMValueRef`s for - individual elements. - -* Added ``LLVMDIBuilderCreateEnumeratorOfArbitraryPrecision`` for creating - debugging metadata of enumerators larger than 64 bits. - -* Added ``LLVMGetICmpSameSign`` and ``LLVMSetICmpSameSign`` for the `samesign` - flag on `icmp` instructions. - Changes to the CodeGen infrastructure ------------------------------------- @@ -276,62 +134,9 @@ Changes to the Debug Info Changes to the LLVM tools --------------------------------- -* llvm-objcopy now supports the `--update-section` flag for intermediate Mach-O object files. 
-* llvm-strip now supports continuing to process files on encountering an error. -* In llvm-objcopy/llvm-strip's ELF port, `--discard-locals` and `--discard-all` now allow and preserve symbols referenced by relocations. - ([#47468](https://github.com/llvm/llvm-project/issues/47468)) -* llvm-addr2line now supports a `+` prefix when specifying an address. -* Support for `SHT_LLVM_BB_ADDR_MAP` versions 0 and 1 has been dropped. -* llvm-objdump now supports the `--debug-inlined-funcs` flag, which prints the - locations of inlined functions alongside disassembly. The - `--debug-vars-indent` flag has also been renamed to `--debug-indent`. - Changes to LLDB --------------------------------- -* When building LLDB with Python support, the minimum version of Python is now - 3.8. -* LLDB now supports hardware watchpoints for AArch64 Windows targets. Windows - does not provide API to query the number of supported hardware watchpoints. - Therefore current implementation allows only 1 watchpoint, as tested with - Windows 11 on the Microsoft SQ2 and Snapdragon Elite X platforms. -* LLDB now steps through C++ thunks. This fixes an issue where previously, it - wouldn't step into multiple inheritance virtual functions. -* A statusline was added to command-line LLDB to show progress events and - information about the current state of the debugger at the bottom of the - terminal. This is on by default and can be configured using the - `show-statusline` and `statusline-format` settings. It is not currently - supported on Windows. -* The `min-gdbserver-port` and `max-gdbserver-port` options have been removed - from `lldb-server`'s platform mode. Since the changes to `lldb-server`'s port - handling in LLDB 20, these options have had no effect. -* LLDB now supports `process continue --reverse` when used with debug servers - supporting reverse execution, such as [rr](https://rr-project.org). - When using reverse execution, `process continue --forward` returns to the - forward execution. 
-* LLDB now supports RISC-V 32-bit ELF core files. -* LLDB now supports siginfo descriptions for Linux user-space signals. User space - signals will now have descriptions describing the method and sender. - ``` - stop reason = SIGSEGV: sent by tkill system call (sender pid=649752, uid=2667987) - ``` -* ELF Cores can now have their siginfo structures inspected using `thread siginfo`. -* LLDB now uses - [DIL](https://discourse.llvm.org/t/rfc-data-inspection-language/69893) as the - default implementation for 'frame variable'. This should not change the - behavior of 'frame variable' at all, at this time. To revert to using the - old implementation use: `settings set target.experimental.use-DIL false`. -* Disassembly of unknown instructions now produces `<unknown>` instead of - nothing at all -* Changed the format of opcode bytes to match llvm-objdump when disassembling - RISC-V code with `disassemble`'s `--byte` option. - - -### Changes to lldb-dap - -* Breakpoints can now be set for specific columns within a line. -* Function return value is now displayed on step-out. - Changes to BOLT --------------------------------- diff --git a/llvm/docs/TestingGuide.rst b/llvm/docs/TestingGuide.rst index b6dda6a..76b6b4e 100644 --- a/llvm/docs/TestingGuide.rst +++ b/llvm/docs/TestingGuide.rst @@ -152,12 +152,12 @@ can run the LLVM and Clang tests simultaneously using: % make check-all -To run the tests with Valgrind (Memcheck by default), use the ``LIT_ARGS`` make +To run the tests with Valgrind (Memcheck by default), use the ``LIT_OPTS`` make variable to pass the required options to lit. For example, you can use: .. code-block:: bash - % make check LIT_ARGS="-v --vg --vg-leak" + % make check LIT_OPTS="-v --vg --vg-leak" to enable testing with valgrind and with leak checking enabled. 
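The `LIT_ARGS` → `LIT_OPTS` correction above can be sketched with a reduced stand-in Makefile (not LLVM's real build files; the `check` recipe here is a hypothetical placeholder) showing how the make variable flows through to the lit command line:

```shell
# Reduced stand-in for the check target: LIT_OPTS is an ordinary make
# variable that the recipe forwards verbatim to the lit invocation.
workdir=$(mktemp -d)
printf 'LIT_OPTS ?=\ncheck:\n\t@echo "llvm-lit $(LIT_OPTS) test/"\n' > "$workdir/Makefile"
# A command-line assignment overrides the default empty value:
out=$(cd "$workdir" && make check LIT_OPTS="-v --vg --vg-leak")
echo "$out"   # llvm-lit -v --vg --vg-leak test/
```

In a real LLVM build tree the same invocation is `make check LIT_OPTS="-v --vg --vg-leak"`, with lit itself interpreting `--vg`/`--vg-leak` to run tests under Valgrind.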
diff --git a/llvm/docs/YamlIO.rst b/llvm/docs/YamlIO.rst index 7137c56..c5079d8 100644 --- a/llvm/docs/YamlIO.rst +++ b/llvm/docs/YamlIO.rst @@ -8,10 +8,10 @@ YAML I/O Introduction to YAML ==================== -YAML is a human readable data serialization language. The full YAML language +YAML is a human-readable data serialization language. The full YAML language spec can be read at `yaml.org <http://www.yaml.org/spec/1.2/spec.html#Introduction>`_. The simplest form of -yaml is just "scalars", "mappings", and "sequences". A scalar is any number +YAML is just "scalars", "mappings", and "sequences". A scalar is any number or string. The pound/hash symbol (#) begins a comment line. A mapping is a set of key-value pairs where the key ends with a colon. For example: @@ -49,10 +49,10 @@ of mappings in which one of the mapping values is itself a sequence: - PowerPC - x86 -Sometime sequences are known to be short and the one entry per line is too -verbose, so YAML offers an alternate syntax for sequences called a "Flow +Sometimes sequences are known to be short and the one entry per line is too +verbose, so YAML offers an alternative syntax for sequences called a "Flow Sequence" in which you put comma separated sequence elements into square -brackets. The above example could then be simplified to : +brackets. The above example could then be simplified to: .. code-block:: yaml @@ -78,21 +78,21 @@ YAML I/O assumes you have some "native" data structures which you want to be able to dump as YAML and recreate from YAML. The first step is to try writing example YAML for your data structures. You may find after looking at possible YAML representations that a direct mapping of your data structures -to YAML is not very readable. Often the fields are not in the order that +to YAML is not very readable. Often, the fields are not in an order that a human would find readable. 
Or the same information is replicated in multiple locations, making it hard for a human to write such YAML correctly. In relational database theory there is a design step called normalization in which you reorganize fields and tables. The same considerations need to go into the design of your YAML encoding. But, you may not want to change -your existing native data structures. Therefore, when writing out YAML +your existing native data structures. Therefore, when writing out YAML, there may be a normalization step, and when reading YAML there would be a corresponding denormalization step. -YAML I/O uses a non-invasive, traits based design. YAML I/O defines some +YAML I/O uses a non-invasive, traits-based design. YAML I/O defines some abstract base templates. You specialize those templates on your data types. -For instance, if you have an enumerated type FooBar you could specialize -ScalarEnumerationTraits on that type and define the enumeration() method: +For instance, if you have an enumerated type ``FooBar`` you could specialize +``ScalarEnumerationTraits`` on that type and define the ``enumeration()`` method: .. code-block:: c++ @@ -107,13 +107,13 @@ ScalarEnumerationTraits on that type and define the enumeration() method: }; -As with all YAML I/O template specializations, the ScalarEnumerationTraits is used for +As with all YAML I/O template specializations, the ``ScalarEnumerationTraits`` is used for both reading and writing YAML. That is, the mapping between in-memory enum values and the YAML string representation is only in one place. This assures that the code for writing and parsing of YAML stays in sync. -To specify a YAML mappings, you define a specialization on -llvm::yaml::MappingTraits. +To specify YAML mappings, you define a specialization on +``llvm::yaml::MappingTraits``. If your native data structure happens to be a struct that is already normalized, then the specialization is simple. 
For example:

@@ -131,9 +131,9 @@ then the specialization is simple. For example:
     };


-A YAML sequence is automatically inferred if you data type has begin()/end()
-iterators and a push_back() method. Therefore any of the STL containers
-(such as std::vector<>) will automatically translate to YAML sequences.
+A YAML sequence is automatically inferred if your data type has ``begin()``/``end()``
+iterators and a ``push_back()`` method. Therefore any of the STL containers
+(such as ``std::vector<>``) will automatically translate to YAML sequences.

 Once you have defined specializations for your data types, you can
 programmatically use YAML I/O to write a YAML document:
@@ -195,9 +195,9 @@ Error Handling
 ==============

 When parsing a YAML document, if the input does not match your schema (as
-expressed in your XxxTraits<> specializations). YAML I/O
-will print out an error message and your Input object's error() method will
-return true. For instance the following document:
+expressed in your ``XxxTraits<>`` specializations), YAML I/O
+will print out an error message and your ``Input`` object's ``error()`` method will
+return true. For instance, the following document:

.. code-block:: yaml

@@ -244,8 +244,8 @@ The following types have built-in support in YAML I/O:
 * uint16_t
 * uint8_t

-That is, you can use those types in fields of MappingTraits or as element type
-in sequence. When reading, YAML I/O will validate that the string found
+That is, you can use those types in fields of ``MappingTraits`` or as the element type
+in a sequence. When reading, YAML I/O will validate that the string found
 is convertible to that type and error out if not.


@@ -255,7 +255,7 @@ Given that YAML I/O is trait based, the selection of how to convert your data
to YAML is based on the type of your data. But in C++ type matching, typedefs
do not generate unique type names. That means if you have two typedefs of
unsigned int, to YAML I/O both types look exactly like unsigned int.
To -facilitate make unique type names, YAML I/O provides a macro which is used +facilitate making unique type names, YAML I/O provides a macro which is used like a typedef on built-in types, but expands to create a class with conversion operators to and from the base type. For example: @@ -265,13 +265,13 @@ operators to and from the base type. For example: LLVM_YAML_STRONG_TYPEDEF(uint32_t, MyBarFlags) This generates two classes MyFooFlags and MyBarFlags which you can use in your -native data structures instead of uint32_t. They are implicitly -converted to and from uint32_t. The point of creating these unique types +native data structures instead of ``uint32_t``. They are implicitly +converted to and from ``uint32_t``. The point of creating these unique types is that you can now specify traits on them to get different YAML conversions. Hex types --------- -An example use of a unique type is that YAML I/O provides fixed sized unsigned +An example use of a unique type is that YAML I/O provides fixed-sized unsigned integers that are written with YAML I/O as hexadecimal instead of the decimal format used by the built-in integer types: @@ -280,15 +280,15 @@ format used by the built-in integer types: * Hex16 * Hex8 -You can use llvm::yaml::Hex32 instead of uint32_t and the only different will +You can use ``llvm::yaml::Hex32`` instead of ``uint32_t`` and the only difference will be that when YAML I/O writes out that type it will be formatted in hexadecimal. ScalarEnumerationTraits ----------------------- YAML I/O supports translating between in-memory enumerations and a set of string -values in YAML documents. This is done by specializing ScalarEnumerationTraits<> -on your enumeration type and define an enumeration() method. +values in YAML documents. This is done by specializing ``ScalarEnumerationTraits<>`` +on your enumeration type and defining an ``enumeration()`` method. 
For instance, suppose you had an enumeration of CPUs and a struct with it as a field: @@ -306,7 +306,7 @@ a field: }; To support reading and writing of this enumeration, you can define a -ScalarEnumerationTraits specialization on CPUs, which can then be used +``ScalarEnumerationTraits`` specialization on CPUs, which can then be used as a field type: .. code-block:: c++ @@ -333,9 +333,9 @@ as a field type: }; When reading YAML, if the string found does not match any of the strings -specified by enumCase() methods, an error is automatically generated. +specified by ``enumCase()`` methods, an error is automatically generated. When writing YAML, if the value being written does not match any of the values -specified by the enumCase() methods, a runtime assertion is triggered. +specified by the ``enumCase()`` methods, a runtime assertion is triggered. BitValue @@ -356,7 +356,7 @@ had the following bit flags defined: LLVM_YAML_STRONG_TYPEDEF(uint32_t, MyFlags) -To support reading and writing of MyFlags, you specialize ScalarBitSetTraits<> +To support reading and writing of MyFlags, you specialize ``ScalarBitSetTraits<>`` on MyFlags and provide the bit values and their names. .. code-block:: c++ @@ -399,7 +399,7 @@ the above schema, a same valid YAML document is: name: Tom flags: [ pointy, flat ] -Sometimes a "flags" field might contains an enumeration part +Sometimes a "flags" field might contain an enumeration part defined by a bit-mask. .. code-block:: c++ @@ -415,7 +415,7 @@ defined by a bit-mask. flagsCPU2 = 16 }; -To support reading and writing such fields, you need to use the maskedBitSet() +To support reading and writing such fields, you need to use the ``maskedBitSet()`` method and provide the bit values, their names and the enumeration mask. .. code-block:: c++ @@ -438,14 +438,14 @@ to the flow sequence. Custom Scalar ------------- -Sometimes for readability a scalar needs to be formatted in a custom way. 
For
-instance your internal data structure may use an integer for time (seconds since
+Sometimes, for readability, a scalar needs to be formatted in a custom way. For
+instance, your internal data structure may use an integer for time (seconds since
some epoch), but in YAML it would be much nicer to express that integer in some
time format (e.g. 4-May-2012 10:30pm). YAML I/O has a way to support
-custom formatting and parsing of scalar types by specializing ScalarTraits<> on
+custom formatting and parsing of scalar types by specializing ``ScalarTraits<>`` on
your data type. When writing, YAML I/O will provide the native type and
-your specialization must create a temporary llvm::StringRef. When reading,
-YAML I/O will provide an llvm::StringRef of scalar and your specialization
+your specialization must create a temporary ``llvm::StringRef``. When reading,
+YAML I/O will provide an ``llvm::StringRef`` of the scalar and your specialization
must convert that to your native data type. An outline of a custom scalar type
looks like:

@@ -482,18 +482,18 @@ literal block notation, just like the example shown below:
      Second line

The YAML I/O library provides support for translating between YAML block scalars
-and specific C++ types by allowing you to specialize BlockScalarTraits<> on
+and specific C++ types by allowing you to specialize ``BlockScalarTraits<>`` on
your data type. The library doesn't provide any built-in support for block
-scalar I/O for types like std::string and llvm::StringRef as they are already
+scalar I/O for types like ``std::string`` and ``llvm::StringRef`` as they are already
supported by YAML I/O and use the ordinary scalar notation by default.
-BlockScalarTraits specializations are very similar to the -ScalarTraits specialization - YAML I/O will provide the native type and your -specialization must create a temporary llvm::StringRef when writing, and -it will also provide an llvm::StringRef that has the value of that block scalar +``BlockScalarTraits`` specializations are very similar to the +``ScalarTraits`` specialization - YAML I/O will provide the native type and your +specialization must create a temporary ``llvm::StringRef`` when writing, and +it will also provide an ``llvm::StringRef`` that has the value of that block scalar and your specialization must convert that to your native data type when reading. An example of a custom type with an appropriate specialization of -BlockScalarTraits is shown below: +``BlockScalarTraits`` is shown below: .. code-block:: c++ @@ -524,7 +524,7 @@ Mappings ======== To be translated to or from a YAML mapping for your type T you must specialize -llvm::yaml::MappingTraits on T and implement the "void mapping(IO &io, T&)" +``llvm::yaml::MappingTraits`` on T and implement the "void mapping(IO &io, T&)" method. If your native data structures use pointers to a class everywhere, you can specialize on the class pointer. Examples: @@ -585,7 +585,7 @@ No Normalization The ``mapping()`` method is responsible, if needed, for normalizing and denormalizing. In a simple case where the native data structure requires no -normalization, the mapping method just uses mapOptional() or mapRequired() to +normalization, the mapping method just uses ``mapOptional()`` or ``mapRequired()`` to bind the struct's fields to YAML key names. For example: .. code-block:: c++ @@ -605,11 +605,11 @@ bind the struct's fields to YAML key names. For example: Normalization ---------------- -When [de]normalization is required, the mapping() method needs a way to access +When [de]normalization is required, the ``mapping()`` method needs a way to access normalized values as fields. 
To help with this, there is
-a template MappingNormalization<> which you can then use to automatically
+a template ``MappingNormalization<>`` which you can then use to automatically
do the normalization and denormalization. The template is used to create
-a local variable in your mapping() method which contains the normalized keys.
+a local variable in your ``mapping()`` method which contains the normalized keys.

Suppose you have native data type
Polar which specifies a position in polar coordinates (distance, angle):

@@ -621,7 +621,7 @@ Polar which specifies a position in polar coordinates (distance, angle):
     float angle;
   };

-but you've decided the normalized YAML for should be in x,y coordinates. That
+but you've decided the normalized YAML form should be in x,y coordinates. That
is, you want the yaml to look like:

.. code-block:: yaml

    x:   10.3
    y:   -4.7

-You can support this by defining a MappingTraits that normalizes the polar
+You can support this by defining a ``MappingTraits`` that normalizes the polar
coordinates to x,y coordinates when writing YAML and denormalizes x,y
coordinates into polar when reading YAML.

@@ -667,47 +667,47 @@ coordinates into polar when reading YAML.
  };

When writing YAML, the local variable "keys" will be a stack allocated
-instance of NormalizedPolar, constructed from the supplied polar object which
-initializes it x and y fields. The mapRequired() methods then write out the x
+instance of ``NormalizedPolar``, constructed from the supplied polar object which
+initializes its x and y fields. The ``mapRequired()`` methods then write out the x
and y values as key/value pairs.

When reading YAML, the local variable "keys" will be a stack allocated instance
-of NormalizedPolar, constructed by the empty constructor. The mapRequired
+of ``NormalizedPolar``, constructed by the empty constructor. 
The ``mapRequired()``
methods will find the matching key in the YAML document and fill in the x and y
-fields of the NormalizedPolar object keys. At the end of the mapping() method
-when the local keys variable goes out of scope, the denormalize() method will
+fields of the ``NormalizedPolar`` object keys. At the end of the ``mapping()`` method
+when the local keys variable goes out of scope, the ``denormalize()`` method will
 automatically be called to convert the read values back to polar coordinates,
-and then assigned back to the second parameter to mapping().
+and then assigned back to the second parameter to ``mapping()``.

In some cases, the normalized class may be a subclass of the native type and
-could be returned by the denormalize() method, except that the temporary
+could be returned by the ``denormalize()`` method, except that the temporary
normalized instance is stack allocated. In these cases, the utility template
-MappingNormalizationHeap<> can be used instead. It just like
-MappingNormalization<> except that it heap allocates the normalized object
-when reading YAML. It never destroys the normalized object. The denormalize()
+``MappingNormalizationHeap<>`` can be used instead. It is just like
+``MappingNormalization<>`` except that it heap allocates the normalized object
+when reading YAML. It never destroys the normalized object. The ``denormalize()``
method can then return "this".


Default values
--------------

-Within a mapping() method, calls to io.mapRequired() mean that that key is
+Within a ``mapping()`` method, calls to ``io.mapRequired()`` mean that that key is
required to exist when parsing YAML documents, otherwise YAML I/O will issue an
error.

-On the other hand, keys registered with io.mapOptional() are allowed to not
+On the other hand, keys registered with ``io.mapOptional()`` are allowed to not
exist in the YAML document being read. So what value is put in the field for
those optional keys? 
There are two steps to how those optional fields are filled in. First, the
-second parameter to the mapping() method is a reference to a native class. That
+second parameter to the ``mapping()`` method is a reference to a native class. That
native class must have a default constructor. Whatever value the default
constructor initially sets for an optional field will be that field's value.
-Second, the mapOptional() method has an optional third parameter. If provided
-it is the value that mapOptional() should set that field to if the YAML document
+Second, the ``mapOptional()`` method has an optional third parameter. If provided,
+it is the value that ``mapOptional()`` should set that field to if the YAML document
does not have that key.

There is one important difference between those two ways (default constructor
-and third parameter to mapOptional). When YAML I/O generates a YAML document,
-if the mapOptional() third parameter is used, if the actual value being written
+and third parameter to ``mapOptional()``). When YAML I/O generates a YAML document,
+if the ``mapOptional()`` third parameter is used and the actual value being written
is the same as (using ==) the default value, then that key/value is not written.

@@ -715,14 +715,14 @@ Order of Keys
--------------

When writing out a YAML document, the keys are written in the order that the
-calls to mapRequired()/mapOptional() are made in the mapping() method. This
+calls to ``mapRequired()``/``mapOptional()`` are made in the ``mapping()`` method. This
gives you a chance to write the fields in an order that a human reader of
the YAML document would find natural. This may be different than the order
of the fields in the native class.

When reading in a YAML document, the keys in the document can be in any order,
-but they are processed in the order that the calls to mapRequired()/mapOptional()
-are made in the mapping() method. 
+but they are processed in the order that the calls to ``mapRequired()``/``mapOptional()``
+are made in the ``mapping()`` method. 
That enables some interesting
functionality. For instance, if the first field bound is the cpu and the second
field bound is flags, and the flags are cpu specific, you can programmatically
switch how the flags are converted to and from YAML based on the cpu.
@@ -761,20 +761,20 @@ model. Recently, we added support to YAML I/O for checking/setting the optional
tag on a map. Using this functionality it is even possible to support
different mappings, as long as they are convertible.

-To check a tag, inside your mapping() method you can use io.mapTag() to specify
-what the tag should be. This will also add that tag when writing yaml.
+To check a tag, inside your ``mapping()`` method you can use ``io.mapTag()`` to specify
+what the tag should be. This will also add that tag when writing YAML.


Validation
----------

Sometimes in a YAML map, each key/value pair is valid, but the combination is
not. This is similar to something having no syntax errors, but still having
-semantic errors. To support semantic level checking, YAML I/O allows
+semantic errors. To support semantic-level checking, YAML I/O allows
an optional ``validate()`` method in a ``MappingTraits`` template specialization.

When parsing YAML, the ``validate()`` method is called *after* all key/values in
the map have been processed. Any error message returned by the ``validate()``
-method during input will be printed just a like a syntax error would be printed.
+method during input will be printed just like a syntax error would be printed.
When writing YAML, the ``validate()`` method is called *before* the YAML
key/values are written. Any error during output will trigger an ``assert()``
because it is a programming error to have invalid struct values.

@@ -827,14 +827,14 @@ add "static const bool flow = true;". 
For instance:

    static const bool flow = true;
}

-Flow mappings are subject to line wrapping according to the Output object
+Flow mappings are subject to line wrapping according to the ``Output`` object
configuration.

Sequence
========

To be translated to or from a YAML sequence for your type T you must specialize
-llvm::yaml::SequenceTraits on T and implement two methods:
+``llvm::yaml::SequenceTraits`` on T and implement two methods:
``size_t size(IO &io, T&)`` and
``T::value_type& element(IO &io, T&, size_t indx)``. For example:

@@ -846,11 +846,11 @@ llvm::yaml::SequenceTraits on T and implement two methods:
    static MySeqEl &element(IO &io, MySeq &list, size_t index) { ... }
  };

-The size() method returns how many elements are currently in your sequence.
-The element() method returns a reference to the i'th element in the sequence.
-When parsing YAML, the element() method may be called with an index one bigger
-than the current size. Your element() method should allocate space for one
-more element (using default constructor if element is a C++ object) and returns
+The ``size()`` method returns how many elements are currently in your sequence.
+The ``element()`` method returns a reference to the i'th element in the sequence.
+When parsing YAML, the ``element()`` method may be called with an index one bigger
+than the current size. Your ``element()`` method should allocate space for one
+more element (using the default constructor if the element is a C++ object) and return
a reference to that newly allocated space.


@@ -881,10 +881,10 @@ configuration.

Utility Macros
--------------

-Since a common source of sequences is std::vector<>, YAML I/O provides macros:
-LLVM_YAML_IS_SEQUENCE_VECTOR() and LLVM_YAML_IS_FLOW_SEQUENCE_VECTOR() which
-can be used to easily specify SequenceTraits<> on a std::vector type. 
YAML
-I/O does not partial specialize SequenceTraits on std::vector<> because that
+Since a common source of sequences is ``std::vector<>``, YAML I/O provides macros:
+``LLVM_YAML_IS_SEQUENCE_VECTOR()`` and ``LLVM_YAML_IS_FLOW_SEQUENCE_VECTOR()`` which
+can be used to easily specify ``SequenceTraits<>`` on a ``std::vector`` type. YAML
+I/O does not partially specialize ``SequenceTraits`` on ``std::vector<>`` because that
would force all vectors to be sequences. An example use of the macros:

.. code-block:: c++

@@ -906,7 +906,7 @@ have need for multiple documents. The top level node in their YAML schema
will be a mapping or sequence. For those cases, the following is not needed.
But for cases where you do want multiple documents, you can specify a
trait for your document list type. The trait has the same methods as
-SequenceTraits but is named DocumentListTraits. For example:
+``SequenceTraits`` but is named ``DocumentListTraits``. For example:

.. code-block:: c++

@@ -919,29 +919,29 @@ SequenceTraits but is named DocumentListTraits. For example:

User Context Data
=================
-When an llvm::yaml::Input or llvm::yaml::Output object is created their
-constructors take an optional "context" parameter. This is a pointer to
+When an ``llvm::yaml::Input`` or ``llvm::yaml::Output`` object is created, its
+constructor takes an optional "context" parameter. This is a pointer to
whatever state information you might need.

For instance, in a previous example we showed how the conversion type for a
flags field could be determined at runtime based on the value of another
field in the mapping. But what if an inner mapping needs to know some field
value of an outer mapping? That is where the "context" parameter comes in. You
-can set values in the context in the outer map's mapping() method and
-retrieve those values in the inner map's mapping() method. 
+can set values in the context in the outer map's ``mapping()`` method and
+retrieve those values in the inner map's ``mapping()`` method.

-The context value is just a void*. All your traits which use the context
+The context value is just a ``void*``. All your traits which use the context
and operate on your native data types need to agree on what the context value
actually is. It could be a pointer to an object or struct which your various
-traits use to shared context sensitive information.
+traits use to share context-sensitive information.

Output
======

-The llvm::yaml::Output class is used to generate a YAML document from your
+The ``llvm::yaml::Output`` class is used to generate a YAML document from your
in-memory data structures, using traits defined on your data types.

-To instantiate an Output object you need an llvm::raw_ostream, an optional
+To instantiate an ``Output`` object you need an ``llvm::raw_ostream``, an optional
context pointer and an optional wrapping column:

.. code-block:: c++

@@ -950,14 +950,14 @@ context pointer and an optional wrapping column:
    public:
      Output(llvm::raw_ostream &, void *context = NULL, int WrapColumn = 70);

-Once you have an Output object, you can use the C++ stream operator on it
+Once you have an ``Output`` object, you can use the C++ stream operator on it
to write your native data as YAML. One thing to recall is that a YAML file
can contain multiple "documents". If the top level data structure you are
-streaming as YAML is a mapping, scalar, or sequence, then Output assumes you
+streaming as YAML is a mapping, scalar, or sequence, then ``Output`` assumes you
are generating one document and wraps the mapping output with "``---``"
and trailing "``...``".

-The WrapColumn parameter will cause the flow mappings and sequences to
+The ``WrapColumn`` parameter will cause the flow mappings and sequences to
line-wrap when they go over the supplied column. Pass 0 to completely
suppress the wrapping. 
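As a hypothetical illustration of the wrap-column behavior (the field names below are invented, and the exact wrap point depends on the configured column), a flow sequence that runs past the wrap column is continued on the following line, whereas a wrap column of 0 would keep it on a single line:

```yaml
---
name:            entry0
flags:           [ big, pointy, flat, round, big, pointy, flat, round,
                   big, pointy ]
...
```
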
@@ -980,7 +980,7 @@ The above could produce output like:
 ...

On the other hand, if the top level data structure you are streaming as YAML
-has a DocumentListTraits specialization, then Output walks through each element
+has a ``DocumentListTraits`` specialization, then ``Output`` walks through each element
of your DocumentList and generates a "---" before the start of each element
and ends with a "...".

@@ -1008,9 +1008,9 @@ The above could produce output like:

Input
=====

-The llvm::yaml::Input class is used to parse YAML document(s) into your native
-data structures. To instantiate an Input
-object you need a StringRef to the entire YAML file, and optionally a context
+The ``llvm::yaml::Input`` class is used to parse YAML document(s) into your native
+data structures. To instantiate an ``Input``
+object you need a ``StringRef`` to the entire YAML file, and optionally a context
pointer:

.. code-block:: c++

@@ -1019,12 +1019,12 @@ pointer:
    public:
      Input(StringRef inputContent, void *context=NULL);

-Once you have an Input object, you can use the C++ stream operator to read
+Once you have an ``Input`` object, you can use the C++ stream operator to read
the document(s). If you expect there might be multiple YAML documents in
-one file, you'll need to specialize DocumentListTraits on a list of your
+one file, you'll need to specialize ``DocumentListTraits`` on a list of your
document type and stream in that document list type. Otherwise you can just
stream in the document type. Also, you can check if there were
-any syntax errors in the YAML be calling the error() method on the Input
+any syntax errors in the YAML by calling the ``error()`` method on the ``Input``
object. For example:

.. code-block:: c++