Age | Commit message | Author | Files | Lines |
|
Caused by an incorrect assertion.
Fixes #116440
|
|
E is already of type Expr * and shares the same declaration among all these
cases.
|
|
This extends https://github.com/llvm/llvm-project/pull/138577 to more UBSan checks, by changing SanitizerDebugLocation (formerly SanitizerScope) to add annotations if enabled for the specified ordinals.
Annotations will use the ordinal name if there is exactly one ordinal specified in the SanitizerDebugLocation; otherwise, they will use the handler name.
Updates the tests from https://github.com/llvm/llvm-project/pull/141814.
---------
Co-authored-by: Vitaly Buka <vitalybuka@google.com>
|
|
It turns out that getVLASize() does not get you the size of a single
dimension of the VLA; it gets you the full count of all elements. This
caused _Countof to return invalid values for multidimensional VLAs. Switch
to using getVLAElements1D() instead, which only gets a single dimension.
Fixes #141409
|
|
This patch is part of a stack that teaches Clang to generate Key Instructions
metadata for C and C++.
RFC:
https://discourse.llvm.org/t/rfc-improving-is-stmt-placement-for-better-interactive-debugging/82668
The feature is only functional in LLVM if LLVM is built with CMake flag
LLVM_EXPERIMENTAL_KEY_INSTRUCTIONS. Eventually that flag will be removed.
|
|
-mrvv-vector-bits (#139190)
For i1 vectors, we used an i8 fixed vector as the storage type.
If the known minimum number of elements of the scalable vector type is
less than 8, we were doing the cast through memory. This used a load or
store from a fixed vector alloca. In that case, DataLayout
indicates that the load/store reads/writes vscale bytes even if vscale
is known and vscale*X is less than or equal to 8. This means the load or
store is outside the bounds of the fixed size alloca as far as
DataLayout is concerned leading to undefined behavior.
This patch avoids this by widening the i1 scalable vector type with zero
elements until it is divisible by 8. This allows it to be bitcast to/from
an i8 scalable vector. We then insert or extract the i8 fixed vector
into this type.
Hopefully this enables #130973 to be accepted.
|
|
The passed indices have to be constant integers anyway, which we verify
before creating the ShuffleVectorExpr. Use the value we create there and
save the indices using a ConstantExpr instead. This way, we don't have
to evaluate the args every time we call getShuffleMaskIdx().
|
|
Allows the __ptrauth qualifier to be applied to pointer-sized integer types,
updates Sema to ensure that trivial-copyability and related checks correctly
handle address-discriminated integers, and updates codegen to perform
authentication around arithmetic on such types.
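A minimal sketch of what the qualifier now accepts (assuming a pointer-authentication target such as arm64e; the key and discriminator values below are arbitrary, for illustration only):
```c
#include <ptrauth.h>
#include <stdint.h>

/* An address-discriminated, pointer-sized integer slot. Loads, stores, and
   arithmetic on it are wrapped in auth/sign operations by codegen. */
uintptr_t __ptrauth(ptrauth_key_process_dependent_data, 1, 0x1234) token_slot;
```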
|
|
It isn't used and is redundant with the result pointer type argument.
A more reasonable API would only have LangAS parameters, or IR parameters,
not both. Not all values have a meaningful value for this. I'm also
not sure why we have this at all; it's not overridden by any targets, and
further simplification is possible.
|
|
Do not suppress the pointer overflow check for the `(i8*) nullptr + N`
idiom.
Related issue: https://github.com/llvm/llvm-project/issues/137833
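A hedged sketch of the idiom in question (function and variable names are made up for illustration):
```c
/* With -fsanitize=pointer-overflow, this arithmetic on a null pointer is now
   instrumented and reported at runtime when n is non-zero, instead of being
   silently skipped. */
char *null_plus(unsigned long n) {
  return (char *)0 + n;
}
```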
|
|
When an HLSL init list produces a scalar, handle OpaqueValueExprs in
the init list with 'emitInitListOpaqueValues', copied from
'AggExprEmitter::VisitCXXParenListOrInitListExpr'.
Closes #136408
---------
Co-authored-by: Chris B <beanz@abolishcrlf.org>
|
|
a uint64_t index. (#138324)
Most callers want a constant index. Instead of making every caller
create a ConstantInt, we can do it in IRBuilder. This is similar to
CreateInsertElement/CreateExtractElement.
|
|
The qualifier allows the programmer to directly control how pointers are
signed when they are stored in a particular variable.
The qualifier takes three arguments: the signing key, a flag specifying
whether address discrimination should be used, and a non-negative
integer that is used for additional discrimination.
```
typedef void (*my_callback)(const void*);
my_callback __ptrauth(ptrauth_key_process_dependent_code, 1, 0xe27a) callback;
```
Co-Authored-By: John McCall <rjmccall@apple.com>
|
|
Fixes #135122.
SemaExpr.cpp: make all doubles fail; add Sema support for float scalars
and vectors when the language mode is HLSL.
CGExprScalar.cpp: allow emitting frem when the language mode is HLSL.
|
|
Add missing CGFPOptionsRAII for fptoi and itofp cases
|
|
C2y adds the `_Countof` operator which returns the number of elements in
an array. As with `sizeof`, `_Countof` either accepts a parenthesized
type name or an expression. Its operand must be (of) an array type. When
passed a constant-size array operand, the operator is a constant
expression which is valid for use as an integer constant expression.
This is being exposed as an extension in earlier C language modes, but
not in C++. C++ already has `std::extent` and `std::size` to cover these
needs, so the operator doesn't seem to give the user enough benefit to
warrant carrying this as an extension.
Fixes #102836
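An illustrative usage sketch (assuming a recent Clang with the extension available; names are made up):
```c
int fixed[10];
_Static_assert(_Countof(fixed) == 10, "");   /* integer constant expression */

unsigned long outer_count(int n) {
  int vla[n][4];
  return _Countof(vla);                      /* evaluated at run time: yields n */
}
```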
|
|
Introduce a trait to determine the number of bindings that would be
produced by
```cpp
auto [...p] = expr;
```
This is necessary to implement P2300
(https://eel.is/c++draft/exec#snd.concepts-5), but can also be used to
implement a general get<N> function that supports aggregates.
`__builtin_structured_binding_size` is a unary type trait that evaluates
to the number of bindings in a decomposition. If the argument cannot be
decomposed, a SFINAE-friendly error is produced.
A type is considered a valid tuple if `std::tuple_size_v<T>` is a valid
expression, even if there is no valid `std::tuple_element`
specialization or suitable `get` function for that type.
Fixes #46049
|
|
This addresses two issues introduced by moving indirect args into an
explicit AS (please see
<https://github.com/llvm/llvm-project/pull/114062#issuecomment-2659829790>
and
<https://github.com/llvm/llvm-project/pull/114062#issuecomment-2661158477>):
1. Unconditionally stripping casts from a pre-allocated return slot was
incorrect / insufficient (this is illustrated by the
`amdgcn_sret_ctor.cpp` test);
2. Putting compiler manufactured sret args in a non default AS can lead
to a C-cast (surprisingly) requiring an AS cast (this is illustrated by
the `sret_cast_with_nonzero_alloca_as.cpp` test).
The way we handle (2), by subverting CK_BitCast emission iff a sret arg
is involved, is quite naff, but I couldn't think of any other way to use
a non default indirect AS and make this case work (hopefully this is a
failure of imagination on my part).
|
|
This patch allows using fpfeatures pragmas with __builtin_convertvector:
- added TrailingObjects with FPOptionsOverride and methods for handling
it to ConvertVectorExpr
- added support for codegen, node dumping, and serialization of
fpfeatures contained in ConvertVectorExpr
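A hedged sketch of the kind of code this enables (assuming `#pragma clang fp` is accepted in this position on the build in question):
```c
typedef float float4 __attribute__((ext_vector_type(4)));
typedef int   int4   __attribute__((ext_vector_type(4)));

int4 to_int(float4 v) {
  /* The pragma's FP options now attach to the ConvertVectorExpr below. */
#pragma clang fp exceptions(strict)
  return __builtin_convertvector(v, int4);
}
```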
|
|
Implement HLSL Aggregate Splat casting that handles splatting for arrays
and structs, and vectors if splatting from a vec1.
Closes #100609 and Closes #100619
Depends on #118842
|
|
#118842 (#126258)
Implement HLSLElementwiseCast excluding support for splat cases
Do not support casting types that contain bitfields.
Partly closes https://github.com/llvm/llvm-project/issues/100609 and
partly closes https://github.com/llvm/llvm-project/issues/100619
Re-land #118842 after fixing warning as an error, found by a buildbot.
|
|
Reverts llvm/llvm-project#118842
|
|
Implement HLSLElementwiseCast excluding support for splat cases
Do not support casting types that contain bitfields.
Partly closes #100609 and partly closes #100619
|
|
GCC supports three flags related to overflow behavior:
* `-fwrapv`: Makes signed integer overflow well-defined.
* `-fwrapv-pointer`: Makes pointer overflow well-defined.
* `-fno-strict-overflow`: Implies `-fwrapv -fwrapv-pointer`, making both
signed integer overflow and pointer overflow well-defined.
Clang currently only supports `-fno-strict-overflow` and `-fwrapv`, but
not `-fwrapv-pointer`.
This PR proposes to introduce `-fwrapv-pointer` and adjust the semantics
of `-fwrapv` to match GCC.
This allows signed integer overflow and pointer overflow to be
controlled independently, while `-fno-strict-overflow` still exists to
control both at the same time (and that option is consistent across GCC
and Clang).
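A small sketch of what the distinction means in practice (this describes the intended GCC-compatible semantics, not a guaranteed optimization outcome):
```c
/* Under -fwrapv-pointer (or -fno-strict-overflow), pointer wraparound is
   well-defined, so the compiler may no longer fold this check to 'false'. */
int wraps_past_end(const char *p, unsigned long n) {
  return p + n < p;
}
```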
|
|
(exactly one sanitizer is required) (#122511)
The `Checked` parameter of `CodeGenFunction::EmitCheck` is of type
`ArrayRef<std::pair<llvm::Value *, SanitizerMask>>`, which is overly
generalized: SanitizerMask can denote that zero or more sanitizers are
enabled, but `EmitCheck` requires that exactly one sanitizer is
specified in the SanitizerMask (e.g.,
`SanitizeTrap.has(Checked[i].second)` enforces that).
This patch replaces SanitizerMask with SanitizerOrdinal in the `Checked`
parameter of `EmitCheck` and code that transitively relies on it. This
should not affect the behavior of UBSan, but it has the advantages that:
- the code is clearer: it avoids ambiguity in EmitCheck about what to do
if multiple bits are set
- specifying the wrong number of sanitizers in `Checked[i].second` will
be detected as a compile-time error, rather than a runtime assertion
failure
Suggested by Vitaly in https://github.com/llvm/llvm-project/pull/122392
as an alternative to adding an explicit runtime assertion that the
SanitizerMask contains exactly one sanitizer.
|
|
N3322 makes NULL + 0 well-defined in C, matching the C++ semantics.
Adjust the pointer-overflow sanitizer to no longer report NULL + 0 as a
pointer overflow in any language mode. NULL + nonzero will of course
continue to be reported.
As N3322 is part of C2y
(https://www.open-std.org/jtc1/sc22/wg14/www/previous.html), and we never
performed any optimizations based on NULL + 0 being undefined in the
first place, I'm applying this change to all C versions.
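A behavior sketch with -fsanitize=pointer-overflow after this change (variable names are illustrative):
```c
void null_offsets(void) {
  char *p = 0;
  char *q = p + 0;   /* no longer reported: NULL + 0 is well-defined */
  char *r = p + 4;   /* still reported: non-zero offset on a null pointer */
  (void)q; (void)r;
}
```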
|
|
`CounterPair` can hold `<uint32_t, uint32_t>` instead of the current
`unsigned`, so it can also hold the counter number of the skip path. For now,
this change provides the skeleton and only `CounterPair::Executed` is used.
Each counter number can be `None` to suppress emitting the counter
increment. The 2nd element, `Skipped`, is initialized to `None` by default,
since most `Stmt*` don't have a pair of counters.
This change also provides stubs for the verifier. I'll provide the
implementation of the verifier for `+Asserts` later.
`markStmtAsUsed(bool, Stmt*)` may be used to indicate that the counter on the
other side may not be emitted.
`markStmtMaybeUsed(S)` may be used when the `Stmt` and its inner statements
may be excluded from emission because they are skipped by constant folding. I
put it into the places I found.
`verifyCounterMap()` will check the coverage map against the counter map
and can be used to report inconsistencies.
These verifier methods are eliminated in `-Asserts` builds.
https://discourse.llvm.org/t/rfc-integrating-singlebytecoverage-with-branch-coverage/82492
|
|
Closes #119360.
This bug occurs when passing a VLA to `va_arg`. Since the return value
is inferred to be an array, it triggers
`ScalarExprEmitter::VisitCastExpr`, which converts it to a pointer and
subsequently calls `CodeGenFunction::EmitAggExpr`. At this point,
because the inferred type is an `AggExpr` instead of a `ScalarExpr`,
`ScalarExprEmitter::VisitVAArgExpr` is not invoked, and as a result,
`CodeGenFunction::EmitVariablyModifiedType` is also not called, leading
to the size of the VLA not being retrieved.
The solution is to move the call to
`CodeGenFunction::EmitVariablyModifiedType` into
`CodeGenFunction::EmitVAArg`, ensuring that the size of the VLA is
correctly obtained regardless of whether the expression is an `AggExpr`
or a `ScalarExpr`.
|
|
- Use `poison` instead of `undef` as a phi operand for an unreachable path (the predecessor
will not branch to the BB that uses the value of the phi).
- Call `@llvm.vector.insert` with a `poison` subvec when performing a
`bitcast` from a fixed vector to a scalable vector.
|
|
Previously, this hit an `llvm_unreachable()` assertion as the type of
`vec_t` did not exactly match `__SVInt8_t`, as it was wrapped in a
typedef.
Comparing the canonical types instead allows the types to match
correctly and avoids the crash.
Fixes #107609
|
|
w/ SSCLs (#107332)
[Related
RFC](https://discourse.llvm.org/t/rfc-support-globpattern-add-operator-to-invert-matches/80683/5?u=justinstitt)
### Summary
Implement type-based filtering via [Sanitizer Special Case
Lists](https://clang.llvm.org/docs/SanitizerSpecialCaseList.html) for
the arithmetic overflow and truncation sanitizers.
Currently, using the `type:` prefix with these sanitizers does nothing.
I've hooked up the SSCL parsing with Clang codegen so that we don't emit
the overflow/truncation checks if the arithmetic contains an ignored
type.
### Usefulness
You can craft ignorelists that ignore specific types that are expected
to overflow or wrap-around. For example, to ignore `my_type` from
`unsigned-integer-overflow` instrumentation:
```bash
$ cat ignorelist.txt
[unsigned-integer-overflow]
type:my_type=no_sanitize
$ cat foo.c
typedef unsigned long my_type;
void foo() {
my_type a = ULONG_MAX;
++a;
}
$ clang foo.c -fsanitize=unsigned-integer-overflow -fsanitize-ignorelist=ignorelist.txt ; ./a.out
// --> no sanitizer error
```
If a type is functionally intended to overflow, like
[refcount_t](https://kernsec.org/wiki/index.php/Kernel_Protections/refcount_t)
and its associated APIs in the Linux kernel, then this type filtering
would prove useful for reducing sanitizer noise. Currently, the Linux
kernel deals with this by
[littering](https://elixir.bootlin.com/linux/v6.10.8/source/include/linux/refcount.h#L139)
`__attribute__((no_sanitize("signed-integer-overflow")))` annotations
on all the `refcount_t` APIs. I think this serves as an example of how a
codebase could be made cleaner. We could make custom types that are
filtered out in an ignorelist, allowing for types to be more expressive
-- without the need for annotations. This accomplishes a similar goal to
https://github.com/llvm/llvm-project/pull/86618.
Yet another use case for this type filtering is whitelisting. We could
ignore _all_ types, save a few.
```bash
$ cat ignorelist.txt
[implicit-signed-integer-truncation]
type:*=no_sanitize # ignore literally all types
type:short=sanitize # except `short`
$ cat bar.c
// compile with -fsanitize=implicit-signed-integer-truncation
void bar(int toobig) {
char a = toobig; // not instrumented
short b = toobig; // instrumented
}
```
### Other ways to accomplish the goal of sanitizer
allowlisting/whitelisting
* ignore list SSCL type support (this PR that you're reading)
* [my sanitize-allowlist
branch](https://github.com/llvm/llvm-project/compare/main...JustinStitt:llvm-project:sanitize-allowlist)
- this just implements a sibling flag `-fsanitize-allowlist=`, removing
some of the double negative logic present with `skip`/`ignore` when
trying to whitelist something.
* [Glob
Negation](https://discourse.llvm.org/t/rfc-support-globpattern-add-operator-to-invert-matches/80683)
- Implement a negation operator in the GlobPattern class so the
ignorelist query can use it to simulate allowlisting
Please let me know which of the three options we like best. They are not
necessarily mutually exclusive.
Here's [another related
PR](https://github.com/llvm/llvm-project/pull/86618) which implements a
`wraps` attribute. This can accomplish a similar goal to this PR but
requires in-source changes to codebases and also covers a wider variety
of integer definedness problems.
### CCs
@kees @vitalybuka @bwendling
---------
Signed-off-by: Justin Stitt <justinstitt@google.com>
|
|
Fixes #113044.
The type of `ArrayTypeTraitExpr` can be changed, so using i32 directly is
incorrect.
---------
Co-authored-by: Eli Friedman <efriedma@quicinc.com>
|
|
The 'tile' clause shares quite a bit of the rules with 'collapse', so a
followup patch will add those tests/behaviors. This patch deals with
adding the AST node.
The 'tile' clause takes a series of integer constant expressions, or *.
The asterisk is now represented by a new OpenACCAsteriskSizeExpr node;
otherwise this clause is very similar to the others.
|
|
Since the migration to opaque pointers, CreateGlobalStringPtr()
is the same as CreateGlobalString(). Normalize to the latter.
|
|
HLSL allows implicit conversions to truncate vectors to scalar
prvalues. These conversions are scored as vector truncations and should
warn appropriately.
This change allows forming a truncation cast to a prvalue, but not an
lvalue. Truncating a vector to a scalar is performed by loading the
first element of the vector and disregarding the remaining elements.
Fixes #102964
|
|
Data type conversion between fp16 and bf16 will generate fptrunc and
fpext nodes, but they are actually bitcast nodes.
|
|
Add nuw attribute to inbounds GEPs where the expression used to form the
GEP is an addition of unsigned indices.
Relands #105496, which was reverted because it exposed a miscompilation
arising from #98608. This is now fixed by #106512.
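A hedged example of the kind of source pattern affected (whether the GEP actually gets `nuw` depends on target and optimization settings):
```c
/* The index arithmetic here is unsigned, so the address computation is a
   candidate for an `inbounds nuw` GEP. */
int read_at(const int *base, unsigned i, unsigned j) {
  return base[i + j];
}
```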
|
|
`llvm::ConstantFP::get(llvm::LLVMContext&, APFloat(float))` always
returns an f32 constant.
Fix https://github.com/llvm/llvm-project/issues/107054.
|
|
Reverts llvm/llvm-project#105496
This patch breaks:
https://lab.llvm.org/buildbot/#/builders/25/builds/1952
https://lab.llvm.org/buildbot/#/builders/52/builds/1775
Somehow output is different with sanitizers.
Maybe non-determinism in the code?
|
|
Add nuw attribute to inbounds GEPs where the expression used to form the
GEP is an addition of unsigned indices.
|
|
(#105709)
From @vitalybuka's review on
https://github.com/llvm/llvm-project/pull/104889:
- [x] remove unused variable in tests
- [x] rename `post-decr-while` --> `unsigned-post-decr-while`
- [x] split `add-overflow-test` into `add-unsigned-overflow-test` and
`add-signed-overflow-test`
- [x] be more clear about defaults within docs
- [x] add table to docs
Here's a screenshot of the rendered table so you don't have to build the
html docs yourself to inspect the layout:

CCs: @vitalybuka
---------
Signed-off-by: Justin Stitt <justinstitt@google.com>
Co-authored-by: Vitaly Buka <vitalybuka@google.com>
|
|
As per [1] the indices for a matrix element access operator shall have
integral or unscoped enumeration types and be non-negative. At the
moment, the index expression is converted to SizeType irrespective of
the signedness of the index expression. This causes implicit sign
conversion warnings if any of the indices is signed.
As per the spec, using signed types as indices is allowed and should not
cause any warnings. If the index expression is signed, extend to
SignedSizeType to avoid the warning.
[1]
https://clang.llvm.org/docs/MatrixTypes.html#matrix-type-element-access-operator
PR: https://github.com/llvm/llvm-project/pull/103044
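A short example with the Clang matrix extension (compile with -fenable-matrix), illustrating the signed-index case this change addresses:
```c
typedef float m4x4_t __attribute__((matrix_type(4, 4)));

/* 'int' indices are now extended to the signed size type, so this access no
   longer produces an implicit sign-conversion warning. */
float element(m4x4_t m, int row, int col) {
  return m[row][col];
}
```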
|
|
Introduce "-fsanitize-undefined-ignore-overflow-pattern=" which can
be used to disable sanitizer instrumentation for common overflow-dependent
code patterns.
For a wide selection of projects, proper overflow sanitization could
help catch bugs and solve security vulnerabilities. Unfortunately, in
some cases the integer overflow sanitizers are too noisy for their users
and are often left disabled. Providing users with a method to disable
sanitizer instrumentation of common patterns could mean more projects
actually utilize the sanitizers in the first place.
One such project that has opted to not use integer overflow (or
truncation) sanitizers is the Linux Kernel. There has been some
discussion[1] recently concerning mitigation strategies for unexpected
arithmetic overflow. This discussion is still ongoing and a succinct
article[2] accurately sums up the discussion. In summary, many Kernel
developers do not want to introduce more arithmetic wrappers when
most developers understand the code patterns as they are.
Patterns like:
if (base + offset < base) { ... }
or
while (i--) { ... }
or
#define SOME -1UL
are extremely common in a code base like the Linux Kernel. It is
perhaps too much to ask of kernel developers to use arithmetic wrappers
in these cases. For example:
while (wrapping_post_dec(i)) { ... }
which wraps some builtin would not fly. This would incur too many
changes to existing code; the code churn would be too much, at least too
much to justify turning on overflow sanitizers.
Currently, this commit tackles three pervasive idioms:
1. "if (a + b < a)" or some logically-equivalent re-ordering like "if (a > b + a)"
2. "while (i--)" (for unsigned) a post-decrement always overflows here
3. "-1UL, -2UL, etc" negation of unsigned constants will always overflow
The patterns that are excluded can be chosen from the following list:
- add-overflow-test
- post-decr-while
- negated-unsigned-const
These can be enabled with a comma-separated list:
-fsanitize-undefined-ignore-overflow-pattern=add-overflow-test,negated-unsigned-const
"all" or "none" may also be used to specify that all patterns should be
excluded or that none should be.
[1] https://lore.kernel.org/all/202404291502.612E0A10@keescook/
[2] https://lwn.net/Articles/979747/
CCs: @efriedma-quic @kees @jyknight @fmayer @vitalybuka
Signed-off-by: Justin Stitt <justinstitt@google.com>
Co-authored-by: Bill Wendling <morbo@google.com>
|
|
This reverts commit 9a666deecb9ff6ca3a6b12e6c2877e19b74b54da.
Reason: broke buildbots
e.g., fork-ubsan.test started failing at
https://lab.llvm.org/buildbot/#/builders/66/builds/2819/steps/9/logs/stdio
Clang :: CodeGen/compound-assign-overflow.c
Clang :: CodeGen/sanitize-atomic-int-overflow.c
started failing with https://lab.llvm.org/buildbot/#/builders/52/builds/1570
|
|
Introduce "-fsanitize-overflow-pattern-exclusion=" which can be used to
disable sanitizer instrumentation for common overflow-dependent code
patterns.
For a wide selection of projects, proper overflow sanitization could
help catch bugs and solve security vulnerabilities. Unfortunately, in
some cases the integer overflow sanitizers are too noisy for their users
and are often left disabled. Providing users with a method to disable
sanitizer instrumentation of common patterns could mean more projects
actually utilize the sanitizers in the first place.
One such project that has opted to not use integer overflow (or
truncation) sanitizers is the Linux Kernel. There has been some
discussion[1] recently concerning mitigation strategies for unexpected
arithmetic overflow. This discussion is still ongoing and a succinct
article[2] accurately sums up the discussion. In summary, many Kernel
developers do not want to introduce more arithmetic wrappers when
most developers understand the code patterns as they are.
Patterns like:
if (base + offset < base) { ... }
or
while (i--) { ... }
or
#define SOME -1UL
are extremely common in a code base like the Linux Kernel. It is
perhaps too much to ask of kernel developers to use arithmetic wrappers
in these cases. For example:
while (wrapping_post_dec(i)) { ... }
which wraps some builtin would not fly. This would incur too many
changes to existing code; the code churn would be too much, at least too
much to justify turning on overflow sanitizers.
Currently, this commit tackles three pervasive idioms:
1. "if (a + b < a)" or some logically-equivalent re-ordering like "if (a > b + a)"
2. "while (i--)" (for unsigned) a post-decrement always overflows here
3. "-1UL, -2UL, etc" negation of unsigned constants will always overflow
The patterns that are excluded can be chosen from the following list:
- add-overflow-test
- post-decr-while
- negated-unsigned-const
These can be enabled with a comma-separated list:
-fsanitize-overflow-pattern-exclusion=add-overflow-test,negated-unsigned-const
"all" or "none" may also be used to specify that all patterns should be
excluded or that none should be.
[1] https://lore.kernel.org/all/202404291502.612E0A10@keescook/
[2] https://lwn.net/Articles/979747/
CCs: @efriedma-quic @kees @jyknight @fmayer @vitalybuka
Signed-off-by: Justin Stitt <justinstitt@google.com>
Co-authored-by: Bill Wendling <morbo@google.com>
|
|
Use this to replace the emission of the amdgpu-unsafe-fp-atomics
attribute in favor of per-instruction metadata. In the future
new fine grained controls should be introduced that also cover
the integer cases.
Add a wrapper around CreateAtomicRMW that appends the metadata,
and update a few use contexts to use it.
|
|
Re-signing occurs when function type discrimination is enabled and a
function pointer is converted to another function pointer type that
requires signing using a different discriminator. A function pointer is
re-signed using discriminator zero when it's converted to a pointer to a
non-function type such as `void*`.
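A hedged C sketch of the conversions being described (assumes pointer authentication with function-pointer type discrimination enabled, e.g. an arm64e-style target):
```c
void callee(int);
typedef void (*fn_int_t)(int);
typedef void (*fn_void_t)(void);

void demo(void) {
  fn_int_t  a = callee;
  fn_void_t b = (fn_void_t)a;  /* different function pointer type: re-signed with its discriminator */
  void     *p = (void *)a;     /* pointer to a non-function type: re-signed with discriminator zero */
  (void)b; (void)p;
}
```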
---------
Co-authored-by: Ahmed Bougacha <ahmed@bougacha.org>
Co-authored-by: John McCall <rjmccall@apple.com>
|
|
There are two problems with _BitInt prior to this patch:
1. For at least some values of N, we cannot use LLVM's iN for the type
of struct elements, array elements, allocas, global variables, and so
on, because the LLVM layout for that type does not match the high-level
layout of _BitInt(N).
Example: currently, for i128:128 targets a correct implementation is
possible either for __int128 or for _BitInt(129+) with lowering to iN,
but not both, since we now have a correct implementation of __int128 in
place after a21abc7.
When this happens, opaque [M x i8] types are used, where M =
sizeof(_BitInt(N)).
2. LLVM doesn't guarantee any particular extension behavior for integer
types whose width isn't a multiple of 8. For this reason, all _BitInt types
now have an in-memory representation that is a whole number of bytes.
For example, _BitInt(17) will now have memory layout type i32.
This patch also introduces the concept of a load/store type and adds an API to
CodeGenTypes that returns the IR type that should be used for load and
store operations. This is particularly useful for the case when a
_BitInt ends up having array of bytes as memory layout type. For
_BitInt(N), let M = sizeof(_BitInt(N)), and let BITS = M * 8. Loads and
stores of iM would both (1) produce far better code from the backends
and (2) be far more optimizable by IR passes than loads and stores of [M
x i8].
Fixes https://github.com/llvm/llvm-project/issues/85139
Fixes https://github.com/llvm/llvm-project/issues/83419
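A small, hedged illustration of the memory-layout consequence on a typical 64-bit target (sizes are target-dependent):
```c
#include <stdio.h>

int main(void) {
  /* Typically prints 4: _BitInt(17) is stored as a whole number of bytes
     (loadable/storable as an i32), even though only 17 bits are significant. */
  printf("%zu\n", sizeof(_BitInt(17)));
  return 0;
}
```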
---------
Co-authored-by: John McCall <rjmccall@gmail.com>
|
|
This commit implements the entirety of the now-accepted [N3017
-Preprocessor
Embed](https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3017.htm) and
its sister C++ paper [p1967](https://wg21.link/p1967). It implements
everything in the specification, and includes an implementation that
drastically improves the time it takes to embed data in specific
scenarios (the initialization of character type arrays). The mechanisms
used to do this fall under the "as-if" rule; in general, when the
system cannot detect that it is initializing an array object in a variable
declaration, it will generate an EmbedExpr AST node, which is either
expanded by AST consumers (CodeGen or constant expression evaluators) or
expanded into a comma expression.
This reverts commit
https://github.com/llvm/llvm-project/commit/682d461d5a231cee54d65910e6341769419a67d7.
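A minimal usage sketch of the directive ("logo.bin" is a placeholder file name):
```c
static const unsigned char logo[] = {
#embed "logo.bin"
};
```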
---------
Co-authored-by: The Phantom Derpstorm <phdofthehouse@gmail.com>
Co-authored-by: Aaron Ballman <aaron@aaronballman.com>
Co-authored-by: cor3ntin <corentinjabot@gmail.com>
Co-authored-by: H. Vetinari <h.vetinari@gmx.com>
|