path: root/clang/lib/CodeGen/CodeGenFunction.h
Age | Commit message | Author | Files | Lines
2024-02-22 | [AArch64] Implement __builtin_cpu_supports, compiler-rt tests. (#82378) | Pavel Iliin | 1 | -1/+1
The patch complements https://github.com/llvm/llvm-project/pull/68919 and adds AArch64 support for the builtin `__builtin_cpu_supports("feature1+...+featureN")`, which returns true if all CPU features named in the argument are detected. Native compiler-rt run tests for the AArch64 feature-detection mechanism were also added, and the 'cpu_model' check was fixed after its refactoring was merged in https://github.com/llvm/llvm-project/pull/75635. The original RFC was https://reviews.llvm.org/D153153.
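As a rough illustration of how the new builtin can be used for runtime dispatch (a minimal sketch, not taken from the patch; the feature string "sve2+sme" and the function names are assumptions):

```cpp
#include <cstdio>

// Hypothetical runtime dispatch on AArch64: the builtin returns true only if
// every feature named in the '+'-separated list is detected at run time.
void pick_kernel() {
  if (__builtin_cpu_supports("sve2+sme"))      // assumed feature names
    std::puts("using the SVE2+SME code path");
  else
    std::puts("using the generic code path");
}
```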
2024-02-13 | [OpenACC] Implement AST for OpenACC Compute Constructs (#81188) | Erich Keane | 1 | -0/+10
'serial', 'parallel', and 'kernel' constructs are all considered 'Compute' constructs. This patch creates the AST type, plus the required infrastructure for such a type, plus some base types that will be useful in the future for breaking this up. The only difference between the three is the 'kind' (plus some minor clause-legalization rules, but those can be differentiated easily enough), so rather than representing them as separate AST nodes, it seems to make sense to make them the same. Additionally, no clause AST functionality is being implemented yet, as that fits better in a separate patch, and this is enough to get the 'naked' constructs implemented. This is otherwise an 'NFC' patch, as it doesn't alter execution at all, so there aren't any tests. I did this to break up the review workload and to get feedback on the layout.
2024-02-11 | [clang][NFC] Annotate `CodeGenFunction.h` with `preferred_type` | Vlad Serebrennikov | 1 | -1/+4
This helps debuggers to display values in bit-fields in a more helpful way.
2024-01-16 | [Clang] Implement the 'counted_by' attribute (#76348) | Bill Wendling | 1 | -0/+22
The 'counted_by' attribute is used on flexible array members. The argument for the attribute is the name of the field member holding the count of elements in the flexible array. This information is used to improve the results of the array bound sanitizer and the '__builtin_dynamic_object_size' builtin. The 'count' field member must be within the same non-anonymous, enclosing struct as the flexible array member. For example:

```
struct bar;

struct foo {
  int count;
  struct inner {
    struct {
      int count; /* The 'count' referenced by 'counted_by' */
    };
    struct {
      /* ... */
      struct bar *array[] __attribute__((counted_by(count)));
    };
  } baz;
};
```

This example specifies that the flexible array member 'array' has the number of elements allocated for it in 'count':

```
struct bar;

struct foo {
  size_t count;
  /* ... */
  struct bar *array[] __attribute__((counted_by(count)));
};
```

This establishes a relationship between 'array' and 'count'; specifically that 'p->array' must have *at least* 'p->count' number of elements available. It's the user's responsibility to ensure that this relationship is maintained throughout changes to the structure. In the following, the allocated array erroneously has fewer elements than what's specified by 'p->count'. This would result in an out-of-bounds access not being detected:

```
struct foo *p;

void foo_alloc(size_t count) {
  p = malloc(MAX(sizeof(struct foo),
                 offsetof(struct foo, array[0]) + count * sizeof(struct bar *)));
  p->count = count + 42;
}
```

The next example updates 'p->count', breaking the relationship requirement that 'p->array' must have at least 'p->count' number of elements available:

```
void use_foo(int index, int val) {
  p->count += 42;
  p->array[index] = val; /* The sanitizer can't properly check this access */
}
```

In this example, an update to 'p->count' maintains the relationship requirement:

```
void use_foo(int index, int val) {
  if (p->count == 0)
    return;
  --p->count;
  p->array[index] = val;
}
```
2024-01-15 | Revert "[Clang] Implement the 'counted_by' attribute (#76348)" | Rashmi Mudduluru | 1 | -22/+0
This reverts commit 164f85db876e61cf4a3c34493ed11e8f5820f968.
2024-01-10 | [Clang] Implement the 'counted_by' attribute (#76348) | Bill Wendling | 1 | -0/+22
The 'counted_by' attribute is used on flexible array members. The argument for the attribute is the name of the field member holding the count of elements in the flexible array. This information is used to improve the results of the array bound sanitizer and the '__builtin_dynamic_object_size' builtin. The 'count' field member must be within the same non-anonymous, enclosing struct as the flexible array member. For example:

```
struct bar;

struct foo {
  int count;
  struct inner {
    struct {
      int count; /* The 'count' referenced by 'counted_by' */
    };
    struct {
      /* ... */
      struct bar *array[] __attribute__((counted_by(count)));
    };
  } baz;
};
```

This example specifies that the flexible array member 'array' has the number of elements allocated for it in 'count':

```
struct bar;

struct foo {
  size_t count;
  /* ... */
  struct bar *array[] __attribute__((counted_by(count)));
};
```

This establishes a relationship between 'array' and 'count'; specifically that 'p->array' must have *at least* 'p->count' number of elements available. It's the user's responsibility to ensure that this relationship is maintained throughout changes to the structure. In the following, the allocated array erroneously has fewer elements than what's specified by 'p->count'. This would result in an out-of-bounds access not being detected:

```
struct foo *p;

void foo_alloc(size_t count) {
  p = malloc(MAX(sizeof(struct foo),
                 offsetof(struct foo, array[0]) + count * sizeof(struct bar *)));
  p->count = count + 42;
}
```

The next example updates 'p->count', breaking the relationship requirement that 'p->array' must have at least 'p->count' number of elements available:

```
void use_foo(int index, int val) {
  p->count += 42;
  p->array[index] = val; /* The sanitizer can't properly check this access */
}
```

In this example, an update to 'p->count' maintains the relationship requirement:

```
void use_foo(int index, int val) {
  if (p->count == 0)
    return;
  --p->count;
  p->array[index] = val;
}
```
2024-01-10 | Revert "[Clang] Implement the 'counted_by' attribute (#76348)" | Nico Weber | 1 | -22/+0
This reverts commit fefdef808c230c79dca2eb504490ad0f17a765a5. Breaks check-clang; see https://github.com/llvm/llvm-project/pull/76348#issuecomment-1886029515. Also revert the follow-on "[Clang] Update 'counted_by' documentation": this reverts commit 4a3fb9ce27dda17e97341f28005a28836c909cfc.
2024-01-10 | [Clang] Implement the 'counted_by' attribute (#76348) | Bill Wendling | 1 | -0/+22
The 'counted_by' attribute is used on flexible array members. The argument for the attribute is the name of the field member holding the count of elements in the flexible array. This information is used to improve the results of the array bound sanitizer and the '__builtin_dynamic_object_size' builtin. The 'count' field member must be within the same non-anonymous, enclosing struct as the flexible array member. For example:

```
struct bar;

struct foo {
  int count;
  struct inner {
    struct {
      int count; /* The 'count' referenced by 'counted_by' */
    };
    struct {
      /* ... */
      struct bar *array[] __attribute__((counted_by(count)));
    };
  } baz;
};
```

This example specifies that the flexible array member 'array' has the number of elements allocated for it in 'count':

```
struct bar;

struct foo {
  size_t count;
  /* ... */
  struct bar *array[] __attribute__((counted_by(count)));
};
```

This establishes a relationship between 'array' and 'count'; specifically that 'p->array' must have *at least* 'p->count' number of elements available. It's the user's responsibility to ensure that this relationship is maintained throughout changes to the structure. In the following, the allocated array erroneously has fewer elements than what's specified by 'p->count'. This would result in an out-of-bounds access not being detected:

```
struct foo *p;

void foo_alloc(size_t count) {
  p = malloc(MAX(sizeof(struct foo),
                 offsetof(struct foo, array[0]) + count * sizeof(struct bar *)));
  p->count = count + 42;
}
```

The next example updates 'p->count', breaking the relationship requirement that 'p->array' must have at least 'p->count' number of elements available:

```
void use_foo(int index, int val) {
  p->count += 42;
  p->array[index] = val; /* The sanitizer can't properly check this access */
}
```

In this example, an update to 'p->count' maintains the relationship requirement:

```
void use_foo(int index, int val) {
  if (p->count == 0)
    return;
  --p->count;
  p->array[index] = val;
}
```
2024-01-04 | [Coverage][clang] Enable MC/DC Support in LLVM Source-based Code Coverage (3/3) | Alan Phipps | 1 | -1/+57
Part 3 of 3. This includes the MC/DC clang front-end components. Differential Revision: https://reviews.llvm.org/D138849
2023-12-18 | Revert counted_by attribute feature (#75857) | Bill Wendling | 1 | -16/+0
There are many issues that popped up with the counted_by feature. The patch #73730 has grown too large and approval is blocking Linux testing.

Includes reverts of:
- commit 769bc11f684d ("[Clang] Implement the 'counted_by' attribute (#68750)")
- commit bc09ec696209 ("[CodeGen] Revamp counted_by calculations (#70606)")
- commit 1a09cfb2f35d ("[Clang] counted_by attr can apply only to C99 flexible array members (#72347)")
- commit a76adfb992c6 ("[NFC][Clang] Refactor code to calculate flexible array member size (#72790)")
- commit d8447c78ab16 ("[Clang] Correct handling of negative and out-of-bounds indices (#71877)")
- partial commit b31cd07de5b7 ("[Clang] Regenerate test checks (NFC)")

Closes #73168
Closes #75173
2023-11-19 | [NFC][Clang] Refactor code to calculate flexible array member size (#72790) | Bill Wendling | 1 | -0/+3
The code that calculates the flexible array member size is big enough to warrant its own method.
2023-11-10 | [OpenMP] Rework handling of global ctor/dtors in OpenMP (#71739) | Joseph Huber | 1 | -0/+5
Summary: This patch reworks how we handle global constructors in OpenMP. Previously, we emitted individual kernels that were all registered and called individually. In order to provide more generic support, this patch moves all handling of this to the target backend and the runtime plugin. This has the benefit of supporting the GNU extensions for constructors and destructors, removing a class of failures related to shared library destruction order, and allowing targets other than OpenMP to use the same support without needing to change the frontend.

This is primarily done by calling kernels that the backend emits to iterate a list of ctor / dtor functions. For x64, this is automatic and we get it for free with the standard `dlopen` handling. For AMDGPU, we emit `amdgcn.device.init` and `amdgcn.device.fini` functions which handle everything automatically and simply need to be called. For NVPTX, a patch https://github.com/llvm/llvm-project/pull/71549 provides the kernels to call, but the runtime needs to set up the array manually by pulling out all the known constructor / destructor functions.

One concession that this patch requires is that for GPU targets in OpenMP offloading we will use `llvm.global_dtors` instead of using `atexit`. This is because `atexit` is a separate runtime function that does not mesh well with the handling we're trying to do here. This should be equivalent in all cases except for cases where we would need to destruct manually, such as:

```
struct S { ~S() { foo(); } };
void foo() { static S s; }
```

However this is broken in many other ways on the GPU, so it is not regressing any support, simply increasing the scope of what we can handle.

This changes the handling of ctors / dtors. The patch now outputs an informational message regarding the deprecation if the old format is used. This will be completely removed in a later release.

Depends on: https://github.com/llvm/llvm-project/pull/71549
2023-11-09 | [CodeGen] Revamp counted_by calculations (#70606) | Bill Wendling | 1 | -3/+10
Break down the counted_by calculations so that they correctly handle anonymous structs, which are specified internally as IndirectFieldDecls. This improves the calculation of __bdos on a different field member in the struct, and also improves support for __bdos on an index into the FAM. If the index is further out than the length of the FAM, then we return __bdos's "can't determine the size" value (zero or negative one, depending on type). Also simplify the code to use helper methods to get the field referenced by counted_by and the flexible array member itself, which also had some issues with FAMs in sub-structs.
2023-11-09 | Revert "Revert "[AMDGPU] const-fold imm operands of amdgcn_update_dpp intrinsic (#71139)"" (#71669) | Pravin Jagtap | 1 | -0/+2
This reverts commit d1fb9307951319eea3e869d78470341d603c8363 and fixes the lit test clang/test/CodeGenHIP/dpp-const-fold.hip. Authored-by: Pravin Jagtap <Pravin.Jagtap@amd.com>
2023-11-08 | Revert "[AMDGPU] const-fold imm operands of amdgcn_update_dpp intrinsic (#71139)" | Mitch Phillips | 1 | -2/+0
This reverts commit 32a3f2afe6ea7ffb02a6a188b123ded6f4c89f6c. Reason: broke the sanitizer buildbots. More details at https://github.com/llvm/llvm-project/commit/32a3f2afe6ea7ffb02a6a188b123ded6f4c89f6c
2023-11-08 | [AMDGPU] const-fold imm operands of amdgcn_update_dpp intrinsic (#71139) | Pravin Jagtap | 1 | -0/+2
Operands of `__builtin_amdgcn_update_dpp` need to evaluate to constants to match the intrinsic requirements. Fixes: SWDEV-426822, SWDEV-431138. Authored-by: Pravin Jagtap <Pravin.Jagtap@amd.com>
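As a hedged sketch of the kind of call site this affects (not from the patch; the operand order (old, src, dpp_ctrl, row_mask, bank_mask, bound_ctrl) mirrors the llvm.amdgcn.update.dpp intrinsic and is an assumption, as are the control values):

```cpp
// Sketch only: assumes compilation as AMDGPU device code where this builtin
// is available. Every operand after 'src' must be an integer constant
// expression; after this patch, expressions that merely fold to constants
// (e.g. 0x100 | 0x1) are accepted as well.
int shuffle_dpp(int old_val, int src) {
  return __builtin_amdgcn_update_dpp(old_val, src,
                                     /*dpp_ctrl=*/0x100 | 0x1,  // assumed control encoding
                                     /*row_mask=*/0xF,
                                     /*bank_mask=*/0xF,
                                     /*bound_ctrl=*/false);
}
```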
2023-11-07 | [Clang][SME2] Add multi-vector add/sub builtins (#69725) | Kerry McLaughlin | 1 | -0/+1
Adds the following SME2 builtins:
- sv(add|sub)
- sv(add|sub)_za32/za64
- sv(add|sub)_write_za32/za64

Other changes in this patch:
- CGBuiltin.cpp: The GetAArch64SMEProcessedOperands function is created to avoid duplicating existing code from EmitAArch64SVEBuiltinExpr.
- arm_sve.td: The add/sub SME2 builtins which do not operate on ZA have been added to arm_sve.td, matching the corresponding LLVM IR intrinsic names, which start with @llvm.aarch64.sve for this reason.
- SveEmitter.cpp: Adds the createCoreHeaderIntrinsics function to remove duplicated code in createHeader & createSMEHeader. Uses a new enum (ACLEKind) to choose either "__builtin_sme_" or "__builtin_sve_" when emitting the intrinsics.

See https://github.com/ARM-software/acle/pull/217/files
2023-11-02 | [AArch64][Clang] Refactor code to emit SVE & SME builtins (#70959) | Kerry McLaughlin | 1 | -0/+5
This patch removes duplicated code in EmitAArch64SVEBuiltinExpr and EmitAArch64SMEBuiltinExpr by creating a new function called GetAArch64SVEProcessedOperands which handles splitting up multi-vector arguments using vector extracts. These changes are non-functional.
2023-11-01 | Revert "[AArch64][Clang] Refactor code to emit SVE & SME builtins (#70662)" | Kerry McLaughlin | 1 | -5/+0
This reverts commit c34efe3c2734629b925d9411b3c86a710911a93a.
2023-11-01 | [AArch64][Clang] Refactor code to emit SVE & SME builtins (#70662) | Kerry McLaughlin | 1 | -0/+5
This patch removes duplicated code in EmitAArch64SVEBuiltinExpr and EmitAArch64SMEBuiltinExpr by creating a new function called GetAArch64SVEProcessedOperands which handles splitting up multi-vector arguments using vector extracts. These changes are non-functional.
2023-10-17 | [Clang][SVE2.1] Add svpext builtins | Caroline Concatto | 1 | -0/+5
As described in: https://github.com/ARM-software/acle/pull/257 Reviewed By: hassnaa-arm Differential Revision: https://reviews.llvm.org/D151081
2023-10-17 | [AArch64][SME] Remove immediate argument restriction for svldr and svstr (#68908) | Sam Tebbs | 1 | -1/+0
The svldr_vnum_za and svstr_vnum_za builtins/intrinsics currently require that the vnum argument be an immediate, but since vnum is used to modify the base register via a mul and add, that restriction is not necessary. This patch removes that restriction.
2023-10-14 | [Clang] Implement the 'counted_by' attribute (#68750) | Bill Wendling | 1 | -0/+6
The 'counted_by' attribute is used on flexible array members. The argument for the attribute is the name of the field member in the same structure holding the count of elements in the flexible array. This information can be used to improve the results of the array bound sanitizer and the '__builtin_dynamic_object_size' builtin.

This example specifies that the flexible array member 'array' has the number of elements allocated for it in 'count':

    struct bar;

    struct foo {
      size_t count;
      /* ... */
      struct bar *array[] __attribute__((counted_by(count)));
    };

This establishes a relationship between 'array' and 'count', specifically that 'p->array' must have *at least* 'p->count' number of elements available. It's the user's responsibility to ensure that this relationship is maintained through changes to the structure.

In the following, the allocated array erroneously has fewer elements than what's specified by 'p->count'. This would result in an out-of-bounds access not being detected:

    struct foo *p;

    void foo_alloc(size_t count) {
      p = malloc(MAX(sizeof(struct foo),
                     offsetof(struct foo, array[0]) + count * sizeof(struct bar *)));
      p->count = count + 42;
    }

The next example updates 'p->count', breaking the relationship requirement that 'p->array' must have at least 'p->count' number of elements available:

    struct foo *p;

    void foo_alloc(size_t count) {
      p = malloc(MAX(sizeof(struct foo),
                     offsetof(struct foo, array[0]) + count * sizeof(struct bar *)));
      p->count = count + 42;
    }

    void use_foo(int index) {
      p->count += 42;
      p->array[index] = 0; /* The sanitizer cannot properly check this access */
    }

Reviewed By: nickdesaulniers, aaron.ballman

Differential Revision: https://reviews.llvm.org/D148381
2023-10-09 | Revert "[Clang] Implement the 'counted_by' attribute" (#68603) | alexfh | 1 | -6/+0
This reverts commit 9a954c693573281407f6ee3f4eb1b16cc545033d, which causes clang crashes when compiling with `-fsanitize=bounds`. See https://github.com/llvm/llvm-project/commit/9a954c693573281407f6ee3f4eb1b16cc545033d#commitcomment-129529574 for details.
2023-10-04 | [Clang] Implement the 'counted_by' attribute | Bill Wendling | 1 | -0/+6
The 'counted_by' attribute is used on flexible array members. The argument for the attribute is the name of the field member in the same structure holding the count of elements in the flexible array. This information can be used to improve the results of the array bound sanitizer and the '__builtin_dynamic_object_size' builtin.

This example specifies that the flexible array member 'array' has the number of elements allocated for it in 'count':

    struct bar;

    struct foo {
      size_t count;
      /* ... */
      struct bar *array[] __attribute__((counted_by(count)));
    };

This establishes a relationship between 'array' and 'count', specifically that 'p->array' must have *at least* 'p->count' number of elements available. It's the user's responsibility to ensure that this relationship is maintained through changes to the structure.

In the following, the allocated array erroneously has fewer elements than what's specified by 'p->count'. This would result in an out-of-bounds access not being detected:

    struct foo *p;

    void foo_alloc(size_t count) {
      p = malloc(MAX(sizeof(struct foo),
                     offsetof(struct foo, array[0]) + count * sizeof(struct bar *)));
      p->count = count + 42;
    }

The next example updates 'p->count', breaking the relationship requirement that 'p->array' must have at least 'p->count' number of elements available:

    struct foo *p;

    void foo_alloc(size_t count) {
      p = malloc(MAX(sizeof(struct foo),
                     offsetof(struct foo, array[0]) + count * sizeof(struct bar *)));
      p->count = count + 42;
    }

    void use_foo(int index) {
      p->count += 42;
      p->array[index] = 0; /* The sanitizer cannot properly check this access */
    }

Reviewed By: nickdesaulniers, aaron.ballman

Differential Revision: https://reviews.llvm.org/D148381
2023-10-02 | [C++] Implement "Deducing this" (P0847R7) | Corentin Jabot | 1 | -0/+2
This patch implements P0847R7 (partially), CWG2561 and CWG2653. Reviewed By: aaron.ballman, #clang-language-wg Differential Revision: https://reviews.llvm.org/D140828
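For reference, a minimal sketch of the explicit object parameter ("deducing this") syntax that P0847R7 introduces; the code is illustrative, not taken from the patch, and needs a compiler with C++23 support:

```cpp
#include <utility>

struct Counter {
  int value = 0;

  // The object parameter is written explicitly and its type is deduced, so a
  // single template serves const, non-const and rvalue objects alike.
  template <typename Self>
  auto &&get(this Self &&self) {
    return std::forward<Self>(self).value;
  }
};

// Usage: Self is deduced as 'const Counter &' and 'Counter &' respectively.
int  read(const Counter &c) { return c.get(); }
void bump(Counter &c)       { ++c.get(); }
```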
2023-08-28 | Revert "[C++20] [Coroutines] Mark await_suspend as noinline if the awaiter is not empty" | Chuanqi Xu | 1 | -5/+0
This reverts commit 9d9c25f81456aace2bec4b58498a420e650007d9.
This reverts commit 19ab2664ad3182ffa8fe3a95bb19765e4ae84653.
This reverts commit c4672454743e942f148a1aff1e809dae73e464f6.

As the issue https://github.com/llvm/llvm-project/issues/65018 shows, the previous fix actually introduced a regression, so this commit reverts the fix per our policies.
2023-08-27 | [CodeGen] Modernize PeepholeProtection (NFC) | Kazu Hirata | 1 | -2/+2
2023-08-23 | [X86] Support arch=x86-64{,-v2,-v3,-v4} for target_clones attribute | Fangrui Song | 1 | -1/+1
GCC 12 (https://gcc.gnu.org/PR101696) allows `arch=x86-64` `arch=x86-64-v2` `arch=x86-64-v3` `arch=x86-64-v4` in the target_clones function attribute. This patch ports the feature.

* Set KeyFeature to `x86-64{,-v2,-v3,-v4}` in `Processors[]`, to be used by X86TargetInfo::multiVersionSortPriority
* builtins: change `__cpu_features2` to an array like libgcc. Define `FEATURE_X86_64_{BASELINE,V2,V3,V4}` and depended ISA feature bits.
* CGBuiltin.cpp: update EmitX86CpuSupports to handle `arch=x86-64*`.

Close https://github.com/llvm/llvm-project/issues/55830

Reviewed By: pengfei

Differential Revision: https://reviews.llvm.org/D158329
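A brief sketch of the attribute as it can now be spelled (the function itself is made up; the clone list and the required "default" fallback follow the usual target_clones rules):

```cpp
// One clone is emitted per listed target level, plus a resolver that picks
// the best available clone for the host CPU.
__attribute__((target_clones("arch=x86-64-v4", "arch=x86-64-v3",
                             "arch=x86-64-v2", "default")))
int dot_product(const int *a, const int *b, int n) {
  int sum = 0;
  for (int i = 0; i < n; ++i)
    sum += a[i] * b[i];
  return sum;
}
```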
2023-08-23 | [NFC][CLANG] Fix static analyzer bugs about large copy by values | Manna, Soumi | 1 | -4/+4
The static analyzer complains about large function call parameters that are passed by value in CGBuiltin.cpp:

1. In CodeGenFunction::EmitSMELdrStr(clang::SVETypeFlags, llvm::SmallVectorImpl<llvm::Value *> &, unsigned int): the parameter TypeFlags of type clang::SVETypeFlags is passed by value.
2. In CodeGenFunction::EmitSMEZero(clang::SVETypeFlags, llvm::SmallVectorImpl<llvm::Value *> &, unsigned int): the parameter TypeFlags of type clang::SVETypeFlags is passed by value.
3. In CodeGenFunction::EmitSMEReadWrite(clang::SVETypeFlags, llvm::SmallVectorImpl<llvm::Value *> &, unsigned int): the parameter TypeFlags of type clang::SVETypeFlags is passed by value.
4. In CodeGenFunction::EmitSMELd1St1(clang::SVETypeFlags, llvm::SmallVectorImpl<llvm::Value *> &, unsigned int): the parameter TypeFlags of type clang::SVETypeFlags is passed by value.

In many other places in CGBuiltin.cpp the TypeFlags parameter of type clang::SVETypeFlags is passed by reference, and clang::SVETypeFlags inherits from several other types. This patch passes the TypeFlags parameter by reference instead of by value in these functions.

Reviewed By: tahonermann, sdesmalen

Differential Revision: https://reviews.llvm.org/D158522
2023-08-22 | [C++20] [Coroutines] Mark await_suspend as noinline if the awaiter is not empty | Chuanqi Xu | 1 | -0/+5
Close https://github.com/llvm/llvm-project/issues/56301
Close https://github.com/llvm/llvm-project/issues/64151

See the summary and the discussion of https://reviews.llvm.org/D157070 to get the full context.

As @rjmccall pointed out, the key point of the root cause is that currently we didn't implement the semantics for '@llvm.coro.save' well ("after the await-ready returns false, the coroutine is considered to be suspended"). The semantics imply that we (the compiler) shouldn't write the spills into the coroutine frame in the await_suspend. But now it is possible due to some combinations of the optimizations, so the semantics are broken, and inlining is the root optimization of such combinations. So in this patch, we add the `noinline` attribute to the await_suspend call.

Also, as an optimization, we don't add the `noinline` attribute to the await_suspend call if the awaiter is an empty class. This should be correct since the programmers can't access the local variables in await_suspend if the awaiter is empty. I think this is necessary for performance since it is pretty common.

Another potential optimization is:

  call @llvm.coro.await_suspend(ptr %awaiter, ptr %handle, ptr @awaitSuspendFn)

Then it is much easier to perform the safety analysis in the middle end. If it is safe to inline the call to awaitSuspend, we can replace it in the CoroEarly pass. Otherwise we could replace it in the CoroSplit pass.

Reviewed By: rjmccall

Differential Revision: https://reviews.llvm.org/D157833
2023-08-09 | [Clang][LoongArch] Use the ClangBuiltin class to automatically generate support for CBE and CFE | wanglei | 1 | -1/+0
Fixed the type modifier (L->W) and removed redundant feature-checking code, since the feature has already been checked in `EmitBuiltinExpr`. Also cleaned up unused diagnostic information. Reviewed By: SixWeining. Differential Revision: https://reviews.llvm.org/D156866
2023-07-26 | Reland "Try to implement lambdas with inalloca parameters by forwarding without use of inallocas." | Amy Huang | 1 | -2/+12
This reverts commit 8ed7aa59f489715d39d32e72a787b8e75cfda151. Differential Revision: https://reviews.llvm.org/D154007
2023-07-20 | [Clang][AArch64][SME] Add intrinsics for ZA array load/store (LDR/STR) | Bryan Chan | 1 | -0/+3
This patch adds support for the following SME ACLE intrinsics (as defined in https://arm-software.github.io/acle/main/acle.html):
- svldr_vnum_za
- svstr_vnum_za

Co-authored-by: Sagar Kulkarni <sagar.kulkarni1@huawei.com>

Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D134678
2023-07-20 | [Clang][AArch64][SME] Add ZA zeroing intrinsics | Bryan Chan | 1 | -0/+3
This patch adds support for the following SME ACLE intrinsics (as defined in https://arm-software.github.io/acle/main/acle.html):
- svzero_mask_za
- svzero_za

Co-authored-by: Sagar Kulkarni <sagar.kulkarni1@huawei.com>

Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D134677
2023-07-20 | [Clang][AArch64][SME] Add vector read/write (mova) intrinsics | Bryan Chan | 1 | -0/+3
This patch adds support for the following SME ACLE intrinsics (as defined in https://arm-software.github.io/acle/main/acle.html):
- svread_hor_za8[_s8]_m // also for u8
- svread_hor_za16[_s16]_m // also for u16, f16, bf16
- svread_hor_za32[_s32]_m // also for u32, f32
- svread_hor_za64[_s64]_m // also for u64, f64
- svread_hor_za128[_s8]_m // also for s16, s32, s64, u8, u16, u32, u64, bf16, f16, f32, f64
- svread_ver_za8[_s8]_m // also for u8
- svread_ver_za16[_s16]_m // also for u16, f16, bf16
- svread_ver_za32[_s32]_m // also for u32, f32
- svread_ver_za64[_s64]_m // also for u64, f64
- svread_ver_za128[_s8]_m // also for s16, s32, s64, u8, u16, u32, u64, bf16, f16, f32, f64
- svwrite_hor_za8[_s8]_m // also for u8
- svwrite_hor_za16[_s16]_m // also for u16, f16, bf16
- svwrite_hor_za32[_s32]_m // also for u32, f32
- svwrite_hor_za64[_s64]_m // also for u64, f64
- svwrite_hor_za128[_s8]_m // also for s16, s32, s64, u8, u16, u32, u64, bf16, f16, f32, f64
- svwrite_ver_za8[_s8]_m // also for u8
- svwrite_ver_za16[_s16]_m // also for u16, f16, bf16
- svwrite_ver_za32[_s32]_m // also for u32, f32
- svwrite_ver_za64[_s64]_m // also for u64, f64
- svwrite_ver_za128[_s8]_m // also for s16, s32, s64, u8, u16, u32, u64, bf16, f16, f32, f64

Co-authored-by: Sagar Kulkarni <sagar.kulkarni1@huawei.com>

Reviewed By: sdesmalen, kmclaughlin

Differential Revision: https://reviews.llvm.org/D128648
2023-07-14 | clang: Attach !fpmath metadata to __builtin_sqrt based on language flags | Matt Arsenault | 1 | -0/+8
OpenCL and HIP have -cl-fp32-correctly-rounded-divide-sqrt and -fno-hip-correctly-rounded-divide-sqrt. The corresponding fpmath metadata was only set on fdiv, and not on sqrt. The backend is currently underutilizing sqrt lowering options; the responsibility is split between the libraries and the backend, and this metadata is needed. CUDA/NVCC has -prec-div and -prec-sqrt, but clang doesn't appear to be aiming for compatibility with those. Don't know if OpenMP has a similar control.
2023-07-12 | [OpenMP] Migrate device code privatisation from Clang CodeGen to OMPIRBuilder | Akash Banerjee | 1 | -2/+4
This patch migrates the UseDevicePtr and UseDeviceAddr clause-related code for handling privatisation from Clang codegen to the OMPIRBuilder. Depends on D150860. Reviewed By: jdoerfert. Differential Revision: https://reviews.llvm.org/D152554
2023-07-10 | [NFC] Initialize class member pointers to nullptr. | Sindhu Chittireddy | 1 | -5/+5
Reviewed here: https://reviews.llvm.org/D153926
2023-07-06 | Enable dynamic-sized VLAs for data sharing in OpenMP offloaded target regions. | Doru Bercea | 1 | -0/+2
Review: https://reviews.llvm.org/D153883
2023-07-05 | [OpenMP][CodeGen] Add codegen for combined 'loop' directives. | Dave Pagan | 1 | -0/+16
The loop directive is a descriptive construct which allows the compiler flexibility in how it generates code for the directive's associated loop(s). See OpenMP specification 5.2 [257:8-9].

Codegen added in this patch for the combined 'loop' directives are:
- 'target teams loop' -> 'target teams distribute parallel for'
- 'teams loop' -> 'teams distribute parallel for'
- 'target parallel loop' -> 'target parallel for'
- 'parallel loop' -> 'parallel for'

NOTE: The implementation of the 'loop' directive itself is unchanged.

Differential Revision: https://reviews.llvm.org/D145823
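A hedged sketch of what the first mapping means in user code (the kernel below is illustrative, not from the patch): the descriptive directive is code-generated as if the prescriptive combined directive had been written.

```cpp
// 'target teams loop' is emitted as 'target teams distribute parallel for':
// the offloaded loop is distributed across teams and parallelized within
// each team.
void saxpy(float a, const float *x, float *y, int n) {
  #pragma omp target teams loop map(to: x[0:n]) map(tofrom: y[0:n])
  for (int i = 0; i < n; ++i)
    y[i] = a * x[i] + y[i];
}
```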
2023-06-29 | [clang][CodeGen] Remove no-op EmitCastToVoidPtr (NFC) | Sergei Barannikov | 1 | -3/+0
Reviewed By: JOE1994 Differential Revision: https://reviews.llvm.org/D153694
2023-06-22 | Revert "Try to implement lambdas with inalloca parameters by forwarding without use of inallocas." | Amy Huang | 1 | -12/+2
Causes a clang crash (see crbug.com/1457256). This reverts commit 015049338d7e8e0e81f2ad2f94e5a43e2e3f5220.
2023-06-20 | Try to implement lambdas with inalloca parameters by forwarding without use of inallocas. | Amy Huang | 1 | -2/+12
Differential Revision: https://reviews.llvm.org/D137872
2023-05-28 | [Clang][AArch64][SME] Add vector load/store (ld1/st1) intrinsics | Bryan Chan | 1 | -0/+6
This patch adds support for the following SME ACLE intrinsics (as defined in https://arm-software.github.io/acle/main/acle.html):
- svld1_hor_za8 // also for _za16, _za32, _za64 and _za128
- svld1_hor_vnum_za8 // also for _za16, _za32, _za64 and _za128
- svld1_ver_za8 // also for _za16, _za32, _za64 and _za128
- svld1_ver_vnum_za8 // also for _za16, _za32, _za64 and _za128
- svst1_hor_za8 // also for _za16, _za32, _za64 and _za128
- svst1_hor_vnum_za8 // also for _za16, _za32, _za64 and _za128
- svst1_ver_za8 // also for _za16, _za32, _za64 and _za128
- svst1_ver_vnum_za8 // also for _za16, _za32, _za64 and _za128

SveEmitter.cpp is extended to generate arm_sme.h (currently named arm_sme_draft_spec_subject_to_change.h) and other SME definitions from arm_sme.td, which is modeled after arm_sve.td. Common TableGen definitions are moved into arm_sve_sme_incl.td.

Co-authored-by: Sagar Kulkarni <sagar.kulkarni1@huawei.com>

Reviewed By: sdesmalen, kmclaughlin

Differential Revision: https://reviews.llvm.org/D127910
2023-05-20 | -fsanitize=function: use type hashes instead of RTTI objects | Fangrui Song | 1 | -5/+4
Currently we use RTTI objects to check type compatibility. To support non-unique RTTI objects, commit 5745eccef54ddd3caca278d1d292a88b2281528b added a `checkTypeInfoEquality` string matching to the runtime. The scheme is inefficient.

```
_Z1fv:
  .long   846595819                 # jmp
  .long   .L__llvm_rtti_proxy-_Z3funv
  ...

main:
  ...
  # Load the second word (pointer to the RTTI object) and dereference it.
  movslq  4(%rsi), %rax
  movq    (%rax,%rsi), %rdx
  # Is it the desired typeinfo object?
  leaq    _ZTIFvvE(%rip), %rax
  # If not, call __ubsan_handle_function_type_mismatch_v1, which may recover if checkTypeInfoEquality allows
  cmpq    %rax, %rdx
  jne     .LBB1_2
  ...

  .section .data.rel.ro,"aw",@progbits
  .p2align 3, 0x0
.L__llvm_rtti_proxy:
  .quad   _ZTIFvvE
```

Let's replace the indirect `_ZTI` pointer with a type hash similar to `-fsanitize=kcfi`.

```
_Z1fv:
  .long   3238382334
  .long   2772461324                # type hash

main:
  ...
  # Load the second word (callee type hash) and check whether it is expected
  cmpl    $-1522505972, -4(%rax)
  # If not, fail: call __ubsan_handle_function_type_mismatch
  jne     .LBB2_2
```

The RTTI object derives its name from `clang::MangleContext::mangleCXXRTTI`, which uses `mangleType`. `mangleTypeName` uses `mangleType` as well. So the type compatibility change is high-fidelity.

Since we no longer need RTTI pointers in `__ubsan::__ubsan_handle_function_type_mismatch_v1`, let's switch it back to version 0, the original signature before e215996a2932ed7c472f4e94dc4345b30fd0c373 (2019). `__ubsan::__ubsan_handle_function_type_mismatch_abort` is not recoverable, so we can revert some changes from e215996a2932ed7c472f4e94dc4345b30fd0c373.

Reviewed By: samitolvanen

Differential Revision: https://reviews.llvm.org/D148785
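For orientation, a minimal example of the mismatch this check catches (illustrative code; it assumes the program is built with -fsanitize=function):

```cpp
#include <cstdio>

void greet(int n) { std::printf("hello %d\n", n); }

int main() {
  // Calling through a pointer of the wrong function type is undefined
  // behaviour; -fsanitize=function diagnoses it at run time by comparing the
  // callee's recorded type (now a hash) against the type used at the call site.
  using WrongFn = void (*)();
  WrongFn fp = reinterpret_cast<WrongFn>(&greet);
  fp();  // flagged by UBSan (exact message wording may vary)
  return 0;
}
```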
2023-04-03 | [OpenMP][5.1] Fix parallel masked is ignored #59939 | Rafael A. Herrera Guaitero | 1 | -0/+1
Code generation support for the 'parallel masked' directive. `EmitOMPParallelMaskedDirective` was implemented. In addition, the appropriate device functions were added. Fix #59939. Reviewed By: jdoerfert. Differential Revision: https://reviews.llvm.org/D143527
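A small sketch of the directive whose codegen this enables (illustrative only): the team executes the parallel region, but only the primary thread runs the masked block.

```cpp
#include <cstdio>
#include <omp.h>

void report() {
  #pragma omp parallel masked
  {
    // Executed by the primary (masked) thread of the parallel team only.
    std::printf("team size: %d\n", omp_get_num_threads());
  }
}
```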
2023-02-28 | [Coroutines] Avoid creating conditional cleanup markers in suspend block | Wei Wang | 1 | -0/+5
We shouldn't access coro frame after returning from `await_suspend()` and before `llvm.coro.suspend()`. Make sure we always hoist conditional cleanup markers when inside the `await.suspend` block. Fix https://github.com/llvm/llvm-project/issues/59181 Reviewed By: ChuanqiXu Differential Revision: https://reviews.llvm.org/D144680
2023-02-15 | [CodeGen] Add a flag to `Address` and `Lvalue` that is used to keep track of whether the pointer is known not to be null | Akira Hatanaka | 1 | -5/+12
The flag will be used for the arm64e work we plan to upstream in the future (see https://lists.llvm.org/pipermail/llvm-dev/2019-October/136091.html). Currently the flag has no effect on code generation. Differential Revision: https://reviews.llvm.org/D142584
2023-01-14 | [clang] Use std::optional instead of llvm::Optional (NFC) | Kazu Hirata | 1 | -1/+1
This patch replaces (llvm::|)Optional< with std::optional<. I'll post a separate patch to remove #include "llvm/ADT/Optional.h". This is part of an effort to migrate from llvm::Optional to std::optional: https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
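For readers who have not used the replacement type, a tiny sketch of the pattern the migration moves toward (the function names are made up; the accessor renames match the deprecation thread linked above):

```cpp
#include <optional>

// std::optional replaces llvm::Optional; has_value()/value_or() replace the
// deprecated hasValue()/getValueOr() spellings.
std::optional<unsigned> findBitWidth(bool known) {
  if (!known)
    return std::nullopt;
  return 32u;
}

unsigned widthOrDefault(bool known) {
  return findBitWidth(known).value_or(64u);
}
```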