path: root/clang/lib/CodeGen/CGExprComplex.cpp
2017-03-26  [coroutines] Add codegen for await and yield expressions  (Gor Nishanov; 1 file, -0/+10)
Details: Emit a suspend expression, which roughly looks like:

    auto && x = CommonExpr();
    if (!x.await_ready()) {
      llvm_coro_save();
      x.await_suspend(...);     (*)
      llvm_coro_suspend();      (**)
    }
    x.await_resume();

where the result of the entire expression is the result of x.await_resume().

(*) If x.await_suspend's return type is bool, it allows vetoing a suspend:

    if (x.await_suspend(...))
      llvm_coro_suspend();

(**) llvm_coro_suspend() encodes three possible continuations as a switch instruction:

    %where-to = call i8 @llvm.coro.suspend(...)
    switch i8 %where-to, label %coro.ret [ ; jump to epilogue to suspend
      i8 0, label %yield.ready   ; go here when resumed
      i8 1, label %yield.cleanup ; go here when destroyed
    ]

llvm-svn: 298784
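
For context, a minimal user-level sketch (hypothetical example code, not from the commit or its tests) of an awaitable whose await_suspend returns bool, i.e. the "veto a suspend" case marked (*) above. It uses the C++20 <coroutine> spelling for brevity; the 2017 commit targeted the coroutines TS headers, but the lowering pattern is the same.

    #include <coroutine>

    struct Task {                        // trivial coroutine return type for the sketch
      struct promise_type {
        Task get_return_object() { return {}; }
        std::suspend_never initial_suspend() noexcept { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
      };
    };

    struct MaybeSuspend {                // hypothetical awaitable
      bool really_suspend;
      bool await_ready() const noexcept { return false; }
      bool await_suspend(std::coroutine_handle<>) const noexcept {
        return really_suspend;           // returning false vetoes the suspend (case (*))
      }
      int await_resume() const noexcept { return 42; }  // value of the whole co_await expression
    };

    Task demo(bool s) {
      int v = co_await MaybeSuspend{s};  // lowered to the await_ready/await_suspend/await_resume pattern above
      (void)v;
    }
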
2017-03-06  Don't assume cleanup emission preserves dominance in expr evaluation  (Reid Kleckner; 1 file, -1/+5)
Summary: Because of the existence of branches out of GNU statement expressions, it is possible that emitting cleanups for a full expression may cause the new insertion point to not be dominated by the result of the inner expression. Consider this example:

    struct Foo { Foo(); ~Foo(); int x; };
    int g(Foo, int);
    int f(bool cond) {
      int n = g(Foo(), ({ if (cond) return 0; 42; }));
      return n;
    }

Before this change, the result of the call to 'g' did not dominate its use in the store to 'n'. The early return exit from the statement expression branches to a shared cleanup block, which ends in a switch between the fallthrough destination (the assignment to 'n') or the function exit block.

This change solves the problem by spilling and reloading expression evaluation results when any of the active cleanups have branches. I audited the other call sites of enterFullExpression, and they don't appear to keep any Values live across the site of the cleanup, except in ARC code. I wasn't able to create a test case for ARC that exhibits this problem, though.

Reviewers: rjmccall, rsmith
Subscribers: cfe-commits
Differential Revision: https://reviews.llvm.org/D30590
llvm-svn: 297084
2016-12-23  Fix problems in "[OpenCL] Enabling the usage of CLK_NULL_QUEUE as compare operand."  (Egor Churaev; 1 file, -0/+1)
Summary: Fixed warnings in commit https://reviews.llvm.org/rL290171.

Reviewers: djasper, Anastasia
Subscribers: yaxunl, cfe-commits, bader
Differential Revision: https://reviews.llvm.org/D27981
llvm-svn: 290431
2016-12-20  Revert "[OpenCL] Enabling the usage of CLK_NULL_QUEUE as compare operand."  (Daniel Jasper; 1 file, -1/+0)
This reverts commit r290171. It triggers a bunch of warnings, because the new enumerator isn't handled in all switches. We want a warning-free build. Replied on the commit with more details.
llvm-svn: 290173
2016-12-20  [OpenCL] Enabling the usage of CLK_NULL_QUEUE as compare operand.  (Egor Churaev; 1 file, -0/+1)
Summary: Enabling the comparison of CLK_NULL_QUEUE with a variable of type queue_t.

Reviewers: Anastasia
Subscribers: cfe-commits, yaxunl, bader
Differential Revision: https://reviews.llvm.org/D27569
llvm-svn: 290171
2016-10-26  Refactor call emission to package the function pointer together with abstract information about the callee. NFC.  (John McCall; 1 file, -3/+3)
The goal here is to make it easier to recognize indirect calls and trigger additional logic in certain cases. That logic will come in a later patch; in the meantime, I felt that this was a significant improvement to the code.
llvm-svn: 285258
2016-07-28  [OpenCL] Generate opaque type for sampler_t and function call for the initializer  (Yaxun Liu; 1 file, -0/+1)
Currently Clang uses int32 to represent sampler_t, which has been a source of issues for some backends, because in some backends sampler_t cannot be represented by int32. They have to depend on kernel argument metadata and use IPA to find the sampler arguments and global variables and transform them to a target-specific sampler type.

This patch uses the opaque pointer type opencl.sampler_t* for sampler_t. For each use of a file-scope sampler variable, it generates a function call of __translate_sampler_initializer. For each initialization of a function-scope sampler variable, it generates a function call of __translate_sampler_initializer.

Each builtin library can implement its own __translate_sampler_initializer(). Since the real sampler type tends to be architecture dependent, allowing it to be initialized by a library function simplifies backend design. A typical implementation of __translate_sampler_initializer could be a table lookup of real sampler literal values. Since its argument is always a literal, the returned pointer is known at compile time and easily optimized to finally become some literal values directly put into image read instructions.

This patch is partially based on Alexey Sotkin's work in Khronos Clang (https://github.com/KhronosGroup/SPIR/commit/3d4eec61623502fc306e8c67c9868be2b136e42b).

Differential Revision: https://reviews.llvm.org/D21567
llvm-svn: 277024
2016-07-18  [NFC] Header cleanup  (Mehdi Amini; 1 file, -3/+0)
Summary: Removed unused headers, replaced some headers with forward class declarations.

Patch by: Eugene <claprix@yandex.ru>
Differential Revision: https://reviews.llvm.org/D20100
llvm-svn: 275882
2016-01-13  [Bugfix] Fix ICE on constexpr vector splat.  (George Burgess IV; 1 file, -0/+1)
In {CG,}ExprConstant.cpp, we weren't treating vector splats properly. This patch makes us treat splats more properly.

Additionally, this patch adds a new cast kind which allows a bool->int cast to result in -1 or 0, instead of 1 or 0 (for true and false, respectively), so we can sanely model OpenCL bool->int casts in the AST.

Differential Revision: http://reviews.llvm.org/D14877
llvm-svn: 257559
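
As an illustration (hypothetical code, not taken from the patch or its tests) of the semantics involved: with Clang's ext_vector_type extension, which follows the OpenCL rule, a true comparison result is all-bits-set (-1) per lane, and a scalar cast to the vector type is a splat.

    typedef int int4 __attribute__((ext_vector_type(4)));

    // Vector comparisons follow the OpenCL rule: each true lane is -1, each false lane is 0.
    int4 below(int4 a, int4 b) {
      return a < b;
    }

    // A scalar cast to an ext_vector_type splats the value across every lane;
    // this is the vector-splat construction the ICE fix above is concerned with.
    int4 all_true() {
      return (int4)-1;
    }
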
2015-11-23  Preserve exceptions information during calls code generation.  (Samuel Antao; 1 file, -7/+13)
This patch changes the generation of CGFunctionInfo to contain the FunctionProtoType if it is available. This enables the code generation for call instructions to look into this type for exception information and therefore generate better quality IR - it will not create invoke instructions for functions that are known not to throw.
llvm-svn: 253926
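
An illustrative sketch (assumed example, not from the patch) of the kind of IR-quality difference this enables: when the callee's prototype says it cannot throw, a call made in a scope with pending cleanups no longer has to be emitted as an 'invoke'.

    struct Guard { ~Guard(); };   // non-trivial destructor, so a cleanup is in scope

    void callee() noexcept;       // known not to throw

    void caller() {
      Guard g;
      callee();   // with the FunctionProtoType recorded in CGFunctionInfo this can be a
                  // plain 'call'; without it, IRGen conservatively emits an 'invoke'
    }
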
2015-10-20  [DEBUG INFO] Emit debug info for type used in explicit cast only.  (Alexey Bataev; 1 file, -0/+2)
Currently, debug info for types used only in an explicit cast is not emitted. This regressed after a patch for better alignment handling. This patch fixes the bug.
Differential Revision: http://reviews.llvm.org/D13582
llvm-svn: 250795
2015-09-17  Support __builtin_ms_va_list.  (Charles Davis; 1 file, -2/+2)
Summary: This change adds support for `__builtin_ms_va_list`, a GCC extension for variadic `ms_abi` functions. The existing `__builtin_va_list` support is inadequate for this because `va_list` is defined differently in the Win64 ABI vs. the System V/AMD64 ABI.

Depends on D1622.

Reviewers: rsmith, rnk, rjmccall
CC: cfe-commits
Differential Revision: http://reviews.llvm.org/D1623
llvm-svn: 247941
2015-09-08  Compute and preserve alignment more faithfully in IR-generation.  (John McCall; 1 file, -40/+35)
Introduce an Address type to bundle a pointer value with an alignment. Introduce APIs on CGBuilderTy to work with Address values. Change core APIs on CGF/CGM to traffic in Address where appropriate. Require alignments to be non-zero. Update a ton of code to compute and propagate alignment information.

As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment helper function to CGF and made use of it in a number of places in the expression emitter.

The end result is that we should now be significantly more correct when performing operations on objects that are locally known to be under-aligned. Since alignment is not reliably tracked in the type system, there are inherent limits to this, but at least we are no longer confused by standard operations like derived-to-base conversions and array-to-pointer decay. I've also fixed a large number of bugs where we were applying the complete-object alignment to a pointer instead of the non-virtual alignment, although most of these were hidden by the very conservative approach we took with member alignment.

Also, because IRGen now reliably asserts on zero alignments, we should no longer be subject to an absurd but frustrating recurring bug where an incomplete type would report a zero alignment and then we'd naively do an alignmentAtOffset on it and emit code using an alignment equal to the largest power-of-two factor of the offset.

We should also now be emitting much more aggressive alignment attributes in the presence of over-alignment. In particular, field access now uses alignmentAtOffset instead of min.

Several times in this patch, I had to change the existing code-generation pattern in order to more effectively use the Address APIs. For the most part, this seems to be a strict improvement, like doing pointer arithmetic with GEPs instead of ptrtoint. That said, I've tried very hard to not change semantics, but it is likely that I've failed in a few places, for which I apologize.

ABIArgInfo now always carries the assumed alignment of indirect and indirect byval arguments. In order to cut down on what was already a dauntingly large patch, I changed the code to never set align attributes in the IR on non-byval indirect arguments. That is, we still generate code which assumes that indirect arguments have the given alignment, but we don't express this information to the backend except where it's semantically required (i.e. on byvals). This is likely a minor regression for those targets that did provide this information, but it'll be trivial to add it back in a later patch.

I partially punted on applying this work to CGBuiltin. Please do not add more uses of the CreateDefaultAligned{Load,Store} APIs; they will be going away eventually.

llvm-svn: 246985
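
The central abstraction is simple enough to sketch; the snippet below is a rough, hypothetical approximation of the idea only, not the actual clang::CodeGen::Address class or its full interface.

    #include "clang/AST/CharUnits.h"
    #include "llvm/IR/Value.h"
    #include <cassert>

    // Bundle a pointer with its known alignment so that load/store helpers
    // can ask for the alignment instead of guessing from the type.
    class Address {
      llvm::Value *Pointer;
      clang::CharUnits Alignment;
    public:
      Address(llvm::Value *P, clang::CharUnits A) : Pointer(P), Alignment(A) {
        assert(!A.isZero() && "alignments are required to be non-zero");
      }
      llvm::Value *getPointer() const { return Pointer; }
      clang::CharUnits getAlignment() const { return Alignment; }
    };
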
2015-08-11  Propagate SourceLocations through to get a Loc on float_cast_overflow  (Filipe Cabecinhas; 1 file, -17/+22)
Summary: float_cast_overflow is the only UBSan check without a source location attached. This patch propagates SourceLocations where necessary to get them to the EmitCheck() call.

Reviewers: rsmith, ABataev, rjmccall
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D11757
llvm-svn: 244568
2015-08-05  Don't repeat function names in comments. NFC.  (Filipe Cabecinhas; 1 file, -3/+3)
llvm-svn: 244018
2015-04-23  InstrProf: Stop using RegionCounter outside of CodeGenPGO (NFC)  (Justin Bogner; 1 file, -3/+4)
The RegionCounter type does a lot of legwork, but most of it is only meaningful within the implementation of CodeGenPGO. The uses elsewhere in CodeGen generally just want to increment or read counters, so do that directly.
llvm-svn: 235664
2015-04-05  [opaque pointer type] More GEP API migrations  (David Blaikie; 1 file, -4/+4)
Looks like the VTable code in particular will need some work to pass around the pointee type explicitly.
llvm-svn: 234128
2015-02-25  Sema: Parenthesized bound destructor member expressions can be called  (David Majnemer; 1 file, -1/+1)
We would wrongfully reject (a.~A)() in both the destructor and pseudo-destructor cases. This fixes PR22668.
llvm-svn: 230512
2015-02-14  CodeGen: _Atomic(_Complex) shouldn't crash  (David Majnemer; 1 file, -0/+2)
We could be a little kinder if we did a compare-exchange loop instead of an atomic-load/store pair.
llvm-svn: 229212
2015-02-14  Revert "Revert r229082 for a bit, it caused PR22577."  (David Majnemer; 1 file, -1/+2)
This reverts commit r229123. It was a red herring; the bug was present without r229082.
llvm-svn: 229205
2015-02-13  Revert r229082 for a bit, it caused PR22577.  (Nico Weber; 1 file, -2/+1)
llvm-svn: 229123
2015-02-13  MS ABI: Implement /volatile:ms  (David Majnemer; 1 file, -1/+2)
The /volatile:ms semantics turn volatile loads and stores into atomic acquire and release operations. This distinction is important because volatile memory operations do not form a happens-before relationship with non-atomic memory. This means that a volatile store is not sufficient for implementing a mutex unlock routine.

Differential Revision: http://reviews.llvm.org/D7580
llvm-svn: 229082
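
A short sketch (hypothetical code, not from the patch) of the legacy pattern this flag exists to support, namely using a volatile store as a mutex unlock. Under ISO volatile semantics the plain store to shared_data may be reordered past the "unlock"; under /volatile:ms the volatile store is lowered as an atomic release (and volatile loads as acquires), so the intended ordering holds.

    int shared_data;            // intended to be protected by the flag below
    volatile long lock_flag;    // assumed convention: 1 = held, 0 = free

    void unlock_after_update(int value) {
      shared_data = value;      // ordinary, non-atomic store
      lock_flag = 0;            // volatile store used as an "unlock":
                                //   /volatile:ms  -> emitted as a store-release
                                //   ISO volatile  -> no happens-before edge with shared_data
    }
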
2015-02-12  Fix typoo.  (Richard Smith; 1 file, -2/+2)
llvm-svn: 228963
2015-02-09  DebugInfo: Refactor default arg handling into a common place (instead of handling it repeatedly for aggregate, complex, and scalar types)  (David Blaikie; 1 file, -8/+2)
llvm-svn: 228591
2015-02-09  DebugInfo: Suppress the location of instructions in complex default arguments.  (David Blaikie; 1 file, -2/+8)
llvm-svn: 228589
2015-01-25  DebugInfo: Use the preferred location rather than the start location for expression line info  (David Blaikie; 1 file, -1/+1)
This causes things like assignment to refer to the '=' rather than the LHS when attributing the store instruction, for example. There were essentially 3 options for this:

* The beginning of an expression (this was the behavior prior to this commit). This meant that stepping through subexpressions would bounce around from subexpressions back to the start of the outer expression, etc. (eg: x + y + z would go x, y, x, z, x (the repeated 'x's would be where the actual addition occurred)).

* The end of an expression. This seems to be what GCC does /mostly/, and certainly does this for function calls. This has the advantage that progress is always 'forwards' (never jumping backwards - except for independent subexpressions if they're evaluated in interesting orders, etc). "x + y + z" would go "x y z" with the additions occurring at y and z after the respective loads. The problem with this is that the user would still have to think fairly hard about precedence to realize which subexpression is being evaluated or which operator overload is being called in, say, an asan backtrace.

* The preferred location or 'exprloc'. In this case you get sort of what you'd expect, though it's a bit confusing in its own way due to going 'backwards'. In this case the locations would be: "x y + z +" in lovely postfix arithmetic order. But this does mean that if the op+ were an operator overload, say, and in a backtrace, the backtrace will point to the exact '+' that's being called, not to the end of one of its operands.

(actually the operator overload case doesn't work yet for other reasons, but that's being fixed - but this at least gets scalar/complex assignments and other plain operators right)

llvm-svn: 227027
2015-01-18  DebugInfo: Attribute complex expressions to the source location of the expression  (David Blaikie; 1 file, -0/+1)
Just as r225956 did for scalar expressions (CGExprScalar::Visit), do the same for complex expressions.
llvm-svn: 226390
2015-01-14  Reapply r225000 (reverted in r225555): DebugInfo: Generalize debug info location handling (and follow-up commits).  (David Blaikie; 1 file, -14/+8)
Several pieces of code were relying on implicit debug location setting which usually led to incorrect line information anyway. So I've fixed those (in r225955 and r225845) separately, which should pave the way for this commit to be cleanly reapplied.

The reason these implicit dependencies resulted in crashes with this patch is that the debug location would no longer implicitly leak from one place to another, but be set back to invalid. Once a call with no/invalid location was emitted, if that call was ever inlined it could produce invalid debugloc chains and assert during LLVM's codegen. There may be further cases of such bugs in this patch - they're hard to flush out with regression testing, so I'll keep an eye out for reports and investigate/fix them ASAP if they come up.

Original commit message:

Reapply "DebugInfo: Generalize debug info location handling"

Originally committed in r224385 and reverted in r224441 due to concerns this change might've introduced a crash. Turns out this change fixes the crash introduced by one of my earlier more specific location handling changes (those specific fixes are reverted by this patch, in favor of the more general solution).

Recommitted in r224941 and reverted in r224970 after it caused a crash when building compiler-rt. Looks to be due to this change zeroing out the debug location when emitting default arguments (which were meant to inherit their outer expression's location) thus creating call instructions without locations - these create problems for inlining and must not be created. That is fixed and tested in this version of the change.

Original commit message:

This is a more scalable (fixed in mostly one place, rather than many places that will need constant improvement/maintenance) solution to several commits I've made recently to increase source fidelity for subexpressions.

This resetting had to be done at the DebugLoc level (not the SourceLocation level) to preserve scoping information (if the resetting was done with CGDebugInfo::EmitLocation, it would've caused the tail end of an expression's codegen to end up in a potentially different scope than the start, even though it was at the same source location). The drawback to this is that it might leave CGDebugInfo out of sync. Ideally CGDebugInfo shouldn't have a duplicate sense of the current SourceLocation, but for now it seems it does... - I don't think I'm going to tackle removing that just now.

I expect this'll probably cause some more buildbot fallout & I'll investigate that as it comes up.

Also these sort of improvements might be starting to show a weakness/bug in LLVM's line table handling: we don't correctly emit is_stmt for statements, we just put it on every line table entry. This means one statement split over multiple lines appears as multiple 'statements' and two statements on one line (without column info) are treated as one statement. I don't think we have any IR representation of statements that would help us distinguish these cases and identify the beginning of each statement - so that might be something we need to add (possibly to the lexical scope chain - a scope for each statement). This does cause some problems for GDB and possibly other DWARF consumers.

llvm-svn: 225956
2015-01-09  Revert "DebugInfo: Generalize debug info location handling" and related commits  (David Blaikie; 1 file, -8/+14)
This reverts commits r225000, r225021, r225083, r225086, r225090. The root change (r225000) still has several issues where it's caused calls to be emitted without debug locations. This causes assertion failures if/when those calls are inlined. I'll work up some test cases and fixes before recommitting this.
llvm-svn: 225555
2014-12-30  Reapply "DebugInfo: Generalize debug info location handling"  (David Blaikie; 1 file, -14/+8)
Originally committed in r224385 and reverted in r224441 due to concerns this change might've introduced a crash. Turns out this change fixes the crash introduced by one of my earlier more specific location handling changes (those specific fixes are reverted by this patch, in favor of the more general solution).

Recommitted in r224941 and reverted in r224970 after it caused a crash when building compiler-rt. Looks to be due to this change zeroing out the debug location when emitting default arguments (which were meant to inherit their outer expression's location) thus creating call instructions without locations - these create problems for inlining and must not be created. That is fixed and tested in this version of the change.

Original commit message:

This is a more scalable (fixed in mostly one place, rather than many places that will need constant improvement/maintenance) solution to several commits I've made recently to increase source fidelity for subexpressions.

This resetting had to be done at the DebugLoc level (not the SourceLocation level) to preserve scoping information (if the resetting was done with CGDebugInfo::EmitLocation, it would've caused the tail end of an expression's codegen to end up in a potentially different scope than the start, even though it was at the same source location). The drawback to this is that it might leave CGDebugInfo out of sync. Ideally CGDebugInfo shouldn't have a duplicate sense of the current SourceLocation, but for now it seems it does... - I don't think I'm going to tackle removing that just now.

I expect this'll probably cause some more buildbot fallout & I'll investigate that as it comes up.

Also these sort of improvements might be starting to show a weakness/bug in LLVM's line table handling: we don't correctly emit is_stmt for statements, we just put it on every line table entry. This means one statement split over multiple lines appears as multiple 'statements' and two statements on one line (without column info) are treated as one statement. I don't think we have any IR representation of statements that would help us distinguish these cases and identify the beginning of each statement - so that might be something we need to add (possibly to the lexical scope chain - a scope for each statement). This does cause some problems for GDB and possibly other DWARF consumers.

llvm-svn: 225000
2014-12-29  Revert "DebugInfo: Generalize debug info location handling"  (David Blaikie; 1 file, -8/+14)
Asserting when building compiler-rt when using a GCC host compiler. Reverting while I investigate.

This reverts commit r224941.
llvm-svn: 224970
2014-12-29  Reapply "DebugInfo: Generalize debug info location handling"  (David Blaikie; 1 file, -14/+8)
Originally committed in r224385 and reverted in r224441 due to concerns this change might've introduced a crash. Turns out this change fixes the crash introduced by one of my earlier more specific location handling changes (those specific fixes are reverted by this patch, in favor of the more general solution).

Original commit message:

This is a more scalable (fixed in mostly one place, rather than many places that will need constant improvement/maintenance) solution to several commits I've made recently to increase source fidelity for subexpressions.

This resetting had to be done at the DebugLoc level (not the SourceLocation level) to preserve scoping information (if the resetting was done with CGDebugInfo::EmitLocation, it would've caused the tail end of an expression's codegen to end up in a potentially different scope than the start, even though it was at the same source location). The drawback to this is that it might leave CGDebugInfo out of sync. Ideally CGDebugInfo shouldn't have a duplicate sense of the current SourceLocation, but for now it seems it does... - I don't think I'm going to tackle removing that just now.

I expect this'll probably cause some more buildbot fallout & I'll investigate that as it comes up.

Also these sort of improvements might be starting to show a weakness/bug in LLVM's line table handling: we don't correctly emit is_stmt for statements, we just put it on every line table entry. This means one statement split over multiple lines appears as multiple 'statements' and two statements on one line (without column info) are treated as one statement. I don't think we have any IR representation of statements that would help us distinguish these cases and identify the beginning of each statement - so that might be something we need to add (possibly to the lexical scope chain - a scope for each statement). This does cause some problems for GDB and possibly other DWARF consumers.

llvm-svn: 224941
2014-12-17  Revert "DebugInfo: Generalize debug info location handling"  (David Blaikie; 1 file, -8/+14)
Fails an ASan bootstrap - I'll try to reproduce locally & sort that out before recommitting.

This reverts commit r224385.
llvm-svn: 224441
2014-12-16  DebugInfo: Generalize debug info location handling  (David Blaikie; 1 file, -14/+8)
This is a more scalable (fixed in mostly one place, rather than many places that will need constant improvement/maintenance) solution to several commits I've made recently to increase source fidelity for subexpressions.

This resetting had to be done at the DebugLoc level (not the SourceLocation level) to preserve scoping information (if the resetting was done with CGDebugInfo::EmitLocation, it would've caused the tail end of an expression's codegen to end up in a potentially different scope than the start, even though it was at the same source location). The drawback to this is that it might leave CGDebugInfo out of sync. Ideally CGDebugInfo shouldn't have a duplicate sense of the current SourceLocation, but for now it seems it does... - I don't think I'm going to tackle removing that just now.

I expect this'll probably cause some more buildbot fallout & I'll investigate that as it comes up.

Also these sort of improvements might be starting to show a weakness/bug in LLVM's line table handling: we don't correctly emit is_stmt for statements, we just put it on every line table entry. This means one statement split over multiple lines appears as multiple 'statements' and two statements on one line (without column info) are treated as one statement. I don't think we have any IR representation of statements that would help us distinguish these cases and identify the beginning of each statement - so that might be something we need to add (possibly to the lexical scope chain - a scope for each statement). This does cause some problems for GDB and possibly other DWARF consumers.

llvm-svn: 224385
2014-12-09  DebugInfo: Correct location for compound complex assignment  (David Blaikie; 1 file, -2/+1)
llvm-svn: 223835
2014-12-09  DebugInfo: Accurate location information for complex assignment  (David Blaikie; 1 file, -4/+4)
llvm-svn: 223828
2014-12-09  DebugInfo: Emit the correct location for initialization of a complex variable  (David Blaikie; 1 file, -8/+14)
Especially useful for sanitizer reports.
llvm-svn: 223825
2014-12-02  Fix invalid calling convention used for libcalls on ARM.  (Anton Korobeynikov; 1 file, -5/+14)
The ARM ABI specifies that all libcalls use the soft-FP ABI (even in hard-FP binaries). These days clang emits _mulsc3 / _muldc3 calls with the default (C) calling convention, which would be translated into the AAPCS_VFP LLVM calling convention, and thus the result of complex multiplication will be bogus.

Introduce a way for a target to explicitly specify the calling convention for libcalls. Right now this is a temporary correctness fix. Ultimately, we'll end up with an intrinsic for complex multiplication, and all calling convention decisions for libcalls will be moved into the backend.
llvm-svn: 223123
2014-10-31  Remove CastKind typedef from CastExpr since CastKind is in the clang namespace.  (Craig Topper; 1 file, -2/+2)
llvm-svn: 220955
2014-10-21  Lower compound assignment for the missing type llvm::Type::FP128TyID.  (Jiangning Liu; 1 file, -0/+4)
llvm-svn: 220257
2014-10-19  [complex] Teach the complex math IR gen to emit direct math and a NaN-test prior to the call to the library function.  (Chandler Carruth; 1 file, -17/+83)
This should automatically make fastmath (including just non-NaNs) able to avoid the expensive libcalls and also open the door to more advanced folding in LLVM based on the rules for complex math.

Two important notes to remember: first is that this isn't yet a proper limited range mode, it's still just improving the unlimited range mode. Also, it isn't really perfect w.r.t. what an unlimited range mode should be doing because it isn't quite handling the flags produced by all the operations in the way desirable for that mode, but then neither is compiler-rt's libcall. When the compiler-rt libcall is improved to carefully manage flags, the code emitted here should be improved correspondingly. And it is still a long-term desirable thing to add a limited range mode to Clang that would be able to use direct math without library calls here.

Special thanks to Steve Canon for the careful review on this patch and teaching me about these issues. =D

Differential Revision: http://reviews.llvm.org/D5756
llvm-svn: 220167
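
In C terms, the emitted pattern for a full complex*complex multiply can be pictured roughly as below. This is an illustrative approximation only (it assumes the fallback fires when both components come out NaN, and it uses Clang's _Complex extension in C++); __mulsc3 is compiler-rt's float complex multiplication routine.

    extern "C" _Complex float __mulsc3(float a, float b, float c, float d);

    _Complex float mul(_Complex float lhs, _Complex float rhs) {
      float a = __real__ lhs, b = __imag__ lhs;
      float c = __real__ rhs, d = __imag__ rhs;
      float re = a * c - b * d;        // direct math: fast path
      float im = a * d + b * c;
      if (re != re && im != im)        // both NaN: recompute carefully
        return __mulsc3(a, b, c, d);   // Annex G inf/NaN recovery lives in the libcall
      _Complex float res;
      __real__ res = re;
      __imag__ res = im;
      return res;
    }
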
2014-10-17  complex long double support for PowerPC  (Joerg Sonnenberger; 1 file, -0/+4)
llvm-svn: 220034
2014-10-11  [complex] Use the much more powerful EmitCall routine to call libcalls for complex math.  (Chandler Carruth; 1 file, -17/+1)
This should fix the Windows build bots that started having trouble here and generally fix complex libcall emission on targets which use sret for complex data types. It also makes the code a bit simpler (despite calling into a much more complex bucket of code).
llvm-svn: 219565
2014-10-11  [complex] Teach Clang to preserve different-type operands to arithmetic operators where one type is a C complex type, and to emit both the efficient and correct implementation for complex arithmetic according to C11 Annex G using this extra information.  (Chandler Carruth; 1 file, -35/+162)
For both multiply and divide the old code was writing a long-hand reduced version of the math without any of the special handling of inf and NaN recommended by the standard here. Instead of putting more complexity here, this change does what GCC does, which is to emit a libcall for the fully general case.

However, the old code also failed to do the proper minimization of the set of operations when there was a mixed complex and real operation. In those cases, C provides a spec for much more minimal operations that are valid. Clang now emits the exact suggested operations. This change isn't *just* about performance though; without minimizing these operations, we again lose the correct handling of infinities and NaNs. It is critical that this happen in the frontend based on asymmetric type operands to complex math operations.

The performance implications of this change aren't trivial either. I've run a set of benchmarks in Eigen, an open source mathematics library that makes heavy use of complex. While a few have slowed down due to the libcall being introduced, most sped up and some by a huge amount: up to 100% and 140%.

In order to make all of this work, also match the algorithm in the constant evaluator to the one in the runtime library. Currently it is a broken port of the simplifications from C's Annex G to the long-hand formulation of the algorithm.

Splitting this patch up is very hard because none of this works without the AST change to preserve non-complex operands. Sorry for the enormous change. Follow-up changes will include support for sinking the libcalls onto cold paths in common cases and fastmath improvements to allow more aggressive backend folding.

Differential Revision: http://reviews.llvm.org/D5698
llvm-svn: 219557
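
As a concrete illustration (hypothetical user code, not from the patch) of the "minimal operations" point: when one operand stays real, the multiply needs only two real multiplications and cannot manufacture the spurious NaNs that promoting the scalar to (s, 0.0) would introduce (e.g. via inf * 0.0), whereas the fully complex multiply goes through the general Annex G path.

    // Mixed operands: kept asymmetric in the AST, so this lowers to just
    //   { __real__ z * s, __imag__ z * s }
    _Complex float scale(_Complex float z, float s) {
      return z * s;
    }

    // Complex * complex: the fully general case, which this change emits as a
    // libcall (later refined with the NaN-guarded fast path sketched above).
    _Complex float full(_Complex float z, _Complex float w) {
      return z * w;
    }
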
2014-05-21  [C++11] Use 'nullptr'. CodeGen edition.  (Craig Topper; 1 file, -1/+1)
llvm-svn: 209272
2014-02-17  Change PGO instrumentation to compute counts in a separate AST traversal.  (Bob Wilson; 1 file, -4/+0)
Previously, we made one traversal of the AST prior to codegen to assign counters to the ASTs and then propagated the count values during codegen. This patch now adds a separate AST traversal prior to codegen for the -fprofile-instr-use option to propagate the count values. The counts are then saved in a map from which they can be retrieved during codegen. This new approach has several advantages:

1. It gets rid of a lot of extra PGO-related code that had previously been added to codegen.

2. It fixes a serious bug. My original implementation (which was mailed to the list but never committed) used 3 counters for every loop. Justin improved it to move 2 of those counters into the less-frequently executed breaks and continues, but that turned out to produce wrong count values in some cases. The solution requires visiting a loop body before the condition so that the count for the condition properly includes the break and continue counts. Changing codegen to visit a loop body first would be a fairly invasive change, but with a separate AST traversal, it is easy to control the order of traversal. I've added a testcase (provided by Justin) to make sure this works correctly.

3. It improves the instrumentation overhead, reducing the number of counters for a loop from 3 to 1. We no longer need dedicated counters for breaks and continues, since we can just use the propagated count values when visiting breaks and continues.

To make this work, I needed to make a change to the way we count case statements, going back to my original approach of not including the fall-through in the counter values. This was necessary because there isn't always an AST node that can be used to record the fall-through count. Now case statements are handled the same as default statements, with the fall-through paths branching over the counter increments. While I was at it, I also went back to using this approach for do-loops -- omitting the fall-through count into the loop body simplifies some of the calculations and makes them behave the same as other loops. Whenever we start using this instrumentation for coverage, we'll need to add the fall-through counts into the counter values.

llvm-svn: 201528
2014-01-13  CodeGen: Rename adjustFallThroughCount -> adjustForControlFlow  (Justin Bogner; 1 file, -2/+2)
adjustFallThroughCount isn't a good name, and the documentation was even worse. This commit attempts to clarify what it's for and when to use it.
llvm-svn: 199139
2014-01-06  CodeGen: Initial instrumentation based PGO implementation  (Justin Bogner; 1 file, -1/+7)
llvm-svn: 198640
2013-12-11  Add front-end infrastructure now address space casts are in LLVM IR.  (David Tweed; 1 file, -0/+1)
With the introduction of explicit address space casts into LLVM, there's a need to provide a new cast kind the front-end can create for C/OpenCL/CUDA and code to produce address space casts from those kinds when appropriate.

Patch by Michele Scandale!
llvm-svn: 197036
2013-11-22  CodeGen: Whitespace  (Justin Bogner; 1 file, -5/+5)
llvm-svn: 195437