path: root/llvm/lib/Analysis/CodeMetrics.cpp
2022-08-18  [CostModel] Replace getUserCost with getInstructionCost  (Simon Pilgrim; 1 file, -1/+1)
* Replace getUserCost with getInstructionCost, covering all cost kinds.
* Remove getInstructionLatency; it is not implemented by any backend, and folding its functionality into getUserCost (now getInstructionCost) makes it easier for targets to handle the cost kinds with their existing cost callbacks.
Original patch by @samparker (Sam Parker)
Differential Revision: https://reviews.llvm.org/D79483
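After this change, CodeMetrics-style consumers go through a single entry point and pass an explicit cost kind. A minimal sketch, assuming the TCK_SizeAndLatency kind and an invented helper name; this is not the actual CodeMetrics code:

    // Hedged sketch: querying the unified cost interface with an explicit kind.
    #include "llvm/Analysis/TargetTransformInfo.h"
    #include "llvm/IR/Instruction.h"
    using namespace llvm;

    InstructionCost instructionCostFor(const Instruction &I,
                                       const TargetTransformInfo &TTI) {
      // The cost kind here is an assumption; callers pick whichever kind
      // fits their transform (code size, latency, throughput, ...).
      return TTI.getInstructionCost(&I, TargetTransformInfo::TCK_SizeAndLatency);
    }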
2022-07-26  [CodeMetrics] use hasOneLiveUse instead of hasOneUse while analyzing inlinable callsites  (Sinan Lin; 1 file, -1/+2)
It is better for CodeMetrics to use hasOneLiveUse while analyzing static, called-once callsites, since inline cost now uses hasOneLiveUse instead of hasOneUse to avoid being overly pessimistic about dead constant uses (see https://reviews.llvm.org/D109294). This change has no noticeable effect on its own, but it improves the accuracy of cost models in passes that use CodeMetrics.
Reviewed By: fhahn, nikic
Differential Revision: https://reviews.llvm.org/D130461
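A hedged sketch of the distinction the message describes: treating a static, called-once function as an inline candidate based on live uses rather than raw uses. The helper name is hypothetical; only hasLocalLinkage/hasOneLiveUse come from the message itself:

    // Hypothetical helper, not the actual CodeMetrics code.
    #include "llvm/IR/Function.h"
    using namespace llvm;

    static bool isLikelyInlineCandidate(const Function &F) {
      // hasOneLiveUse ignores uses that only appear in dead constant
      // expressions, so a single live call site still counts as "called once".
      return F.hasLocalLinkage() && F.hasOneLiveUse();
    }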
2022-06-09  Pipe potentially invalid InstructionCost through CodeMetrics  (Philip Reames; 1 file, -11/+3)
Per the documentation in Support/InstructionCost.h, the purpose of an invalid cost is to let clients change behavior on inputs that are impossible to cost. CodeMetrics was instead asserting that invalid costs never occurred. On a target with an incomplete cost model - e.g. RISCV - this means that transformations would crash on (falsely) invalid constructs, e.g. scalable vectors. While we certainly should improve the cost model - and I plan to do so in the near future - we also shouldn't be crashing; that violates the explicitly stated purpose of an invalid InstructionCost.
I updated all of the "easy" consumers where the bailouts were locally obvious. I plan to follow up with loop unroll in a follow-up change.
Differential Revision: https://reviews.llvm.org/D127131
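A minimal sketch of the consumer-side pattern the message describes: check validity of the accumulated cost and bail out instead of asserting. The field name NumInsts matches CodeMetrics, but the surrounding function and threshold are hypothetical:

    // Hypothetical caller; illustrates bailing out on an invalid cost.
    #include "llvm/Analysis/CodeMetrics.h"
    #include "llvm/Support/InstructionCost.h"
    using namespace llvm;

    static bool worthTransforming(const CodeMetrics &Metrics, unsigned Threshold) {
      if (!Metrics.NumInsts.isValid())
        return false;  // impossible to cost: be conservative rather than crash
      return Metrics.NumInsts < InstructionCost(Threshold);
    }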
2022-03-01  Cleanup includes: LLVMAnalysis  (serge-sans-paille; 1 file, -1/+0)
Number of lines output by the preprocessor: before: 1065940348, after: 1065307662.
Discourse thread: https://discourse.llvm.org/t/include-what-you-use-include-cleanup
Differential Revision: https://reviews.llvm.org/D120659
2021-10-21  [CodeMetrics] Don't require speculatability for ephemeral values  (Nikita Popov; 1 file, -2/+3)
As discussed in D112016, our current requirement of speculatability for ephemeral values is overly strict: what we really care about is that the instruction will be DCEd once the assume is dropped. For that it is sufficient that the instruction is side-effect free and not a terminator. In particular, this allows non-dereferenceable loads to be ephemeral values.
Differential Revision: https://reviews.llvm.org/D112179
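A minimal sketch of the relaxed condition described above, assuming it gates insertion into the ephemeral-value set; the helper name is invented and the surrounding code is not shown in this log:

    // Sketch only: speculatability is no longer required. Being side-effect
    // free and not a terminator is enough for the instruction to be DCEd
    // once the assume goes away.
    #include "llvm/IR/Instruction.h"
    using namespace llvm;

    static bool canBeEphemeral(const Instruction *I) {
      return !I->mayHaveSideEffects() && !I->isTerminator();
    }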
2021-02-11  Revert "[AssumptionCache] Avoid dangling llvm.assume calls in the cache"  (Michael Kruse; 1 file, -2/+6)
This reverts commit b7d870eae7fdadcf10d0f177faa7409c2e37d776 and the subsequent fix "[Polly] Fix build after AssumptionCache change (D96168)" (commit e6810cab09fcbc87b6e5e4d226de0810e2f2ea38). It caused non-determinism in the output, such that e.g. the polly-x86_64-linux buildbot failed occasionally.
2021-02-11  NFC: Migrate CodeMetrics to work on InstructionCost  (Sander de Smalen; 1 file, -3/+13)
This patch migrates cost values and arithmetic to work on InstructionCost. When the interfaces to TargetTransformInfo are changed, any InstructionCost state will propagate naturally.
See this patch for the introduction of the type: https://reviews.llvm.org/D91174
See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2020-November/146408.html
Reviewed By: david-arm
Differential Revision: https://reviews.llvm.org/D96030
2021-02-06  [AssumptionCache] Avoid dangling llvm.assume calls in the cache  (Johannes Doerfert; 1 file, -6/+2)
PR49043 exposed a problem when it comes to RAUWing llvm.assumes. While D96106 would fix it for GVNSink, it appears to be a more general concern. To avoid future problems, this patch moves away from the vector-of-weak-references model used in the assumption cache. Instead, we track the llvm.assume calls with a callback handle which removes itself from the cache if the call is deleted.
Fixes PR49043.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D96168
2021-01-19  [LoopRotate] Calls not lowered to calls should not block rotation.  (Florian Hahn; 1 file, -2/+3)
83daa49758a1 made loop-rotate more conservative in the presence of function calls in the prepare-for-lto stage. The code did not properly account for calls that are not actual function calls, like calls to intrinsics. This patch updates the code to ensure only calls that are lowered to actual calls are considered inline candidates.
2021-01-19  [LoopRotate] Add PrepareForLTO stage, avoid rotating with inline cands.  (Florian Hahn; 1 file, -4/+8)
D84108 exposed a bad interaction between inlining and loop-rotation during regular LTO, which is causing notable regressions in at least CINT2006/473.astar. The problem boils down to: we now rotate a loop just before the vectorizer, which requires duplicating a function call in the preheader when compiling the individual files ('prepare for LTO'). But this then prevents further inlining of the function during LTO.
This patch tries to resolve the issue by making LoopRotate more conservative about rotating loops that have inlinable calls during the 'prepare for LTO' stage. I think this change intuitively improves the current situation in general. Loop-rotate tries hard to avoid creating headers that are 'too big'. At the moment, it assumes all inlining has already happened and the cost of duplicating a call is equal to just doing the call. But with LTO, inlining also happens during full LTO, and it is possible that a previously duplicated call is actually a huge function which gets inlined during LTO.
From the perspective of LV, not much should change overall. Most loops calling user-provided functions won't get vectorized to start with (unless we can infer that the function does not touch memory and has no other side effects). If we do not inline the 'inlinable' call during the LTO stage, we have merely delayed loop-rotation and vectorization. If we do inline during LTO, chances should be very high that the inlined code is itself vectorizable or that the user call was not vectorizable to start with.
There could of course be scenarios where we inline a sufficiently large function with code not profitable to vectorize which would have been vectorized earlier (by scalarizing the call). But even in that case there is probably no big performance impact, because it should mostly be down to the cost model to reject vectorization, and then the version with scalarized calls should not be beneficial either. In a way, LV should have strictly more information after inlining and make more accurate decisions (barring cost-model issues). There is of course plenty of room for things to go wrong unexpectedly, so we need to keep a close eye on actual performance and address any follow-up issues.
I took a look at the impact on statistics for MultiSource/SPEC2000/SPEC2006. There are a few benchmarks with fewer loops rotated, but no change to the number of loops vectorized.
Reviewed By: sanwou01
Differential Revision: https://reviews.llvm.org/D94232
2020-05-10  CodeMetrics.cpp - remove unused includes. NFC.  (Simon Pilgrim; 1 file, -2/+0)
2020-04-28  [TTI] Add TargetCostKind argument to getUserCost  (Sam Parker; 1 file, -1/+1)
There are several different types of cost that TTI tries to provide explicit information for: throughput, latency, and code size, along with a vague 'intersection of code-size cost and execution cost'. The vectorizer is a keen user of RecipThroughput, and there are at least 'getInstructionThroughput' and 'getArithmeticInstrCost' designed to help with this cost. The latency cost has a single use and a single implementation. The intersection cost appears to cover most of the rest of the API.
getUserCost is explicitly called from within TTI when the user has been explicit in wanting the code size (also only one use), as well as from a few passes which are concerned with a mixture of size and/or a relative cost. In many cases these costs are closely related, such as when multiple instructions are required, but one evidently diverging cost in this function is for div/rem. This patch adds an argument so that the required cost kind is explicit, letting us make the important distinction when necessary.
Differential Revision: https://reviews.llvm.org/D78635
2020-04-13  CodeMetrics.h - include and forward declaration cleanup. NFC.  (Simon Pilgrim; 1 file, -0/+1)
Remove the SmallPtrSet include, replace it with a forward declaration, and include SmallPtrSet.h in CodeMetrics.cpp directly. Remove unused llvm::DataLayout/Instruction forward declarations.
2019-02-11  Remove `CallSite` from the CodeMetrics analysis, moving it to the new `CallBase` and simpler APIs therein.  (Chandler Carruth; 1 file, -7/+4)
llvm-svn: 353673
2019-01-19  Update the file headers across all of the LLVM projects in the monorepo to reflect the new license.  (Chandler Carruth; 1 file, -4/+3)
We understand that people may be surprised that we're moving the header entirely to discuss the new license. We checked this carefully with the Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM project under our new license, so you will see that the license headers include that license only. Some of our contributors have contributed code under our old license, and accordingly, we have retained a copy of our old license notice in the top-level files in each project and repository.
llvm-svn: 351636
2018-05-14  Rename DEBUG macro to LLVM_DEBUG.  (Nicola Zaghen; 1 file, -1/+1)
The DEBUG() macro is very generic, so it might clash with other projects. The renaming was done as follows:
- git grep -l 'DEBUG' | xargs sed -i 's/\bDEBUG\s\?(/LLVM_DEBUG(/g'
- git diff -U0 master | ../clang/tools/clang-format/clang-format-diff.py -i -p1 -style LLVM
- Manual change to APInt
- Manually change DOCS, as the regex doesn't match it.
In the transition period the DEBUG() macro is still present and aliased to the LLVM_DEBUG() one.
Differential Revision: https://reviews.llvm.org/D43624
llvm-svn: 332240
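For reference, a hedged sketch of what a debug statement looks like after the rename; the debug type and message text are invented for illustration, not taken from CodeMetrics.cpp:

    // Sketch: typical LLVM_DEBUG usage after the rename.
    #include "llvm/Support/Debug.h"
    #include "llvm/Support/raw_ostream.h"
    #define DEBUG_TYPE "code-metrics"

    void reportBlockCost(unsigned NumInsts) {
      // Prints only when -debug / -debug-only=code-metrics is enabled.
      LLVM_DEBUG(llvm::dbgs() << "analyzed block, cost = " << NumInsts << "\n");
    }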
2017-12-13  Remove redundant includes from lib/Analysis.  (Michael Zolotukhin; 1 file, -1/+0)
llvm-svn: 320617
2017-06-06  Sort the remaining #include lines in include/... and lib/....  (Chandler Carruth; 1 file, -1/+1)
I did this a long time ago with a janky Python script, but now clang-format has built-in support for this. I fed clang-format every line with a #include and let it re-sort things according to the precise LLVM rules for include ordering baked into clang-format these days.
I've reverted a number of files where the results of sorting includes isn't healthy: either places where we have legacy code relying on particular include ordering (where possible, I'll fix these separately) or places with particular formatting around #include lines that I didn't want to disturb in this patch.
This patch is *entirely* mechanical. If you get merge conflicts or anything, just ignore the changes in this patch and run clang-format over your #include lines in the files.
Sorry for any noise here, but it is important to keep these things stable. I was seeing an increasing number of patches with irrelevant re-ordering of #include lines because clang-format was used. This patch at least isolates that churn, makes it easy to skip when resolving conflicts, and gets us to a clean baseline (again).
llvm-svn: 304787
2016-12-19  Revert @llvm.assume with operand bundles (r289755-r289757)  (Daniel Jasper; 1 file, -14/+29)
This creates non-linear behavior in the inliner (see more details in r289755's commit thread).
llvm-svn: 290086
2016-12-15  Remove the AssumptionCache  (Hal Finkel; 1 file, -5/+2)
After r289755, the AssumptionCache is no longer needed. Variables affected by assumptions are now found by using the new operand-bundle-based scheme. This new scheme is more computationally efficient, and it also requires much less code.
llvm-svn: 289756
2016-12-15  Make processing @llvm.assume more efficient by using operand bundles  (Hal Finkel; 1 file, -24/+12)
There was an efficiency problem with how we processed @llvm.assume in ValueTracking (and other places). The AssumptionCache tracked all of the assumptions in a given function. In order to find assumptions relevant to computing known bits, etc., we searched every assumption in the function. For ValueTracking, that means we did O(#assumes * #values) work in InstCombine and other passes (with a constant factor that can be quite large because we'd repeat this search at every level of recursion of the analysis).
Several of us discussed this situation at the last developers' meeting, and this implements the discussed solution: make the values that an assume might affect operands of the assume itself. To avoid exposing this detail to frontends and passes that need not worry about it, I've used the new operand-bundle feature to add these extra call "operands" in a way that does not affect the intrinsic's signature. I think this solution is relatively clean. InstCombine adds these extra operands based on what ValueTracking, LVI, etc. will need, and then those passes need only search the users of the values under consideration. This should fix the computational-complexity problem.
At this point, no passes depend on the AssumptionCache, so I'll remove that as a follow-up change.
Differential Revision: https://reviews.llvm.org/D27259
llvm-svn: 289755
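A rough sketch of the lookup pattern the change enables, under the assumption that an affected value attached to an assume shows up in that value's use list, so a pass walks the users of the value it cares about instead of scanning every assume in the function. This is illustrative, not the exact ValueTracking code:

    #include "llvm/ADT/STLExtras.h"
    #include "llvm/IR/IntrinsicInst.h"
    #include "llvm/IR/Value.h"
    using namespace llvm;

    // Visit only the assumes that mention V, rather than all assumes in F.
    static void forEachRelevantAssume(const Value *V,
                                      function_ref<void(const IntrinsicInst *)> Fn) {
      for (const User *U : V->users())
        if (const auto *II = dyn_cast<IntrinsicInst>(U))
          if (II->getIntrinsicID() == Intrinsic::assume)
            Fn(II);
    }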
2016-08-18  [Assumptions] Make collecting ephemeral values not quadratic in the number of assume intrinsics.  (Chandler Carruth; 1 file, -23/+38)
The classical way to have a cache-friendly, vector-style container when we need queue semantics for BFS instead of stack semantics for DFS is to use an ever-growing vector and an index. Erasing from the front requires O(size) work, and unless we expect the worklist to grow *very* large, it's probably cheaper to just grow and race down the list.
But that makes it even worse that we're putting the assume intrinsics in this worklist at all. We end up looking at their (by definition empty) use lists to see if they're ephemeral (when we've already put them in that set), etc. Instead, directly populate the worklist with the operands when we mark the assume intrinsics as ephemeral. Also, test the visited set *before* putting things into the worklist so we don't accumulate the same value in the list hundreds of times.
It would be nice to use a set-vector for this, but I think it's useful to test the set earlier to avoid repeatedly querying whether the same instruction is safe to speculate.
Hopefully with these changes the number of values pushed onto the worklist is smaller, and we avoid quadratic work by letting it grow as necessary.
Differential Revision: https://reviews.llvm.org/D23396
llvm-svn: 279099
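A minimal sketch of the worklist shape the message describes: an ever-growing vector walked by index (queue semantics), with the visited check done before pushing. The seeding and the actual ephemeral marking are elided, and the function name is invented; this is not the exact CodeMetrics code:

    #include "llvm/ADT/SmallPtrSet.h"
    #include "llvm/ADT/SmallVector.h"
    #include "llvm/IR/Instruction.h"
    using namespace llvm;

    static void walkEphemeralCandidates(SmallVectorImpl<const Value *> &Worklist,
                                        SmallPtrSetImpl<const Value *> &Visited) {
      // Worklist has already been seeded with the operands of each assume.
      for (unsigned Idx = 0; Idx != Worklist.size(); ++Idx) {  // BFS by index
        const auto *I = dyn_cast<Instruction>(Worklist[Idx]);
        if (!I)
          continue;
        for (const Value *Op : I->operands())
          if (Visited.insert(Op).second)   // test the set *before* pushing
            Worklist.push_back(Op);
      }
    }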
2016-08-11  Use range algorithms instead of unpacking begin/end  (David Majnemer; 1 file, -2/+1)
No functionality change is intended.
llvm-svn: 278417
2016-08-03  Pass EphValues by const-ref as it is not modified in the callee  (Sebastian Pop; 1 file, -1/+1)
Patch by Aditya Kumar.
Differential Revision: https://reviews.llvm.org/D22967
llvm-svn: 277634
2016-03-08  use range-based for loop; NFCI  (Sanjay Patel; 1 file, -13/+12)
llvm-svn: 262956
2016-02-12  Add convergent property to CodeMetrics.  (Justin Lebar; 1 file, -1/+4)
Summary: No functional changes.
Reviewers: jingyue, arsenm
Subscribers: arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D17126
llvm-svn: 260728
2015-10-24  Use all_of to simplify control flow. NFC.  (Benjamin Kramer; 1 file, -8/+2)
llvm-svn: 251202
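A hedged sketch of the kind of simplification llvm::all_of enables here, assuming the check in question is "are all users of a value already known to be ephemeral"; the predicate and function name are assumptions, not the actual commit contents:

    #include "llvm/ADT/STLExtras.h"
    #include "llvm/ADT/SmallPtrSet.h"
    #include "llvm/IR/Value.h"
    using namespace llvm;

    // A manual users() loop collapsed into a single range algorithm.
    static bool allUsersEphemeral(const Value *V,
                                  const SmallPtrSetImpl<const Value *> &EphValues) {
      return all_of(V->users(),
                    [&](const User *U) { return EphValues.count(U); });
    }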
2015-10-10  Analysis: Remove implicit ilist iterator conversions  (Duncan P. N. Exon Smith; 1 file, -1/+1)
Remove implicit ilist iterator conversions from LLVMAnalysis.
I came across something really scary in `llvm::isKnownNotFullPoison()` which relied on `Instruction::getNextNode()` being completely broken (not surprising, but scary nevertheless). This function is documented (and coded) to return `nullptr` when it gets to the sentinel, but with an `ilist_half_node` as a sentinel, the sentinel check looks into some other memory and we don't recognize we've hit the end.
Rooting out these scary cases is the reason I'm removing the implicit conversions before doing anything else with `ilist`; I'm not at all surprised that clients rely on badness. I found another scary case -- this time not relying on badness, just bad (but I guess getting lucky so far) -- in `ObjectSizeOffsetEvaluator::compute_()`. Here, we save out the insertion point, do some things, and then restore it. Previously, we let the iterator auto-convert to `Instruction*`, and then set it back using the `Instruction*` version:
    Instruction *PrevInsertPoint = Builder.GetInsertPoint();
    /* Logic that may change insert point */
    if (PrevInsertPoint)
      Builder.SetInsertPoint(PrevInsertPoint);
The check for `PrevInsertPoint` doesn't protect correctly against bad accesses. If the insertion point has been set to the end of a basic block (i.e., `SetInsertPoint(SomeBB)`), then `GetInsertPoint()` returns an iterator pointing at the list sentinel. The version of `SetInsertPoint()` that's getting called will then call `PrevInsertPoint->getParent()`, which explodes horribly. The only reason this hasn't blown up is that it's fairly unlikely the builder is adding to the end of the block; usually, we're adding instructions somewhere before the terminator.
llvm-svn: 249925
2015-08-14  [IR] Add token types  (David Majnemer; 1 file, -0/+3)
This introduces the basic functionality to support "token types". The motivation stems from the need to perform operations on a Value whose provenance cannot be obscured.
There are several applications for such a type, but my immediate motivation stems from WinEH. Our personality routine enforces a single-entry, single-exit regime for cleanups. After several rounds of optimizations, we may be left with a terminator whose "cleanup-entry block" is not entirely clear because control flow has merged two cleanups together. We have experimented with using labels as operands inside of instructions which are not terminators to indicate where we came from, but found that LLVM does not expect such exotic uses of BasicBlocks.
Instead, we can use this new type to clearly associate the "entry point" and "exit point" of our cleanup. This is done by having the cleanuppad yield a token and consuming it at the cleanupret. The token type makes it impossible to obscure or otherwise hide the Value, making it trivial to track the relationship between the two points.
What is the burden to the optimizer? It turns out we have already paid down this cost: by accepting that there are certain calls we are not permitted to duplicate, optimizations already have to watch out for such instructions. There are additional places in the optimizer that we will probably have to update, but early examination has given me the impression that this will not be heroic.
Differential Revision: http://reviews.llvm.org/D11861
llvm-svn: 245029
2015-03-23  Re-sort includes with sort-includes.py and insert raw_ostream.h where it's used.  (Benjamin Kramer; 1 file, -0/+1)
llvm-svn: 232998
2015-01-04  [PM] Split the AssumptionTracker immutable pass into two separate APIs: a cache of assumptions for a single function, and an immutable pass that manages those caches.  (Chandler Carruth; 1 file, -7/+19)
The motivation for this change is twofold. Immutable analyses are really hacks around the current pass manager design and don't exist in the new design. This is usually OK, but it requires that the core logic of an immutable pass be reasonably partitioned off from the pass logic. This change does precisely that. As a consequence, it also paves the way for the *many* utility functions that deal in the assumptions to live in both pass manager worlds by creating a separate non-pass object with its own independent API that they all rely on. Now, the only bits of the system that deal with the actual pass mechanics are those that actually need to deal with the pass mechanics.
Once this separation is made, several simplifications become pretty obvious in the assumption cache itself. Rather than using a set and callback value handles, it can just be a vector of weak value handles. The callers can easily skip the handles that are null, and eventually we can wrap all of this up behind a filter iterator. For now, this adds boilerplate to the various passes, but that boilerplate will end up making it possible to port these passes to the new pass manager, and so it will end up being factored away pretty reasonably.
llvm-svn: 225131
2014-11-19  Update SetVector to rely on the underlying set's insert to return a pair<iterator, bool>  (David Blaikie; 1 file, -1/+1)
This is to be consistent with StringSet and, ultimately, with the standard library's associative container insert function. This led to updating SmallSet::insert to return pair<iterator, bool>, then to updating SmallPtrSet::insert to return pair<iterator, bool>, and then to updating all the existing users of those functions...
llvm-svn: 222334
2014-10-15  Treat the WorkSet used to find ephemeral values as double-ended  (Hal Finkel; 1 file, -1/+3)
We need to make sure that we visit all operands of an instruction before moving deeper in the operand graph. We had been pushing operands onto the back of the work set and popping them off the back as well, meaning that we might visit an instruction before visiting all of its uses that sit between it and the call to @llvm.assume.
To provide an explicit example, given the following:
    %q0 = extractelement <4 x float> %rd, i32 0
    %q1 = extractelement <4 x float> %rd, i32 1
    %q2 = extractelement <4 x float> %rd, i32 2
    %q3 = extractelement <4 x float> %rd, i32 3
    %q4 = fadd float %q0, %q1
    %q5 = fadd float %q2, %q3
    %q6 = fadd float %q4, %q5
    %qi = fcmp olt float %q6, %q5
    call void @llvm.assume(i1 %qi)
%q5 is used by both %qi and %q6. When we visit %qi, it will be marked as ephemeral, and we'll queue %q6 and %q5. %q6 will be marked as ephemeral and we'll queue %q4 and %q5. Under the old system, we'd then visit %q4, which would become ephemeral, then %q1 and %q0, which would become ephemeral as well, and now we have a problem: we'd visit %rd, but it would not be marked as ephemeral because we've not yet visited %q2 and %q3 (because we've not yet visited %q5).
This will be covered by a test case in a follow-up commit that enables ephemeral-value awareness in the SLP vectorizer.
llvm-svn: 219815
2014-09-07  Add functions for finding ephemeral values  (Hal Finkel; 1 file, -1/+77)
This adds a set of utility functions for collecting 'ephemeral' values. These are LLVM IR values that are used only by @llvm.assume intrinsics (directly or indirectly), and thus will be removed prior to code generation, implying that they should be considered free for certain purposes (like inlining).
The inliner's cost analysis, and a few other passes, have been updated to account for ephemeral values using the provided functionality. This functionality is important for the usability of @llvm.assume, because it limits the "non-local" side-effects of adding llvm.assume on inlining, loop unrolling, etc. (these are hints, and do not generate code, so they should not directly contribute to estimates of execution cost).
llvm-svn: 217335
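A hedged usage sketch of the collection entry point this commit introduced, written against the current interface (the AssumptionCache parameter reflects today's signature; the original 2014 version took the then-current assumption tracker, and the caller function here is invented):

    #include "llvm/ADT/SmallPtrSet.h"
    #include "llvm/Analysis/AssumptionCache.h"
    #include "llvm/Analysis/CodeMetrics.h"
    using namespace llvm;

    void gatherFreeValues(const Function &F, AssumptionCache &AC) {
      SmallPtrSet<const Value *, 32> EphValues;
      CodeMetrics::collectEphemeralValues(&F, &AC, EphValues);
      // EphValues now holds values reachable only from @llvm.assume operands;
      // cost models can treat them as free.
    }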
2014-03-17  Consistent use of the noduplicate attribute.  (Eli Bendersky; 1 file, -2/+2)
The "noduplicate" attribute of call instructions is sometimes queried directly and sometimes through the cannotDuplicate() predicate. This patch streamlines all queries to use the cannotDuplicate() predicate. It also adds this predicate to InvokeInst, to mirror what CallInst has.
llvm-svn: 204049
2014-03-04  [Modules] Move CallSite into the IR library where it belongs.  (Chandler Carruth; 1 file, -1/+1)
It is abstracting between a CallInst and an InvokeInst, both of which are IR concepts.
llvm-svn: 202816
2013-01-22  Begin fleshing out an interface in TTI for modelling the costs of generic function calls and intrinsics.  (Chandler Carruth; 1 file, -44/+4)
This is somewhat overlapping with an existing intrinsic cost method, but that one seems targeted at vector intrinsics. I'll merge them or separate their names and use cases in a separate commit.
This sinks the test of 'callIsSmall' down into TTI, where targets can control it. The whole thing feels very hack-ish to me, though. I've left a FIXME comment about the fundamental design problem this presents. It isn't yet clear to me what the users of this function *really* care about; I'll have to do more analysis to figure that out. Putting this here at least provides it access to proper analysis pass tools and other such machinery, and allows us to more cleanly implement the baseline cost interfaces in TTI.
With this commit, it is now theoretically possible to simplify much of the inline cost analysis's handling of calls by calling through to this interface. That conversion will have to happen in subsequent commits, as it requires more extensive restructuring of the inline cost analysis.
The CodeMetrics class is now really only in the business of running over a block of code and aggregating the metrics on that block of code, with the actual cost evaluation done entirely in terms of TTI.
llvm-svn: 173148
2013-01-21  Switch CodeMetrics itself over to use TTI to determine if an instruction is free.  (Chandler Carruth; 1 file, -79/+3)
The whole CodeMetrics API should probably be reworked more, but this is enough to allow deleting the duplicate code there for computing whether an instruction is free. All of the passes using this have been updated to pull in TTI and hand it to the CodeMetrics stuff. Further, a dead CodeMetrics API (analyzeFunction) is nuked for lack of users.
llvm-svn: 173036
2013-01-02  Move all of the header files which are involved in modelling the LLVM IR into their new header subdirectory: include/llvm/IR.  (Chandler Carruth; 1 file, -3/+3)
This matches the directory structure of lib, and begins to correct a long-standing point of file layout clutter in LLVM. There are still more header files to move here, but I wanted to handle them in separate commits to make it easier to track which files make sense at each layer. The only really questionable files here are the target intrinsic tablegen files, but that's a battle I'd rather not fight today.
I've updated both the CMake and Makefile build systems (I think, and my tests think, but I may have missed something). I've also re-sorted the includes throughout the project. I'll be committing updates to Clang, DragonEgg, and Polly momentarily.
llvm-svn: 171366
2012-12-30  Remove the Function::getFnAttributes method in favor of using the AttributeSet directly.  (Bill Wendling; 1 file, -1/+2)
This is in preparation for removing the use of the 'Attribute' class as a collection of attributes. That role will shift to the AttributeSet class instead.
llvm-svn: 171253
2012-12-20  Add a new attribute, 'noduplicate'.  (James Molloy; 1 file, -2/+9)
If a function contains a noduplicate call, the call cannot be duplicated: jump threading, loop unrolling, loop unswitching, and loop rotation are inhibited if they would duplicate the call. Similarly, inlining of the function is inhibited if that would duplicate the call (in particular, inlining is still allowed when there is only one callsite and the function has internal linkage).
llvm-svn: 170704
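A hedged sketch of how an analysis like CodeMetrics can react to the attribute: flag the surrounding code as non-duplicatable when it sees such a call site. The helper name is illustrative and this is not the code from this commit:

    #include "llvm/IR/InstrTypes.h"
    #include "llvm/IR/Instruction.h"
    using namespace llvm;

    static bool blocksDuplication(const Instruction &I) {
      if (const auto *CB = dyn_cast<CallBase>(&I))
        return CB->cannotDuplicate();   // queries the 'noduplicate' attribute
      return false;
    }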
2012-12-19  Rename the 'Attributes' class to 'Attribute'.  (Bill Wendling; 1 file, -1/+1)
It's going to represent a single attribute in the future.
llvm-svn: 170502
2012-12-03  Use the new script to sort the includes of every file under lib.  (Chandler Carruth; 1 file, -2/+2)
Sooooo many of these had incorrect or strange main module includes. I have manually inspected all of these and fixed the main module include to be the nearest plausible thing I could find. If you own or care about any of these source files, I encourage you to take some time and check that these edits were sensible. I can't have broken anything (I strictly added headers and reordered them, never removed), but they may not be the headers you'd really like to identify as containing the API being implemented.
Many forward declarations and missing includes were added to header files to allow them to parse cleanly when included first. The main module rule does in fact have its merits. =]
llvm-svn: 169131
2012-11-01  Revert the majority of the next patch in the address space series:  (Chandler Carruth; 1 file, -4/+2)
r165941: Resubmit the changes to llvm core to update the functions to support different pointer sizes on a per address space basis.
Despite this commit log, that change primarily changed stuff outside of VMCore, and those changes do not carry any tests for correctness (or even plausibility), and we have consistently found questionable or flat-out incorrect cases in them. Most of them are probably correct, but we need to devise a system that makes it clearer when we have handled the address space concerns correctly, and ideally each pass that gets updated would receive an accompanying test case that exercises that pass specifically w.r.t. alternate address spaces.
However, from this commit, I have retained the new C API entry points. Those were an orthogonal change that probably should have been split apart, but they seem entirely good.
In several places the changes were very obvious cleanups with no actual multiple-address-space code added; these I have not reverted when I spotted them. In a few other places there were merge conflicts due to a cleaner solution being implemented later, often not using address spaces at all. In those cases, I've preserved the new code which isn't address-space dependent.
This is part of my ongoing effort to clean out the partial address space code, which carries high risk and low test coverage and is not likely to be finished before the 3.2 release, which looms closer. Duncan and I would both like to see the above issues addressed before we return to these changes.
llvm-svn: 167222
2012-10-15  Resubmit the changes to llvm core to update the functions to support different pointer sizes on a per address space basis.  (Micah Villmow; 1 file, -2/+4)
llvm-svn: 165941
2012-10-11  Revert 165732 for further review.  (Micah Villmow; 1 file, -4/+2)
llvm-svn: 165747
2012-10-11  Add in the first iteration of support for llvm/clang/lldb to allow variable per address space pointer sizes to be optimized correctly.  (Micah Villmow; 1 file, -2/+4)
llvm-svn: 165726
2012-10-09  Create enums for the different attributes.  (Bill Wendling; 1 file, -1/+1)
We use the enums to query whether an Attributes object has that attribute. The opaque layer is responsible for knowing where that specific attribute is stored.
llvm-svn: 165488
2012-10-08  Move TargetData to DataLayout.  (Micah Villmow; 1 file, -4/+4)
llvm-svn: 165402
2012-09-26  Remove the `hasFnAttr' method from Function.  (Bill Wendling; 1 file, -1/+1)
The hasFnAttr method has been replaced by querying the Attributes explicitly. No intended functionality change.
llvm-svn: 164725