path: root/llvm/lib/Analysis/LazyCallGraph.cpp
Age | Commit message | Author | Files | Lines
2019-08-16 | Revert "[CallGraph] Refine call graph for indirect calls with !callees metadata" | Benjamin Kramer | 1 | -4/+2
This reverts commit r369025. Crashes clang, test case is on the mailing list. llvm-svn: 369096
2019-08-15 | [CallGraph] Refine call graph for indirect calls with !callees metadata | Mark Lacey | 1 | -2/+4
For indirect call sites having a small set of possible callees, !callees metadata can be used to indicate what those callees are. This patch updates the call graph and lazy call graph analyses so that they consider this metadata when encountering call sites. For the call graph, it adds a new external call graph node to the graph for each unique !callees metadata node. A call graph edge connects an indirect call site with the external node associated with the !callees metadata that is attached to it. And there is an edge from this external node to each of the callees indicated by the metadata. Similarly, for the lazy call graph, the patch adds Ref edges from a caller to the possible callees indicated by the metadata. The primary purpose of the patch is to facilitate iterating over the functions in a module such that all of the callees indicated by a given !callees metadata node will be visited prior to the functions containing call sites annotated by that node. This property is required by optimizations performing a bottom-up traversal of the SCC DAG. For example, the inliner can be made to inline through an indirect call. If the call site is annotated with !callees metadata, this patch ensures that the inliner will have visited all of the callees prior to the caller, allowing it to reliably compute the cost of inlining one or more of the potential callees. Original patch by @mssimpso. I've made some small changes to get it to apply, build, and pass tests on the top of tree, as well as some minor tweaks to formatting and functionality. Subscribers: mehdi_amini, hiraditya, llvm-commits, mssimpso Tags: #llvm Differential Revision: https://reviews.llvm.org/D39339 llvm-svn: 369025
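For context, here is a minimal C++ sketch of how such metadata can be consumed when building graph edges. This is illustrative only, not the committed code: the helper name addCalleesFromMetadata and the AddCalleeEdge callback are hypothetical, and the metadata is looked up by its kind name "callees".

  #include "llvm/ADT/STLExtras.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/InstrTypes.h"
  #include "llvm/IR/Metadata.h"
  using namespace llvm;

  // Walk the !callees metadata on an indirect call and hand each possible
  // callee to the graph-building code (e.g. to add a Ref edge).
  static void addCalleesFromMetadata(CallBase &CB,
                                     function_ref<void(Function &)> AddCalleeEdge) {
    if (MDNode *Callees = CB.getMetadata("callees"))
      for (const MDOperand &Op : Callees->operands())
        if (Function *F = mdconst::dyn_extract_or_null<Function>(Op))
          AddCalleeEdge(*F);
  }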
2019-08-06 | Change two unnecessary uses of llvm::size(C) to C.size() | Fangrui Song | 1 | -4/+2
llvm-svn: 368011
2019-04-05 | [LCG] Add aliased functions as LCG roots | Guozhi Wei | 1 | -0/+13
The current LCG doesn't check aliased functions, so if an internal function has a public alias it will not be added to a CG SCC even though it is still reachable from outside through the alias. This patch adds such aliased functions to the SCC. Differential Revision: https://reviews.llvm.org/D59898 llvm-svn: 357795
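A minimal sketch of the idea, not the committed change: walk the module's aliases and treat any defined function reachable through an alias as a graph root (the Roots container and helper name are assumptions for illustration).

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Support/Casting.h"
  using namespace llvm;

  static void addAliasedRoots(Module &M, SmallVectorImpl<Function *> &Roots) {
    for (GlobalAlias &A : M.aliases())
      if (auto *F = dyn_cast<Function>(A.getAliasee()->stripPointerCasts()))
        if (!F->isDeclaration())
          Roots.push_back(F); // still reachable from outside via the alias
  }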
2019-01-19 | Update the file headers across all of the LLVM projects in the monorepo | Chandler Carruth | 1 | -4/+3
to reflect the new license. We understand that people may be surprised that we're moving the header entirely to discuss the new license. We checked this carefully with the Foundation's lawyer and we believe this is the correct approach. Essentially, all code in the project is now made available by the LLVM project under our new license, so you will see that the license headers include that license only. Some of our contributors have contributed code under our old license, and accordingly, we have retained a copy of our old license notice in the top-level files in each project and repository. llvm-svn: 351636
2018-10-31 | ADT/STLExtras: Introduce llvm::empty; NFC | Matthias Braun | 1 | -1/+1
This is modeled after C++17 std::empty(). Differential Revision: https://reviews.llvm.org/D53909 llvm-svn: 345679
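A tiny usage sketch of the helper introduced here (illustrative, not code from the commit):

  #include "llvm/ADT/STLExtras.h"
  #include <vector>

  bool isEmptyRange(const std::vector<int> &V) {
    return llvm::empty(V); // mirrors C++17 std::empty()
  }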
2018-05-16 | [STLExtras] Add size() for ranges, and remove distance() | Vedant Kumar | 1 | -3/+3
r332057 introduced distance() for ranges. Based on post-commit feedback, this renames distance() to size(). The new size() is also only enabled when the operation is O(1). Differential Revision: https://reviews.llvm.org/D46976 llvm-svn: 332551
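A small usage sketch (not from the commit) showing the renamed helper:

  #include "llvm/ADT/STLExtras.h"
  #include <vector>

  bool hasTwoElements(const std::vector<int> &V) {
    // llvm::size(R) replaces the old llvm::distance(R); it is only enabled
    // for ranges whose size is O(1) (random-access iterators).
    return llvm::size(V) == 2;
  }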
2018-05-14 | Rename DEBUG macro to LLVM_DEBUG. | Nicola Zaghen | 1 | -9/+10
The DEBUG() macro is very generic so it might clash with other projects. The renaming was done as follows:
- git grep -l 'DEBUG' | xargs sed -i 's/\bDEBUG\s\?(/LLVM_DEBUG(/g'
- git diff -U0 master | ../clang/tools/clang-format/clang-format-diff.py -i -p1 -style LLVM
- Manual change to APInt
- Manually change DOCS as the regex doesn't match it.
In the transition period the DEBUG() macro is still present and aliased to the LLVM_DEBUG() one. Differential Revision: https://reviews.llvm.org/D43624 llvm-svn: 332240
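Usage after the rename, as a sketch; the debug-type string and the helper function are illustrative rather than taken from LazyCallGraph.cpp:

  #include "llvm/ADT/StringRef.h"
  #include "llvm/Support/Debug.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;

  #define DEBUG_TYPE "lcg"

  void noteVisit(StringRef Name) {
    // Formerly written as DEBUG(dbgs() << ...).
    LLVM_DEBUG(dbgs() << "Visiting function: " << Name << "\n");
  }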
2018-05-10 | [STLExtras] Add distance() for ranges, pred_size(), and succ_size() | Vedant Kumar | 1 | -4/+3
This commit adds a wrapper for std::distance() which works with ranges. As it would be a common case to write `distance(predecessors(BB))`, this also introduces `pred_size()` and `succ_size()` helpers to make that easier to write. Differential Revision: https://reviews.llvm.org/D46668 llvm-svn: 332057
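Illustrative use of the new helpers (a sketch, not code from the commit):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/CFG.h"
  using namespace llvm;

  bool isJoinPoint(const BasicBlock &BB) {
    // pred_size(&BB) is shorthand for distance(predecessors(&BB)).
    return pred_size(&BB) > 1;
  }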
2018-04-30 | IWYU for llvm-config.h in llvm, additions. | Nico Weber | 1 | -0/+1
See r331124 for how I made a list of files missing the include. I then ran this Python script:

    for f in open('filelist.txt'):
        f = f.strip()
        fl = open(f).readlines()
        found = False
        for i in xrange(len(fl)):
            p = '#include "llvm/'
            if not fl[i].startswith(p): continue
            if fl[i][len(p):] > 'Config':
                fl.insert(i, '#include "llvm/Config/llvm-config.h"\n')
                found = True
                break
        if not found:
            print 'not found', f
        else:
            open(f, 'w').write(''.join(fl))

and then looked through everything with `svn diff | diffstat -l | xargs -n 1000 gvim -p` and tried to fix include ordering and whatnot. No intended behavior change. llvm-svn: 331184
2018-03-02 | Fix more spelling mistakes in comments of LLVM Analysis passes | Vedant Kumar | 1 | -5/+5
Patch by Reshabh Sharma! Differential Revision: https://reviews.llvm.org/D43939 llvm-svn: 326601
2018-01-24 | Fix typos of occurred and occurrence | Malcolm Parsons | 1 | -1/+1
llvm-svn: 323318
2017-10-15 | Reverting r315590; it did not include changes for llvm-tblgen, which is causing link errors for several people. | Aaron Ballman | 1 | -3/+3
Error LNK2019 unresolved external symbol "public: void __cdecl `anonymous namespace'::MatchableInfo::dump(void)const " (?dump@MatchableInfo@?A0xf4f1c304@@QEBAXXZ) referenced in function "public: void __cdecl `anonymous namespace'::AsmMatcherEmitter::run(class llvm::raw_ostream &)" (?run@AsmMatcherEmitter@?A0xf4f1c304@@QEAAXAEAVraw_ostream@llvm@@@Z) llvm-tblgen D:\llvm\2017\utils\TableGen\AsmMatcherEmitter.obj 1 llvm-svn: 315854
2017-10-12 | [dump] Remove NDEBUG from test to enable dump methods [NFC] | Don Hinton | 1 | -3/+3
Summary: Add LLVM_FORCE_ENABLE_DUMP cmake option, and use it along with LLVM_ENABLE_ASSERTIONS to set LLVM_ENABLE_DUMP. Remove NDEBUG and only use LLVM_ENABLE_DUMP to enable dump methods. Move definition of LLVM_ENABLE_DUMP from config.h to llvm-config.h so it'll be picked up by public headers. Differential Revision: https://reviews.llvm.org/D38406 llvm-svn: 315590
2017-08-11 | [Analysis] Fix some Clang-tidy modernize-use-using and Include What You Use warnings; other minor fixes (NFC). | Eugene Zelenko | 1 | -6/+20
llvm-svn: 310766
2017-08-10 | [LCG] Fix an assert in an on-scope-exit lambda that checked the contents | Chandler Carruth | 1 | -7/+9
of the returned value. Checking the returned value from inside of a scoped exit isn't actually valid. It happens to work when NRVO fires and the stars align, which they reliably do with Clang but don't, for example, on MSVC builds. llvm-svn: 310547
2017-08-09 | [LCG] Completely remove the map-based association of post-order numbers | Chandler Carruth | 1 | -35/+34
to Nodes when removing ref edges from a RefSCC. This map based association turns out to be pretty expensive for large RefSCCs and pointless as we already have embedded data members inside nodes that we use to track the DFS state. We can reuse one of those and the map becomes unnecessary. This also fuses the update of those numbers into the scan across the pending stack of nodes so that we don't walk the nodes twice during the DFS. With this I expect the new PM to be faster than the old PM for the test case I have been optimizing. That said, it also seems simpler and more direct in many ways. The side storage was always pretty awkward. The last remaining hot-spot in the profile of the LCG once this is done will be the edge iterator walk in the DFS. I'll take a look at improving that next. llvm-svn: 310456
2017-08-09 | [LCG] Special case when removing a ref edge from a RefSCC leaves | Chandler Carruth | 1 | -12/+29
that RefSCC still connected. This is common and can be handled much more efficiently. As soon as we know we've covered every node in the RefSCC with the DFS, we can simply reset our state and return. This avoids numerous data structure updates and other complexity. On top of other changes, this appears to get new PM back to parity with the old PM for a large protocol buffer message source code. The dense map updates are very hot in this function. llvm-svn: 310451
2017-08-09 | [LCG] Switch one of the update methods for the LazyCallGraph to support | Chandler Carruth | 1 | -100/+57
limited batch updates. Specifically, allow removing multiple reference edges starting from a common source node.

There are a few constraints that play into supporting this form of batching:
1) The way updates occur during the CGSCC walk, about the most we can functionally batch together are those with a common source node. This also makes the batching simpler to implement, so it seems a worthwhile restriction.
2) The far and away hottest function for large C++ files I measured (generated code for protocol buffers) showed a huge amount of time was spent removing ref edges specifically, so it seems worth focusing there.
3) The algorithm for removing ref edges is very amenable to this restricted batching. There are just both API and implementation special casing for the non-batch case that gets in the way. Once removed, supporting batches is nearly trivial.

This does modify the API in an interesting way -- now, we only preserve the target RefSCC when the RefSCC structure is unchanged. In the face of any splits, we create brand new RefSCC objects. However, all of the users were OK with it that I could find. Only the unittest needed interesting updates here.

How much does batching these updates help? I instrumented the compiler when run over a very large generated source file for a protocol buffer and found that the majority of updates are intrinsically updating one function at a time. However, nearly 40% of the total ref edges removed are removed as part of a batch of removals greater than one, so these are the cases batching can help with.

When compiling the IR for this file with 'opt' and 'O3', this patch reduces the total time by 8-9%.

Differential Revision: https://reviews.llvm.org/D36352 llvm-svn: 310450
2017-08-05 | [LCG] Remove yet another variable only used inside of asserts. | Chandler Carruth | 1 | -3/+3
llvm-svn: 310174
2017-08-05 | [LCG] Fold otherwise unused variable into assert. | Benjamin Kramer | 1 | -3/+2
No functionality change intended. llvm-svn: 310173
2017-08-05 | [LCG] Completely remove the parent set and leaf tracking for RefSCCs. | Chandler Carruth | 1 | -176/+3
After the previous series of patches, this is now trivial and deletes a pretty astonishing amount of complexity. This has been a long time coming, as the move toward a PO sequence of RefSCCs started eroding the underlying use cases for this half of the data structure.

Among the biggest advantages here is that now there aren't two independent data structures that need to stay in sync. Some of my profiling has also indicated that updating the parent sets was among the most expensive parts of the lazy call graph. Eliminating it wholesale is likely to be a nice win in terms of compile time.

Last but not least, I had discussed with some folks previously keeping it around for asserts and other correctness checking, but once the fundamentals of the parent and child checking were implemented without the parent sets their value in correctness checking was tiny and nowhere near worth the cost of the complexity required to keep everything up-to-date.

llvm-svn: 310171
2017-08-05 | [LCG] Re-implement the basic isParentOf, isAncestorOf, isChildOf, and | Chandler Carruth | 1 | -10/+37
isDescendantOf methods on RefSCCs in terms of the forward edges rather than the parent sets. This is technically slower, but probably not interestingly slower, and all of these routines were already so expensive that they're guarded behind both !NDEBUG and EXPENSIVE_CHECKS. This removes another non-critical usage of parent sets. I've also added some comments to try and help clarify to any potential users the costs of these routines. They're mostly useful for debugging, asserts, or other queries. llvm-svn: 310170
2017-08-05 | [LCG] Add the concept of a "dead" node and use it to avoid a complex | Chandler Carruth | 1 | -8/+1
walk over the parent set. When removing a single function from the call graph, we previously would walk the entire RefSCC's parent set and then walk every outgoing edge just to find the ones to remove. In addition to this being quite high complexity in theory, it is also the last fundamental use of the parent sets. With this change, when we remove a function we transform the node containing it to be recognizably "dead" and then teach the edge iterators to recognize edges to such nodes and skip them the same way they skip null edges. We can't move fully to using "dead" nodes -- when disconnecting two live nodes we need to null out the edge. But the complexity this adds to the edge sequence isn't too bad and the simplification of lazily handling this seems like a significant win. llvm-svn: 310169
2017-08-05 | [LCG] Replace an implicit bool operator with a named function. (NFC) | Chandler Carruth | 1 | -2/+2
The definition of 'false' here was already pretty vague and debatable, and I'm about to add another potential 'false' that would actually make much more sense in a bool operator. Especially given how rarely this is used, a nicely named method seems better. llvm-svn: 310165
2017-08-05 | [LCG] When removing a dead function and clearing out the data | Chandler Carruth | 1 | -0/+2
structures, actually null out the graph pointers as well. We won't ever update these, and we certainly shouldn't be calling any methods on them, so it seems good to defensively nuke them. llvm-svn: 310164
2017-08-05 | [LCG] Rather than walking the directed graph structure to update graph | Chandler Carruth | 1 | -14/+4
pointers in node objects, just walk the map from function to node. It doesn't have stable ordering, but works just as well and is much simpler. We don't need ordering when just updating internal pointers. llvm-svn: 310163
2017-08-05 | [LCG] Remove the complex walk of the parent sets to update graph | Chandler Carruth | 1 | -11/+2
pointers. This is completely unnecessary as we have a trivial list of RefSCCs now that we can walk. llvm-svn: 310162
2017-08-05 | [LCG] Remove the use of the parent sets to compute connectivity when | Chandler Carruth | 1 | -16/+14
merging RefSCCs. The logic to directly use the reference edges is simpler and not substantially slower (despite the comments to the contrary) because this is not actually an especially hot part of LCG in practice. llvm-svn: 310161
2017-07-19 | [PM/LCG] Follow-up fix to r308088 to handle deletion of library | Chandler Carruth | 1 | -1/+6
functions. In the prior commit, we provide ordering to the LCG between functions and library function definitions that they might begin to call through transformations. But we still would delete these library functions from the call graph if they became dead during inlining. While this immediately crashed, it also exposed a loss of information. We shouldn't remove definitions of library functions that can still usefully participate in the LCG-powered CGSCC optimization process. If new call edges are formed, we want to have definitions to be called. We can still remove these functions if truly dead using global-dce, etc, but removing them during the CGSCC walk is premature. This fixes a crash in the new PM when optimizing some unusual libraries that end up with "internal" lib functions such as the code in the "R" language's libraries. llvm-svn: 308417
2017-07-15 | [PM/LCG] Teach the LazyCallGraph to maintain reference edges from every | Chandler Carruth | 1 | -8/+36
function to every defined function known to LLVM as a library function. LLVM can introduce calls to these functions either by replacing other library calls or by recognizing patterns (such as memset_pattern or vector math patterns) and replacing those with calls. When these library functions are actually defined in the module, we need to have reference edges to them initially so that we visit them during the CGSCC walk in the right order and can effectively rebuild the call graph afterward. This was discovered when building code with Fortify enabled as that is a common case of both inline definitions of library calls and simplifications of code into calling them. This can in extreme cases of LTO-ing with libc introduce *many* more reference edges. I discussed a bunch of different options with folks but all of them are unsatisfying. They either make the graph operations substantially more complex even when there are *no* defined libfuncs, or they introduce some other complexity into the callgraph. So this patch goes with the simplest possible solution of actual synthetic reference edges. If this proves to be a memory problem, I'm happy to implement one of the clever techniques to save memory here. llvm-svn: 308088
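A rough sketch of how defined library functions can be discovered for this purpose, using TargetLibraryInfo. This is illustrative only and not the committed implementation; the helper name and the output container are assumptions.

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/Analysis/TargetLibraryInfo.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  // Collect functions that are defined in the module and that LLVM knows as
  // library functions; these are the targets of the synthetic ref edges.
  static void collectDefinedLibFunctions(Module &M, const TargetLibraryInfo &TLI,
                                         SmallVectorImpl<Function *> &LibFns) {
    for (Function &F : M) {
      LibFunc LF;
      if (!F.isDeclaration() && TLI.getLibFunc(F, LF))
        LibFns.push_back(&F);
    }
  }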
2017-07-09 | [PM] Fix a nasty bug in the new PM where we failed to properly | Chandler Carruth | 1 | -7/+13
invalidation of analyses when merging SCCs. While I've added a bunch of testing of this, it takes something much more like the inliner to really trigger this as you need to have partially-analyzed SCCs with updates at just the right time. So I've added a direct test for this using the inliner and verifying the domtree. Without the changes here, this test ends up finding a stale dominator tree. However, to handle this properly, we need to invalidate analyses *before* merging the SCCs. After talking to Philip and Sanjoy about this they convinced me this was the right approach. To do this, we need a callback mechanism when merging SCCs so we can observe the cycle that will be merged before the merge happens. This API update ended up being surprisingly easy. With this commit, the new PM passes the test-suite again. It hadn't since MemorySSA was enabled for EarlyCSE as that also will find this bug very quickly. llvm-svn: 307498
2017-06-06 | Sort the remaining #include lines in include/... and lib/.... | Chandler Carruth | 1 | -2/+1
I did this a long time ago with a janky python script, but now clang-format has built-in support for this. I fed clang-format every line with a #include and let it re-sort things according to the precise LLVM rules for include ordering baked into clang-format these days. I've reverted a number of files where the results of sorting includes isn't healthy. Either places where we have legacy code relying on particular include ordering (where possible, I'll fix these separately) or where we have particular formatting around #include lines that I didn't want to disturb in this patch. This patch is *entirely* mechanical. If you get merge conflicts or anything, just ignore the changes in this patch and run clang-format over your #include lines in the files. Sorry for any noise here, but it is important to keep these things stable. I was seeing an increasing number of patches with irrelevant re-ordering of #include lines because clang-format was used. This patch at least isolates that churn, makes it easy to skip when resolving conflicts, and gets us to a clean baseline (again). llvm-svn: 304787
2017-02-28 | [LCG] Fix EXPENSIVE_CHECKS typo. NFC | Francis Visoiu Mistrih | 1 | -5/+5
Differential Revision: https://reviews.llvm.org/D30434 llvm-svn: 296500
2017-02-09 | [PM/LCG] Teach LCG to support spurious reference edges. | Chandler Carruth | 1 | -1/+8
Somewhat amazingly, this only requires teaching it to clean them up when deleting a dead function from the graph. And we already have exactly the necessary data structures to do that in the parent RefSCCs. This allows ArgPromote to work in a much simpler way by merely letting reference edges linger in the graph after the causing IR is deleted. We will clean up these edges when we run any function pass over the IR, but don't remove them eagerly. This avoids all of the quadratic update issues both in the current pass manager and in my previous attempt with the new pass manager. Differential Revision: https://reviews.llvm.org/D29579 llvm-svn: 294663
2017-02-09 | [PM/LCG] Teach the LazyCallGraph how to replace a function without | Chandler Carruth | 1 | -167/+189
disturbing the graph or having to update edges. This is motivated by porting argument promotion to the new pass manager. Because of how LLVM IR Function objects work, in order to change their signature a new object needs to be created. This is efficient and straightforward in the IR but previously was very hard to implement in LCG. We could easily replace the function a node in the graph represents. The challenging part is how to handle updating the edges in the graph.

LCG previously used an edge to a raw function to represent a node that had not yet been scanned for calls and references. This was the core of its laziness. However, that model causes this kind of update to be very hard:
1) The keys to lookup an edge need to be `Function*`s that would all need to be updated when we update the node.
2) There will be some unknown number of edges that haven't transitioned from `Function*` edges to `Node*` edges.

All of this complexity isn't necessary. Instead, we can always build a node around any function, always pointing edges at it and always using it as the key to lookup an edge. To maintain the laziness, we need to sink the *edges* of a node into a secondary object and explicitly model transitioning a node from empty to populated by scanning the function. This design seems much cleaner in a number of ways, but importantly there is now exactly *one* place where the `Function*` has to be updated!

Some other cleanups that fall out of this include having something to model the *entry* edges more accurately. Rather than hand rolling parts of the node in the graph itself, we have an explicit `EdgeSequence` object that gives us exactly the functionality needed. We also have a consistent place to define the edge iterators and can use them for both the entry edges and the internal edges of the graph.

The API used to model the separation between a node and its edges is intentionally very thin as most clients are expected to deal with nodes that have populated edges. We model this exactly as an optional does with an additional method to populate the edges when that is a reasonable thing for a client to do. This is based on API design suggestions from Richard Smith and David Blaikie, credit goes to them for helping pick how to model this without it being either too explicit or too implicit.

The patch is somewhat noisy due to shifting around iterator types and new syntax for walking the edges of a node, but most of the functionality change is in the `Edge`, `EdgeSequence`, and `Node` types.

Differential Revision: https://reviews.llvm.org/D29577 llvm-svn: 294653
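A sketch of what client code looks like under this model. The shape follows the design described above (nodes always exist, edges are populated explicitly), but treat the exact method names and signatures as assumptions rather than a definitive reference.

  #include "llvm/ADT/STLExtras.h"
  #include "llvm/Analysis/LazyCallGraph.h"
  using namespace llvm;

  void visitCallees(LazyCallGraph &CG, Function &F,
                    function_ref<void(Function &)> Visit) {
    // A node can be obtained for any function; its edges are filled in
    // lazily by scanning the function body.
    LazyCallGraph::Node &N = CG.get(F);
    LazyCallGraph::EdgeSequence &Edges = N.populate();
    for (LazyCallGraph::Edge &E : Edges)
      if (E.isCall())
        Visit(E.getFunction());
  }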
2017-02-06 | [PM/LCG] Fix the no-asserts build after r294227. Sorry for the noise. | Chandler Carruth | 1 | -0/+2
llvm-svn: 294235
2017-02-06 | [PM/LCG] Remove the lazy RefSCC formation from the LazyCallGraph during | Chandler Carruth | 1 | -171/+117
iteration. The lazy formation of RefSCCs isn't really the most important part of the laziness here -- that has to do with walking the functions themselves -- and isn't essential to maintain. Originally, there were incremental update algorithms that relied on updates happening predominantly near the most recent RefSCC formed, but those have been replaced with ones that have much tighter general case bounds at this point. We do still perform asserts that only scale well due to this incrementality, but those are easy to place behind EXPENSIVE_CHECKS. Removing this simplifies the entire analysis by having a single up-front step that builds all of the RefSCCs in a direct Tarjan walk. We can even easily replace this with other or better algorithms at will and with much less confusion now that there is no iterator-based incremental logic involved. This removes a lot of complexity from LCG. Another advantage of moving in this direction is that it simplifies testing the system substantially as we no longer have to worry about observing and mutating the graph half-way through the RefSCC formation. We still need a somewhat special iterator for RefSCCs because we want the iterator to remain stable in the face of graph updates. However, this now merely involves relative indexing to the current RefSCC's position in the sequence which isn't too hard. Differential Revision: https://reviews.llvm.org/D29381 llvm-svn: 294227
2017-01-28 | Cleanup dump() functions. | Matthias Braun | 1 | -3/+9
We had various variants of defining dump() functions in LLVM. Normalize them (this should just consistently implement the things discussed in http://lists.llvm.org/pipermail/cfe-dev/2014-January/034323.html).

For reference:
- Public headers should just declare the dump() method but not use LLVM_DUMP_METHOD or #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
- The definition of a dump method should look like this:

    #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
    LLVM_DUMP_METHOD void MyClass::dump() {
      // print stuff to dbgs()...
    }
    #endif

llvm-svn: 293359
2016-12-28 | [PM] Teach the CGSCC's CG update utility to more carefully invalidate | Chandler Carruth | 1 | -10/+29
analyses when we're about to break apart an SCC. We can't wait until after breaking apart the SCC to invalidate things:
1) Which SCC do we then invalidate? All of them?
2) Even if we invalidate all of them, a newly created SCC may not have a proxy that will convey the invalidation to functions!

Previously we only invalidated one of the SCCs and too late. This led to stale analyses remaining in the cache. And because the caching strategy actually works, they would get used and chaos would ensue.

Doing invalidation early is somewhat pessimizing though if we *know* that the SCC structure won't change. So it turns out that the design to make the mutation API force the caller to know the *kind* of mutation in advance was indeed 100% correct and we didn't do enough of it. So this change also splits two cases of switching a call edge to a ref edge into two separate APIs so that callers can clearly test for this and take the easy path without invalidating when appropriate. This is particularly important in this case as we expect most inlines to be between functions in separate SCCs and so the common case is that we don't have to so aggressively invalidate analyses.

The LCG API change in turn needed some basic cleanups and better testing in its unittest. No interesting functionality changed there other than more coverage of the returned sequence of SCCs.

While this seems like an obvious improvement over the current state, I'd like to revisit the core concept of invalidating within the CG-update layer at all. I'm wondering if we would be better served forcing the callers to handle the invalidation beforehand in the cases that they can handle it. An interesting example is when we want to teach the inliner to *update and preserve* analyses. But we can cross that bridge when we get there.

With this patch, the new pass manager can build all of the LLVM test suite at -O3 and everything passes. =D I haven't bootstrapped yet and I'm sure there are still plenty of bugs, but this gives a nice baseline so I'm going to increasingly focus on fleshing out the missing functionality, especially the bits that are just turned off right now in order to let us establish this baseline.

llvm-svn: 290664
2016-12-28 | [LCG] Teach the ref edge removal to handle a ref edge that is trivial | Chandler Carruth | 1 | -1/+7
due to a call cycle. This actually crashed the ref removal before. I've added a unittest that covers this kind of interesting graph structure and mutation. llvm-svn: 290645
2016-12-09 | [LCG] Minor cleanup to the LCG walk over a function, NFC. | Chandler Carruth | 1 | -19/+22
This just hoists the check for declarations up a layer which allows various sets used in the walk to be smaller. Also moves the relevant comments to match, and catches a few other cleanups in this code. llvm-svn: 289163
2016-12-07 | [LCG] Add basic verification of the parent set and fix bugs it uncovers. | Chandler Carruth | 1 | -4/+23
The existing unittests actually cover this now that we verify things. llvm-svn: 288875
2016-12-06 | [LCG] Add some much needed asserts and verify runs to uncover | Chandler Carruth | 1 | -4/+12
a hilarious bug and fix it. We somehow were never verifying the RefSCCs newly formed when splitting an existing one apart, and when verifying them we weren't really checking the SCC indices mapping effectively. If we had been, it would have been blindingly obvious that right after putting something into `RC.SCCs` we should update `RC.SCCIndices` instead of `SCCIndices`, which we were about to clear and rebuild anyways. =[ Anyways, this is thoroughly covered by existing tests now that we actually verify things properly. llvm-svn: 288795
2016-11-23 | [PM] Change the static object whose address is used to uniquely identify | Chandler Carruth | 1 | -1/+1
analyses to have a common type which is enforced rather than using a char object and a `void *` type when used as an identifier.

This has a number of advantages. First, it at least helps some of the confusion raised in Justin Lebar's code review of why `void *` was being used everywhere by having a stronger type that connects to documentation about this. However, perhaps more importantly, it addresses a serious issue where the alignment of these pointer-like identifiers was unknown. This made it hard to use them in pointer-like data structures. We were already dodging this in dangerous ways to create the "all analyses" entry. In a subsequent patch I attempted to use these with TinyPtrVector and things fell apart in a very bad way. And it isn't just a compile time or type system issue. Worse than that, the actual alignment of these pointer-like opaque identifiers wasn't guaranteed to be a useful alignment as they were just characters.

This change introduces a type to use as the "key" object whose address forms the opaque identifier. This both forces the objects to have proper alignment, and provides type checking that we get it right everywhere. It also makes the types somewhat less mysterious than `void *`.

We could go one step further and introduce a truly opaque pointer-like type to return from the `ID()` static function rather than returning `AnalysisKey *`, but that didn't seem to be a clear win so this is just the initial change to get to a reliably typed and aligned object serving as a key for all the analyses.

Thanks to Richard Smith and Justin Lebar for helping pick plausible names and avoid making this refactoring many times. =] And thanks to Sean for the super fast review!

While here, I've tried to move away from the "PassID" nomenclature entirely as it wasn't really helping and is overloaded with old pass manager constructs. Now we have IDs for analyses, and key objects whose address can be used as IDs. Where possible and clear I've shortened this to just "ID". In a few places I kept "AnalysisID" to make it clear what was being identified.

Differential Revision: https://reviews.llvm.org/D27031 llvm-svn: 287783
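The resulting pattern for a new-PM analysis, sketched as a generic example (MyAnalysis and its Result are illustrative, not tied to this file):

  #include "llvm/IR/Function.h"
  #include "llvm/IR/PassManager.h"
  using namespace llvm;

  class MyAnalysis : public AnalysisInfoMixin<MyAnalysis> {
    friend AnalysisInfoMixin<MyAnalysis>;
    // The address of this properly aligned key object is the analysis ID.
    static AnalysisKey Key;

  public:
    struct Result { unsigned Value = 0; };
    Result run(Function &F, FunctionAnalysisManager &AM) { return Result(); }
  };

  AnalysisKey MyAnalysis::Key;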
2016-11-22 | [LCG] Add a previously missing assert about the relationship of RefSCCs. | Chandler Carruth | 1 | -0/+7
No intended change, everything seems to be in working order already. llvm-svn: 287705
2016-11-22 | [LCG] Add utilities to compute parent and ancestor relationships between | Chandler Carruth | 1 | -0/+58
SCCs. These will be fairly expensive routines to call and might be abused in real code, but are quite useful when debugging or in asserts and are reasonable and well formed properties to query. I've used one of them in an assert that was requested in a code review here. In subsequent commits I'll start using these routines more heavily, for example in unittests etc. But this at least gets the groundwork in place. Differential Revision: https://reviews.llvm.org/D25506 llvm-svn: 287682
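A sketch of the kind of debug-only use these queries are meant for. The method names come from the commit text above, but the guard, the helper, and the exact signatures are assumptions; these routines walk graph edges and are intended only for asserts and debugging builds.

  #include "llvm/Analysis/LazyCallGraph.h"
  using namespace llvm;

  // Returns true if the two RefSCCs are connected in the RefSCC DAG in
  // either direction. Only meaningful in checked builds.
  bool areRelated(LazyCallGraph::RefSCC &A, LazyCallGraph::RefSCC &B) {
  #if !defined(NDEBUG) || defined(EXPENSIVE_CHECKS)
    return &A == &B || A.isAncestorOf(B) || A.isDescendantOf(B);
  #else
    return false; // queries unavailable in release builds in this sketch
  #endif
  }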
2016-10-12 | [LCG] Add the necessary functionality to the LazyCallGraph to support inlining. | Chandler Carruth | 1 | -1/+164
The basic inlining operation makes the following changes to the call graph:
1) Add edges that were previously transitive edges. This is always trivial and this patch gives the LCG helper methods to make this more convenient.
2) Remove the inlined edge. We had existing support for this, but it contained bugs that needed to be fixed. Testing in the same pattern as the inliner exposes these bugs very nicely.
3) Delete a function when it becomes dead because it is internal and all calls have been inlined. The LCG had no support at all for this operation, so this adds that support.

Two unittests have been added that exercise this specific mutation pattern to the call graph. They were extremely effective in uncovering bugs. Sadly, a large fraction of the code here is just to implement those unit tests, but I think they're paying for themselves. =]

This was split out of a patch that actually uses the routines to implement inlining in the new pass manager in order to isolate (with unit tests) the logic that was entirely within the LCG.

Many thanks for the careful review from folks! There will be a few minor follow-up patches based on the comments in the review as well.

Differential Revision: https://reviews.llvm.org/D24225 llvm-svn: 283982
2016-09-16 | [LCG] Redesign the lazy post-order iteration mechanism for the | Chandler Carruth | 1 | -103/+127
LazyCallGraph to support repeated, stable iterations, even in the face of graph updates. This is particularly important to allow the CGSCC pass manager to walk the RefSCCs (and thus everything else) in a module more than once. Lots of unittests and other tests were hard or impossible to write because repeated CGSCC pass managers which didn't invalidate the LazyCallGraph would conclude the module was empty after the first one. =[ Really, really bad.

The interesting thing is that in many ways this simplifies the code. We can now re-use the same code for handling reference edge insertion updates of the RefSCC graph as we use for handling call edge insertion updates of the SCC graph. Outside of adapting to the shared logic for this (which isn't trivial, but is *much* simpler than the DFS it replaces!), the new code involves putting newly created RefSCCs when deleting a reference edge into the cached list in the correct way, and to re-formulate the iterator to be stable and effective even in the face of these kinds of updates.

I've updated the unittests for the LazyCallGraph to re-iterate the postorder sequence and verify that this all works. We even check for using alternating iterators to trigger the lazy formation of RefSCCs after mutation has occurred.

It's worth noting that there are a reasonable number of likely simplifications we can make past this. It isn't clear that we need to keep the "LeafRefSCCs" around any more. But I've not removed that mostly because I want this to be a more isolated change.

Differential Revision: https://reviews.llvm.org/D24219 llvm-svn: 281716
2016-09-04 | [LCG] Clean up and make NDEBUG verify calls more rigorous with | Chandler Carruth | 1 | -32/+38
make_scope_exit now that we have that utility. This makes the code much more clear and readable by isolating the check. It also makes it easy to go through and make sure all the interesting update routines have a start and end verify so we don't slowly let the graph drift into an invalid state. llvm-svn: 280619
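The shape of the pattern this commit standardizes on, sketched with stand-in types (MyGraph, verify(), and mutate() are placeholders, not the actual LazyCallGraph API):

  #include "llvm/ADT/ScopeExit.h"

  struct MyGraph {          // stand-in for a graph type with invariants
    void verify() const {}  // checks internal consistency in real code
    void mutate() {}
  };

  void updateGraph(MyGraph &G) {
  #ifndef NDEBUG
    // Check invariants on entry and again on every path out of the routine.
    G.verify();
    auto VerifyOnExit = llvm::make_scope_exit([&] { G.verify(); });
  #endif
    G.mutate();
  }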