author     Andi Kleen <ak@linux.intel.com>   2011-10-29 01:01:34 +0000
committer  Andi Kleen <ak@gcc.gnu.org>       2011-10-29 01:01:34 +0000
commit     bf72b0094aa097ec23fdac68b33d2f86274bfd1d
tree       68feb3c0af7c2a57c894bfbcfdcd0d088cc63d89 /gcc/ggc-page.c
parent     3b6a5655d7535efcc9897a14545a16a16a7e6eb8
Add missing page rounding of a page_entry
This one place in ggc forgot to round page_entry->bytes up to the
next page boundary, which caused the heuristics used when freeing,
which check for contiguous memory, to always fail. Round here too,
like all the other allocation paths already do. The memory consumed
should be the same for mmap, because the kernel rounds up to a page
boundary anyway; it may slightly increase memory usage when malloc
groups are used. This will also slightly increase the hit rate on
the free page list.
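
For reference, a hypothetical sketch (not the actual ggc-page.c code;
the struct and function names below are made up for illustration) of
why an un-rounded byte count defeats the contiguity check used when
freeing: mmap hands out whole pages, so two adjacent mappings only
look adjacent if each entry's recorded size is a multiple of the page
size.

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for the allocator's per-page bookkeeping.  */
struct page_entry_sketch
{
  char *page;    /* start of the mmap'ed region */
  size_t bytes;  /* recorded size of the region */
};

/* Two entries are treated as one contiguous region only if the first
   ends exactly where the second begins.  If BYTES was not rounded up
   to a page boundary, this never matches, even though the kernel
   mapped whole pages back to back.  */
static bool
contiguous_p (const struct page_entry_sketch *a,
              const struct page_entry_sketch *b)
{
  return a->page + a->bytes == b->page;
}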
gcc/:
2011-10-18 Andi Kleen <ak@linux.intel.com>
* ggc-page.c (alloc_page): Always round entry_size up to the page size.
From-SVN: r180647
Diffstat (limited to 'gcc/ggc-page.c')
-rw-r--r--  gcc/ggc-page.c | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/gcc/ggc-page.c b/gcc/ggc-page.c
index 617a493..077bc8e 100644
--- a/gcc/ggc-page.c
+++ b/gcc/ggc-page.c
@@ -737,6 +737,7 @@ alloc_page (unsigned order)
   entry_size = num_objects * OBJECT_SIZE (order);
   if (entry_size < G.pagesize)
     entry_size = G.pagesize;
+  entry_size = ROUND_UP (entry_size, G.pagesize);
 
   entry = NULL;
   page = NULL;
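
The ROUND_UP used above computes the smallest multiple of its second
argument that is >= its first. A minimal standalone sketch (the macro
definition below is the conventional one, written out here for
illustration rather than copied from ggc-page.c):

#include <stdio.h>
#include <stddef.h>

/* Round X up to the next multiple of F (assumes F > 0).  */
#define ROUND_UP(x, f) ((((x) + (f) - 1) / (f)) * (f))

int
main (void)
{
  size_t pagesize = 4096;
  /* An entry size that is not a multiple of the page size ...  */
  size_t entry_size = 3 * 4096 + 100;
  /* ... is rounded to the next page boundary, so the recorded size
     matches what mmap actually reserves.  Prints: 12388 -> 16384.  */
  printf ("%zu -> %zu\n", entry_size, ROUND_UP (entry_size, pagesize));
  return 0;
}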