author     Andi Kleen <ak@linux.intel.com>   2011-10-29 01:01:34 +0000
committer  Andi Kleen <ak@gcc.gnu.org>       2011-10-29 01:01:34 +0000
commit     bf72b0094aa097ec23fdac68b33d2f86274bfd1d (patch)
tree       68feb3c0af7c2a57c894bfbcfdcd0d088cc63d89
parent     3b6a5655d7535efcc9897a14545a16a16a7e6eb8 (diff)
download   gcc-bf72b0094aa097ec23fdac68b33d2f86274bfd1d.zip
           gcc-bf72b0094aa097ec23fdac68b33d2f86274bfd1d.tar.gz
           gcc-bf72b0094aa097ec23fdac68b33d2f86274bfd1d.tar.bz2
Add missing page rounding of a page_entry
This one place in ggc forgot to round page_entry->bytes up to the next page boundary, which led the heuristics in the freeing code that check for contiguous memory to always fail. Round here too, like all the other allocators already do.

The memory consumed should be the same for MMAP, because the kernel would round up anyway. It may slightly increase memory usage when malloc groups are used.

This will also slightly increase the hit rate on the free page list.

gcc/:

2011-10-18  Andi Kleen  <ak@linux.intel.com>

	* ggc-page.c (alloc_pages): Always round up entry_size.

From-SVN: r180647
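For reference, a minimal sketch of the rounding involved, assuming ROUND_UP has the usual power-of-two rounding definition; the macro body and the example sizes below are illustrative assumptions, not copied from GCC:

/* Round x up to the next multiple of f; f must be a power of two.
   This mirrors what the patch does to entry_size, but the macro
   definition here is an assumption, not GCC's actual one.  */
#include <stdio.h>

#define ROUND_UP(x, f) (((x) + (f) - 1) & ~((size_t) (f) - 1))

int
main (void)
{
  size_t pagesize = 4096;    /* hypothetical page size */
  size_t entry_size = 5000;  /* hypothetical unrounded group size */

  printf ("%zu -> %zu\n", entry_size, ROUND_UP (entry_size, pagesize));
  /* prints: 5000 -> 8192 */
  return 0;
}

With entry_size always a multiple of the page size, the end address of one page_entry can line up exactly with the start of the next, which is what the freeing heuristics rely on.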
-rw-r--r--   gcc/ChangeLog    4
-rw-r--r--   gcc/ggc-page.c   1
2 files changed, 5 insertions, 0 deletions
diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index 6686f7d..65df15b 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,7 @@
+2011-10-18  Andi Kleen  <ak@linux.intel.com>
+
+	* ggc-page.c (alloc_pages): Always round up entry_size.
+
 2011-10-19  Andi Kleen  <ak@linux.intel.com>
 
 	* Makefile.in (MOSTLYCLEANFILES): Add gcc-ar/nm/ranlib.
diff --git a/gcc/ggc-page.c b/gcc/ggc-page.c
index 617a493..077bc8e 100644
--- a/gcc/ggc-page.c
+++ b/gcc/ggc-page.c
@@ -737,6 +737,7 @@ alloc_page (unsigned order)
   entry_size = num_objects * OBJECT_SIZE (order);
   if (entry_size < G.pagesize)
     entry_size = G.pagesize;
+  entry_size = ROUND_UP (entry_size, G.pagesize);
   entry = NULL;
   page = NULL;
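To illustrate why the rounding matters when pages are freed, here is a hedged sketch of the kind of contiguity check the commit message refers to. The struct, the function name release_contiguous, and the loop are illustrative only, not the actual ggc-page.c release code:

/* Hedged sketch: adjacent free entries are merged into one region so
   they can be released with a single munmap.  If an entry's byte
   count is not page-rounded, "start + len == next->start" rarely
   holds and no merging happens.  */
#include <stddef.h>
#include <sys/mman.h>

struct free_entry
{
  char *start;              /* start of the mapping */
  size_t len;               /* length in bytes (should be page-rounded) */
  struct free_entry *next;  /* next entry, sorted by address */
};

static void
release_contiguous (struct free_entry *list)
{
  struct free_entry *p, *next;

  for (p = list; p; p = next)
    {
      char *start = p->start;
      size_t len = p->len;

      /* Grow the run while the next entry begins exactly where the
         current run ends.  */
      for (next = p->next; next && next->start == start + len;
           next = next->next)
        len += next->len;

      munmap (start, len);
    }
}

If len were not a multiple of the page size, each entry would be unmapped separately, which is the behavior the patch avoids by rounding entry_size up front.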