author     Pedro Alves <palves@redhat.com>  2017-06-12 00:49:51 +0100
committer  Pedro Alves <palves@redhat.com>  2017-06-12 17:06:25 +0100
commit     70a1152bee7cb959ab0c6c13bada03190125022f (patch)
tree       3fc5d2bbb01ea32fbdf97502b2f7cd5ef7953c14 /gdb/ChangeLog
parent     c2f134ac418eafca850e7095d789a01ec1142fc4 (diff)
.gdb_index prod perf regression: find before insert in unordered_map
"perf" shows the unordered_map::emplace call in write_hash_table a bit
high up on profiles. Fix this using the find + insert idiom instead
of going straight to insert.
I tried doing the same to the other unordered_map::emplace calls in
the file, but saw no performance improvement, so left them be.
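As an aside, here is a minimal sketch of the find + insert idiom. It is
illustrative only: the intern function, the pool map and the next_offset
parameter are invented for the example and are not the actual
dwarf2read.c code.

#include <string>
#include <unordered_map>

typedef unsigned int offset_type;

/* Illustrative sketch only, not the real gdb code.  Return the pool
   offset recorded for KEY, adding KEY with NEXT_OFFSET if it has not
   been seen before.  */

static offset_type
intern (std::unordered_map<std::string, offset_type> &pool,
        const std::string &key, offset_type next_offset)
{
  /* Calling pool.emplace (key, next_offset) unconditionally typically
     allocates and constructs a node (copying the key) on every call,
     only to discard it whenever the key is already present.  */

  /* find before insert: do a cheap lookup first, so a node is only
     built for keys seen for the first time.  */
  auto it = pool.find (key);
  if (it != pool.end ())
    return it->second;   /* Duplicate key: reuse its offset.  */

  pool.emplace (key, next_offset);
  return next_offset;
}

The win comes from the duplicate case: find only hashes and compares,
while emplace has to build the node before it can detect the collision,
so a map that sees many repeated keys benefits most.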
With a '-g3 -O2' build of gdb, and:
$ cat save-index.cmd
set $i = 0
while $i < 100
save gdb-index .
set $i = $i + 1
end
$ time ./gdb -data-directory=data-directory -nx --batch -q -x save-index.cmd ./gdb.pristine
I get an improvement of ~7%:
~7.0s => ~6.5s (average of 5 runs).
gdb/ChangeLog:
2017-06-12 Pedro Alves <palves@redhat.com>
* dwarf2read.c (write_hash_table): Check if key already exists
before emplacing.
Diffstat (limited to 'gdb/ChangeLog')
-rw-r--r--  gdb/ChangeLog  5
1 file changed, 5 insertions, 0 deletions
diff --git a/gdb/ChangeLog b/gdb/ChangeLog
index 4c8657c..01b66a1 100644
--- a/gdb/ChangeLog
+++ b/gdb/ChangeLog
@@ -1,5 +1,10 @@
 2017-06-12 Pedro Alves <palves@redhat.com>
 
+	* dwarf2read.c (write_hash_table): Check if key already exists
+	before emplacing.
+
+2017-06-12 Pedro Alves <palves@redhat.com>
+
 	* dwarf2read.c (data_buf::append_space): Rename to...
 	(data_buf::grow): ... this, and make private. Adjust all callers.
 	(data_buf::append_uint): New method.