author     Pedro Alves <palves@redhat.com>  2017-06-12 00:49:51 +0100
committer  Pedro Alves <palves@redhat.com>  2017-06-12 17:06:25 +0100
commit     70a1152bee7cb959ab0c6c13bada03190125022f
tree       3fc5d2bbb01ea32fbdf97502b2f7cd5ef7953c14
parent     c2f134ac418eafca850e7095d789a01ec1142fc4
.gdb_index prod perf regression: find before insert in unordered_map
"perf" shows the unordered_map::emplace call in write_hash_table a bit
high up on profiles. Fix this using the find + insert idiom instead
of going straight to insert.
I tried doing the same to the other unordered_maps::emplace calls in
the file, but saw no performance improvement, so left them be.
With a '-g3 -O2' build of gdb, and:
$ cat save-index.cmd
set $i = 0
while $i < 100
save gdb-index .
set $i = $i + 1
end
$ time ./gdb -data-directory=data-directory -nx --batch -q -x save-index.cmd ./gdb.pristine
I get an improvement of ~7%:
~7.0s => ~6.5s (average of 5 runs).
gdb/ChangeLog:
2017-06-12 Pedro Alves <palves@redhat.com>
* dwarf2read.c (write_hash_table): Check if key already exists
before emplacing.
 gdb/ChangeLog    |  5 +
 gdb/dwarf2read.c | 21 +-
 2 files changed, 21 insertions(+), 5 deletions(-)
diff --git a/gdb/ChangeLog b/gdb/ChangeLog
index 4c8657c..01b66a1 100644
--- a/gdb/ChangeLog
+++ b/gdb/ChangeLog
@@ -1,5 +1,10 @@
 2017-06-12  Pedro Alves  <palves@redhat.com>

+	* dwarf2read.c (write_hash_table): Check if key already exists
+	before emplacing.
+
+2017-06-12  Pedro Alves  <palves@redhat.com>
+
 	* dwarf2read.c (data_buf::append_space): Rename to...
 	(data_buf::grow): ... this, and make private.  Adjust all callers.
 	(data_buf::append_uint): New method.
diff --git a/gdb/dwarf2read.c b/gdb/dwarf2read.c
index 63a591e..93fd275 100644
--- a/gdb/dwarf2read.c
+++ b/gdb/dwarf2read.c
@@ -23430,11 +23430,22 @@ write_hash_table (mapped_symtab *symtab, data_buf &output, data_buf &cpool)
       if (it == NULL)
 	continue;
       gdb_assert (it->index_offset == 0);
-      const auto insertpair
-	= symbol_hash_table.emplace (it->cu_indices, cpool.size ());
-      it->index_offset = insertpair.first->second;
-      if (!insertpair.second)
-	continue;
+
+      /* Finding before inserting is faster than always trying to
+	 insert, because inserting always allocates a node, does the
+	 lookup, and then destroys the new node if another node
+	 already had the same key.  C++17 try_emplace will avoid
+	 this.  */
+      const auto found
+	= symbol_hash_table.find (it->cu_indices);
+      if (found != symbol_hash_table.end ())
+	{
+	  it->index_offset = found->second;
+	  continue;
+	}
+
+      symbol_hash_table.emplace (it->cu_indices, cpool.size ());
+      it->index_offset = cpool.size ();
       cpool.append_data (MAYBE_SWAP (it->cu_indices.size ()));
       for (const auto iter : it->cu_indices)
 	cpool.append_data (MAYBE_SWAP (iter));