author    Andreas Arnez <arnez@linux.vnet.ibm.com>  2016-11-24 17:48:04 +0100
committer Andreas Arnez <arnez@linux.vnet.ibm.com>  2016-11-24 17:48:04 +0100
commit    793c128d03113816db85e8d1fa0bcd4982e246ee
tree      b0b4e0219e6bb6b2900407c36bb9fc94702c2bac
parent    ad06383f106ccfa299a6c7ac9720178d2d3d583f
Optimize byte-aligned copies in copy_bitwise()
The function copy_bitwise, used for copying DWARF pieces, can potentially be
invoked for large chunks of data.  For instance, consider a large struct one
of whose members is currently located in a register.  In this case
copy_bitwise would still copy the data bit by bit in a loop, which is much
slower than necessary.  This change uses memcpy for the large part instead,
where possible.

gdb/ChangeLog:

	* dwarf2loc.c (copy_bitwise): Use memcpy for the middle part,
	if it is byte-aligned.
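The optimization can be illustrated with a minimal sketch.  This is not
GDB's actual copy_bitwise (which additionally handles bit-endianness and
unaligned offsets more generally); it is a simplified, hypothetical version
that numbers bits from the LSB of each byte and shows only the idea of the
patch: when both offsets are byte-aligned, copy the byte-sized middle part
with memcpy and fall back to the bit loop only for a trailing partial byte.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Sketch only (not GDB's implementation): copy NBITS bits from SOURCE
   starting at bit offset SOURCE_OFFSET to DEST starting at bit offset
   DEST_OFFSET.  Bits are numbered from the least significant bit of
   each byte.  */
static void
copy_bitwise_sketch (uint8_t *dest, size_t dest_offset,
                     const uint8_t *source, size_t source_offset,
                     size_t nbits)
{
  /* Fast path: with byte-aligned offsets, the whole middle part can be
     copied with a single memcpy; only a trailing partial byte (if any)
     still needs the bitwise loop.  */
  if (dest_offset % 8 == 0 && source_offset % 8 == 0)
    {
      memcpy (dest + dest_offset / 8,
              source + source_offset / 8,
              nbits / 8);
      dest_offset += (nbits / 8) * 8;
      source_offset += (nbits / 8) * 8;
      nbits %= 8;
    }

  /* Slow path: copy the remaining bits one at a time.  */
  for (size_t i = 0; i < nbits; i++)
    {
      size_t s = source_offset + i;
      size_t d = dest_offset + i;
      unsigned bit = (source[s / 8] >> (s % 8)) & 1;

      if (bit)
        dest[d / 8] |= (uint8_t) (1u << (d % 8));
      else
        dest[d / 8] &= (uint8_t) ~(1u << (d % 8));
    }
}
```

For a large register-resident struct member, the byte-aligned case turns
thousands of single-bit iterations into one memcpy, which is the speedup
the commit message describes.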
Diffstat (limited to 'gdb/go32-nat.c')
0 files changed, 0 insertions, 0 deletions