| author | Tom de Vries <tdevries@suse.de> | 2026-03-05 22:10:09 +0100 |
|---|---|---|
| committer | Tom de Vries <tdevries@suse.de> | 2026-03-05 22:10:09 +0100 |
| commit | 0f9fdc542360d9159b36d8cf20c35de4c2fc2354 (patch) | |
| tree | 075b8a92646d5b75fc65d18c4e9b8577f7197bb0 /libctf/testsuite/libctf-writable/writable.exp | |
| parent | e1c3ac064ae5fec241f2c1f9fd712dbe5b3c19d2 (diff) | |
Our current BFD locking scheme is as follows [1]:
...
There is one global mutex, gdb_bfd_mutex, which BFD can lock and unlock via
the callbacks we pass it. This appears to lock the internal global data
structures of BFD (like its global cache or some global counter), but not data
in individual `bfd *` instances. If the user of BFD wishes to call functions
on a given `bfd *` from multiple threads, it must provide the synchronization
itself. For this, we have gdb_bfd_data::per_bfd_mutex.
...
PR33811 reports the following data race:
...
Read of size 1 at 0x72440010c608 by thread T5 (mutexes: write M0):
#0 bfd_get_section_limit_octets bfd.h:2433
#1 bfd_get_section_contents bfd/section.c:1612
#2 bfd_is_section_compressed_info bfd/compress.c:901
#3 bfd_is_section_compressed bfd/compress.c:959
#4 gdb_bfd_map_section(bfd_section*, unsigned long*) gdb/gdb_bfd.c:779
...
vs:
...
Previous write of size 4 at 0x72440010c608 by main thread (mutexes: write M1):
#0 bfd_cache_delete bfd/cache.c:180
#1 _bfd_cache_close_unlocked bfd/cache.c:607
#2 bfd_cache_close_all bfd/cache.c:664
#3 notify_before_prompt gdb/event-top.c:524
...
In more detail, the racing read is this one in bfd_get_section_limit_octets in bfd/bfd-in2.h:
...
if (abfd->direction != write_direction && sec->rawsize != 0)
...
vs. this write in bfd_cache_delete in bfd/cache.c:
...
abfd->last_io = bfd_io_force;
...
Locking is already in place for both the read and the write.
In gdb_bfd_map_section, we use the per-BFD lock:
...
gdb_bfd_data *gdata = (gdb_bfd_data *) bfd_usrdata (abfd);
gdb::lock_guard<gdb::mutex> guard (gdata->per_bfd_mutex);
...
And in bfd_cache_close_all, we use the global BFD lock:
...
bool
bfd_cache_close_all (void)
{
...
if (!bfd_lock ())
return false;
...
if (!bfd_unlock ())
return false;
return ret;
}
...
The problem is that the locking is not sufficient. Since bfd_cache_close_all
accesses individual BFDs, it needs to lock the corresponding per-BFD locks as
well.
A naive way to implement this using the existing scheme of wrappers would be to
add a gdb_bfd_cache_close_all that locks all per-BFD locks, calls
bfd_cache_close_all, and then unlocks all per-BFD locks, like this:
...
bool
gdb_bfd_cache_close_all ()
{
bool res;
for (auto abfd : all_bfds)
{
auto gdata = static_cast<gdb_bfd_data *> (bfd_usrdata (abfd));
gdata->per_bfd_mutex.lock ();
}
res = bfd_cache_close_all ();
for (auto abfd : all_bfds)
{
auto gdata = static_cast<gdb_bfd_data *> (bfd_usrdata (abfd));
gdata->per_bfd_mutex.unlock ();
}
return res;
}
...
Apart from the fact that trying to hold all those locks at the same time
increases the chances of deadlock, this also accesses all_bfds without holding
the required global BFD lock (reported by TSAN).
It's easy enough to fix that by adding:
...
gdb_bfd_cache_close_all ()
{
+ gdb::lock_guard<gdb::recursive_mutex> guard (gdb_bfd_mutex);
...
but that brings us to the problem of lock-order-inversion (also reported by
TSAN), and indeed timeouts do occur.
I came up with a complicated scheme [2] that:
- doesn't try to lock all the per-BFD locks at the same time, and
- addresses the lock-order-inversion problem by releasing the global BFD lock
  before acquiring the per-BFD lock, and then re-acquiring the global BFD lock.
However, this approach was seen as too convoluted.
So instead, revert to a simple locking scheme with only the global BFD lock,
dropping the per-BFD lock.
This changes the per-BFD locking in gdb_bfd_map_section to global BFD locking,
which means that the read in bfd_get_section_limit_octets is now guarded by
the global BFD lock, which is the same lock guarding the write in
bfd_cache_delete. So, the race is fixed.
Approved-By: Tom Tromey <tom@tromey.com>
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33811
[1] https://sourceware.org/pipermail/gdb-patches/2026-January/224291.html
[2] https://sourceware.org/pipermail/gdb-patches/2026-January/224426.html
