|
The comment over dwarf2_section_info::get_size says:
In other cases, you must call this function, because for compressed
sections the size field is not set correctly until the section has
been read
From what I can see (while debugging a test case compiled with -gz on
Linux), that's not true. For compressed sections, bfd_section_size
returns the uncompressed size. asection::size contains the uncompressed
size while asection::compressed_size contains the compressed size:
(top-gdb) p sec
$13 = (asection *) 0x521000119778
(top-gdb) p sec.compressed_size
$14 = 6191
(top-gdb) p sec.size
$15 = 12116
I therefore propose to remove dwarf2_section_info::get_size, as it
appears that reading in the section is orthogonal to knowing its size.
If the assumption above is false, it would be nice to document in which
case it's false.
I checked the callers, and I don't think that we need to add any
dwarf2_section_info::read calls to compensate for the fact that get_size
used to do it.
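For reference, a minimal sketch (not the actual patch) of how a caller can
query the size straight from BFD, without reading the section in first:
  #include "bfd.h"
  static bfd_size_type
  dwarf_section_uncompressed_size (asection *sec)
  {
    /* For compressed sections asection::size already holds the uncompressed
       size; the compressed on-disk size lives in asection::compressed_size.  */
    return bfd_section_size (sec);
  }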
Change-Id: I428571e532301d49f1d8242d687e1fcb819b75c1
Approved-By: Tom Tromey <tom@tromey.com>
|
|
|
|
I noticed a few spots in ada-varobj.c that refer to calling xfree, even
though the type in question has changed to std::string. This patch
removes these obsolete comments.
|
|
|
|
gprofng/ChangeLog
2025-04-17 Vladimir Mezentsev <vladimir.mezentsev@oracle.com>
* src/gp-archive.cc: Fix the --help output.
* src/gp-collect-app.cc: Likewise.
* src/gp-display-src.cc: Likewise.
* src/gp-display-text.cc: Likewise.
* src/gprofng.cc: Likewise.
|
|
A fix for commit c4fce3ef2927.
|
|
The test added in commit 4fe96ddaf614 results in an asan complaint:
loongarch-parse.y:225:16: runtime error: left shift of negative value -1
To avoid the complaint, perform left shifts as unsigned (which gives
the same result on 2's complement machines). Do the same for
addition, subtraction and multiplication. Furthermore, warn on
divide/modulus by zero.
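A sketch of the technique (not the exact loongarch-parse.y change), assuming
64-bit expression values:
  #include <stdint.h>
  /* Shift on the unsigned type so a negative left operand is not undefined
     behaviour; converting back gives the expected value on 2's complement
     machines.  */
  static int64_t
  shift_left (int64_t val, unsigned int amount)
  {
    return (int64_t) ((uint64_t) val << amount);
  }
  /* Guard division/modulus so a zero divisor triggers a warning in the
     caller instead of undefined behaviour.  */
  static int64_t
  divide (int64_t a, int64_t b, int *ok)
  {
    *ok = (b != 0);
    return *ok ? a / b : 0;
  }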
|
|
|
|
After building gdb with -fsanitize=threads, and running test-case
gdb.cp/cplusfuncs.exp, I run into a single timeout:
...
FAIL: gdb.cp/cplusfuncs.exp: info function operator=( (timeout)
...
and the test-case takes 2m33s to finish.
This is due to expanding CUs from libstdc++.
After de-installing package libstdc++6-debuginfo, the timeout disappears and
testing time goes down to 9 seconds.
Fix this by not running to main, which brings testing time down to 3 seconds.
With a gdb built without -fsanitize=threads, testing time goes down from 11
seconds to less than 1 second.
Tested on x86_64-linux.
Reviewed-By: Keith Seitz <keiths@redhat.com>
|
|
value_struct_elt_bitpos is weird: it takes an in/out value parameter,
and it takes an error string parameter. However, it only has a single
caller, which never uses the "out" value.
I think it was done this way to mimic value_struct_elt. However,
value_struct_elt is pretty ugly and I don't think it's worth
imitating.
This patch cleans up value_struct_elt_bitpos a bit.
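The commit doesn't quote the new interface, but the general shape of such a
cleanup might look like this (hypothetical illustration only, not the actual
patch):
  struct value;
  struct type;
  /* Before: in/out value plus a caller-supplied error string, mimicking
     value_struct_elt.  */
  struct value *value_struct_elt_bitpos (struct value **argp, int bitpos,
                                         struct type *field_type,
                                         const char *err);
  /* After: a plain "in" argument; errors are reported internally.  */
  struct value *value_struct_elt_bitpos (struct value *val, int bitpos,
                                         struct type *field_type);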
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
This test is not supposed to use -z force-bti
|
|
Both BTI and memory sealing require use of GNU properties and
we should check that there is no interference.
|
|
Fixes [1]. Previously, iteration over the GNU properties of an input file
could stop before reaching GNU_PROPERTY_AARCH64_FEATURE_1_AND, which
would result in incorrect inference of the properties of the output file.
In the particular use case described in [1], the memory seal property
GNU_PROPERTY_MEMORY_SEAL, with number 3, if present in the input file,
prevented reading information from the GNU_PROPERTY_AARCH64_FEATURE_1_AND
property due to filtering by property number.
[1] PR32818 https://sourceware.org/bugzilla/show_bug.cgi?id=32818
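A hedged sketch of the idea (simplified, host-endian only, not the actual
bfd code): walk every property in the note descriptor instead of stopping
at the first one with an "uninteresting" number.
  #include <stdint.h>
  #include <string.h>
  #define GNU_PROPERTY_AARCH64_FEATURE_1_AND 0xc0000000u
  static uint32_t
  scan_aarch64_feature_1_and (const unsigned char *desc, size_t size)
  {
    uint32_t features = 0;
    size_t off = 0;
    while (off + 8 <= size)
      {
        uint32_t pr_type, pr_datasz;
        memcpy (&pr_type, desc + off, 4);
        memcpy (&pr_datasz, desc + off + 4, 4);
        off += 8;
        if (pr_datasz > size - off)
          break;                        /* malformed descriptor  */
        if (pr_type == GNU_PROPERTY_AARCH64_FEATURE_1_AND && pr_datasz >= 4)
          memcpy (&features, desc + off, 4);
        /* Keep scanning past other property numbers (e.g. the memory seal
           property); do not stop early.  */
        off += (pr_datasz + 7) & ~(size_t) 7;   /* ELF64: 8-byte alignment  */
      }
    return features;
  }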
|
|
With test-case gdb.threads/clone-attach-detach.exp I usually get:
...
(gdb) attach <pid> &^M
Attaching to program: clone-attach-detach, process <pid>^M
[New LWP <lwp>]^M
(gdb) PASS: $exp: bg attach <n>: attach
[Thread debugging using libthread_db enabled]^M
Using host libthread_db library "/lib64/libthread_db.so.1".^M
...
but sometimes I run into:
...
(gdb) attach <pid> &^M
Attaching to program: clone-attach-detach, process <pid>^M
[New LWP <lwp>]^M
(gdb) [Thread debugging using libthread_db enabled]^M
Using host libthread_db library "/lib64/libthread_db.so.1".^M
FAIL: $exp: bg attach <n>: attach (timeout)
...
I managed to reproduce this using make target check-readmore and
READMORE_SLEEP=100.
Fix this using -no-prompt-anchor.
Tested on x86_64-linux.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
gdb/copyright.py currently changes some files that it shouldn't:
- despite having a `gnulib/import` entry in EXCLUDE_LIST, it does
change the files under that directory
- it is missing `sim/Makefile.in`
Change the exclude list logic to use glob patterns. This makes it
easier to specify exclusions of full directories or files by basename,
while simplifying the code.
Merge EXCLUDE_LIST and NOT_FSF_LIST, since there's no fundamental reason
to keep them separate (they are treated identically). I kept the
comment that explains that some files are excluded due to not being
FSF-licensed.
Merge EXCLUDE_ALL_LIST into EXCLUDE_LIST, converting the entries to glob
patterns that match everywhere in the tree (e.g. `**/configure`).
Tested by running the script on the parent commit of d01e823438c7
("Update copyright dates to include 2025") and diff'ing the result with
d01e823438c7. The only differences are:
- the files that we don't want to modify (gnulib/import and
sim/Makefile.in)
- the files that need to be modified by hand
Running the script on latest master produces no diff.
Change-Id: I318dc3bff34e4b3a9b66ea305d0c3872f69cd072
Reviewed-By: Guinevere Larsen <guinevere@redhat.com>
|
|
Add `pyright: strict` at the top of the file, then adjust the fallout.
This annotation is understood by pyright, and thus any IDE using pyright
behind the scenes (VSCode and probably others).
I presume that any GDB developer running this script is using a recent
enough version of Python, so specify the type annotations using the
actual types when possible (e.g. `list[str]` instead of
`typing.List[str]`). I believe this requires Python 3.9.
Change-Id: I3698e28555e236a03126d4cd010dae4b5647ce48
Reviewed-By: Guinevere Larsen <guinevere@redhat.com>
|
|
|
|
With test-case gdb.base/bg-execution-repeat.exp, occasionally I run into a
timeout:
...
(gdb) c 1&
Will stop next time breakpoint 1 is reached. Continuing.
(gdb) PASS: $exp: c 1&: c 1&
Breakpoint 2, foo () at bg-execution-repeat.c:23
23 return 0; /* set break here */
PASS: $exp: c 1&: breakpoint hit 1
Will stop next time breakpoint 2 is reached. Continuing.
(gdb) PASS: $exp: c 1&: repeat bg command
print 1
$1 = 1
(gdb) PASS: $exp: c 1&: input still accepted
interrupt
(gdb) PASS: $exp: c 1&: interrupt
Program received signal SIGINT, Interrupt.
foo () at bg-execution-repeat.c:24
24 }
PASS: $exp: c 1&: interrupt received
set var do_wait=0
(gdb) PASS: $exp: c 1&: set var do_wait=0
continue&
Continuing.
(gdb) PASS: $exp: c 1&: continue&
FAIL: $exp: c 1&: breakpoint hit 2 (timeout)
...
I can reproduce it reliably by adding a "sleep (1)" before the "do_wait = 1"
in the .c file.
The timeout happens as follows:
- with the inferior stopped at main, gdb continues (command c 1&)
- the inferior hits the breakpoint at foo
- gdb continues (using the repeat command functionality)
- the inferior is interrupted
- inferior variable do_wait gets set to 0. The assumption here is that the
inferior has progressed enough that do_wait is set to 1 at that point, but
that happens not to be the case. Consequently, this has no effect.
- gdb continues
- the inferior sets do_wait to 1
- the inferior enters the wait function, and waits for do_wait to become 0,
which never happens.
Fix this by moving the "do_wait = 1" to before the first call to foo.
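For reference, a simplified sketch of the inferior-side pattern (not the real
bg-execution-repeat.c source) showing why the move helps:
...
volatile int do_wait;
static int
foo (void)
{
  return 0; /* set break here */
}
int
main (void)
{
  do_wait = 1;   /* moved before the first call to foo, so the later
                    "set var do_wait=0" from gdb cannot be lost  */
  foo ();
  while (do_wait)
    ;            /* gdb clears do_wait with "set var do_wait=0"  */
  foo ();
  return 0;
}
...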
Tested on x86_64-linux.
Reviewed-By: Alexandra Petlanova Hajkova <ahajkova@redhat.com>
|
|
* rescoff.c (read_coff_res_dir): Add more sanity checking.
Tidy and correct existing checks.
|
|
Also remove unnecessary casts on memory alloc function returns.
|
|
Commit 9e68cae4fdfb broke the check I added in commit 4846e543de95.
Add missing "return NULL".
|
|
|
|
This changes the Ada symbol cache to use gdb::unordered_set rather
than an htab.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
This patch changes decoded_names_store to use a gdb::string_set rather
than an htab.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
On s390x-linux, with test-case gdb.ada/overloads.exp and gcc 7.5.0 I run into:
...
(gdb) print Oload(CA)^M
Could not find a match for oload^M
(gdb) FAIL: $exp: print Oload(CA)
...
The mismatch happens here in ada_type_match:
...
return ftype->code () == atype->code ();
...
with:
...
(gdb) p ftype->code ()
$3 = TYPE_CODE_TYPEDEF
(gdb) p atype->code ()
$4 = TYPE_CODE_ARRAY
...
At the start of ada_type_match, typedefs are skipped:
...
ftype = ada_check_typedef (ftype);
atype = ada_check_typedef (atype);
...
but immediately after this, refs are skipped:
...
if (ftype->code () == TYPE_CODE_REF)
ftype = ftype->target_type ();
if (atype->code () == TYPE_CODE_REF)
atype = atype->target_type ();
...
which in this case makes ftype a typedef.
Fix this by using ada_check_typedef after skipping the refs as well.
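In other words, the checks end up ordered like this (simplified sketch, not
necessarily the literal patch):
...
ftype = ada_check_typedef (ftype);
atype = ada_check_typedef (atype);
if (ftype->code () == TYPE_CODE_REF)
  ftype = ada_check_typedef (ftype->target_type ());
if (atype->code () == TYPE_CODE_REF)
  atype = ada_check_typedef (atype->target_type ());
...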
Tested on x86_64-linux and s390x-linux.
Approved-By: Tom Tromey <tom@tromey.com>
PR ada/32409
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32409
|
|
On s390x-linux, with test-case gdb.ada/scalar_storage.exp we have:
...
(gdb) print V_LE^M
$1 = (value => 126, another_value => 12, color => 3)^M
(gdb) FAIL: gdb.ada/scalar_storage.exp: print V_LE
print V_BE^M
$2 = (value => 125, another_value => 9, color => green)^M
(gdb) KFAIL: $exp: print V_BE (PRMS: DW_AT_endianity on enum types)
...
The KFAIL is incorrect in the sense that gdb is behaving as expected.
The problem is incorrect debug info, so change this into an xfail.
Furthermore, extend the xfail to cover V_LE.
Tested on s390x-linux and x86_64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
PR testsuite/32875
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32875
|
|
When compiling with -gsplit-dwarf -fdebug-types-section, DWARF 5
.debug_info.dwo sections may contain some type units:
$ llvm-dwarfdump -F -color a-test.dwo | head -n 5
a-test.dwo: file format elf64-x86-64
.debug_info.dwo contents:
0x00000000: Type Unit: length = 0x000008a0, format = DWARF32, version = 0x0005, unit_type = DW_UT_split_type, abbr_offset = 0x0000, addr_size = 0x08, name = 'vector<int, std::allocator<int> >', type_signature = 0xb499dcf29e2928c4, type_offset = 0x0023 (next unit at 0x000008a4)
In this case, create_dwo_cus_hash_table wrongly creates a dwo_unit for
each such type unit and adds it to dwo_file::cus.
create_dwo_debug_type_hash_table later correctly creates a dwo_unit that
it puts in dwo_file::tus.
This can be observed with:
$ ./gdb -nx -q --data-directory=data-directory -ex 'maint set dwarf sync on' -ex "maint set worker-threads 0" -ex "set debug dwarf-read 2" -ex "file a.out" -batch
...
[dwarf-read] create_dwo_cus_hash_table: Reading .debug_info.dwo for /home/smarchi/build/binutils-gdb/gdb/a-test.dwo:
[dwarf-read] create_dwo_cus_hash_table: offset 0x0, dwo_id 0xb499dcf29e2928c4
[dwarf-read] create_dwo_cus_hash_table: offset 0x8a4, dwo_id 0x496a8791a842701b
[dwarf-read] create_dwo_cus_hash_table: offset 0x941, dwo_id 0xefd13b3f62ea9fea
...
[dwarf-read] create_dwo_debug_type_hash_table: Reading .debug_info.dwo for /home/smarchi/build/binutils-gdb/gdb/a-test.dwo
[dwarf-read] create_dwo_debug_type_hash_table: offset 0x0, signature 0xb499dcf29e2928c4
[dwarf-read] create_dwo_debug_type_hash_table: offset 0x8a4, signature 0x496a8791a842701b
[dwarf-read] create_dwo_debug_type_hash_table: offset 0x941, signature 0xefd13b3f62ea9fea
...
Fix it by skipping anything that isn't a compile unit in
create_dwo_cus_hash_table. After this patch, the debug output of
create_dwo_cus_hash_table only shows one created dwo_unit, as we expect.
I couldn't find any user-visible problem related to this; I just noticed
it while debugging.
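A hedged sketch of the check (names are illustrative, not the exact gdb
code); the DW_UT_* constants are the DWARF 5 unit types:
  /* While scanning .debug_info.dwo, only create a dwo_unit for compile
     units; type units are handled by create_dwo_debug_type_hash_table.  */
  int unit_type = read_unit_type_from_header (info_ptr);  /* hypothetical helper  */
  if (unit_type != DW_UT_compile && unit_type != DW_UT_split_compile)
    continue;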
Change-Id: I7dddf766fe1164123b6702027b1beb56114f25b1
Reviewed-By: Tom de Vries <tdevries@suse.de>
|
|
Rename some functions to make it clearer that they are only relevant
when dealing with DWO files.
Change-Id: Ia0cd3320bf16ebdbdc3c09d7963f372e6679ef7c
Reviewed-By: Tom de Vries <tdevries@suse.de>
|
|
The flag already exists, but it has not been exposed to the user.
Signed-off-by: Marek Pikuła <m.pikula@partner.samsung.com>
|
|
|
|
Fix commit c4fce3ef2927 brace position.
|
|
windres code has the habit of exiting on any error. That's not so
bad, but it does make oss-fuzz ineffective when testing windres. Fix
many places that print errors and exit to instead print the error and
pass status up the call chain. In the process of doing this, I
noticed write_res_file was calling bfd_close without checking return
status. Fixing that resulted in lots of testsuite failures. The
problem was a lack of bfd_set_format in windres_open_as_binary, which
leaves the output file as bfd_unknown format. As it happens this
doesn't make any difference in writing the output binary file, except
for the bfd_close return status.
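A hedged sketch of the two fixes (simplified, not the exact windres.c code):
  #include "bfd.h"
  /* Give the output BFD a real format so bfd_close can succeed, and let
     the caller report failure instead of exiting.  */
  static bfd *
  open_output_as_binary (const char *filename)
  {
    bfd *abfd = bfd_openw (filename, "binary");
    if (abfd == NULL || !bfd_set_format (abfd, bfd_object))
      return NULL;
    return abfd;
  }
  /* Check bfd_close's return status rather than ignoring it.  */
  static int
  write_done (bfd *abfd)
  {
    return bfd_close (abfd) ? 0 : 1;
  }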
|
|
oss-fuzz testcase manages to hit a buffer overflow. Sanity check
by passing the buffer length to bin_to_res_toolbar and ensuring reads
don't go off the end of the buffer.
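The shape of such a check might be (hypothetical helper, not the actual
resbin.c code):
  #include <stddef.h>
  /* Validate a read of WANT bytes at OFFSET against the buffer LENGTH
     that is now passed in alongside the data pointer.  */
  static int
  have_room (size_t offset, size_t want, size_t length)
  {
    return want <= length && offset <= length - want;
  }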
|
|
Add -fPIC when compiling the test, to fix complaints on some targets
about certain relocations not being valid for shared libraries.
|
|
There's currently no test for unwinding the SVE registers from a signal
frame, so add one.
Tested on aarch64-linux-gnu native.
Tested-By: Luis Machado <luis.machado@arm.com>
Approved-By: Luis Machado <luis.machado@arm.com>
|
|
While commit ef6379e16dd1 ("Set the default DLL chracteristics to 0 for
Cygwin based targets") tried to undo the too broad earlier 514b4e191d5f
("Change the default characteristics of DLLs built by the linker to more
secure settings"), it didn't go quite far enough. Apparently the
assumption was that if it's not MinGW, it must be Cygwin. Whether it
really is okay to default three of the flags to non-zero on MinGW also
remains unclear - sadly neither of the commits came with any description
whatsoever. (Documentation also wasn't updated to indicate the restored
default.)
Setting effectively any of the DLL characteristics flags depends on
properties of the binary being linked. While defaulting to "more secure"
is a fair goal, it's only the programmer who can know whether their code
is actually compatible with the respective settings. On the assumption
that the change of defaults was indeed deliberate (and justifiable) for
MinGW, limit them to just that. In particular, don't default any of the
flags to being set for non-MinGW, non-Cygwin targets either, e.g. UEFI. At
least the mere applicability of the high-entropy-VA bit is pretty
questionable there in the first place - UEFI applications, after all,
run in "physical mode", i.e. either unpaged or (where paging is a
requirement, like for x86-64) direct-mapped.
The situation is particularly problematic with NX-compat: Many UEFI
implementations respect the "physical mode" property, where permissions
can't be enforced anyway. Some, like reportedly OVMF, even have a build
option to behave either way. Hence successfully testing a UEFI binary on
any number of systems does not guarantee it won't crash elsewhere if the
flag is wrongly set.
Get rid of excess semicolons as well.
|
|
There's no reason not to do as the comment says, just like all other
architectures do when they need a custom field: call the allocation
method of the "superclass", which is the ELF one, of which in turn the
BFD one is the "superclass"; _bfd_elf_link_hash_newfunc() deals with
both accordingly.
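The usual pattern meant here looks roughly like this (backend names are
placeholders):
  #include "sysdep.h"
  #include "bfd.h"
  #include "elf-bfd.h"
  struct xxx_elf_link_hash_entry
  {
    struct elf_link_hash_entry root;
    int custom_field;              /* whatever the backend needs  */
  };
  static struct bfd_hash_entry *
  xxx_elf_link_hash_newfunc (struct bfd_hash_entry *entry,
                             struct bfd_hash_table *table,
                             const char *string)
  {
    if (entry == NULL)
      {
        entry = (struct bfd_hash_entry *)
          bfd_hash_allocate (table, sizeof (struct xxx_elf_link_hash_entry));
        if (entry == NULL)
          return NULL;
      }
    /* Let the ELF "superclass" (and, through it, the BFD one) initialise
       its part, then set up the backend-specific field.  */
    entry = _bfd_elf_link_hash_newfunc (entry, table, string);
    if (entry != NULL)
      ((struct xxx_elf_link_hash_entry *) entry)->custom_field = 0;
    return entry;
  }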
|
|
The need for this has disappeared with c65c21e1ffd1 ("various i386-aout
and i386-coff target removal"), with a few other users having got
removed just a few days earlier; avoid the unnecessary indirection.
|
|
While COFF, unlike ELF, doesn't have a generic way to express symbol
size, there is a means to do so for functions. When inputs are ELF,
propagate function sizes, including the fact that a symbol denotes a
function, to the output's symbol table.
Note that this requires hackery (cross-object-format processing) in two
places: when linking, global symbols are entered into a global hash
table, and hence the relevant information needs to be updated there in
that case; otherwise the original symbol structures can be consulted.
For the setting of ->u.syment.n_type, the later writing of the field to
literal 0 needs to be dropped from coff_write_alien_symbol(). It was
redundant anyway with an earlier write of the field using C_NUL.
|
|
... to be all in one group. This helps code generation for code like
|| is_cpu (&i.tm, CpuPadLock)
|| is_cpu (&i.tm, CpuPadLockRNG2)
|| is_cpu (&i.tm, CpuPadLockPHE2)
|| is_cpu (&i.tm, CpuPadLockXMODX)
that we have (effectively) twice.
|
|
Before the change, gdb crashes with the message:
(gdb) p 1 == { }
Fatal signal: Segmentation fault
After the fix in this commit, gdb shows the following message:
(gdb) p 1 == { }
size of the array element must not be zero
Add new test cases to gdb.base/printcmds.exp to test this change.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The cmd_list_element::doc variable must be non-nullptr, otherwise, in
`help_cmd` (cli/cli-decode.c), we will trigger an assert when we run
one of these lines:
gdb_puts (c->doc, stream);
or,
gdb_puts (alias->doc, stream);
as gdb_puts requires that the first argument (the doc string) be
non-nullptr.
Better, I think, to assert when the cmd_list_element is created,
rather than catching an assert later when 'help CMD' is used.
I only ran into this case when messing with the Python API command
creation code: I accidentally created a command with a nullptr doc
string, and only found out when I ran 'help CMD' and hit the
assertion.
While I'm adding this assertion, I figure I should also assert that
the command name is not nullptr too. Looking through cli-decode.c,
there seems to be plenty of places where we assume a non-nullptr name.
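A hedged sketch of where the checks could go (simplified constructor; the
real cmd_list_element has many more members):
  cmd_list_element (const char *name_, enum command_class theclass_,
                    const char *doc_)
    : name (name_), theclass (theclass_), doc (doc_)
  {
    gdb_assert (name_ != nullptr);
    gdb_assert (doc_ != nullptr);
  }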
Built and tested on x86-64 GNU/Linux with an all-targets build; I
don't see any regressions, so (I hope) there are no commands that
currently violate this assertion.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
|
|
These LA32R instructions are in fact special cases of the LA32S/LA64
rdtime{l,h}.w (with only one output operand instead of two, the other
one being forced to $zero), but are named differently in the LA32R
ISA manual nevertheless.
As the LA32R names are more memorable to a degree (especially for those
having difficulties remembering which operand corresponds to the node
ID), support them by making them aliases of the corresponding LA32S/LA64
instruction respectively, and make them render as such in disassembly.
Signed-off-by: WANG Xuerui <git@xen0n.name>
|
|
|
|
|
|
|
|
But PR gdb/30126 highlights a case where GDB emits a large number of
warnings like:
warning: Can't open file /anon_hugepage (deleted) during file-backed mapping note processing
warning: Can't open file /dev/shm/PostgreSQL.1150234652 during file-backed mapping note processing
warning: Can't open file /dev/shm/PostgreSQL.535700290 during file-backed mapping note processing
warning: Can't open file /SYSV604b7d00 (deleted) during file-backed mapping note processing
... etc ...
when opening a core file. This commit aims to avoid at least some of
these warnings.
What we know is that, for at least some of these cases (e.g. the
'(deleted)' mappings), the content of the mapping will have been
written into the core file itself. As such, the fact that the file
isn't available ('/SYSV604b7d00' at least is a shared memory mapping)
isn't really relevant; GDB can still provide access to the mapping by
reading the content from the core file itself.
What I propose is that, when processing the file backed mappings, if
all of the mappings for a file are covered by segments within the core
file itself, then there is no need to warn the user that the file
can't be opened again. The debug experience should be unchanged, as
GDB would have read from the in-core mapping anyway.
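A hedged sketch of the proposed check (helper names are hypothetical):
  bool all_in_core = true;
  for (const auto &m : mappings_of (filename))        /* hypothetical */
    if (!core_has_segment_covering (m.start, m.end))  /* hypothetical */
      {
        all_in_core = false;
        break;
      }
  if (!all_in_core)
    warning (_("Can't open file %s during file-backed mapping note "
               "processing"), filename);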
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30126
|
|
Editing maint.h with clangd shows some errors about obj_section and
objfile being unknown. Add some forward declarations for them.
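That is, something along these lines near the top of maint.h:
  struct objfile;
  struct obj_section;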
Change-Id: Ic4dd12a371198fdf740892254a8f2c1fae2846b9
|
|
In PR gdb/31846 the user reported an issue where GDB is unable to find
the build-id within an ELF, despite the build-id being
present (confirmed using readelf).
The user was able to try several different builds of GDB, and in one
build they observed the warning:
warning: BFD: FILENAME: unable to decompress section .debug_info
But in many other builds of GDB this warning was missing.
There are, I think, a couple of issues that the user is running into,
but this patch is about why the above warning is often missing from
GDB's output.
I wasn't able to reproduce a corrupted .debug_info section such that
the above warning would be triggered, but it is pretty easy to patch
the _bfd_elf_make_section_from_shdr function (in bfd/elf.c) such that
the call to bfd_init_section_decompress_status is reported as a
failure, thus triggering the warning. There is a patch that achieves
this in the bug report.
I did this, and can confirm that on my build of GDB, I don't see the
above warning, even though I can confirm that the _bfd_error_handler
call (in _bfd_elf_make_section_from_shdr) is being reached.
The problem is back in format.c, in bfd_check_format_matches. This
function intercepts all the warnings and places them into a
per_xvec_messages structure. These warnings are then printed with a
call to print_and_clear_messages.
If bfd_check_format_matches finds a single matching format, then
print_and_clear_messages will print all warnings associated with that
single format.
But if no format matches, print_and_clear_messages will print all the
warnings, so long as all targets have emitted the same set of
warnings, and unfortunately, that is not the case for me.
The warnings are collected by iterating over bfd_target_vector and
trying each target. My target happens to be x86_64_elf64_vec, and, as
expected, this target appears in bfd_target_vector.
However, bfd_target_vector also includes DEFAULT_VECTOR near the top.
And in my build, DEFAULT_VECTOR is x86_64_elf64_vec. Thus, for me,
the x86_64_elf64_vec entry appears twice in bfd_target_vector, which
means that x86_64_elf64_vec ends up being tried twice, and, as each
try generates one warning, the x86_64_elf64_vec entry in the
per_xvec_messages structure has two warnings, while the other
per_xvec_messages entries only have one copy of the warning.
Because of this difference, print_and_clear_messages decides not to
print any of the warnings, which is not very helpful.
I considered a few different approaches to fix this issue:
We could de-duplicate warnings in the per_xvec_messages structure as
new entries are added. So for any particular xvec, each time a new
warning arrives, if the new warning is identical to an existing
warning, then don't record it. This might be an interesting change in
the future, but for now I rejected this solution as it felt like a
bodge: the duplicate warnings aren't really from a single attempt at
an xvec, but from two distinct attempts at the same xvec. And so:
I wondered if we could remove the duplicate entries from
bfd_target_vector. Or if we could avoid processing the same xvec
twice maybe? For the single DEFAULT_VECTOR this wouldn't be too hard
to do, but bfd_target_vector also includes SELECT_VECS, which I think
could contain more duplicates. Changing bfd_check_format_matches to
avoid attempting any duplicate vectors would now require more
complexity than a single flag, and I felt there was an easier
solution, which was:
I propose that within bfd_check_format_matches, within the loop that
tries each entry from bfd_target_vector, as we switch to each vector
in turn, we should delete any existing warnings within the
per_xvec_messages structure for the target vector we are about to try.
This means that, if we repeat a target, only the last set of warnings
will survive.
With this change in place, print_and_clear_messages now sees the same
set of warnings for each target, and so prints out the warning
message.
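A hedged sketch of that idea (helper name is hypothetical; the real
per_xvec_messages handling in bfd/format.c differs in detail):
  for (const bfd_target *const *target = bfd_target_vector; *target != NULL;
       target++)
    {
      /* A vector that appears twice (e.g. also as DEFAULT_VECTOR) only
         keeps the warnings from its last attempt.  */
      clear_messages_for_xvec (&messages, *target);   /* hypothetical */
      /* ... try *target as before ...  */
    }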
Additionally, while I was investigating this issue I managed to call
print_and_clear_messages twice. This caused a crash because the first
call to print_and_clear_messages frees all the associated memory, but
leaves the per_xvec_messages::next field pointing to the now
deallocated object. I'd like to propose that we set the next field to
NULL in print_and_clear_messages. This clearly isn't needed so long
as print_and_clear_messages is only called once, but (personally) I
like to set pointers back to NULL if the object they are pointing to
is freed and the parent object is going to live for some additional
time. I can drop this extra change if people don't like it.
This change doesn't really "fix" PR gdb/31846, but it does mean that
the warning about being unable to decompress .debug_info should now be
printed consistently, which is a good thing.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31846
Reviewed-By: Alan Modra <amodra@gmail.com>
|