|
This patch allows Value.format_string() to return output in nibbles.
When the nibbles parameter is set to True, binary values are displayed
in groups of four bits.
Here's an example:
(gdb) py print (gdb.Value (1230).format_string (format='t', nibbles=True))
0100 1100 1110
(gdb)
Note that the nibbles parameter only has an effect if format='t' is also used.
This patch also includes updates to the relevant testcase and
documentation.
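For illustration, a minimal Python sketch of the new parameter, mirroring the
session above (running inside GDB, where the gdb module is available):
  import gdb
  # Sketch only; relies on the nibbles parameter added by this patch.
  val = gdb.Value(1230)
  print(val.format_string(format='t'))                # 10011001110
  print(val.format_string(format='t', nibbles=True))  # 0100 1100 1110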
Tested on x86_64 openSUSE Tumbleweed.
|
|
Introduce a new print setting that can be set by 'set print nibbles
[on|off]'. The default value is off, and it can be changed manually by
the user. Of course, 'show print nibbles' is also included in the patch.
The new feature displays binary values by group, with four bits per
group. The motivation for this work is to enhance the readability of
binary values.
Here's a GDB session before this patch is applied.
(gdb) print var_a
$1 = 1230
(gdb) print/t var_a
$2 = 10011001110
With this patch applied, we can use the new print setting to display the
new form of the binary values.
(gdb) print var_a
$1 = 1230
(gdb) print/t var_a
$2 = 10011001110
(gdb) set print nibbles on
(gdb) print/t var_a
$3 = 0100 1100 1110
Tested on x86_64 openSUSE Tumbleweed.
|
|
When testing on openSUSE Leap 15.4 I ran into this FAIL:
...
FAIL: gdb.arch/i386-mpx-map.exp: NULL address of the pointer
...
and likewise for all the other mpx tests.
The problem is that have_mpx is supposed to return 0, but it doesn't because
it tries to match this output:
...
builtin_spawn -ignore SIGHUP temp/20294/have_mpx-2-20294.x^M
No MPX support^M
No MPX support^M
...
using:
...
&& ![string equal $output "No MPX support\r\n"]]
...
Fix this by matching using a regexp instead.
Tested on x86_64-linux.
|
|
This commit extends the Python API to include disassembler support.
The motivation for this commit was to provide an API by which the user
could write Python scripts that would augment the output of the
disassembler.
To achieve this I have followed the model of the existing libopcodes
disassembler, that is, instructions are disassembled one by one. This
does restrict the type of things that it is possible to do from a
Python script, i.e. all additional output has to fit on a single line,
but this was all I needed, and creating something more complex would,
I think, require greater changes to how GDB's internal disassembler
operates.
The disassembler API is contained in the new gdb.disassembler module,
which defines the following classes:
DisassembleInfo
Similar to the libopcodes disassemble_info structure, this has the
read-only properties address, architecture, and progspace, and the
methods __init__, read_memory, and is_valid.
Each time GDB wants an instruction disassembled, an instance of
this class is passed to a user-written disassembler function; by
reading the properties and calling the methods (and other support
methods in the gdb.disassembler module) the user can perform and
return the disassembly.
Disassembler
This is a base class from which user-written disassemblers should
inherit. It provides default implementations of __init__ and
__call__, which the user-written disassembler should override.
DisassemblerResult
This class can be used to hold the result of a call to the
disassembler; it's really just a wrapper around a string (the text
of the disassembled instruction) and a length (in bytes). The user
can return an instance of this class from Disassembler.__call__ to
represent the newly disassembled instruction.
The gdb.disassembler module also provides the following functions:
register_disassembler
This function registers an instance of a Disassembler sub-class
as a disassembler, either for one specific architecture or as a
global disassembler for all architectures.
builtin_disassemble
This provides access to GDB's builtin disassembler. A common
use case that I see is augmenting the existing disassembler output.
The user code can call this function to have GDB disassemble the
instruction in the normal way. The user gets back a
DisassemblerResult object, which they can then read in order to
augment the disassembler output in any way they wish.
This function also provides a mechanism to intercept the
disassembler's reads of memory, so the user can adjust what GDB
sees when it is disassembling.
The included documentation provides a more detailed description of the
API.
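As a rough sketch of how these pieces fit together, a user-defined
disassembler that appends a marker to each instruction might look like the
following (the class name and marker text are made up for illustration; the
constructor signatures follow the description above and the documentation):
  import gdb
  import gdb.disassembler

  class ExampleDisassembler(gdb.disassembler.Disassembler):
      """Append a marker to every disassembled instruction."""

      def __init__(self):
          # The string is the name this disassembler is registered under.
          super().__init__("ExampleDisassembler")

      def __call__(self, info):
          # Let GDB's builtin disassembler do the real work ...
          result = gdb.disassembler.builtin_disassemble(info)
          # ... then augment its output and return a new result.
          return gdb.disassembler.DisassemblerResult(
              result.length, result.string + "\t## example")

  # Passing None registers this as a global disassembler for all architectures.
  gdb.disassembler.register_disassembler(ExampleDisassembler(), None)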
There is also a new CLI command added:
maint info python-disassemblers
This command is defined in the Python gdb.disassemblers module, and
can be used to list the currently registered Python disassemblers.
|
|
When running test-case gdb.python/py-mi-cmd.exp on openSUSE Leap 42.3 with
python 3.4, I occasionally run into:
...
Expecting: ^(-pycmd dct[^M
]+)?(\^done,result={hello="world",times="42"}[^M
]+[(]gdb[)] ^M
[ ]*)
-pycmd dct^M
^done,result={times="42",hello="world"}^M
(gdb) ^M
FAIL: gdb.python/py-mi-cmd.exp: -pycmd dct (unexpected output)
...
The problem is that the data type used here in py-mi-cmd.py:
...
elif argv[0] == "dct":
return {"result": {"hello": "world", "times": 42}}
...
is a dictionary, and only starting with version 3.6 are dictionaries
insertion-ordered, so using PyDict_Next in serialize_mi_result doesn't
guarantee a fixed order.
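As a small illustration of the underlying Python behaviour (a standalone
snippet, not part of the patch; the dict literal mirrors the one in
py-mi-cmd.py):
  # On Python < 3.6 the iteration order of this dict is not guaranteed, so
  # serialize_mi_result may emit the keys in either order.
  d = {"hello": "world", "times": 42}
  print(list(d.items()))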
Fix this by allowing the alternative order.
Tested on x86_64-linux.
|
|
PR gdb/17160 points out that "interrupt -a" errors in all-stop mode,
but there's no good reason for this. This patch removes the error.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=17160
|
|
With gcc-12 and target board unix/-m32, we run into:
...
(gdb) ^M
Expecting: ^(-var-create A_String_Access \* A_String_Access[^M
]+)?(\^done,name="A_String_Access",numchild="1",.*[^M
]+[(]gdb[)] ^M
[ ]*)
-var-create A_String_Access * A_String_Access^M
^error,msg="Value out of range."^M
(gdb) ^M
FAIL: gdb.ada/mi_var_access.exp: Create varobj (unexpected output)
...
What happens is easier to understand if we take things out of the mi context:
...
$ gdb -q -batch \
outputs/gdb.ada/mi_var_access/mi_access \
-ex "b mi_access.adb:19" \
-ex run \
-ex "p A_String_Access"
...
Breakpoint 1, mi_access () at mi_access.adb:19
19 A_String : String (3 .. 5) := "345"; -- STOP
$1 = (pck.string_access) <error reading variable: Value out of range.>
...
while with target board unix we have instead:
...
$1 = (pck.string_access) 0x431b40 <ada_main.sec_default_sized_stacks>
...
The var-create command samples the value of the variable at a location where
the variable is not yet initialized, and with target board unix we
accidentally hit a valid address, but with target board unix/-m32 that's not
the case.
Fix the FAIL by accepting the error message.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28464
|
|
Starting with (future) Clang 15 (since
https://reviews.llvm.org/D120989), Clang emits the DWARF information
of global alias variables as DW_TAG_imported_declaration. However,
GDB does not handle this; it incorrectly always reads this tag as a
C++/Fortran imported declaration (type alias, namespace alias or
Fortran module). This commit adds support for handling this tag as an
alias variable.
This change fixes the failures in the gdb.base/symbol-alias.exp
testcase with current git Clang. This testcase is also updated to
test nested (recursive) aliases.
|
|
When running test-case gdb.reverse/test_ioctl_TCSETSW.exp with glibc debuginfo
installed, I run into:
...
(gdb) PASS: gdb.reverse/test_ioctl_TCSETSW.exp: at TCSETSW call
step^M
__tcsetattr (fd=0, optional_actions=1, termios_p=0x7fffffffcf50) at \
../sysdeps/unix/sysv/linux/tcsetattr.c:45^M
45 {^M
(gdb) FAIL: gdb.reverse/test_ioctl_TCSETSW.exp: handle TCSETSW
...
The problem is that the step is expected to step over the call to tcsetattr,
but due to glibc debuginfo being installed, we step into the call.
Fix this by using next instead of step.
Tested on x86_64-linux.
|
|
On openSUSE Leap 42.3 with python 3.4, I run into:
...
(gdb) python import pygments^M
Traceback (most recent call last):^M
File "<string>", line 1, in <module>^M
ImportError: No module named 'pygments'^M
Error while executing Python code.^M
(gdb) FAIL: gdb.base/style.exp: python import pygments
ERROR: unexpected output from python import
...
because gdb_py_module_available doesn't handle the single quotes around the
module name in the ImportError.
Fix this by allowing the single quotes.
Tested on x86_64-linux.
|
|
The if statement in case gdb_sys_ioctl in function
record_linux_system_call in file gdb/linux-record.c is as follows:
if (tmpulongest == tdep->ioctl_FIOCLEX
|| tmpulongest == tdep->ioctl_FIONCLEX
....
|| tmpulongest == tdep->ioctl_TCSETSW
...
}
The PowerPC ioctl value for ioctl_TCSETSW is 0x802c7415. The variable
ioctl_TCSETSW is defined in gdb/linux-record.h as an int. The TCSETSW value
has the MSB set to one, so it is a negative integer. The comparison of the
unsigned long value tmpulongest to a negative integer value for
ioctl_TCSETSW therefore fails.
This patch changes the declarations for the ioctl_* values in struct
linux_record_tdep to unsigned long to fix the comparisons between
tmpulongest and the tdep->ioctl_* values.
An additional test, gdb.reverse/test_ioctl_TCSETSW.exp, is added to verify
that the if statement in gdb's record_linux_system_call() succeeds for the
TCSETSW ioctl.
This patch has been tested on Power 10 and Intel with no test failures.
|
|
Since pretty much forever the get_compiler_info function has included
these lines:
# Most compilers will evaluate comparisons and other boolean
# operations to 0 or 1.
uplevel \#0 { set true 1 }
uplevel \#0 { set false 0 }
These define global variables true (to 1) and false (to 0).
It seems odd to me that these globals are defined in get_compiler_info;
I guess the original thinking was that if a compiler had different
true/false values then we would detect it there and define true/false
differently.
I don't think we should be bundling this logic into get_compiler_info;
it seems weird to me that in order to use $true/$false a user needs to
first call get_compiler_info.
It would be better, I think, if each test script that wants these
variables just defined them itself. If in the future we did need
different true/false values based on compiler version then we'd just
do:
if { [test_compiler_info "some_pattern"] } {
# Defined true/false one way...
} else {
# Defined true/false another way...
}
But given the current true/false definitions have been in place since
at least 1999, I suspect this will not be needed any time soon.
Given that the definitions of true/false are so simple, right now my
suggestion is just to define them in each test script that wants
them (there's not that many). If we ever did need more complex logic
then we can always add a function in gdb.exp that sets up these
globals, but that seems overkill for now.
There should be no change in what is tested after this commit.
|
|
This variable is useful when exercising AArch64 multi-arch support (debugging
32-bit AArch32 executables).
Unfortunately it isn't well documented. This patch adds information about it
and explains how to use it.
|
|
With gcc-12, I get for test-case gdb.base/vla-struct-fields.exp:
...
(gdb) print inner_vla_struct_object_size == sizeof(inner_vla_struct_object)^M
$7 = 1^M
(gdb) XPASS: gdb.base/vla-struct-fields.exp: size of inner_vla_struct_object
...
Fix this by limiting the xfailing to gcc-11 and earlier. Also, limit the
xfailing to the equality test.
Tested on x86_64-linux.
|
|
On openSUSE Tumbleweed with gcc-12, I run into a timeout:
...
(gdb) print value^M
Multiple matches for value^M
[0] cancel^M
[1] ada.strings.maps.value (<ref> ada.strings.maps.character_mapping; \
character) return character at a-strmap.adb:599^M
[2] pck.value at src/gdb/testsuite/gdb.ada/ghost/pck.ads:17^M
[3] system.object_reader.value (<ref> system.object_reader.object_symbol) \
return system.object_reader.uint64 at s-objrea.adb:2279^M
[4] system.traceback.symbolic.value (system.address) return string at \
s-trasym.adb:200^M
> FAIL: gdb.ada/ghost.exp: print value (timeout)
print ghost_value^M
Argument must be choice number^M
(gdb) FAIL: gdb.ada/ghost.exp: print ghost_value
...
Fix this by prefixing value (as well as the other printed values) with the
package name:
...
(gdb) print pck.value^M
...
Tested on x86_64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29055
|
|
The previous patch that introduced the arm_cc_for_target procedure
moved the ARM_CC_FOR_TARGET global check to that procedure, but forgot
to tell tcl that ARM_CC_FOR_TARGET is a global. As a result,
specifying ARM_CC_FOR_TARGET on the command line actually does
nothing. This fixes it.
Change-Id: I4e33b7633fa665e2f7b8f8c9592a949d74a19153
|
|
After this commit:
commit 44d469c5f85a4243462b8966722dafa62b602bf5
Date: Tue May 31 16:43:44 2022 +0200
gdb/testsuite: add Fortran compiler identification to GDB
Some regressions were noticed:
https://sourceware.org/pipermail/gdb-patches/2022-May/189673.html
The problem is associated with how compiler_info variable is cached
between calls to get_compiler_info.
Even before the above commit, get_compiler_info supported two
languages, C and C++. Calling get_compiler_info would set the global
compiler_info based on the language passed as an argument to
get_compiler_info, and, in theory, compiler_info would not be updated
for the rest of the dejagnu run.
This obviously is slightly broken behaviour. If the first call to
get_compiler_info was for the C++ language then compiler_info would be
set based on the C++ compiler in use, while if the first call to
get_compiler_info was for the C language then compiler_info would be
set based on the C compiler.
This probably wasn't very noticeable; assuming a GCC-based test
environment, in most cases the C and C++ compiler would be the
same version.
However, if the user started playing with CC_FOR_TARGET or
CXX_FOR_TARGET, then they might not get the behaviour they expect.
Except, to make matters worse, most of the time the user probably
would get the behaviour they expected... except when they didn't!
I'll explain:
In gdb.exp we try to avoid global variables leaking between test
scripts, this is done with the help of the two procs
gdb_setup_known_globals and gdb_cleanup_globals. All known globals
are recorded before a test script starts, and then, when the test
script ends, any new globals are deleted.
Normally, compiler_info is only set as a result of a test script
calling get_compiler_info or test_compiler_info. This means that the
compiler_info global will not exist when the test script starts, but
will exist when the test script ends, and so the compiler_info
variable is deleted at the end of each test script.
This means that, in reality, the compiler_info is recalculated once
for each test script, hence, if a test script just checks on the C
compiler, or just checks on the C++ compiler, then compiler_info will
be correct and the user will get the behaviour they expect.
However, if a single test script tries to check both the C and C++
compiler versions then this will not work (even before the above
commit).
The situation is made worse by the behaviour of the load_lib proc.
This proc (provided by dejagnu) will only load each library once.
This means that if a library defines a global, then this global would
normally be deleted at the end of the first test script that includes
the library.
As future attempts to load the library will not actually reload it,
the global will not be redefined and will be missing for later
test scripts that also try to load that library.
To work around this issue we override load_lib in gdb.exp, this new
version adds all globals from the newly loaded library to the list of
globals that should be preserved (not deleted).
And this is where things get interesting for us. The library
trace-support.exp includes calls, at the file scope, to things like
is_amd64_regs_target, which cause get_compiler_info to be called.
This means that after loading the library the compiler_info global is
defined.
Our override of load_lib then decides that this new global has to be
preserved, and adds it to the gdb_persistent_globals array.
From that point on compiler_info will never be recomputed!
This commit addresses all the caching problems by doing the following:
Change the compiler_info global into compiler_info_cache global. This
new global is an array, the keys of this array will be each of the
supported languages, and the values will be the compiler version for
that language.
Now, when we call get_compiler_info, if the compiler information for
the specific language has not been computed, then we do that, and add
it to the cache.
Next, compiler_info_cache is defined by calling
gdb_persistent_global. This automatically adds the global to the list
of persistent globals. Now the cache will not be deleted at the end
of each test script.
This means that, for a single test run, we will compute the compiler
version just once for each language, this result will then be cached
between test scripts.
Finally, the legacy 'gcc_compiled' flag is now only set when we call
get_compiler_info with the language 'c'. Without making this change
the value of 'gcc_compiled' would change each time a new language is
passed to get_compiler_info. If the last language was e.g. Fortran,
then gcc_compiled might be left false.
|
|
Now that get_compiler_info might actually fail (if the language is not
handled), we should try to handle this failure better in
test_compiler_info.
After this commit, if get_compiler_info fails then we will return a
suitable result depending on how the user called test_compiler_info.
If the user does something like:
set version [test_compiler_info "" "unknown-language"]
Then test_compiler_info will return an empty string. My assumption is
that the user will be trying to match 'version' against something, and
the empty string hopefully will not match.
If the user does something like:
if { [test_compiler_info "some_pattern" "unknown-language"] } {
....
}
Then test_compiler_info will return false, which seems the obvious
choice.
There should be no change in the test results after this commit.
|
|
This commit is a minor cleanup for the two functions (in gdb.exp)
get_compiler_info and test_compiler_info.
Instead of using the empty string as the default language and just
"knowing" that this means the C language, make this explicit. The
language argument now defaults to "c" if not specified, and the if
chain in get_compiler_info that checks the language now explicitly
handles "c" and gives an error for unknown languages.
This is a good thing. Now that the API appears to take a language, if
somebody does:
test_compiler_info "xxxx" "rust"
to check the version of the rust compiler then we will now give an
error rather than just using the C compiler and leaving the user
having to figure out why they are not getting the results they
expect.
After a little grepping, I think the only place we were explicitly
passing the empty string to either get_compiler_info or
test_compiler_info was in gdb_compile_shlib_1, this is now changed to
pass "c" as the default language.
There should be no changes to the test results after this commit.
|
|
We don't need to call get_compiler_info before calling
test_compiler_info; test_compiler_info includes a call to
get_compiler_info.
This commit cleans up lib/gdb.exp and lib/dwarf.exp a little by
removing some unneeded calls to get_compiler_info. We could do the
same cleanup throughout the testsuite, but I'm leaving that for
another day.
There should be no change in the test results after this commit.
|
|
The procedure gcc_major_version previously used the global variable
compiler_info to retrieve gcc's major version. This is discouraged, and
(as can be read in a comment in compiler.c) compiler_info should be
local to get_compiler_info and test_compiler_info.
The preferred way of getting the compiler string is by calling
test_compiler_info without arguments. gcc_major_version was changed to
do that.
|
|
While running the gdb.threads/tls.exp test with a GDB configured
without Python, I noticed some duplicate test names.
This is caused by a call to skip_python_tests that is within a proc
that is called multiple times by the test script. Each call to
skip_python_tests results in a call to 'unsupported', and this causes
the duplicate test names.
After this commit we now call skip_python_tests just once and place
the result into a variable. Now, instead of calling skip_python_tests
multiple times, we just check the variable.
There should be no change in what is tested after this commit.
|
|
While testing on AArch64 I spotted a duplicate test name in the
gdb.base/gnu_vector.exp test.
This commit adds a 'with_test_prefix' to resolve the duplicate.
While I was in the area I updated a 'gdb_test_multiple' call to make
use of $gdb_test_name.
There should be no change in what is tested after this commit.
|
|
The following commit changes the output format for the isel instruction on
PowerPC.
commit dd4832bf3efc1bd1797a6b9188260692b8b0db52 Introduces error in test
Author: Dmitry Selyutin <ghostmansd@gmail.com>
Date: Tue May 24 13:46:35 2022 +0000
opcodes: introduce BC field; fix isel
Per Power ISA Version 3.1B 3.3.12, isel uses BC field rather than CRB
field present in binutils sources. Also, per 1.6.2, BC has the same
semantics as BA and BB fields, so this should keep the same flags and
mask, only with the different offset.
opcodes/
* ppc-opc.c
(BC): Define new field, with the same definition as CRB field,
but with the PPC_OPERAND_CR_BIT flag present.
gas/
* testsuite/gas/ppc/476.d: Update.
* testsuite/gas/ppc/a2.d: Update.
* testsuite/gas/ppc/e500.d: Update.
* testsuite/gas/ppc/power7.d: Update.
<snip>
--- a/gas/testsuite/gas/ppc/476.d
+++ b/gas/testsuite/gas/ppc/476.d
@@ -209,7 +209,7 @@ Disassembly of section \.text:
.*: (7c 20 07 8c|8c 07 20 7c) ici 1
.*: (7c 03 27 cc|cc 27 03 7c) icread r3,r4
.*: (50 83 65 36|36 65 83 50) rlwimi r3,r4,12,20,27
-.*: (7c 43 27 1e|1e 27 43 7c) isel r2,r3,r4,28
+.*: (7c 43 27 1e|1e 27 43 7c) isel r2,r3,r4,4\*cr7\+lt
The above change breaks the gdb regression test gdb.arch/powerpc-power7.exp
on Power 7, Power 8, Power 9 and Power 10.
This patch updates the regression test gdb.arch/powerpc-power7.exp with
the new expected output for the isel instruction.
The patch has been tested on Power 7 and Power 10 to verify the patch fixes
the test.
|
|
On Aarch64, you can set ARM_CC_FOR_TARGET to point to the 32-bit
compiler to use when testing gdb.multi/multi-arch.exp and
gdb.multi/multi-arch-exec.exp. If you don't set it, then those
testcases don't run.
I guess that approximately nobody remembers to set ARM_CC_FOR_TARGET.
This commit adds a fallback. If ARM_CC_FOR_TARGET is not set, and
testing for Linux, try arm-linux-gnueabi-gcc,
arm-none-linux-gnueabi-gcc, arm-linux-gnueabihf-gcc as 32-bit
compilers, making sure that the produced executable runs on the target
machine before claiming that the compiler produces useful executables.
Change-Id: Iefe5865d5fc84b4032eaff7f4c5c61582bf75c39
|
|
PR mi/14270 points out that mi-break.exp has some tests for an
unimplemented "-r" switch for "-break-insert". This switch was never
implemented, and is not documented -- though it is mentioned in a
comment in the documentation. This patch removes the test and the doc
comment.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=14270
|
|
In commit:
commit 51e8dbe1fbe7d8955589703140ca5eba7b4f1bd7
Date: Mon May 16 19:26:54 2022 +0100
gdb/python: improve formatting of help text for user defined commands
the test that was added (gdb.python/py-doc-reformat.exp) was missing a
call to skip_python_tests. As a result, this test would fail for any
GDB built without Python support.
This commit adds a call to skip_python_tests.
|
|
Make sure we error out on overflow instead of truncating in all cases.
Tested on x86_64-linux, with a build with --enable-targets=all.
|
|
Rewrite parse_number to use ULONGEST instead of LONGEST, to fix UB errors as
mentioned in PR29163.
Furthermore, make sure we error out on overflow instead of truncating in all
cases.
Tested on x86_64-linux, with a build with --enable-targets=all.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29163
|
|
Make sure we error out on overflow instead of truncating in all cases.
I've used as overflow string: "Integer literal is too large", based
on what I found at
<rust-lang/rust>/src/test/ui/parser/int-literal-too-large-span.rs
but perhaps someone has a better idea.
Tested on x86_64-linux, with a build with --enable-targets=all.
|
|
Make sure we error out on overflow instead of truncating in all cases.
The current implementation of parse_number contains a comment about PR16377,
but that's related to C-like languages. In absence of information of whether
the same fix is needed for pascal, take the conservative approach and keep
behaviour for decimals unchanged.
Tested on x86_64-linux, with a build with --enable-targets=all.
|
|
Make sure we error out on overflow instead of truncating in all cases.
The current implementation of parse_number contains a comment about PR16377,
but that's related to C-like languages. In absence of information of whether
the same fix is needed for go, take the conservative approach and keep
behaviour for decimals unchanged.
Tested on x86_64-linux, with a build with --enable-targets=all.
|
|
As mentioned in commit 5b758627a18 ("Make gdb.base/parse_number.exp test all
architectures"):
...
There might be a bug that 32-bit fortran truncates 64-bit values to
32-bit, given "p/x 0xffffffffffffffff" returns "0xffffffff".
...
More concretely, we have:
...
$ for arch in i386:x86-64 i386; do \
gdb -q -batch -ex "set arch $arch" -ex "set lang fortran" \
-ex "p /x 0xffffffffffffffff"; \
done
The target architecture is set to "i386:x86-64".
$1 = 0xffffffffffffffff
The target architecture is set to "i386".
$1 = 0xffffffff
...
Fix this by adding a range check in parse_number in gdb/f-exp.y.
Furthermore, make sure we error out on overflow instead of truncating in all
other cases.
Tested on x86_64-linux.
|
|
[ Assuming arch i386:x86-64, sizeof (int) == 4,
sizeof (long) == sizeof (long long) == 8. ]
Currently we have (decimal for 0x80000000):
...
(gdb) ptype 2147483648
type = unsigned int
...
According to C language rules, unsigned types cannot be used for decimal
constants, so the type should be long instead (reported in PR16377).
Fix this by making sure the type of 2147483648 is long.
The next interesting case is (decimal for 0x8000000000000000):
...
(gdb) ptype 9223372036854775808
type = unsigned long
...
According to the same rules, unsigned long is incorrect.
Current gcc uses __int128 as type, which is allowed, but we don't have that
available in gdb, so the strict response here would be erroring out with
overflow.
Older gcc without __int128 support, as well as clang, use an unsigned type, but
with a warning. Interestingly, clang uses "unsigned long long" while gcc uses
"unsigned long", which seems the better choice.
Given that the compilers allow this as a convenience, do the same in gdb
and keep type "unsigned long", and make this explicit in the parser and
test-case.
Furthermore, make sure we error out on overflow instead of truncating in all
cases.
Tested on x86_64-linux with --enable-targets=all.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=16377
|
|
Currently we only test the value 0xffffffffffffffff in test-case
gdb.base/parse_number.exp.
Test more interesting values, both in decimal and hex format, as well as
negative decimals for language modula-2.
Testing these values with all architectures would increase the total number
of tests from 15572 to 847448 (55 times more tests).
Balance out the increase in runtime by reducing the number of architectures
tested: only test one architecture per sizeof longlong/long/int/short
combination, while keeping the possibility intact to run with all
architectures (through setting a variable in the test-case).
The net result is a slight reduction of the total number of tests: 15572 -> 13853.
Document interesting cases in the expected results:
- wrapping from unsigned to signed
- truncation
- PR16377: using unsigned types to represent decimal constants in C
Running the test-case with a gdb build with -fsanitize=undefined, we trigger
two UB errors in the modula-2 parser, filed as PR29163.
Tested on x86_64-linux with --enable-targets=all.
|
|
On openSUSE Tumbleweed (with gcc-12, enabling ctf tests) I run into:
...
ERROR: tcl error sourcing src/gdb/testsuite/gdb.ctf/funcreturn.exp.
ERROR: tcl error code NONE
ERROR: Unexpected arguments: \
{print v_double_func} \
{[0-9]+ = {double \(\)} 0x[0-9a-z]+.*} \
{print double function} \
}
...
The problem is a curly brace as fourth argument to gdb_test, which errors out
due to recently introduced more strict argument checking in gdb_test.
Fix the error by removing the brace.
Though this fixes the error for me, due to PR29160 I get only FAILs, so I can't
claim proper testing on x86_64-linux.
|
|
When running test-case gdb.threads/manythreads.exp with check-read1, I ran
into this hard-to-reproduce FAIL:
...
[New Thread 0x7ffff7318700 (LWP 31125)]^M
[Thread 0x7ffff7321700 (LWP 31124) exited]^M
[New T^C^M
^M
Thread 769 "manythreads" received signal SIGINT, Interrupt.^M
[Switching to Thread 0x7ffff6d66700 (LWP 31287)]^M
0x00007ffff7586a81 in clone () from /lib64/libc.so.6^M
(gdb) FAIL: gdb.threads/manythreads.exp: stop threads 1
...
The matching in the failing gdb_test_multiple is done in an intricate way,
trying to pass on some order and fail on another order.
Fix this by rewriting the regexps to match one line at most, and detecting
invalid order by setting and checking state variables.
Tested on x86_64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29177
|
|
On openSUSE Tumbleweed with target board unix/-m32, I run into:
...
PASS: gdb.mi/mi-var-block.exp: step at do_block_test 2
Expecting: ^(-var-update \*[^M
]+)?(\^done,changelist=\[{name="foo",in_scope="true",type_changed="false",has_more="0"},
{name="cb",in_scope="true",type_changed="false",has_more="0"}\][^M
]+[(]gdb[)] ^M
[ ]*)
-var-update *^M
^done,changelist=[{name="foo",in_scope="true",type_changed="false",has_more="0"}]^M
(gdb) ^M
FAIL: gdb.mi/mi-var-block.exp: update all vars: cb foo changed (unexpected output)
...
The problem is that the test-case attempts to detect a change in the cb
variable caused by this initialization:
...
void
do_block_tests ()
{
int cb = 12;
...
but that only works if the stack location happens to be unequal to 12 before
the initialization.
Fix this by first initializing to 0, and then changing the value to 12:
...
- int cb = 12;
+ int cb = 0;
+ cb = 12;
...
and detecting that change.
Tested on x86_64-linux.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29195
|
|
This adds the gdb.current_language function, which can be used to find
the current language without (1) ever having the value "auto" or (2)
having to parse the output of "show language".
It also adds gdb.Frame.language, which can be used to find the
language of a given frame. This is normally preferable if one has a
Frame object handy.
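A short sketch of how the two additions might be used together from a GDB
Python session (the frame selection here is just an example):
  import gdb

  # Current language in effect; never the value "auto".
  print(gdb.current_language())

  # Language of a specific frame, preferable when a Frame object is at hand.
  frame = gdb.selected_frame()
  print(frame.language())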
|
|
The order in which the variables in info common and info locals are
displayed is compiler (and DWARF) dependent. While all symbols should
be displayed, the order is not fixed.
I added a gdb_test_multiple that lets ifx and ifort pass in cases where
only the order differs.
|
|
When value-printing a pointer, GDB by default looks for defined symbols
residing at the address of the pointer. For the given test the Intel/LLVM
compiler stacks both display a symbol associated with a printed pointer
while the GNU stack does not. This leads to failures in the test when
running it with CC_FOR_TARGET='clang' CXX_FOR_TARGET='clang'
F90_FOR_TARGET='flang':
(gdb) b 37
(gdb) r
(gdb) f 6
(gdb) info args
a = 1
b = 2
c = 3
d = 4 + 5i
f = 0x419ed0 "abcdef"
g = 0x4041a0 <.BSS4>
or with CC_FOR_TARGET='icx' CXX_FOR_TARGET='icpx' F90_FOR_TARGET='ifx':
(gdb) b 37
(gdb) r
(gdb) f 6
(gdb) info args
a = 1
b = 2
c = 3
d = 4 + 5i
f = 0x52eee0 "abcdef"
g = 0x4ca210 <mixed_func_1a_$OBJ>
For the compiled binary the Intel/LLVM compilers both decide to move the
local variable g into the .bss section of their executable. The GNU
stack keeps the variable locally on the stack and does not define a
symbol for it.
Since the behavior for Intel/LLVM is actually expected, I adapted the
testcase at this point to be a bit more permissive about other outputs:
I added an optional "<SYMBOLNAME>" to the regex testing for g.
The given changes reduce the test failures for the Intel/LLVM stack by 4 each.
|
|
While testing mixed-lang-stack I realized that valgrind actually
complained about a double free in the test.
All done
==2503051==
==2503051== HEAP SUMMARY:
==2503051== in use at exit: 0 bytes in 0 blocks
==2503051== total heap usage: 26 allocs, 27 frees, 87,343 bytes allocated
==2503051==
==2503051== All heap blocks were freed -- no leaks are possible
==2503051==
==2503051== For lists of detected and suppressed errors, rerun with: -s
==2503051== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
The reason for this is that in mixed-lang-stack.cpp, in mixed_func_1f, an
object "derived_type obj" goes on the stack which is then passed by value
(so copied) to mixed_func_1g. The default copy constructor will be called,
but since derived_type contains a heap-allocated string and the copy
constructor is not implemented, it will only be able to shallow-copy the
object. Right after each of the functions the object gets freed; on the
other hand, the destructor of derived_type actually is implemented and calls
free on the heap-allocated string, which leads to a double free. Instead
of obeying the rule of 3/5 I just got rid of all that, since it does not
serve the test. The string is now just a const char* = ".." object
member.
|
|
For ifort, ifx, and flang the tests "complete modm" and "complete
modmany" fail. This is because all three emit additional completion
suggestions. These additional suggestions have their origin in symbols
emitted by the compilers which can also be completed from the respective
incomplete word (modm or modmany). For this specific example gfortran
does not emit any additional symbols.
For example, in this test the linkage name for var_a in ifx is
"modmany_mp_var_a_" while gfortran uses "__modmany_MOD_var_a" instead.
Since modmany_mp_var_a can be completed from both modm and modmany, it
will get displayed, while gfortran's symbol starts with "__" and will thus
be ignored (it cannot be a completion of a word starting with "m").
Similar things happen in flang and ifort. Some example output is shown
below:
FLANG
(gdb) complete p modm
p modmany
p modmany::var_a
p modmany::var_b
p modmany::var_c
p modmany::var_i
p modmany_
IFX/IFORT
(gdb) complete p modm
p modmany
p modmany._
p modmany::var_a
p modmany::var_b
p modmany::var_c
p modmany::var_i
p modmany_mp_var_a_
p modmany_mp_var_b_
p modmany_mp_var_c_
p modmany_mp_var_i_
GFORTRAN
(gdb) complete p modm
p modmany
p modmany::var_a
p modmany::var_b
p modmany::var_c
p modmany::var_i
I want to emphasize: for Fortran (and also C/C++) the complete command
does not actually check whether its suggestions make sense - all it does
is look for any symbol (in the minimal symbols, partial symbols etc.)
that a given substring can be completed to (meaning that the given substring
is the beginning of the symbol). One can easily produce a similar
output for the gfortran compiled executable. For this look at the
slightly modified "complete p mod" in gfortran:
(gdb) complete p mod
p mod1
p mod1::var_const
...
p mod_1.c
p modcounter
p mode_t
p modf
...
p modify_ldt
p modmany
p modmany::var_a
p modmany::var_b
p modmany::var_c
p modmany::var_i
p module
p module.f90
p module_entry
p moduse
p moduse::var_x
p moduse::var_y
Many of the displayed symbols do not actually work with print:
(gdb) p mode_t
Attempt to use a type name as an expression
(gdb) p mod_1.c
No symbol "mod_1" in current context.
(gdb)
I think that in the given test the output for gfortran only looks nice
"by chance" rather than being actually expected. What is expected is any
output that also contains the completions
p modmany
p modmany::var_a
p modmany::var_b
p modmany::var_c
p modmany::var_i
while anything else can be displayed as well (depending on the
compiler and its emitted symbols).
Thus, I'd consider all three outputs valid and expected - one is just
somewhat lucky that gfortran does not produce any additional symbols that
get matched.
The given patch improves the test results for all three compilers
by allowing additional suggested completions in between and after
the two given blocks in the test. I did not allow additional 'p' lines
within the modmany_list block, since the output is ordered alphabetically
and no additional symbols should normally appear there.
For flang/ifx/ifort I see 2 fewer failures each (which are exactly the two
complete tests).
As a side note and since I mentioned C++ in the beginning: I also tried
the gdb.cp/completion.exp. The output seems a bit more reasonable,
mainly since C++ actually has a demangler in place and linkage symbols
do not appear in the output of complete. Still, with a poor enough
to-be-completed string one can easily produce similar results:
(gdb) complete p t
...
p typeinfo name for void
p typeinfo name for void const*
p typeinfo name for void*
p typeinfo name for wchar_t
p typeinfo name for wchar_t const*
p typeinfo name for wchar_t*
p t *** List may be truncated, max-completions reached. ***
(gdb) p typeinfo name for void*
No symbol "typeinfo" in current context.
(gdb) complete p B
p BACK_SLASH
p BUF_FIRST
p BUF_LAST
...
p Base
p Base::Base()
p Base::get_foo()
p bad_key_err
p buf
p buffer
p buffer_size
p buflen
p bufsize
p build_charclass.isra
(gdb) p bad_key_err
No symbol "bad_key_err" in current context.
(compiled with gcc/g++ and breaking at main).
This patch is only about making the referenced test more 'fair' for the
other compilers. Generally, I find the behavior of complete a bit
confusing and maybe one wants to change this at some point but this
would be a bigger task.
|
|
This info-types.exp test case had a few issues that this patch fixes.
First, the emitted symbol character(kind=1)/character*1 (different
compilers use different naming conventions here) which is checked in the
test is not actually expected given the test program. There is no
variable of that type in the test. Still, gfortran emits it for every
Fortran program there is. The reason is the way gfortran handles Fortran's
named main program. It generates a wrapper around the Fortran program
that is quite similar to a C main function. This C-like wrapper has
argc and argv arguments for command line argument passing and the argv
pointer type has a base type character(kind=1) DIE emitted at CU scope.
Given the program
program prog
end program prog
the debug info gfortran emits looks somewhat like
<0><c>: Abbrev Number: 3 (DW_TAG_compile_unit)
...
<1><2f>: Abbrev Number: 4 (DW_TAG_subprogram)
<30> DW_AT_external : 1
<30> DW_AT_name : (indirect string, ...): main
...
<2><51>: Abbrev Number: 1 (DW_TAG_formal_parameter)
<52> DW_AT_name : (indirect string, ...): argc
...
<2><5d>: Abbrev Number: 1 (DW_TAG_formal_parameter)
<5e> DW_AT_name : (indirect string, ...): argv
...
<62> DW_AT_type : <0x77>
...
<2><6a>: Abbrev Number: 0
...
<1><77>: Abbrev Number: 6 (DW_TAG_pointer_type)
<78> DW_AT_byte_size : 8
<79> DW_AT_type : <0x7d>
<1><7d>: Abbrev Number: 2 (DW_TAG_base_type)
<7e> DW_AT_byte_size : 1
<7f> DW_AT_encoding : 8 (unsigned char)
<80> DW_AT_name : (indirect string, ...): character(kind=1)
<1><84>: Abbrev Number: 7 (DW_TAG_subprogram)
<85> DW_AT_name : (indirect string, ...): prog
...
Ifx and flang do not emit any debug info for a wrapper main method so
the type is missing here. There was the possibility of actually adding
a character*1 type variable to the Fortran executable, but both ifx and
gfortran chose to emit this variable's type as a DW_TAG_string_type of
length one (instead of a character(kind=1), or whatever the respective
compiler naming convention is). While string types are printed as
character*LENGTH in the Fortran language part (e.g. when issuing a
'ptype'), they do not generate any symbols inside GDB. In read.c it says
/* These dies have a type, but processing them does not create
a symbol or recurse to process the children. Therefore we can
read them on-demand through read_type_die. */
So they did not add any output to 'info types'. Only flang did emit a
character type here.
As adding a type would have a) not solved the problem for ifx and would
have b) somehow hidden the curious behavior of gfortran, the check for
this character type was instead changed to optional via
check_optional_entry, to allow for the symbol's absence and to let
flang and ifx pass this test as well.
Second, the line checked for s1 was hardcoded as 37 in the test. Given
that the type is actually defined on line 41 (which is what is emitted by
ifx) it even seems wrong. The line check for s1 was changed to actually
check for 41, and a gfortran bug has been filed here:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105454
The test is now marked as xfail for gfortran.
Third, the whole test of checking for 'Type s1' in info types seemed
questionable. The type s1 is declared inside the scope of the Fortran
program info_types_test. Its DIE, however, is emitted as a child of the
whole compilation unit, making it visible outside of the program's scope.
The 'info types' command checks for types stored in the GLOBAL_BLOCK
or the STATIC_BLOCK, which, according to block.h:
The GLOBAL_BLOCK contains all the symbols defined in this compilation
whose scope is the entire program linked together.
The STATIC_BLOCK contains all the symbols whose scope is the
entire compilation excluding other separate compilations.
So for gfortran the type shows up in the output of 'info types'. For
flang and ifx, on the other hand, this is not the case. These two compilers
emit the type (correctly) as a child of the Fortran program, thus adding
it to neither the GLOBAL_BLOCK nor the LOCAL_BLOCK. A bug has
been opened for the gfortran scoping issue:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105454
While the most correct change might have been removing the check for s1,
the change made here was to only check for this type in case of gfortran
being used as the compiler, as this check also covers the declaration
line issue mentioned above. A comment was added to maybe remove this
check once the scoping issue is resolved (and it starts to fail with
newer gfortran versions). The one used to test these changes was 13.0.
|
|
Similar functionality already existed for GDBInfoModuleSymbols;
this just extends GDBInfoSymbols. We will use this feature in a
later commit to make a testcase less GNU-specific and more flexible for
other compilers.
Namely, in gdb.fortran/info-types.exp currently
GDBInfoSymbols::check_entry is used to verify and test the output of the
info symbols command. The test, however, was written with gfortran as a
basis, and some of the checks are not fair to e.g. ifx and ifort, as
they test for symbols that are not actually required to be emitted. The
lines
GDBInfoSymbols::check_entry "${srcfile}" "" "${character1}"
and
GDBInfoSymbols::check_entry "${srcfile}" "37" "Type s1;"
check for types that are either not used in the source file (character1)
or should not be emitted by the compiler at global scope (s1), and thus
do not appear in the output of the info symbols command. To fix this we
will later use the newly introduced check_optional_entry instead of
check_entry.
|
|
In order for ifx and ifort to emit all debug entries, even for unused
parameters in modules, we have to pass the '-debug-parameters all' flag.
This commit adds it to the ifx-*/ifort-* specific flags in gdb.exp.
|
|
The test was previously not using the compiler-dependent type print system
in fortran.exp. I changed this; it should generally improve how the test
performs with different compilers. For ifx and gfortran I do not see
any failures.
|
|
Currently, ifx/ifort cannot compile the given source as it is not
valid Fortran: it is missing the external keyword on
no_arg_subroutine. Gfortran compiles the example, but this is actually
a bug, and there is an open gcc ticket for it here:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=50377
Adding the keyword does not change how gfortran compiles the example.
It will, however, prevent a future failure once 50377 has been addressed.
|
|
The test specifically tests for the Fortran CHARACTER(KIND=4) which is
not available in ifx/ifort.
Since the other characters are also printed elsewhere, we disable this
test for the unsupported compilers.
|
|
The name for icx and icpx in the testsuite was previously set to 'intel-*'
by the compiler identification. This commit changes this to 'icx-*'.
Note that these names are currently not used within the testsuite, so no
tests have to be adapted here.
|