path: root/gdb/frame.c
author    Pedro Alves <pedro@palves.net>    2022-04-27 11:08:03 +0100
committer Pedro Alves <pedro@palves.net>    2022-04-27 19:29:38 +0100
commit    5b758627a18f0d3b90a0207c9689dcf4ec5b9a4a (patch)
tree      02fe714f646e641dcdaa5baf39117a8b1d0e03b6 /gdb/frame.c
parent    801eb70f9aa916650b9ca03a1d54d426a3e99f17 (diff)
Make gdb.base/parse_number.exp test all architectures
There are some subtle differences between architectures, like the size
of a "long" type, and this isn't currently accounted for in
gdb.base/parse_number.exp.  For example, on aarch64 a long is 8 bytes,
whereas on arm it is 4 bytes.  This causes the following FAILs:

  FAIL: gdb.base/parse_number.exp: lang=asm: ptype 0xffffffffffffffff
  FAIL: gdb.base/parse_number.exp: lang=auto: ptype 0xffffffffffffffff
  FAIL: gdb.base/parse_number.exp: lang=c: ptype 0xffffffffffffffff
  FAIL: gdb.base/parse_number.exp: lang=c++: ptype 0xffffffffffffffff
  FAIL: gdb.base/parse_number.exp: lang=fortran: p/x 0xffffffffffffffff
  FAIL: gdb.base/parse_number.exp: lang=fortran: ptype 0xffffffffffffffff
  FAIL: gdb.base/parse_number.exp: lang=go: ptype 0xffffffffffffffff
  FAIL: gdb.base/parse_number.exp: lang=local: ptype 0xffffffffffffffff
  FAIL: gdb.base/parse_number.exp: lang=minimal: ptype 0xffffffffffffffff
  FAIL: gdb.base/parse_number.exp: lang=objective-c: ptype 0xffffffffffffffff
  FAIL: gdb.base/parse_number.exp: lang=opencl: ptype 0xffffffffffffffff
  FAIL: gdb.base/parse_number.exp: lang=pascal: ptype 0xffffffffffffffff

There are some Fortran-specific divergences as well: 32-bit
architectures show "unsigned int" for both 32-bit and 64-bit integers,
while 64-bit architectures show "unsigned int" and "unsigned long" for
32-bit and 64-bit integers, respectively.  There might be a bug where
32-bit Fortran truncates 64-bit values to 32 bits, given that
"p/x 0xffffffffffffffff" returns "0xffffffff".

Here's what we get for aarch64:

  (gdb) ptype 0xffffffff
  type = unsigned int
  (gdb) ptype 0xffffffffffffffff
  type = unsigned long
  (gdb) p sizeof (0xffffffff)
  $1 = 4
  (gdb) p sizeof (0xffffffffffffffff)
  $2 = 8
  (gdb) ptype 0xffffffff
  type = unsigned int
  (gdb) ptype 0xffffffffffffffff
  type = unsigned long

And for arm:

  (gdb) ptype 0xffffffff
  type = unsigned int
  (gdb) ptype 0xffffffffffffffff
  type = unsigned long long
  (gdb) p sizeof (0xffffffff)
  $1 = 4
  (gdb) p sizeof (0xffffffffffffffff)
  $2 = 8
  (gdb) ptype 0xffffffff
  type = unsigned int
  (gdb) ptype 0xffffffffffffffff
  type = unsigned long

This patch:

  * Makes the testcase iterate over all architectures, thus covering
    all the different combinations of types/sizes every time.

  * Adjusts the expected values and types based on the sizes of long
    long, long and int.

A particularly curious architecture is s12z, which has a 32-bit long
long, and thus no way to represent 64-bit integers in C-like
languages.

Co-Authored-By: Luis Machado <luis.machado@arm.com>
Change-Id: Ifc0ccd33e7fd3c7585112ff6bebe7d266136768b
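The core idea is that the expected type of a 64-bit literal cannot be
hard-coded; it has to be derived from each architecture's int/long/long
long sizes.  The following is a minimal DejaGnu/Tcl sketch of that idea,
not the actual parse_number.exp change: the architecture list is a
hand-picked sample rather than the full set the real test iterates over,
and it only exercises ptype in the default language.  It relies on the
standard gdb.exp helpers clean_restart, foreach_with_prefix,
get_integer_valueof and gdb_test.

  # Sketch only: derive the expected type of 0xffffffffffffffff from
  # the per-architecture sizes of int, long and long long.
  clean_restart

  foreach_with_prefix arch {aarch64 arm i386:x86-64 s12z} {
      # Accept whatever confirmation message "set architecture" prints.
      gdb_test "set architecture $arch" ".*" "select architecture"

      # Ask GDB itself for the fundamental type sizes of this arch.
      set sizeof_int [get_integer_valueof "sizeof (int)" -1]
      set sizeof_long [get_integer_valueof "sizeof (long)" -1]
      set sizeof_long_long [get_integer_valueof "sizeof (long long)" -1]

      # 0xffffffffffffffff needs 64 bits; following C's rules for
      # integer constants, it gets the smallest unsigned type that can
      # hold it.
      if { $sizeof_int >= 8 } {
          set expected "unsigned int"
      } elseif { $sizeof_long >= 8 } {
          set expected "unsigned long"
      } elseif { $sizeof_long_long >= 8 } {
          set expected "unsigned long long"
      } else {
          # E.g. s12z: even long long is 32-bit, so no C-like type can
          # represent the literal; skip the check.
          set expected ""
      }

      if { $expected != "" } {
          gdb_test "ptype 0xffffffffffffffff" "type = $expected"
      }
  }

Querying sizeof from GDB after "set architecture" keeps the expectation
in sync with whatever the selected gdbarch actually defines, which is
what makes a single testcase usable across all architectures.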
Diffstat (limited to 'gdb/frame.c')
0 files changed, 0 insertions, 0 deletions