|
The newer update-copyright.py fixes file encoding too, removing CR/LF
line endings from binutils/bfdtest2.c and
ld/testsuite/ld-cygwin/exe-export.exp, and an embedded CR from a string
match in binutils/testsuite/binutils-all/ar.exp.
|
|
|
|
This patch is essentially a revert of
commit 8818c80cbd4116ef5af171ec47c61167179e225c
(libctf: Add ZSTD_LIBS to LIBS so that ac_cv_libctf_bfd_elf can be true).
Since that configure check now uses libtool, explicitly mentioning the
$ZSTD_LIBS dependency is no longer needed.
ChangeLog:
* libctf/Makefile.in: Regenerated.
* libctf/aclocal.m4: Likewise.
* libctf/config.h.in: Likewise.
* libctf/configure: Likewise.
* libctf/configure.ac: Remove ZSTD_LIBS from LIBS. Clean up
unused AC_ZSTD.
|
|
ACLOCAL_AMFLAGS is already being set, so using AC_CONFIG_MACRO_DIR is
unnecessary.
ChangeLog:
* libctf/configure: Regenerated.
* libctf/configure.ac: Remove AC_CONFIG_MACRO_DIR usage.
|
|
This dependency is managed via libtool, so explicitly adding it to
LDFLAGS and LIBS is no longer necessary.
ChangeLog:
* libctf/configure: Regenerated.
* libctf/configure.ac: Remove zlib from LDFLAGS and LIBS.
|
|
* ctf-link.c (ctf_link_add_cu_mapping): Set t to NULL after free.
|
|
The configure check for ELF support in BFD uses AC_TRY_LINK. If
libbfd's dependencies change, this macro needs to be updated manually
with explicit additions to LDFLAGS and LIBS.
This patch updates the check to use libtool instead.
ChangeLog:
* libctf/configure.ac: Use libtool instead.
* libctf/configure: Regenerated.
|
|
gas uses ZSTD_compressStream2 which is only available with libzstd >=
1.4.0, leading to build errors when an older version is installed.
This patch updates the check for libzstd presence to also check that
its version is >= 1.4.0. However, since gas seems to be the only
component requiring such a recent version, this may mean that we
disable zstd support for all components even though some of them would
still benefit from an older version.
I ran 'autoreconf -f' in all directories containing a configure.ac
file, using vanilla autoconf-2.69 and automake-1.15.1. I noticed
several errors from autoheader in readline, as well as warnings in
intl, but they are unrelated to this patch.
This should fix some of the buildbots.
OK for trunk?
Thanks,
Christophe
|
|
|
|
* ctf-link.c (ctf_link_add_ctf_internal): Don't free uninitialised
pointers.
|
|
|
|
We were failing to call prune_warnings appropriately, leading to
false-positive test failures on some platforms (observed on
sparclinux).
libctf/ChangeLog:
* testsuite/lib/ctf-lib.exp: Prune warnings from compiler and
linker output.
* testsuite/libctf-regression/libctf-repeat-cu.exp: Likewise,
and ar output too.
|
|
A missing paren meant that a cast intended to avoid dependence on the
size of size_t in one argument of ctf_err_warn was mistakenly applied
to the wrong type.
libctf/ChangeLog:
* ctf-serialize.c (ctf_write_mem): Fix cast.
|
|
Right now, if you compile the same .c input repeatedly with CTF enabled
and different compilation flags, then arrange to link all of these
together, then things misbehave in various ways. libctf may conflate
either inputs (if the .o files have the same name, say if they are
stored in different .a archives), or per-CU outputs when conflicting
types are found: the latter can lead to entirely spurious errors when
it tries to produce multiple per-CU outputs with the same name
(discarding all but the last, but then looking for types in the earlier
ones which have just been thrown away).
Fixing this is multi-pronged. Both inputs and outputs need to be
differentiated in the hashtables libctf keeps them in: inputs with the
same cuname and filename need to be considered distinct as long as they
have different associated CTF dicts, and per-CU outputs need to be
considered distinct as long as they have different associated input
dicts. Right now there is nothing tying the two together other than the
CU name: fix this by introducing a new field in the ctf_dict_t named
ctf_link_in_out, which (for input dicts) points to the associated per-CU
output dict (if any), and for output dicts points to the associated
input dict. At creation time the name used is completely arbitrary:
it's only important that it be distinct if CTF dicts are distinct. So,
when a clash is found, adjust the CU name by sticking the number of
elements in the input on the end. At output time, the CU name will
appear in the linked object, so it matters a little more that it look
slightly less ugly: in conflicting cases, append an incrementing
integer, starting at 0.
This naming scheme is not very helpful, but it's hard to see what else
we can do. The input .o name may be the same. The input .a name is not
even visible to ctf_link, and even *that* might be the same, because
.a's can contain many members with the same name, all of which
participate in the link. All we really know is that the two have
distinct dictionaries with distinct types in them, and at least this way
they are all represented, and any symbols, variables, etc. referring to
those types are accurately stored.
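As a rough illustration of the naming rule above, a hedged sketch (the
helper name and the separator character are assumptions for
illustration; the real logic lives in ctf_new_per_cu_name and
ctf_link_add_ctf_internal):

  #include <stdio.h>

  /* Illustration only: per-CU output dicts reuse the CU name, and a
     clashing name gets an incrementing integer appended, starting at 0.  */
  static void
  make_per_cu_name (char *buf, size_t len, const char *cu_name, int clash_idx)
  {
    if (clash_idx < 0)                  /* No clash seen for this name yet.  */
      snprintf (buf, len, "%s", cu_name);
    else
      snprintf (buf, len, "%s#%d", cu_name, clash_idx);
  }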
(As a side-effect this also fixes a use-after-free and double-free when
errors are found during variable or symbol emission.)
Use the opportunity to prevent a couple of sources of problems, to wit
changing the active CU mappings when a link has already been done
(no effect on ld, which doesn't use CU mappings at all), and causing
multiple consecutive ctf_link's to have the same net effect as just
doing the last one (no effect on ld, which only ever does one
ctf_link) rather than having the links be a sort of half-incremental
not-really-intended mess.
libctf/ChangeLog:
PR libctf/29242
* ctf-impl.h (struct ctf_dict) [ctf_link_in_out]: New.
* ctf-dedup.c (ctf_dedup_emit_type): Set it.
* ctf-link.c (ctf_link_add_ctf_internal): Set the input
CU name uniquely when clashes are found.
(ctf_link_add): Document what repeated additions do.
(ctf_new_per_cu_name): New, come up with a consistent
name for a new per-CU dict.
(ctf_link_deduplicating): Use it.
(ctf_create_per_cu): Use it, and ctf_link_in_out, and set
ctf_link_in_out properly. Don't overwrite per-CU dicts with
per-CU dicts relating to different inputs.
(ctf_link_add_cu_mapping): Prevent per-CU mappings being set up
if we already have per-CU outputs.
(ctf_link_one_variable): Adjust ctf_link_per_cu call.
(ctf_link_deduplicating_one_symtypetab): Likewise.
(ctf_link_empty_outputs): New, delete all the ctf_link_outputs
and blank out ctf_link_in_out on the corresponding inputs.
(ctf_link): Clarify the effect of multiple ctf_link calls.
Empty ctf_link_outputs if it already exists rather than
having the old output leak into the new link. Fix a variable
name.
* testsuite/config/default.exp (AR): Add.
(OBJDUMP): Likewise.
* testsuite/libctf-regression/libctf-repeat-cu.exp: New test.
* testsuite/libctf-regression/libctf-repeat-cu*: Main program,
library, and expected results for the test.
|
|
When two types conflict and they are not types which can have forwards
(say, two arrays of different sizes with the same name in two different
TUs) the CTF deduplicator uses a popularity contest to decide what to
do: the type cited by the most other types ends up put into the shared
dict, while the others are relegated to per-CU child dicts.
This works well as long as one type *is* most popular -- but what if
there is a tie? If several types have the same popularity count,
we end up picking the first we run across and promoting it, and
unfortunately since we are working over a dynhash in essentially
arbitrary order, this means we promote a random one. So multiple
runs of ld with the same inputs can produce different outputs!
All the outputs are valid, but this is still undesirable.
Adjust things to use the same strategy used to sort types on the output:
when there is a tie, always put the type that appears in a CU that
appeared earlier on the link line (and if there is somehow still a tie,
which should be impossible, pick the type with the lowest type ID).
Add a testcase -- and since this emerged when trying out extern arrays,
check that those work as well (this requires a newer GCC, but since all
GCCs that can emit CTF at all are unreleased this is probably OK as
well).
Fix up one testcase that has slight type ordering changes as a result
of this change.
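A hedged sketch of that ordering rule (made-up names; the real
comparison uses cd_output_first_gid inside
ctf_dedup_detect_name_ambiguity):

  #include <stdint.h>

  /* Illustration only: return nonzero if candidate A should win the
     popularity contest over candidate B.  Higher citation count wins;
     on a tie, the type from the CU seen earlier on the link line (lower
     "first GID") wins; if somehow still tied, the lower type ID wins.  */
  static int
  prefer_a (uint64_t count_a, uint64_t first_gid_a, uint64_t id_a,
            uint64_t count_b, uint64_t first_gid_b, uint64_t id_b)
  {
    if (count_a != count_b)
      return count_a > count_b;
    if (first_gid_a != first_gid_b)
      return first_gid_a < first_gid_b;
    return id_a < id_b;
  }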
libctf/ChangeLog:
* ctf-dedup.c (ctf_dedup_detect_name_ambiguity): Use
cd_output_first_gid to break ties.
ld/ChangeLog:
* testsuite/ld-ctf/array-conflicted-ordering.d: New test, using...
* testsuite/ld-ctf/array-char-conflicting-1.c: ... this...
* testsuite/ld-ctf/array-char-conflicting-2.c: ... and this.
* testsuite/ld-ctf/array-extern.d: New test, using...
* testsuite/ld-ctf/array-extern.c: ... this.
* testsuite/ld-ctf/conflicting-typedefs.d: Adjust for ordering
changes.
|
|
My previous nm patch handled all cases but one -- if the user set NM in
the environment to a path which contained an option, libtool's nm
detection tries to run nm against a copy of nm with the options in it:
e.g. if NM was set to "nm --blargle", and nm was found in /usr/bin, the
test would try to run "/usr/bin/nm --blargle /usr/bin/nm --blargle".
This is unlikely to be desirable: in this case we should run
"/usr/bin/nm --blargle /usr/bin/nm".
Furthermore, as part of this the nm detection has to notice when the
passed-in $NM contains a path, and in that case avoid doing a path
search itself.
This too was thrown off if an option contained something that looked
like a path, e.g. NM="nm -B../prev-gcc"; libtool then tries to run
"nm -B../prev-gcc nm" which rarely works well (and indeed it looks
to see whether that nm exists, finds it doesn't, and wrongly concludes
that nm -p or whatever does not work).
Fix all of these by clipping all options (defined as everything
including and after the first " -") before deciding whether nm
contains a path (but not using the clipped value for anything else),
and then removing all options from the path-modified nm before
looking to see whether that nm existed.
NM=my-nm now does a path search and runs e.g.
/usr/bin/my-nm -B /usr/bin/my-nm
NM=/usr/bin/my-nm now avoids a path search and runs e.g.
/usr/bin/my-nm -B /usr/bin/my-nm
NM="my-nm -p../wombat" now does a path search and runs e.g.
/usr/bin/my-nm -p../wombat -B /usr/bin/my-nm
NM="../prev-binutils/new-nm -B../prev-gcc" now avoids a path search:
../prev-binutils/new-nm -B../prev-gcc -B ../prev-binutils/new-nm
This seems to cover all the combinations, including those used by GCC bootstrap
(which, before this commit, fails to bootstrap when configured
--with-build-config=bootstrap-lto, because the lto plugin is now using
--export-symbols-regex, which requires libtool to find a working nm,
while also using -B../prev-gcc to point at the lto plugin associated
with the GCC just built.)
Regenerate all affected configure scripts.
* libtool.m4 (LT_PATH_NM): Handle user-specified NM with
options, including options containing paths.
|
|
libctf has always handled endianness differences by detecting
foreign-endian CTF dicts on the input and endian-flipping them: dicts
are always written in native endianness. This makes endian-awareness
very low overhead, but it means that the foreign-endian code paths
almost never get routinely tested, since "make check" usually reads in
dicts ld has just written out: only a few corrupted-CTF tests are
actually in fixed endianness, and even they only test the foreign-
endian code paths when you run make check on a big-endian machine.
(And the fix is surely not to add more .s-based tests like that, because
they are a nightmare to maintain compared to the C-code-based ones.)
To improve on this, add a new environment variable,
LIBCTF_WRITE_FOREIGN_ENDIAN, which causes libctf to unconditionally
endian-flip at ctf_write time, so the output is always in the wrong
endianness. This then tests the foreign-endian read paths properly
at open time.
Make this easier by restructuring the writeout code in ctf-serialize.c,
which duplicates the maybe-gzip-and-write-out code three times (once
for ctf_write_mem, with thresholding, and once each for
ctf_compress_write and ctf_write just so those can avoid thresholding
and/or compression). Instead, have the latter two call the former
with thresholds of 0 or (size_t) -1, respectively.
The endian-flipping code itself gains a bit of complexity, because
one single endian-flipper (flip_types) was assuming the input to be
in foreign-endian form and assuming it could pull things out of the
input once they had been flipped and make sense of them. At the
cost of a few lines of duplicated initializations, teach it to
read before flipping if we're flipping to foreign-endianness instead
of away from it.
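By way of illustration (a sketch under assumed names, not the actual
ctf-serialize.c/ctf-open.c code), the mechanism amounts to:

  #include <stdint.h>
  #include <stdlib.h>

  /* Illustration only: the kind of 32-bit byte swap the flippers apply
     to each header field and type-section word.  */
  static uint32_t
  swap32 (uint32_t x)
  {
    return ((x & 0x000000ffu) << 24) | ((x & 0x0000ff00u) << 8)
         | ((x & 0x00ff0000u) >> 8)  | ((x & 0xff000000u) >> 24);
  }

  /* With LIBCTF_WRITE_FOREIGN_ENDIAN set, words are flipped at write
     time; the usual foreign-endian detection at open time flips them
     back, exercising the foreign-endian read paths.  */
  static uint32_t
  maybe_flip_on_write (uint32_t word)
  {
    return getenv ("LIBCTF_WRITE_FOREIGN_ENDIAN") != NULL
      ? swap32 (word) : word;
  }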
libctf/
* ctf-impl.h (ctf_flip_header): No longer static.
(ctf_flip): Likewise.
* ctf-open.c (flip_header): Rename to...
(ctf_flip_header): ... this, now it is not private to one file.
(flip_ctf): Rename...
(ctf_flip): ... this too. Add FOREIGN_ENDIAN arg.
(flip_types): Likewise. Use it.
(ctf_bufopen_internal): Adjust calls.
* ctf-serialize.c (ctf_write_mem): Add flip_endian path via
a newly-allocated bounce buffer.
(ctf_compress_write): Move below ctf_write_mem and reimplement
in terms of it.
(ctf_write): Likewise.
(ctf_gzwrite): Note that this obscure writeout function does not
support endian-flipping.
|
|
The last section in a CTF dict is the string table, at an offset
represented by the cth_stroff header field. Its length is recorded in
the next field, cth_strlen, and the two added together are taken as the
size of the CTF dict. Upon opening a dict, we check that none of the
header offsets exceed this size, and we check when uncompressing a
compressed dict that the result of the uncompression is the same length:
but CTF dicts need not be compressed, and short ones are not.
Uncompressed dicts just use the ctf_size without checking it. This
field is thankfully almost unused: it is mostly used when reserializing
a dict, which can't be done to dicts read off disk since they're
read-only.
However, when opening an uncompressed foreign-endian dict we have to
copy it out of the mmaped region it is stored in so we can endian-
swap it, and we use ctf_size when doing that. When the cth_strlen is
corrupt, this can overrun.
Fix this by checking the ctf_size in all uncompressed cases, just as we
already do in the compressed case. Add a new test.
This came to light because various corrupted-CTF raw-asm tests had an
incorrect cth_strlen: fix all of them so they produce the expected
error again.
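The check itself is just a bounds test; a hedged sketch with made-up
names (the real code is in ctf_bufopen_internal):

  #include <stdint.h>

  /* Illustration only: the end of the string table (cth_stroff +
     cth_strlen) must not run past the end of the section the dict was
     read from.  Written to avoid overflow in the addition.  */
  static int
  ctf_size_is_sane (uint64_t cth_stroff, uint64_t cth_strlen,
                    uint64_t sect_size)
  {
    return cth_stroff <= sect_size && cth_strlen <= sect_size - cth_stroff;
  }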
libctf/
PR libctf/28933
* ctf-open.c (ctf_bufopen_internal): Always check uncompressed
CTF dict sizes against the section size in case the cth_strlen is
corrupt.
ld/
PR libctf/28933
* testsuite/ld-ctf/diag-strlen-invalid.*: New test,
derived from diag-cttname-invalid.s.
* testsuite/ld-ctf/diag-cttname-invalid.s: Fix incorrect cth_strlen.
* testsuite/ld-ctf/diag-cttname-null.s: Likewise.
* testsuite/ld-ctf/diag-cuname.s: Likewise.
* testsuite/ld-ctf/diag-parlabel.s: Likewise.
* testsuite/ld-ctf/diag-parname.s: Likewise.
|
|
The CTF variable section is an optional (usually-not-present) section in
the CTF dict which contains name -> type mappings corresponding to data
symbols that are present in the linker input but not in the output
symbol table: the idea is that programs that use their own symbol-
resolution mechanisms can use this section to look up the types of
symbols they have found using their own mechanism.
Because these removed symbols (mostly static variables, functions, etc)
all have names that are unlikely to appear in the ELF symtab and because
very few programs have their own symbol-resolution mechanisms, a special
linker flag (--ctf-variables) is needed to emit this section.
Historically, we emitted only removed data symbols into the variable
section. This seemed to make sense at the time, but in hindsight it
really doesn't: functions are symbols too, and a C program can look them
up just like any other type. So extend the variable section so that it
contains all static function symbols too (if it is emitted at all), with
types of kind CTF_K_FUNCTION.
This is a little fiddly. We relied on compiler assistance for data
symbols: the compiler simply emits all data symbols twice, once into the
symtypetab as an indexed symbol and once into the variable section.
Rather than wait for a suitably adjusted compiler that does the same for
function symbols, we can pluck unreported function symbols out of the
symtab and add them to the variable section ourselves. While we're at
it, we do the same with data symbols: this is redundant right now
because the compiler does it, but it costs very little time and lets the
compiler drop this kludge and save a little space in .o files.
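For consumers nothing changes except that function symbols can now be
found there too; a hedged usage sketch (the wrapper is invented, only
ctf_lookup_variable is the real API):

  #include <ctf-api.h>

  /* Illustration only: a program with its own symbol-resolution scheme
     can ask the variable section for the type of a symbol that never
     made it into the ELF symtab.  With this change, static functions
     show up here as types of kind CTF_K_FUNCTION, alongside static data.  */
  static ctf_id_t
  type_of_unreported_symbol (ctf_dict_t *fp, const char *name)
  {
    /* Returns CTF_ERR if the name is not in the variable section.  */
    return ctf_lookup_variable (fp, name);
  }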
include/
* ctf.h: Mention the new things we can see in the variable
section.
ld/
* testsuite/ld-ctf/data-func-conflicted-vars.d: New test.
libctf/
* ctf-link.c (ctf_link_deduplicating_variables): Duplicate
symbols into the variable section too.
* ctf-serialize.c (symtypetab_delete_nonstatic_vars): Rename
to...
(symtypetab_delete_nonstatics): ... this. Check the funchash
when pruning redundant variables.
(ctf_symtypetab_sect_sizes): Adjust accordingly.
* NEWS: Describe this change.
|
|
It's not clear what this was meant for, but it's not used by anything,
and the info pages still generate fine without it.
|
|
|
|
The result of running etc/update-copyright.py --this-year, fixing all
the files whose mode is changed by the script, plus a build with
--enable-maintainer-mode --enable-cgen-maint=yes, then checking
out */po/*.pot which we don't update frequently.
The copy of cgen used was with commit d1dd5fcc38ead reverted, as that
commit breaks building of the bpf opcodes files.
|
|
It looks like automake makes assumptions about its ability to build info
pages based on the GNU standard behavior of shipping info pages with the
distributions. So even though the info pages were conditionalized, and
automake disabled some of the targets, info page generation was still
creeping in by way of unconditional INFO_DEPS settings.
We can work around this by adding a stub target for the info page when
building of info pages is disabled.
its own extended generation target. I'll follow up with the automake
folks to see what they think.
|
|
When configuring libctf, I get:
config.status: error: cannot find input file: `doc/Makefile.in'
This is because configure is out of date; re-generate it.
Change-Id: Ie69acd33012211a4620661582c7d24ad6d2cd169
|
|
This avoids a recursive make into the doc subdir and speeds up the
build slightly. It also allows for more parallelism.
|
|
Also add $(AM_V_xxx) to various manual rules in here.
|
|
Remove @validatemenus from ctf-spec.texi, which has been removed from
texinfo by
commit a16dd1a9ece08568a1980b9a65a3a9090717997f
Author: Gavin Smith <gavinsmith0123@gmail.com>
Date: Mon Oct 12 16:32:37 2020 +0100
* doc/texinfo.texi
(Writing a Menu, Customization Variables for @-Commands)
(Command List),
* doc/refcard/txirefcard.tex
Remove @validatemenus.
* tp/Texinfo/XS/Makefile.am (command_ids.h): Use gawk instead
of awk. Avoid discouraged "$p" usage, using "$(p)" instead.
* tp/Texinfo/XS/configure.ac: Check for gawk.
commit 128acab3889b51809dc3bd3c6c74b61d13f7f5f4
Author: Gavin Smith <gavinsmith0123@gmail.com>
Date: Thu Jan 3 14:51:53 2019 +0000
Update refcard.
* doc/refcard/txirefcard.tex: @setfilename is no longer
mandatory. Do not mention @validatemenus or explicitly giving
@node pointers, as these are not very important features.
PR libctf/28567
* doc/ctf-spec.texi: Remove "@validatemenus off".
|
|
It's been a long time since most of this was written: it's long past
time to put it in the binutils source tree. It's believed correct and
complete insofar as it goes: it documents format v3 (the current
version) but not the libctf API or any earlier versions. (The
earlier versions can be read by libctf but not generated by it, and you
are highly unlikely ever to see an example of any of them.)
libctf/ChangeLog
2021-11-08 Nick Alcock <nick.alcock@oracle.com>
* doc/ctf-spec.texi: New file.
* configure.ac (MAKEINFO): Add.
(BUILD_INFO): Likewise.
(AC_CONFIG_FILES) [doc/Makefile]: Add.
* Makefile.am [BUILD_INFO] (SUBDIRS): Add doc/.
* doc/Makefile.am: New file.
* doc/Makefile.in: Likewise.
* configure: Regenerated.
* Makefile.in: Likewise.
|
|
ctf_type_visit (used, among other things, by the type dumping code) was
aborting when it saw a nonrepresentable type anywhere: even a single
structure member with a nonrepresentable type caused an abort with
ECTF_NONREPRESENTABLE. This is not useful behaviour, given that the
abort comes from a type-resolution we are only doing in order to
determine whether the type is a structure or union. We know
nonrepresentable types can't be either, so handle that case and
pass the nonrepresentable type down.
(The added test verifies that the dumper now handles this case and
prints nonrepresentable structure members as it already does
nonrepresentable top-level types, rather than skipping the whole
structure -- or, without the previous commit, skipping the whole types
section.)
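The shape of the fix, as a hedged sketch (illustrative names, not the
real ctf_type_rvisit):

  #include <ctf.h>
  #include <ctf-api.h>

  /* Illustration only: when resolving a member type to decide whether
     to recurse into it, treat ECTF_NONREPRESENTABLE as "not a struct or
     union" rather than as a fatal error, so the walk continues and the
     member is still visited.  */
  static int
  member_is_struct_or_union (ctf_dict_t *fp, ctf_id_t member_type)
  {
    ctf_id_t resolved = ctf_type_resolve (fp, member_type);
    int kind;

    if (resolved == CTF_ERR)
      {
        if (ctf_errno (fp) == ECTF_NONREPRESENTABLE)
          return 0;               /* Nonrepresentable: cannot be either.  */
        return -1;                /* A real error: propagate it.  */
      }

    kind = ctf_type_kind (fp, resolved);
    return kind == CTF_K_STRUCT || kind == CTF_K_UNION;
  }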
ld/ChangeLog
2021-10-25 Nick Alcock <nick.alcock@oracle.com>
* testsuite/ld-ctf/nonrepresentable-member.*: New test.
libctf/ChangeLog
2021-10-25 Nick Alcock <nick.alcock@oracle.com>
* ctf-types.c (ctf_type_rvisit): Handle nonrepresentable types.
|
|
If dumping of a single type fails, we obviously can't dump it; but just
as obviously this doesn't make the other types in the types section
invalid or undumpable. So we should not propagate errors seen when
type-dumping, but rather ignore them and carry on, so we dump as many
types as we can (leaving out the ones we can't grok).
libctf/ChangeLog
2021-10-25 Nick Alcock <nick.alcock@oracle.com>
* ctf-dump.c (ctf_dump_type): Do not abort on error.
|
|
An off-by-one bug in the check for pptrtab lookup meant that we could
access the pptrtab past its bounds (*well* past its bounds),
particularly if we called ctf_lookup_by_name in a child dict with "*foo"
where "foo" is a type that exists in the parent but not the child and no
previous lookups by name have been carried out. (Note that "*foo" is
not even a valid thing to call ctf_lookup_by_name with: foo * is.
Nonetheless, users sometimes do call ctf_lookup_by_name with invalid
content, and it should return ECTF_NOTYPE, not crash.)
ctf_pptrtab_len, as its name suggests (and as other tests of it in
ctf-lookup.c confirm), is one higher than the maximum permissible
index, so the comparison is wrong.
(Test added, which should fail pretty reliably in the presence of this
bug on any machine with 4KiB pages.)
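The rule, as a hedged sketch with a hypothetical table (the real fix is
a one-character change in ctf_lookup_by_name_internal):

  #include <stddef.h>
  #include <stdint.h>

  /* Illustration only: len is one greater than the largest valid index,
     so lookups must guard with "<", never "<=".  */
  static uint32_t
  bounded_lookup (const uint32_t *table, size_t len, size_t idx)
  {
    return idx < len ? table[idx] : 0;
  }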
libctf/ChangeLog
2021-09-27 Nick Alcock <nick.alcock@oracle.com>
* ctf-lookup.c (ctf_lookup_by_name_internal): Fix pptrtab bounds.
* testsuite/libctf-writable/pptrtab-writable-page-deep-lookup.*:
New test.
|
|
These warnings are all off by default, but if they do fire you get
spurious ERRORs when running make check-libctf.
libctf/ChangeLog
2021-09-27 Nick Alcock <nick.alcock@oracle.com>
* testsuite/libctf-lookup/enum-symbol.c: Remove unused label.
* testsuite/libctf-lookup/conflicting-type-syms.c: Remove unused
variables.
* testsuite/libctf-regression/pptrtab.c: Likewise.
* testsuite/libctf-regression/type-add-unnamed-struct.c: Likewise.
* testsuite/libctf-writable/pptrtab.c: Likewise.
* testsuite/libctf-writable/reserialize-strtab-corruption.c:
Likewise.
* testsuite/libctf-regression/nonstatic-var-section-ld-r.c: Fix
format string.
* testsuite/libctf-regression/nonstatic-var-section-ld.c:
Likewise.
* testsuite/libctf-regression/nonstatic-var-section-ld.lk: Adjust.
* testsuite/libctf-writable/symtypetab-nonlinker-writeout.c: Fix
initializer.
|
|
Older (pre-upstreaming) GCC emits a function symtypetab section of a
format never read by any extant libctf. We can detect such CTF dicts by
the lack of the CTF_F_NEWFUNCINFO flag in their header, and we do so
when reading in the symtypetab section -- but if the set of symbols with
types is sufficiently sparse, even an older GCC will emit a function
index section.
In NEWFUNCINFO-capable compilers, this section will always be the exact
same length as the corresponding function section (each is an array of
uint32_t, associated 1:1 with each other). But this is not true for the
older compiler, for which the sections are different lengths. We check
to see if the function symtypetab section and its index are the same
length, but we fail to skip this check when this is not a NEWFUNCINFO
dict, and emit a spurious corruption error for a CTF dict we could
have perfectly well opened and used.
The fix is trivial: check the flag (and fix the terrible grammar of the error
message at the same time).
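Conceptually the corrected check looks like this hedged sketch (made-up
helper; the real code is in ctf_bufopen_internal):

  #include <stddef.h>
  #include <stdint.h>
  #include <ctf.h>

  /* Illustration only: only a CTF_F_NEWFUNCINFO dict guarantees that
     the function symtypetab and its index are equal-length uint32_t
     arrays, so only then is a length mismatch evidence of corruption.  */
  static int
  funcidx_looks_corrupt (uint32_t cth_flags, size_t func_len, size_t idx_len)
  {
    return (cth_flags & CTF_F_NEWFUNCINFO) != 0 && func_len != idx_len;
  }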
libctf/ChangeLog
2021-09-27 Nick Alcock <nick.alcock@oracle.com>
* ctf-open.c (ctf_bufopen_internal): Don't complain about corrupt
function index symtypetab sections if this is an old-format
function symtypetab section (which should be ignored in any case).
Fix bad grammar.
|
|
(including sim/, which has no changelog.)
bfd/ChangeLog
2021-09-27 Nick Alcock <nick.alcock@oracle.com>
* configure: Regenerate.
binutils/ChangeLog
2021-09-27 Nick Alcock <nick.alcock@oracle.com>
* configure: Regenerate.
gas/ChangeLog
2021-09-27 Nick Alcock <nick.alcock@oracle.com>
* configure: Regenerate.
gprof/ChangeLog
2021-09-27 Nick Alcock <nick.alcock@oracle.com>
* configure: Regenerate.
ld/ChangeLog
2021-09-27 Nick Alcock <nick.alcock@oracle.com>
* configure: Regenerate.
libctf/ChangeLog
2021-09-27 Nick Alcock <nick.alcock@oracle.com>
* configure: Regenerate.
* Makefile.in: Regenerate.
opcodes/ChangeLog
2021-09-27 Nick Alcock <nick.alcock@oracle.com>
* configure: Regenerate.
zlib/ChangeLog
2021-09-27 Nick Alcock <nick.alcock@oracle.com>
* configure: Regenerate.
|
|
Checking for linker versioning by just grepping ld --help output for
mentions of --version-script is inadequate now that Solaris 11.4
implements a --version-script with different semantics. Try linking a
test program with a small wildcard-using version script with each
supported set of flags in turn, to make sure that linker versioning is
not only advertised but actually works.
The Solaris "GNU-compatible" linker versioning is not quite
GNU-compatible enough, but we can work around the differences by
generating a new version script that removes the comments from the
original (Solaris ld requires #-style comments), and making another
version script for libctf-nonbfd in particular which doesn't mention any
of the symbols that appear in libctf.la, to avoid Solaris ld introducing
corresponding new NOTYPE symbols to match the version script.
libctf/ChangeLog
2021-09-27 Nick Alcock <nick.alcock@oracle.com>
PR libctf/27967
* configure.ac (VERSION_FLAGS): Replace with...
(ac_cv_libctf_version_script): ... this multiple test.
(VERSION_FLAGS_NOBFD): Substitute this too.
* Makefile.am (libctf_nobfd_la_LDFLAGS): Use it. Split out...
(libctf_ldflags_nover): ... non-versioning flags here.
(libctf_la_LDFLAGS): Use it.
* libctf.ver: Give every symbol not in libctf-nobfd a comment on
the same line noting as much.
|
|
This ensures that the CTF_LIBADD, which always contains at least this
when doing a shared link:
-L`pwd`/../libiberty/pic -liberty
appears in the link line before any requirements pulled in by libbfd.la,
which include -liberty but, being install-time requirements, do not
include the -L`pwd`/../libiberty/pic portion (in an indirect dep like
this, the path comes from the libbfd.la file, and is an install path).
libiberty also
appears after libbfd in the link line by virtue of libctf-nobfd.la,
because libctf-nobfd has to follow libbfd in the link line, and that
needs symbols from libiberty too.
Without this, an installed liberty might well be pulled in by libbfd,
and if --enable-install-libiberty is not specified this libiberty might
be completely incompatible with what is being installed and break either
or boht of libbfd and libctf. (The specific problem observed here is
that bsearch_r was not present, but other problems might easily be
observed in future too.)
Because ld links against libctf, this has a tendency to break the system
linker at install time too, if installing with --prefix=/usr. That's
quite unpleasant to recover from.
libctf/ChangeLog
2021-09-27 Nick Alcock <nick.alcock@oracle.com>
PR libctf/27360
* Makefile.am (libctf_la_LIBADD): Link against libiberty
before pulling in libbfd.la or pulling in libctf-nobfd.la.
* Makefile.in: Regenerate.
|
|
The top level Makefile, the ld Makefile and others, define
CC_FOR_TARGET to be a compiler for the binutils target machine. This
is the compiler that should be used for almost all tests with C
source. There are _FOR_TARGET versions of CFLAGS, CXX, and CXXFLAGS
too. This was all supposed to work with the testsuite .exp files
using CC for the target compiler, and CC_FOR_HOST for the host
compiler, with the makefiles passing CC=$CC_FOR_TARGET and
CC_FOR_HOST=$CC to the runtest invocation.
One exception to the rule of using CC_FOR_TARGET is the native-only ld
bootstrap test, which uses the newly built ld to link a copy of
itself. Since the files being linked were created with the host
compiler, the bootstrap test should use CC and CFLAGS, in case some
host compiler option provides needed libraries automatically.
However, bootstrap.exp used CC where it should have used CC_FOR_HOST.
I set about fixing that problem, then decided that playing games in
the makefiles with CC was a bad idea. Not only is it confusing, but
other dejagnu code knows about CC_FOR_TARGET. See dejagnu/target.exp.
So this patch gets rid of the makefile variable renaming and changes
all the .exp files to use the correct _FOR_TARGET variables.
CC_FOR_HOST and CFLAGS_FOR_HOST disappear. A followup patch will
correct bootstrap.exp to use CFLAGS, and a number of other things I
noticed.
binutils/
* testsuite/lib/binutils-common.exp (run_dump_test): Use
CC_FOR_TARGET and CFLAGS_FOR_TARGET rather than CC and CFLAGS.
ld/
* Makefile.am (check-DEJAGNU): Don't set CC to CC_FOR_TARGET
and similar. Pass variables with unchanged names. Don't set
CC_FOR_HOST or CFLAGS_FOR_HOST.
* Makefile.in: Regenerate.
* testsuite/config/default.exp: Update default CC and similar.
(compiler_supports, plug_opt): Use CC_FOR_TARGET.
* testsuite/ld-cdtest/cdtest.exp: Replace all uses of CC with
CC_FOR_TARGET, and similarly for CFLAGS, CXX and CXXFLAGS.
* testsuite/ld-auto-import/auto-import.exp: Likewise.
* testsuite/ld-cygwin/exe-export.exp: Likewise.
* testsuite/ld-elf/dwarf.exp: Likewise.
* testsuite/ld-elf/indirect.exp: Likewise.
* testsuite/ld-elf/shared.exp: Likewise.
* testsuite/ld-elfcomm/elfcomm.exp: Likewise.
* testsuite/ld-elfvers/vers.exp: Likewise.
* testsuite/ld-elfvsb/elfvsb.exp: Likewise.
* testsuite/ld-elfweak/elfweak.exp: Likewise.
* testsuite/ld-gc/gc.exp: Likewise.
* testsuite/ld-ifunc/ifunc.exp: Likewise.
* testsuite/ld-mn10300/mn10300.exp: Likewise.
* testsuite/ld-pe/pe-compile.exp: Likewise.
* testsuite/ld-pe/pe-run.exp: Likewise.
* testsuite/ld-pe/pe-run2.exp: Likewise.
* testsuite/ld-pie/pie.exp: Likewise.
* testsuite/ld-plugin/lto.exp: Likewise.
* testsuite/ld-plugin/plugin.exp: Likewise.
* testsuite/ld-scripts/crossref.exp: Likewise.
* testsuite/ld-selective/selective.exp: Likewise.
* testsuite/ld-sh/sh.exp: Likewise.
* testsuite/ld-shared/shared.exp: Likewise.
* testsuite/ld-srec/srec.exp: Likewise.
* testsuite/ld-undefined/undefined.exp: Likewise.
* testsuite/ld-unique/unique.exp: Likewise.
* testsuite/ld-x86-64/tls.exp: Likewise.
* testsuite/lib/ld-lib.exp: Likewise.
libctf/
* Makefile.am (check-DEJAGNU): Don't set CC to CC_FOR_TARGET.
Pass CC and CC_FOR_TARGET. Don't set CC_FOR_HOST.
* Makefile.in: Regenerate.
* testsuite/config/default.exp: Update default CC and similar.
* testsuite/lib/ctf-lib.exp (run_native_host_cmd): Use CC rather
than CC_FOR_HOST.
(run_lookup_test): Use CC_FOR_TARGET and CFLAGS_FOR_TARGET.
|
|
* ctf-open.c (init_symtab): Avoid ubsan error.
|
|
--enable-maintainer-mode showed a number of files needing to be
regenerated, and in the case of ld/Makefile.in showed that the file had
been regenerated by hand. Nothing to see here, really.
ld/
* Makefile.am (ALL_64_EMULATION_SOURCES): Sort haiku entry.
* Makefile.in: Regenerate.
* po/BLD-POTFILES.in: Regenerate.
libctf/
* configure: Regenerate.
zlib/
* configure: Regenerate.
|
|
|
|
* ctf-impl.h (ctf_dynset_eq_string): Don't declare.
* ctf-hash.c (ctf_dynset_eq_string): Delete function.
* ctf-dedup.c (make_set_element): Use htab_eq_string.
(ctf_dedup_atoms_init, ADD_CITER, ctf_dedup_init): Likewise.
(ctf_dedup_conflictify_unshared): Likewise.
(ctf_dedup_walk_output_mapping): Likewise.
|
|
The tests currently in binutils are aimed at the original GCC-based
implementation of CTF, which emitted CTF directly from GCC's internal
representation. The approach now under review emits CTF from DWARF,
with an eye to eventually doing this for all non-DWARF debuginfo-like
formats GCC supports. It also uses a different flag to enable
CTF emission (-gctf rather than -gt).
Adjust the testsuite accordingly.
Given that the ld testsuite results are dependent on type ordering,
which we do not guarantee at all, it's amazing how little changes. We
see a few type ordering differences, slices change because the old GCC
was buggy (slices were emitted "backwards", from the wrong end of the
machine word) and its expected results were wrong, and GCC now emits the
underlying integral type for enumerated types, though CTF has no way to
record this yet (coming in v4).
GCC also now emits even hidden symbols into the symtab (and thus
symtypetab), so one symtypetab test changes its expected results
slightly to compensate.
Also add tests for the CTF_K_UNKNOWN nonrepresentable type: this
couldn't be done before now since the only GCC that emits CTF_K_UNKNOWN
for nonrepresentable types is the new one.
ld/ChangeLog
2021-05-06 Nick Alcock <nick.alcock@oracle.com>
* testsuite/ld-ctf/ctf.exp: Use -gctf, not -gt.
* testsuite/lib/ld-lib.exp: Likewise.
* testsuite/ld-ctf/nonrepresentable-1.c: New test for nonrepresentable types.
* testsuite/ld-ctf/nonrepresentable-2.c: Likewise.
* testsuite/ld-ctf/nonrepresentable.d: Likewise.
* testsuite/ld-ctf/array.d: Larger type section.
* testsuite/ld-ctf/data-func-conflicted.d: Likewise.
* testsuite/ld-ctf/enums.d: Likewise.
* testsuite/ld-ctf/conflicting-enums.d: Don't compare types.
* testsuite/ld-ctf/cross-tu-cyclic-conflicting.d: Changed type order.
* testsuite/ld-ctf/cross-tu-noncyclic.d: Likewise.
* testsuite/ld-ctf/slice.d: Adjust for improved slice emission.
libctf/ChangeLog
2021-05-06 Nick Alcock <nick.alcock@oracle.com>
* testsuite/lib/ctf-lib.exp: Use -gctf, not -gt.
* testsuite/libctf-regression/nonstatic-var-section-ld-r.lk:
Hidden symbols now get into the symtypetab anyway.
|
|
Before now, types that could not be encoded in CTF were represented as
references to type ID 0, which does not itself appear in the
dictionary. This choice is annoying in several ways, principally that it
forces generators and consumers of CTF to grow special cases for types
that are referenced in valid dicts but don't appear.
Allow an alternative representation (which will become the only
representation in format v4) whereby nonrepresentable types are encoded
as actual types with kind CTF_K_UNKNOWN (an already-existing kind with
value 0, theoretically used for padding but never used in practice).
This is backward-compatible, because CTF_K_UNKNOWN was not used anywhere
before now: it was used in old-format function symtypetabs, but these
were never emitted by any compiler and the code to handle them in libctf
likely never worked and was removed last year, in favour of new-format
symtypetabs that contain only type IDs, not type kinds.
In order to link this type, we need an API addition to let us add types
of unknown kind to the dict: we let them optionally have names so that
GCC can emit many different unknown types and those types with identical
names will be deduplicated together. There are also small tweaks to the
deduplicator to actually dedup such types, to let opening of dicts with
unknown types with names work, to return the ECTF_NONREPRESENTABLE error
on resolution of such types (like ID 0), and to print their names as
something useful but not a valid C identifier, mostly for the sake of
the dumper.
Tests added in the next commit.
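A hedged usage sketch of the new call (the wrapper and its use are
invented; only ctf_add_unknown itself is the new API):

  #include <ctf-api.h>

  /* Illustration only: a CTF generator that meets a type it cannot
     express emits a named type of kind CTF_K_UNKNOWN instead of a
     reference to type ID 0.  Identically-named unknowns deduplicate
     together at link time; the name may also be NULL.  */
  static ctf_id_t
  emit_nonrepresentable (ctf_dict_t *fp, const char *name)
  {
    return ctf_add_unknown (fp, CTF_ADD_ROOT, name);
  }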
include/ChangeLog
2021-05-06 Nick Alcock <nick.alcock@oracle.com>
* ctf.h (CTF_K_UNKNOWN): Document that it can be used for
nonrepresentable types, not just padding.
* ctf-api.h (ctf_add_unknown): New.
libctf/ChangeLog
2021-05-06 Nick Alcock <nick.alcock@oracle.com>
* ctf-open.c (init_types): Unknown types may have names.
* ctf-types.c (ctf_type_resolve): CTF_K_UNKNOWN is as
non-representable as type ID 0.
(ctf_type_aname): Print unknown types.
* ctf-dedup.c (ctf_dedup_hash_type): Do not early-exit for
CTF_K_UNKNOWN types: they have real hash values now.
(ctf_dedup_rwalk_one_output_mapping): Treat CTF_K_UNKNOWN types
like other types with no referents: call the callback and do not
skip them.
(ctf_dedup_emit_type): Emit via...
* ctf-create.c (ctf_add_unknown): ... this new function.
* libctf.ver (LIBCTF_1.2): Add it.
|
|
The address sanitizer contains a redirector that captures dlopen calls,
so checks for dlopen with AC_SEARCH_LIBS will always conclude that
dlopen is present when the sanitizer is on. This means it won't add
-ldl to LIBS even if needed, and the immediately-following attempt to
actually link with -lbfd will fail because libbfd also needs dlsym,
which ASAN does *not* contain a redirector for.
If we check for dlsym instead of dlopen, the check works whether ASAN is
on or off. (bfd uses both in close proximity: if it needs one, it will
always need the other.)
libctf/ChangeLog
2021-03-25 Nick Alcock <nick.alcock@oracle.com>
* configure.ac: Check for dlsym, not dlopen.
* configure: Regenerate.
|
|
Harmless, but causes noise that makes it harder to spot other leaks.
libctf/ChangeLog
2021-03-25 Nick Alcock <nick.alcock@oracle.com>
* testsuite/libctf-writable/symtypetab-nonlinker-writeout.c: Don't
leak buf.
|
|
isqualifier, which is used by ctf_lookup_by_name to figure out if a
given word in a type name is a qualifier, takes the address of a
possibly out-of-bounds location before checking its bounds.
In any reasonable compiler this will just lead to a harmless address
computation that is then discarded if out-of-bounds, but it's still
undefined behaviour and the sanitizer rightly complains.
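The pattern being fixed, as a hedged sketch over a hypothetical table
(the real code indexes qhash in isqualifier):

  #include <stddef.h>

  /* Illustration only: check the index against the bounds *before*
     forming any address from it, rather than computing &table[idx]
     first and testing afterwards.  */
  static int
  in_table (const unsigned char *table, size_t table_len, size_t idx)
  {
    if (idx >= table_len)
      return 0;
    return table[idx] != 0;
  }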
libctf/ChangeLog
2021-03-25 Nick Alcock <nick.alcock@oracle.com>
PR libctf/27628
* ctf-lookup.c (isqualifier): Don't dereference out-of-bounds
qhash values.
|
|
This makes it possible to use LIBCTF_DEBUG to debug things that happen
before the ctf_bfdopen_internal call that ctf_bfdopen_ctfsect eventually
thunks down to (symtab/strtab lookup, archive opening, etc).
This is not important for ctf_open callers, since ctf_fdopen already
calls libctf_init_debug, but ctf_bfdopen_ctfsect is a public entry point
that can be called directly (e.g. objdump and readelf both do so).
libctf/ChangeLog
2021-03-25 Nick Alcock <nick.alcock@oracle.com>
* ctf-open-bfd.c (ctf_bfdopen_ctfsect): Initialize debugging.
|
|
Every place that accesses a function's dtd_vlen accesses it only if the
number of args is nonzero, except the serializer, which always tries to
memcpy it. The number of bytes it memcpys in this case is zero, but it
is still undefined behaviour to copy zero bytes from a null pointer.
So check for this case explicitly.
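The guard is the usual one; a hedged sketch (not the actual
ctf_emit_type_sect code):

  #include <stddef.h>
  #include <string.h>

  /* Illustration only: memcpy from a null pointer is undefined
     behaviour even when the length is zero, so copy only when there is
     really something to copy.  */
  static void
  copy_vlen (void *dst, const void *vlen, size_t nbytes)
  {
    if (nbytes > 0 && vlen != NULL)
      memcpy (dst, vlen, nbytes);
  }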
libctf/ChangeLog
2021-03-25 Nick Alcock <nick.alcock@oracle.com>
PR libctf/27628
* ctf-serialize.c (ctf_emit_type_sect): Allow for a NULL vlen in
CTF_K_FUNCTION types.
|
|
When we dump normal types, we emit their size and/or alignment:
but size and alignment dumping can return errors if the type is
part of a chain that terminates in a forward.
Emitting 0xffffffff as a size or alignment is unhelpful, so simply
skip emitting this info for any type for which size or alignment
checks return an error, no matter what the error is.
libctf/ChangeLog
2021-03-25 Nick Alcock <nick.alcock@oracle.com>
* ctf-dump.c (ctf_dump_format_type): Don't emit size or alignment
on error.
|
|
bfd/
* bfd-in.h (startswith): New inline.
(CONST_STRNEQ): Use startswith.
* bfd-in2.h: Regenerate.
gdbsupport/
* common-utils.h (startswith): Delete version now supplied by bfd.h.
libctf/
* ctf-impl.h: Include string.h.
|