author     David Malcolm <dmalcolm@redhat.com>   2018-07-20 15:37:23 +0000
committer  David Malcolm <dmalcolm@gcc.gnu.org>  2018-07-20 15:37:23 +0000
commit     4a4412b9de23b153481a60cd69abcee71c3e34fb (patch)
tree       bfca81ed27910bef95adb2304ad0b3e69af27dd2 /gcc/doc/invoke.texi
parent     bf0086f1c8ff9998fb55a27f6606bccbba873e09 (diff)
Add "-fsave-optimization-record"
This patch implements a -fsave-optimization-record option, which
writes out a JSON file recording the dump_* calls made via the
optinfo infrastructure.
The patch includes a minimal version of the JSON patch I posted last
year, with just enough support needed for optimization records (I
removed all of the parser code, leaving just the code for building
in-memory JSON trees and writing them to a pretty_printer).
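
To make the shape of that minimal JSON support concrete, here is a small
standalone C++ sketch of building an in-memory JSON tree and printing it.
It is an illustrative analogue only: the json_object/json_array/json_number
names, the std::ostream output, and the example field names are assumptions
for this sketch, not the patch's actual json.h interface, which prints
through GCC's pretty_printer.

// Standalone sketch only: a minimal in-memory JSON tree that can print
// itself.  Class names, std::ostream output and the example fields are
// assumptions for illustration, not the patch's own API.
#include <cstddef>
#include <iostream>
#include <memory>
#include <string>
#include <utility>
#include <vector>

struct json_value
{
  virtual ~json_value () = default;
  virtual void print (std::ostream &out) const = 0;
};

struct json_string : json_value
{
  explicit json_string (std::string s) : value (std::move (s)) {}
  // No escaping: enough for a sketch, not for arbitrary strings.
  void print (std::ostream &out) const override { out << '"' << value << '"'; }
  std::string value;
};

struct json_number : json_value
{
  explicit json_number (double v) : value (v) {}
  void print (std::ostream &out) const override { out << value; }
  double value;
};

struct json_array : json_value
{
  void append (std::unique_ptr<json_value> v)
  { elements.push_back (std::move (v)); }
  void print (std::ostream &out) const override
  {
    out << '[';
    for (std::size_t i = 0; i < elements.size (); ++i)
      {
        if (i)
          out << ", ";
        elements[i]->print (out);
      }
    out << ']';
  }
  std::vector<std::unique_ptr<json_value>> elements;
};

struct json_object : json_value
{
  void set (std::string key, std::unique_ptr<json_value> v)
  { members.emplace_back (std::move (key), std::move (v)); }
  void print (std::ostream &out) const override
  {
    out << '{';
    for (std::size_t i = 0; i < members.size (); ++i)
      {
        if (i)
          out << ", ";
        out << '"' << members[i].first << "\": ";
        members[i].second->print (out);
      }
    out << '}';
  }
  std::vector<std::pair<std::string, std::unique_ptr<json_value>>> members;
};

int main ()
{
  // Build one record loosely shaped like the per-message data described
  // in the documentation hunk below (pass, execution count, location).
  auto record = std::make_unique<json_object> ();
  record->set ("pass", std::make_unique<json_string> ("vect"));
  record->set ("count", std::make_unique<json_number> (1024));
  auto loc = std::make_unique<json_object> ();
  loc->set ("file", std::make_unique<json_string> ("foo.c"));
  loc->set ("line", std::make_unique<json_number> (42));
  record->set ("location", std::move (loc));

  auto records = std::make_unique<json_array> ();
  records->append (std::move (record));
  records->print (std::cout);   // prints a one-element JSON array
  std::cout << '\n';
}

Separating tree construction from output this way presumably lets
optinfo-emit-json.cc accumulate records as optinfo::emit fires and write
them out when optimization_records_finish is called at the end of
compilation (per the ChangeLog below).
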
gcc/ChangeLog:
* Makefile.in (OBJS): Add json.o and optinfo-emit-json.o.
(CFLAGS-optinfo-emit-json.o): Define TARGET_NAME.
* common.opt (fsave-optimization-record): New option.
* coretypes.h (struct kv_pair): Move here from dumpfile.c.
* doc/invoke.texi (-fsave-optimization-record): New option.
* dumpfile.c: Include "optinfo-emit-json.h".
(struct kv_pair): Move to coretypes.h.
(optgroup_options): Make non-static.
(dump_context::end_scope): Call
optimization_records_maybe_pop_dump_scope.
* dumpfile.h (optgroup_options): New decl.
* json.cc: New file.
* json.h: New file.
* optinfo-emit-json.cc: New file.
* optinfo-emit-json.h: New file.
* optinfo.cc: Include "optinfo-emit-json.h".
(optinfo::emit): Call optimization_records_maybe_record_optinfo.
(optinfo_enabled_p): Check optimization_records_enabled_p.
(optinfo_wants_inlining_info_p): Likewise.
* optinfo.h: Update comment.
* profile-count.c (profile_quality_as_string): New function.
* profile-count.h (profile_quality_as_string): New decl.
(profile_count::quality): New accessor.
* selftest-run-tests.c (selftest::run_tests): Call json_cc_tests
and optinfo_emit_json_cc_tests.
* selftest.h (selftest::json_cc_tests): New decl.
(selftest::optinfo_emit_json_cc_tests): New decl.
* toplev.c: Include "optinfo-emit-json.h".
(compile_file): Call optimization_records_finish.
(do_compile): Call optimization_records_start.
* tree-ssa-live.c: Include optinfo.h.
(remove_unused_scope_block_p): Retain inlining information if
optinfo_wants_inlining_info_p returns true.
From-SVN: r262905
Diffstat (limited to 'gcc/doc/invoke.texi')
-rw-r--r--  gcc/doc/invoke.texi  |  48
1 file changed, 47 insertions(+), 1 deletion(-)
diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 485f599..9d75edb 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -419,7 +419,8 @@ Objective-C and Objective-C++ Dialects}.
 -freorder-blocks-algorithm=@var{algorithm} @gol
 -freorder-blocks-and-partition -freorder-functions @gol
 -frerun-cse-after-loop -freschedule-modulo-scheduled-loops @gol
--frounding-math -fsched2-use-superblocks -fsched-pressure @gol
+-frounding-math -fsave-optimization-record @gol
+-fsched2-use-superblocks -fsched-pressure @gol
 -fsched-spec-load -fsched-spec-load-dangerous @gol
 -fsched-stalled-insns-dep[=@var{n}] -fsched-stalled-insns[=@var{n}] @gol
 -fsched-group-heuristic -fsched-critical-path-heuristic @gol
@@ -13969,6 +13970,51 @@ the first option takes effect and the subsequent options are ignored. Thus
 only @file{vec.miss} is produced which contains dumps from the vectorizer
 about missed opportunities.
 
+@item -fsave-optimization-record
+@opindex fsave-optimization-record
+Write a SRCFILE.opt-record.json file detailing what optimizations
+were performed, for those optimizations that support @option{-fopt-info}.
+
+This option is experimental and the format of the data within the JSON
+file is subject to change.
+
+It is roughly equivalent to a machine-readable version of
+@option{-fopt-info-all}, as a collection of messages with source file,
+line number and column number, with the following additional data for
+each message:
+
+@itemize @bullet
+
+@item
+the execution count of the code being optimized, along with metadata about
+whether this was from actual profile data, or just an estimate, allowing
+consumers to prioritize messages by code hotness,
+
+@item
+the function name of the code being optimized, where applicable,
+
+@item
+the ``inlining chain'' for the code being optimized, so that when
+a function is inlined into several different places (which might
+themselves be inlined), the reader can distinguish between the copies,
+
+@item
+objects identifying those parts of the message that refer to expressions,
+statements or symbol-table nodes, which of these categories they are, and,
+when available, their source code location,
+
+@item
+the GCC pass that emitted the message, and
+
+@item
+the location in GCC's own code from which the message was emitted
+
+@end itemize
+
+Additionally, some messages are logically nested within other
+messages, reflecting implementation details of the optimization
+passes.
+
 @item -fsched-verbose=@var{n}
 @opindex fsched-verbose
 On targets that use instruction scheduling, this option controls the
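
The documentation hunk above says the execution counts are there so that
consumers can prioritize messages by code hotness. As a hedged sketch of
such a consumer (not part of the patch), the following program sorts the
records of a SRCFILE.opt-record.json by a numeric count and prints the
hottest first. It assumes the third-party nlohmann/json library for
parsing, and the "count", "pass" and "function" field names plus the
top-level-array layout are illustrative guesses, since the format is
experimental and subject to change.

// Hypothetical consumer of SRCFILE.opt-record.json: rank messages by
// execution count.  Field names and layout are assumptions; parsing is
// done with the third-party nlohmann/json library.
#include <algorithm>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>
#include <nlohmann/json.hpp>

using json = nlohmann::json;

int main (int argc, char **argv)
{
  if (argc != 2)
    {
      std::cerr << "usage: rank-opt-records FILE.opt-record.json\n";
      return 1;
    }

  std::ifstream in (argv[1]);
  json records = json::parse (in);   // assumed: top-level array of records

  std::vector<json> messages (records.begin (), records.end ());

  // Hotter code first, using the (assumed) numeric execution count.
  std::sort (messages.begin (), messages.end (),
             [] (const json &a, const json &b)
             { return a.value ("count", 0.0) > b.value ("count", 0.0); });

  for (const json &m : messages)
    std::cout << m.value ("count", 0.0) << '\t'
              << m.value ("pass", std::string ("?")) << '\t'
              << m.value ("function", std::string ("?")) << '\n';
  return 0;
}

A real consumer would also want to follow the nesting and inlining-chain
information described above rather than treating the records as a flat list.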