path: benchtests/scripts

Commit history, newest first. Each entry shows the date, commit subject, author, and the files/lines changed (-deleted/+added).
2016-01-04  Update copyright dates with scripts/update-copyrights.
            Joseph Myers (4 files, -4/+4 lines)
2015-11-17  benchtests: Mark output variables as used
            Siddhesh Poyarekar (1 file, -1/+1 lines)
Prevent function calls that do not return a value from being optimized out by the compiler, by marking their input variables as used. This keeps the sincos call from being optimized out of the benchmark.
2015-06-01  benchtest: script to compare two benchmarks
            Siddhesh Poyarekar (2 files, -0/+280 lines)
This script is a sample implementation that uses import_bench to construct two benchmark objects and compare them. If detailed timing information is available (when one does `make DETAILED=1 bench`), it writes out graphs for all functions it benchmarks and prints significant differences in timings of the two benchmark runs. If detailed timing information is not available, it points out significant differences in aggregate times.

Call this script as follows:

    compare_bench.py schema_file.json bench1.out bench2.out

Alternatively, to set a different threshold for warnings (the default is a 10% difference):

    compare_bench.py schema_file.json bench1.out bench2.out 25

The threshold in the example above is 25%. schema_file.json is the JSON schema ($srcdir/benchtests/scripts/benchout.schema.json for the benchmark output file), and bench1.out and bench2.out are the two benchmark output files to compare.

The key functionality here is the compress_timings function, which groups points that are close to each other into a single point that is the mean of all its representative points. Any point in such a group is at most 1.5x the smallest point in that group; the detailed derivation is in a comment in the function, and a minimal sketch of the idea appears after this entry.

    * benchtests/scripts/compare_bench.py: New file.
    * benchtests/scripts/import_bench.py (mean): New function.
    (split_list): Likewise.
    (do_for_all_timings): Likewise.
    (compress_timings): Likewise.
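A minimal sketch of that grouping idea in Python (names and details are illustrative, not the actual import_bench code):

    def compress_timings(points):
        """Group sorted points so that every member of a group is at
        most 1.5x the smallest member, then replace each group by the
        mean of its members."""
        groups = []
        for p in sorted(points):
            # Open a new group when p strays beyond 1.5x the group
            # minimum; the input is sorted, so groups[-1][0] is that
            # minimum.
            if not groups or p > 1.5 * groups[-1][0]:
                groups.append([p])
            else:
                groups[-1].append(p)
        return [sum(g) / len(g) for g in groups]

For example, [1.0, 1.2, 1.4, 5.0, 5.5] compresses to the two points 1.2 and 5.25.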
2015-06-01  New module to import and process benchmark output
            Siddhesh Poyarekar (2 files, -25/+71 lines)
This is the beginning of a module to import and process benchmark outputs. The module currently supports importing a bench.out file and validating it against a schema file (a hedged sketch of that flow follows this entry). In the future this could grow a set of routines that benchmark consumers may find useful for building their own analysis tools. I have also altered validate_benchout.py to use this module.

    * benchtests/scripts/import_bench.py: New file.
    * benchtests/scripts/validate_benchout.py: Import import_bench
    instead of jsonschema.
    (validate_bench): Remove function.
    (main): Use import_bench.
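In essence, importing boils down to loading the JSON and validating it against the schema. A sketch under those assumptions (the function name and signature are illustrative, not the module's actual interface):

    import json

    import jsonschema

    def parse_bench(filename, schema_filename):
        """Load a bench.out file, validating it against the schema."""
        with open(schema_filename, 'r') as schemafile:
            schema = json.load(schemafile)
        with open(filename, 'r') as benchfile:
            bench = json.load(benchfile)
        # Raises jsonschema.ValidationError if the file does not conform.
        jsonschema.validate(bench, schema)
        return bench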
2015-01-02  Update copyright dates with scripts/update-copyrights.
            Joseph Myers (2 files, -2/+2 lines)
2014-06-11  Validate bench.out against a JSON schema
            Siddhesh Poyarekar (2 files, -0/+127 lines)
This patch adds a JSON schema for the benchmark output file and also adds a script that validates the generated output against the schema.
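The description suggests an invocation along these lines (the argument order is an assumption for illustration, not taken from the script itself):

    $ python benchtests/scripts/validate_benchout.py bench.out \
          benchtests/scripts/benchout.schema.json

On success the script would exit quietly; a malformed bench.out would be reported as a schema validation error.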
2014-05-26  benchtests: Add new directive for benchmark initialization hook
            Siddhesh Poyarekar (1 file, -1/+6 lines)
Add a new 'init' directive that names a function to be called for benchmark-specific initialization. This is useful for benchmarks that need one-time initialization before the benchmarked functions are executed.
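A hedged sketch of an input file using the new directive; the `## key: value` form follows the existing benchtests directives, and the function name here is hypothetical:

    ## includes: math.h
    ## init: my_bench_init
    ## args: double
    0.75

Here my_bench_init would be called once before the benchmark starts timing the inputs.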
2014-03-29  Detailed benchmark outputs for functions
            Siddhesh Poyarekar (1 file, -1/+5 lines)
This patch adds an option to get detailed benchmark output for functions. Invoking the benchmark with 'make DETAILED=1 bench' causes each benchmark program to store a mean execution time for each input it works on. This gives a more comprehensive picture of a function's performance than a single aggregate mean.
2014-03-29  Make bench.out in json format
            Siddhesh Poyarekar (1 file, -1/+1 lines)
This patch changes the output format of the main benchmark output file (bench.out) to an extensible format. I chose JSON over XML because, in addition to being extensible, it is not too verbose, and it has good support in Python.

The significant functional change is that timing information is now stored as a JSON attribute instead of a string; to do that, a separate program prints out a JSON snippet naming the type of timing (hp_timing or clock_gettime). The mean timing has also changed from iterations per unit of time to actual time per iteration.
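An illustrative sketch of what the new structure might look like, written as a Python literal (the field names are assumptions based on the description above, not the authoritative schema):

    import json

    # Hypothetical bench.out contents after this change.
    sample = {
        "timing_type": "hp_timing",   # emitted by the helper program
        "functions": {
            "sin": {
                "duration": 1.5e9,    # total time spent benchmarking
                "iterations": 1.0e7,  # iterations executed
                "mean": 150.0,        # time per iteration
            },
        },
    }
    print(json.dumps(sample, indent=2))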
2014-03-24  benchtests: Move bench.py to benchtests/scripts/
            Siddhesh Poyarekar (1 file, -0/+299 lines)
It makes much more sense to have all benchmarking-related scripts in a single place away from everything else.