63 files changed, 1199 insertions, 585 deletions
diff --git a/docs/markdown/Contributing.md b/docs/markdown/Contributing.md index 2881837..f8f1824 100644 --- a/docs/markdown/Contributing.md +++ b/docs/markdown/Contributing.md @@ -291,3 +291,16 @@ Environment variables are like global variables, except that they are also hidden by default. Envvars should be avoided whenever possible, all functionality should be exposed in better ways such as command line switches. + +## Random design points that fit nowhere else + +- All features should follow the 90/9/1 rule. 90% of all use cases + should be easy, 9% should be possible and it is totally fine to not + support the final 1% if it would make things too complicated. + +- Any build directory will have at most two toolchains: one native and + one cross. + +- Prefer specific solutions to generic frameworks. Solve the end + user's problems rather than providing them tools to do it + themselves. diff --git a/docs/markdown/Dlang-module.md b/docs/markdown/Dlang-module.md index ca9a381..677359d 100644 --- a/docs/markdown/Dlang-module.md +++ b/docs/markdown/Dlang-module.md @@ -7,7 +7,7 @@ This module provides tools related to the D programming language. To use this module, just do: **`dlang = import('dlang')`**. You can, of course, replace the name `dlang` with anything else. -The module only exposes one fucntion, `generate_dub_file`, used to +The module only exposes one function, `generate_dub_file`, used to automatically generate Dub configuration files. ### generate_dub_file() @@ -40,4 +40,4 @@ initial one. The module will only update the values specified in `generate_dub_file()`. Although not required, you will need to have a `description` and -`license` if you want to publish the package in the [D package registry](https://code.dlang.org/).
\ No newline at end of file +`license` if you want to publish the package in the [D package registry](https://code.dlang.org/). diff --git a/docs/markdown/FAQ.md b/docs/markdown/FAQ.md index 0208c1a..139e192 100644 --- a/docs/markdown/FAQ.md +++ b/docs/markdown/FAQ.md @@ -365,3 +365,37 @@ compiler. - If the compiler is freely available, consider adding it to the CI system. + +## Why does building my project with MSVC output static libraries called `libfoo.a`? + +The naming convention for static libraries on Windows is usually `foo.lib`. +Unfortunately, import libraries are also called `foo.lib`. + +This causes filename collisions with the default library type where we build +both shared and static libraries, and also causes collisions during +installation since all libraries are installed to the same directory by default. + +To resolve this, we decided to default to creating static libraries of the form +`libfoo.a` when building with MSVC. This has the following advantages: + +1. Filename collisions are completely avoided. +1. The format for MSVC static libraries is `ar`, which is the same as the GNU + static library format, so using this extension is semantically correct. +1. The static library filename format is now the same on all platforms and with + all toolchains. +1. Both Clang and GNU compilers can search for `libfoo.a` when specifying + a library as `-lfoo`. This does not work for alternative naming schemes for + static libraries such as `libfoo.lib`. +1. Since `-lfoo` works out of the box, pkgconfig files will work correctly for + projects built with MSVC, GCC, and Clang on Windows. +1. MSVC does not have arguments to search for library filenames, and [it does + not care what the extension is](https://docs.microsoft.com/en-us/cpp/build/reference/link-input-files?view=vs-2019), + so specifying `libfoo.a` instead of `foo.lib` does not change the workflow, + and is an improvement since it's less ambiguous.
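The collision described above can be sketched in a few lines of Python (illustrative only; the helper below is made up for this example and is not part of Meson):

```python
def static_lib_filename(name, unified=True):
    """Illustrative helper (not part of Meson): compute the static
    library filename for a target called `name`."""
    if unified:
        # Meson's default with MSVC: same name GCC/Clang would produce,
        # and it never collides with the import library foo.lib.
        return 'lib' + name + '.a'
    # Traditional MSVC convention: collides with the import library
    # that a shared 'foo' target also produces.
    return name + '.lib'

import_lib = 'foo.lib'  # import library emitted for a shared library 'foo'
assert static_lib_filename('foo') == 'libfoo.a'                 # no collision
assert static_lib_filename('foo', unified=False) == import_lib  # collision!
```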
+ +If, for some reason, you really need your project to output static libraries of +the form `foo.lib` when building with MSVC, you can set the +[`name_prefix:`](https://mesonbuild.com/Reference-manual.html#library) +kwarg to `''` and the [`name_suffix:`](https://mesonbuild.com/Reference-manual.html#library) +kwarg to `'lib'`. To get the default behaviour for each, you can either not +specify the kwarg, or pass `[]` (an empty array) to it. diff --git a/docs/markdown/Reference-manual.md b/docs/markdown/Reference-manual.md index 29da7c6..477790b 100644 --- a/docs/markdown/Reference-manual.md +++ b/docs/markdown/Reference-manual.md @@ -714,6 +714,9 @@ following: - `arguments` a list of template strings that will be the command line arguments passed to the executable +- `depends` is an array of build targets that must be built before this + generator can be run. This is used if you have a generator that calls + a second executable that is built in this project. Available since 0.51.0 - `depfile` is a template string pointing to a dependency file that a generator can write listing all the additional files this target depends on, for example a C compiler would list all the header files diff --git a/docs/markdown/Reference-tables.md b/docs/markdown/Reference-tables.md index d3a6815..682e508 100644 --- a/docs/markdown/Reference-tables.md +++ b/docs/markdown/Reference-tables.md @@ -5,29 +5,30 @@ These are return values of the `get_id` (Compiler family) and `get_argument_syntax` (Argument syntax) method in a compiler object. 
-| Value | Compiler family | Argument syntax | -| ----- | --------------- | --------------- | -| arm | ARM compiler | | -| armclang | ARMCLANG compiler | | -| ccrx | Renesas RX Family C/C++ compiler | | -| clang | The Clang compiler | gcc | -| clang-cl | The Clang compiler (MSVC compatible driver) | msvc | -| dmd | D lang reference compiler | | -| flang | Flang Fortran compiler | | -| g95 | The G95 Fortran compiler | | -| gcc | The GNU Compiler Collection | gcc | -| intel | Intel compiler | msvc on windows, otherwise gcc | -| lcc | Elbrus C/C++/Fortran Compiler | | -| llvm | LLVM-based compiler (Swift, D) | | -| mono | Xamarin C# compiler | | -| msvc | Microsoft Visual Studio | msvc | -| nagfor | The NAG Fortran compiler | | -| open64 | The Open64 Fortran Compiler | | -| pathscale | The Pathscale Fortran compiler | | -| pgi | Portland PGI C/C++/Fortran compilers | | -| rustc | Rust compiler | | -| sun | Sun Fortran compiler | | -| valac | Vala compiler | | +| Value | Compiler family | Argument syntax | +| ----- | --------------- | --------------- | +| arm | ARM compiler | | +| armclang | ARMCLANG compiler | | +| ccrx | Renesas RX Family C/C++ compiler | | +| clang | The Clang compiler | gcc | +| clang-cl | The Clang compiler (MSVC compatible driver) | msvc | +| dmd | D lang reference compiler | | +| flang | Flang Fortran compiler | | +| g95 | The G95 Fortran compiler | | +| gcc | The GNU Compiler Collection | gcc | +| intel | Intel compiler (Linux and Mac) | gcc | +| intel-cl | Intel compiler (Windows) | msvc | +| lcc | Elbrus C/C++/Fortran Compiler | | +| llvm | LLVM-based compiler (Swift, D) | | +| mono | Xamarin C# compiler | | +| msvc | Microsoft Visual Studio | msvc | +| nagfor | The NAG Fortran compiler | | +| open64 | The Open64 Fortran Compiler | | +| pathscale | The Pathscale Fortran compiler | | +| pgi | Portland PGI C/C++/Fortran compilers | | +| rustc | Rust compiler | | +| sun | Sun Fortran compiler | | +| valac | Vala compiler | | ## Script 
environment variables diff --git a/docs/markdown/snippets/gendeps.md b/docs/markdown/snippets/gendeps.md new file mode 100644 index 0000000..e724994 --- /dev/null +++ b/docs/markdown/snippets/gendeps.md @@ -0,0 +1,16 @@ +## Generators have a new `depends` keyword argument + +Generators can now specify extra dependencies with the `depends` +keyword argument. It matches the behaviour of the same argument in +other functions and specifies that the given targets must be built +before the generator can be run. This is useful in cases where you +need to tell a generator to indirectly invoke a +different program. + +```meson +exe = executable(...) +cg = generator(program_runner, +    output: ['@BASENAME@.c'], +    arguments: ['--use-tool=' + exe.full_path(), '@INPUT@', '@OUTPUT@'], +    depends: exe) +``` diff --git a/docs/markdown/snippets/intel-cl.md b/docs/markdown/snippets/intel-cl.md new file mode 100644 index 0000000..d866e70 --- /dev/null +++ b/docs/markdown/snippets/intel-cl.md @@ -0,0 +1,13 @@ +## Support for the Intel Compiler on Windows (ICL) + +Support has been added for ICL.EXE and ifort on Windows. The support should be +on par with ICC support on Linux/macOS. The ICL C/C++ compiler behaves like +Microsoft's CL.EXE rather than GCC/Clang like ICC does, and has a different id, +`intel-cl`, to differentiate it. 
+ +```meson +cc = meson.get_compiler('c') +if cc.get_id() == 'intel-cl' + add_project_arguments('/Qfoobar:yes', language : 'c') +endif +``` diff --git a/mesonbuild/ast/interpreter.py b/mesonbuild/ast/interpreter.py index eb9cb9f..5354710 100644 --- a/mesonbuild/ast/interpreter.py +++ b/mesonbuild/ast/interpreter.py @@ -260,6 +260,12 @@ class AstInterpreter(interpreterbase.InterpreterBase): id_loop_detect = [] flattend_args = [] + if isinstance(args, BaseNode): + assert(hasattr(args, 'ast_id')) + if args.ast_id in id_loop_detect: + return [] # Loop detected + id_loop_detect += [args.ast_id] + if isinstance(args, ArrayNode): args = [x for x in args.args.arguments] @@ -301,8 +307,8 @@ class AstInterpreter(interpreterbase.InterpreterBase): # Resolve the contents of args for i in args: - if isinstance(i, IdNode) and i.value not in id_loop_detect: - flattend_args += self.flatten_args(quick_resolve(i), include_unknown_args, id_loop_detect + [i.value]) + if isinstance(i, IdNode): + flattend_args += self.flatten_args(quick_resolve(i), include_unknown_args, id_loop_detect) elif isinstance(i, (ArrayNode, ArgumentNode, ArithmeticNode, MethodNode)): flattend_args += self.flatten_args(i, include_unknown_args, id_loop_detect) elif isinstance(i, mparser.ElementaryNode): diff --git a/mesonbuild/ast/introspection.py b/mesonbuild/ast/introspection.py index b6ec450..5ac6133 100644 --- a/mesonbuild/ast/introspection.py +++ b/mesonbuild/ast/introspection.py @@ -118,7 +118,7 @@ class IntrospectionInterpreter(AstInterpreter): subproject_dir_abs = os.path.join(self.environment.get_source_dir(), self.subproject_dir) subpr = os.path.join(subproject_dir_abs, dirname) try: - subi = IntrospectionInterpreter(subpr, '', self.backend, cross_file=self.cross_file, subproject=dirname, subproject_dir=self.subproject_dir, env=self.environment) + subi = IntrospectionInterpreter(subpr, '', self.backend, cross_file=self.cross_file, subproject=dirname, subproject_dir=self.subproject_dir, env=self.environment, 
visitors=self.visitors) subi.analyze() subi.project_data['name'] = dirname self.project_data['subprojects'] += [subi.project_data] diff --git a/mesonbuild/backend/backends.py b/mesonbuild/backend/backends.py index 0565de3..d10e1e9 100644 --- a/mesonbuild/backend/backends.py +++ b/mesonbuild/backend/backends.py @@ -27,6 +27,7 @@ from ..compilers import CompilerArgs, VisualStudioLikeCompiler from collections import OrderedDict import shlex from functools import lru_cache +import typing class CleanTrees: @@ -83,8 +84,12 @@ class ExecutableSerialisation: self.capture = capture class TestSerialisation: - def __init__(self, name, project, suite, fname, is_cross_built, exe_wrapper, is_parallel, - cmd_args, env, should_fail, timeout, workdir, extra_paths, protocol): + def __init__(self, name: str, project: str, suite: str, fname: typing.List[str], + is_cross_built: bool, exe_wrapper: typing.Optional[build.Executable], + is_parallel: bool, cmd_args: typing.List[str], + env: build.EnvironmentVariables, should_fail: bool, + timeout: typing.Optional[int], workdir: typing.Optional[str], + extra_paths: typing.List[str], protocol: str): self.name = name self.project_name = project self.suite = suite @@ -103,8 +108,7 @@ class TestSerialisation: self.protocol = protocol class OptionProxy: - def __init__(self, name, value): - self.name = name + def __init__(self, value): self.value = value class OptionOverrideProxy: @@ -122,7 +126,7 @@ class OptionOverrideProxy: def _get_override(self, option_name, base_opt): if option_name in self.overrides: - return OptionProxy(base_opt.name, base_opt.validate_value(self.overrides[option_name])) + return OptionProxy(base_opt.validate_value(self.overrides[option_name])) return base_opt def get_backend_from_name(backend, build): diff --git a/mesonbuild/backend/ninjabackend.py b/mesonbuild/backend/ninjabackend.py index e0a6440..591b2f4 100644 --- a/mesonbuild/backend/ninjabackend.py +++ b/mesonbuild/backend/ninjabackend.py @@ -219,8 +219,12 @@ class 
NinjaBackend(backends.Backend): def detect_vs_dep_prefix(self, tempfilename): '''VS writes its dependency in a locale dependent format. Detect the search prefix to use.''' - for compiler in self.build.compilers.values(): + for compiler in self.environment.coredata.compilers.values(): # Have to detect the dependency format + + # IFort on windows is MSVC like, but doesn't have /showincludes + if isinstance(compiler, FortranCompiler): + continue if isinstance(compiler, VisualStudioLikeCompiler): break else: @@ -310,9 +314,9 @@ int dummy; # http://clang.llvm.org/docs/JSONCompilationDatabase.html def generate_compdb(self): - pch_compilers = ['%s_PCH' % i for i in self.build.compilers] - native_compilers = ['%s_COMPILER' % i for i in self.build.compilers] - cross_compilers = ['%s_CROSS_COMPILER' % i for i in self.build.cross_compilers] + pch_compilers = ['%s_PCH' % i for i in self.environment.coredata.compilers] + native_compilers = ['%s_COMPILER' % i for i in self.environment.coredata.compilers] + cross_compilers = ['%s_CROSS_COMPILER' % i for i in self.environment.coredata.cross_compilers] ninja_compdb = [self.ninja_command, '-t', 'compdb'] + pch_compilers + native_compilers + cross_compilers builddir = self.environment.get_build_dir() try: @@ -1488,7 +1492,7 @@ int dummy; def generate_static_link_rules(self, is_cross): num_pools = self.environment.coredata.backend_options['backend_max_links'].value - if 'java' in self.build.compilers: + if 'java' in self.environment.coredata.compilers: if not is_cross: self.generate_java_link() if is_cross: @@ -1528,11 +1532,8 @@ int dummy; def generate_dynamic_link_rules(self): num_pools = self.environment.coredata.backend_options['backend_max_links'].value - ctypes = [(self.build.compilers, False)] - if self.environment.is_cross_build(): - ctypes.append((self.build.cross_compilers, True)) - else: - ctypes.append((self.build.cross_compilers, True)) + ctypes = [(self.environment.coredata.compilers, False), + 
(self.environment.coredata.cross_compilers, True)] for (complist, is_cross) in ctypes: for langname, compiler in complist.items(): if langname == 'java' \ @@ -1714,13 +1715,13 @@ https://gcc.gnu.org/bugzilla/show_bug.cgi?id=47485''')) depfile=depfile)) def generate_compile_rules(self): - for langname, compiler in self.build.compilers.items(): + for langname, compiler in self.environment.coredata.compilers.items(): if compiler.get_id() == 'clang': self.generate_llvm_ir_compile_rule(compiler, False) self.generate_compile_rule_for(langname, compiler, False) self.generate_pch_rule_for(langname, compiler, False) if self.environment.is_cross_build(): - cclist = self.build.cross_compilers + cclist = self.environment.coredata.cross_compilers for langname, compiler in cclist.items(): if compiler.get_id() == 'clang': self.generate_llvm_ir_compile_rule(compiler, True) @@ -1799,6 +1800,7 @@ https://gcc.gnu.org/bugzilla/show_bug.cgi?id=47485''')) cmd = cmdlist elem = NinjaBuildElement(self.all_outputs, outfiles, rulename, infilename) + elem.add_dep([self.get_target_filename(x) for x in generator.depends]) if generator.depfile is not None: elem.add_item('DEPFILE', depfile) if len(extra_dependencies) > 0: @@ -1818,7 +1820,7 @@ https://gcc.gnu.org/bugzilla/show_bug.cgi?id=47485''')) Find all module and submodule made available in a Fortran code file. """ compiler = None - for lang, c in self.build.compilers.items(): + for lang, c in self.environment.coredata.compilers.items(): if lang == 'fortran': compiler = c break diff --git a/mesonbuild/build.py b/mesonbuild/build.py index 603e0d0..65b2c20 100644 --- a/mesonbuild/build.py +++ b/mesonbuild/build.py @@ -19,6 +19,7 @@ import itertools, pathlib import hashlib import pickle from functools import lru_cache +import typing from . import environment from . import dependencies @@ -107,15 +108,12 @@ class Build: all dependencies and so on. 
""" - def __init__(self, environment): + def __init__(self, environment: environment.Environment): self.project_name = 'name of master project' self.project_version = None self.environment = environment self.projects = {} self.targets = OrderedDict() - # Coredata holds the state. This is just here for convenience. - self.compilers = environment.coredata.compilers - self.cross_compilers = environment.coredata.cross_compilers self.global_args = {} self.projects_args = {} self.global_link_args = {} @@ -140,7 +138,7 @@ class Build: self.dep_manifest_name = None self.dep_manifest = {} self.cross_stdlibs = {} - self.test_setups = {} + self.test_setups = {} # type: typing.Dict[str, TestSetup] self.test_setup_default_name = None self.find_overrides = {} self.searched_programs = set() # The list of all programs that have been searched for. @@ -148,10 +146,6 @@ class Build: def copy(self): other = Build(self.environment) for k, v in self.__dict__.items(): - if k in ['compilers', 'cross_compilers']: - # These alias coredata's fields of the same name, and must not - # become copies. - continue if isinstance(v, (list, dict, set, OrderedDict)): other.__dict__[k] = v.copy() else: @@ -335,7 +329,7 @@ class EnvironmentVariables: return value - def get_env(self, full_env): + def get_env(self, full_env: typing.Dict[str, str]) -> typing.Dict[str, str]: env = full_env.copy() for method, name, values, kwargs in self.envvars: env[name] = method(full_env, name, values, kwargs) @@ -1249,7 +1243,7 @@ You probably should put it in link_with instead.''') ''' linker, _ = self.get_clink_dynamic_linker_and_stdlibs() # Mixing many languages with MSVC is not supported yet so ignore stdlibs. 
- if linker and linker.get_id() in ['msvc', 'clang-cl', 'llvm', 'dmd']: + if linker and linker.get_id() in {'msvc', 'clang-cl', 'intel-cl', 'llvm', 'dmd'}: return True return False @@ -1279,6 +1273,7 @@ class Generator: self.exe = exe self.depfile = None self.capture = False + self.depends = [] self.process_kwargs(kwargs) def __repr__(self): @@ -1327,6 +1322,12 @@ class Generator: if not isinstance(capture, bool): raise InvalidArguments('Capture must be boolean.') self.capture = capture + if 'depends' in kwargs: + depends = listify(kwargs['depends'], unholder=True) + for d in depends: + if not isinstance(d, BuildTarget): + raise InvalidArguments('Depends entries must be build targets.') + self.depends.append(d) def get_base_outnames(self, inname): plainname = os.path.basename(inname) @@ -2359,7 +2360,8 @@ class RunScript(dict): self['args'] = args class TestSetup: - def __init__(self, *, exe_wrapper=None, gdb=None, timeout_multiplier=None, env=None): + def __init__(self, exe_wrapper: typing.Optional[typing.List[str]], gdb: bool, + timeout_multiplier: int, env: EnvironmentVariables): self.exe_wrapper = exe_wrapper self.gdb = gdb self.timeout_multiplier = timeout_multiplier @@ -2384,7 +2386,7 @@ def get_sources_string_names(sources): raise AssertionError('Unknown source type: {!r}'.format(s)) return names -def load(build_dir): +def load(build_dir: str) -> Build: filename = os.path.join(build_dir, 'meson-private', 'build.dat') load_fail_msg = 'Build data file {!r} is corrupted. 
Try with a fresh build tree.'.format(filename) nonexisting_fail_msg = 'No such build data file as "{!r}".'.format(filename) diff --git a/mesonbuild/compilers/__init__.py b/mesonbuild/compilers/__init__.py index 3c06f38..ea65c21 100644 --- a/mesonbuild/compilers/__init__.py +++ b/mesonbuild/compilers/__init__.py @@ -65,10 +65,14 @@ __all__ = [ 'FlangFortranCompiler', 'GnuObjCCompiler', 'GnuObjCPPCompiler', - 'IntelCompiler', + 'IntelGnuLikeCompiler', + 'IntelVisualStudioLikeCompiler', 'IntelCCompiler', 'IntelCPPCompiler', + 'IntelClCCompiler', + 'IntelClCPPCompiler', 'IntelFortranCompiler', + 'IntelClFortranCompiler', 'JavaCompiler', 'LLVMDCompiler', 'MonoCompiler', @@ -119,9 +123,10 @@ from .compilers import ( ClangCompiler, CompilerArgs, GnuCompiler, - IntelCompiler, + IntelGnuLikeCompiler, CcrxCompiler, VisualStudioLikeCompiler, + IntelVisualStudioLikeCompiler, ) from .c import ( CCompiler, @@ -132,6 +137,7 @@ from .c import ( GnuCCompiler, ElbrusCCompiler, IntelCCompiler, + IntelClCCompiler, PGICCompiler, CcrxCCompiler, VisualStudioCCompiler, @@ -145,6 +151,7 @@ from .cpp import ( GnuCPPCompiler, ElbrusCPPCompiler, IntelCPPCompiler, + IntelClCPPCompiler, PGICPPCompiler, CcrxCPPCompiler, VisualStudioCPPCompiler, @@ -164,6 +171,7 @@ from .fortran import ( ElbrusFortranCompiler, FlangFortranCompiler, IntelFortranCompiler, + IntelClFortranCompiler, NAGFortranCompiler, Open64FortranCompiler, PathScaleFortranCompiler, diff --git a/mesonbuild/compilers/c.py b/mesonbuild/compilers/c.py index 73a4083..5d78ba6 100644 --- a/mesonbuild/compilers/c.py +++ b/mesonbuild/compilers/c.py @@ -16,7 +16,7 @@ import os.path import typing from .. 
import coredata -from ..mesonlib import MesonException, version_compare +from ..mesonlib import MesonException, version_compare, mlog from .c_function_attributes import C_FUNC_ATTRIBUTES from .clike import CLikeCompiler @@ -30,7 +30,8 @@ from .compilers import ( CompilerType, GnuCompiler, ElbrusCompiler, - IntelCompiler, + IntelGnuLikeCompiler, + IntelVisualStudioLikeCompiler, PGICompiler, CcrxCompiler, VisualStudioLikeCompiler, @@ -95,7 +96,7 @@ class ClangCCompiler(ClangCompiler, CCompiler): if version_compare(self.version, v): c_stds += ['c17'] g_stds += ['gnu17'] - opts.update({'c_std': coredata.UserComboOption('c_std', 'C language standard to use', + opts.update({'c_std': coredata.UserComboOption('C language standard to use', ['none'] + c_stds + g_stds, 'none')}) return opts @@ -129,7 +130,7 @@ class ArmclangCCompiler(ArmclangCompiler, CCompiler): def get_options(self): opts = CCompiler.get_options(self) - opts.update({'c_std': coredata.UserComboOption('c_std', 'C language standard to use', + opts.update({'c_std': coredata.UserComboOption('C language standard to use', ['none', 'c90', 'c99', 'c11', 'gnu90', 'gnu99', 'gnu11'], 'none')}) @@ -164,12 +165,12 @@ class GnuCCompiler(GnuCompiler, CCompiler): if version_compare(self.version, v): c_stds += ['c17', 'c18'] g_stds += ['gnu17', 'gnu18'] - opts.update({'c_std': coredata.UserComboOption('c_std', 'C language standard to use', + opts.update({'c_std': coredata.UserComboOption('C language standard to use', ['none'] + c_stds + g_stds, 'none')}) if self.compiler_type.is_windows_compiler: opts.update({ - 'c_winlibs': coredata.UserArrayOption('c_winlibs', 'Standard Win libraries to link against', + 'c_winlibs': coredata.UserArrayOption('Standard Win libraries to link against', gnu_winlibs), }) return opts @@ -203,7 +204,7 @@ class ElbrusCCompiler(GnuCCompiler, ElbrusCompiler): # It does support some various ISO standards and c/gnu 90, 9x, 1x in addition to those which GNU CC supports. 
def get_options(self): opts = CCompiler.get_options(self) - opts.update({'c_std': coredata.UserComboOption('c_std', 'C language standard to use', + opts.update({'c_std': coredata.UserComboOption('C language standard to use', ['none', 'c89', 'c90', 'c9x', 'c99', 'c1x', 'c11', 'gnu89', 'gnu90', 'gnu9x', 'gnu99', 'gnu1x', 'gnu11', 'iso9899:2011', 'iso9899:1990', 'iso9899:199409', 'iso9899:1999'], @@ -221,10 +222,10 @@ class ElbrusCCompiler(GnuCCompiler, ElbrusCompiler): dependencies=dependencies) -class IntelCCompiler(IntelCompiler, CCompiler): +class IntelCCompiler(IntelGnuLikeCompiler, CCompiler): def __init__(self, exelist, version, compiler_type, is_cross, exe_wrapper=None, **kwargs): CCompiler.__init__(self, exelist, version, is_cross, exe_wrapper, **kwargs) - IntelCompiler.__init__(self, compiler_type) + IntelGnuLikeCompiler.__init__(self, compiler_type) self.lang_header = 'c-header' default_warn_args = ['-Wall', '-w3', '-diag-disable:remark'] self.warn_args = {'0': [], @@ -238,7 +239,7 @@ class IntelCCompiler(IntelCompiler, CCompiler): g_stds = ['gnu89', 'gnu99'] if version_compare(self.version, '>=16.0.0'): c_stds += ['c11'] - opts.update({'c_std': coredata.UserComboOption('c_std', 'C language standard to use', + opts.update({'c_std': coredata.UserComboOption('C language standard to use', ['none'] + c_stds + g_stds, 'none')}) return opts @@ -257,8 +258,7 @@ class VisualStudioLikeCCompilerMixin: def get_options(self): opts = super().get_options() - opts.update({'c_winlibs': coredata.UserArrayOption('c_winlibs', - 'Windows libs to link against.', + opts.update({'c_winlibs': coredata.UserArrayOption('Windows libs to link against.', msvc_winlibs)}) return opts @@ -279,6 +279,36 @@ class ClangClCCompiler(VisualStudioLikeCompiler, VisualStudioLikeCCompilerMixin, self.id = 'clang-cl' +class IntelClCCompiler(IntelVisualStudioLikeCompiler, VisualStudioLikeCCompilerMixin, CCompiler): + + """Intel "ICL" compiler abstraction.""" + + __have_warned = False + + def 
__init__(self, exelist, version, is_cross, exe_wrap, target): + CCompiler.__init__(self, exelist, version, is_cross, exe_wrap) + IntelVisualStudioLikeCompiler.__init__(self, target) + + def get_options(self): + opts = super().get_options() + c_stds = ['none', 'c89', 'c99', 'c11'] + opts.update({'c_std': coredata.UserComboOption('C language standard to use', + c_stds, + 'none')}) + return opts + + def get_option_compile_args(self, options): + args = [] + std = options['c_std'] + if std.value == 'c89': + if not self.__have_warned: + self.__have_warned = True + mlog.warning("ICL doesn't explicitly implement c89, setting the standard to 'none', which is close.") + elif std.value != 'none': + args.append('/Qstd:' + std.value) + return args + + class ArmCCompiler(ArmCompiler, CCompiler): def __init__(self, exelist, version, compiler_type, is_cross, exe_wrapper=None, **kwargs): CCompiler.__init__(self, exelist, version, is_cross, exe_wrapper, **kwargs) @@ -286,7 +316,7 @@ class ArmCCompiler(ArmCompiler, CCompiler): def get_options(self): opts = CCompiler.get_options(self) - opts.update({'c_std': coredata.UserComboOption('c_std', 'C language standard to use', + opts.update({'c_std': coredata.UserComboOption('C language standard to use', ['none', 'c90', 'c99'], 'none')}) return opts @@ -309,7 +339,7 @@ class CcrxCCompiler(CcrxCompiler, CCompiler): def get_options(self): opts = CCompiler.get_options(self) - opts.update({'c_std': coredata.UserComboOption('c_std', 'C language standard to use', + opts.update({'c_std': coredata.UserComboOption('C language standard to use', ['none', 'c89', 'c99'], 'none')}) return opts diff --git a/mesonbuild/compilers/clike.py b/mesonbuild/compilers/clike.py index e9d5d1d..3665f1f 100644 --- a/mesonbuild/compilers/clike.py +++ b/mesonbuild/compilers/clike.py @@ -170,10 +170,22 @@ class CLikeCompiler: stdo = p.stdo return stdo - @staticmethod - def _split_fetch_real_dirs(pathstr, sep=':'): - paths = [] - for p in pathstr.split(sep): +
def _split_fetch_real_dirs(self, pathstr): + # We need to use the path separator used by the compiler for printing + # lists of paths ("gcc --print-search-dirs"). By default + # we assume it uses the platform native separator. + pathsep = os.pathsep + + # clang uses ':' instead of ';' on Windows https://reviews.llvm.org/D61121 + # so we need to repair things like 'C:\foo:C:\bar' + if pathsep == ';': + pathstr = re.sub(r':([^/\\])', r';\1', pathstr) + + # pathlib treats empty paths as '.', so filter those out + paths = [p for p in pathstr.split(pathsep) if p] + + result = [] + for p in paths: # GCC returns paths like this: # /usr/lib/gcc/x86_64-linux-gnu/8/../../../../x86_64-linux-gnu/lib # It would make sense to normalize them to get rid of the .. parts @@ -185,15 +197,15 @@ class CLikeCompiler: pobj = Path(p) unresolved = pobj.as_posix() if pobj.exists(): - if unresolved not in paths: - paths.append(unresolved) + if unresolved not in result: + result.append(unresolved) try: resolved = Path(p).resolve().as_posix() - if resolved not in paths: - paths.append(resolved) + if resolved not in result: + result.append(resolved) except FileNotFoundError: pass - return tuple(paths) + return tuple(result) def get_compiler_dirs(self, env, name): ''' @@ -776,7 +788,7 @@ class CLikeCompiler: return True, cached # MSVC does not have compiler __builtin_-s. 
- if self.get_id() == 'msvc': + if self.get_id() in {'msvc', 'intel-cl'}: return False, False # Detect function as a built-in @@ -849,7 +861,7 @@ class CLikeCompiler: ''' args = self.get_compiler_check_args() n = 'symbols_have_underscore_prefix' - with self.compile(code, args, 'compile', want_output=True) as p: + with self.compile(code, extra_args=args, mode='compile', want_output=True) as p: if p.returncode != 0: m = 'BUG: Unable to compile {!r} check: {}' raise RuntimeError(m.format(n, p.stdo)) diff --git a/mesonbuild/compilers/compilers.py b/mesonbuild/compilers/compilers.py index a3401e8..3ad80fe 100644 --- a/mesonbuild/compilers/compilers.py +++ b/mesonbuild/compilers/compilers.py @@ -381,35 +381,30 @@ msvc_debug_args = {False: [], ccrx_debug_args = {False: [], True: ['-debug']} -base_options = {'b_pch': coredata.UserBooleanOption('b_pch', 'Use precompiled headers', True), - 'b_lto': coredata.UserBooleanOption('b_lto', 'Use link time optimization', False), - 'b_sanitize': coredata.UserComboOption('b_sanitize', - 'Code sanitizer to use', +base_options = {'b_pch': coredata.UserBooleanOption('Use precompiled headers', True), + 'b_lto': coredata.UserBooleanOption('Use link time optimization', False), + 'b_sanitize': coredata.UserComboOption('Code sanitizer to use', ['none', 'address', 'thread', 'undefined', 'memory', 'address,undefined'], 'none'), - 'b_lundef': coredata.UserBooleanOption('b_lundef', 'Use -Wl,--no-undefined when linking', True), - 'b_asneeded': coredata.UserBooleanOption('b_asneeded', 'Use -Wl,--as-needed when linking', True), - 'b_pgo': coredata.UserComboOption('b_pgo', 'Use profile guided optimization', + 'b_lundef': coredata.UserBooleanOption('Use -Wl,--no-undefined when linking', True), + 'b_asneeded': coredata.UserBooleanOption('Use -Wl,--as-needed when linking', True), + 'b_pgo': coredata.UserComboOption('Use profile guided optimization', ['off', 'generate', 'use'], 'off'), - 'b_coverage': coredata.UserBooleanOption('b_coverage', - 'Enable 
coverage tracking.', + 'b_coverage': coredata.UserBooleanOption('Enable coverage tracking.', False), - 'b_colorout': coredata.UserComboOption('b_colorout', 'Use colored output', + 'b_colorout': coredata.UserComboOption('Use colored output', ['auto', 'always', 'never'], 'always'), - 'b_ndebug': coredata.UserComboOption('b_ndebug', 'Disable asserts', + 'b_ndebug': coredata.UserComboOption('Disable asserts', ['true', 'false', 'if-release'], 'false'), - 'b_staticpic': coredata.UserBooleanOption('b_staticpic', - 'Build static libraries as position independent', + 'b_staticpic': coredata.UserBooleanOption('Build static libraries as position independent', True), - 'b_pie': coredata.UserBooleanOption('b_pie', - 'Build executables as position independent', + 'b_pie': coredata.UserBooleanOption('Build executables as position independent', False), - 'b_bitcode': coredata.UserBooleanOption('b_bitcode', - 'Generate and embed bitcode (only macOS and iOS)', + 'b_bitcode': coredata.UserBooleanOption('Generate and embed bitcode (only macOS and iOS)', False), - 'b_vscrt': coredata.UserComboOption('b_vscrt', 'VS run-time library type to use.', + 'b_vscrt': coredata.UserComboOption('VS run-time library type to use.', ['none', 'md', 'mdd', 'mt', 'mtd', 'from_buildtype'], 'from_buildtype'), } @@ -1044,11 +1039,9 @@ class Compiler: description = 'Extra arguments passed to the {}'.format(self.get_display_language()) opts.update({ self.language + '_args': coredata.UserArrayOption( - self.language + '_args', description + ' compiler', [], shlex_split=True, user_input=True, allow_dups=True), self.language + '_link_args': coredata.UserArrayOption( - self.language + '_link_args', description + ' linker', [], shlex_split=True, user_input=True, allow_dups=True), }) @@ -1146,7 +1139,7 @@ class Compiler: return os.path.join(dirname, 'output.' 
+ suffix) @contextlib.contextmanager - def compile(self, code, extra_args=None, mode='link', want_output=False): + def compile(self, code, extra_args=None, *, mode='link', want_output=False): if extra_args is None: extra_args = [] try: @@ -1199,7 +1192,7 @@ class Compiler: pass @contextlib.contextmanager - def cached_compile(self, code, cdata: coredata.CoreData, extra_args=None, mode: str = 'link'): + def cached_compile(self, code, cdata: coredata.CoreData, *, extra_args=None, mode: str = 'link'): assert(isinstance(cdata, coredata.CoreData)) # Calculate the key @@ -1307,6 +1300,9 @@ class Compiler: paths = paths + ':' + padding args.append('-Wl,-rpath,' + paths) + if mesonlib.is_sunos(): + return args + if get_compiler_is_linuxlike(self): # Rpaths to use while linking must be absolute. These are not # written to the binary. Needed only with GNU ld: @@ -1462,14 +1458,14 @@ def get_compiler_uses_gnuld(c): # FIXME: Perhaps we should detect the linker in the environment? # FIXME: Assumes that *BSD use GNU ld, but they might start using lld soon compiler_type = getattr(c, 'compiler_type', None) - return compiler_type in ( + return compiler_type in { CompilerType.GCC_STANDARD, CompilerType.GCC_MINGW, CompilerType.GCC_CYGWIN, CompilerType.CLANG_STANDARD, CompilerType.CLANG_MINGW, CompilerType.ICC_STANDARD, - CompilerType.ICC_WIN) + } def get_largefile_args(compiler): ''' @@ -1788,16 +1784,7 @@ class VisualStudioLikeCompiler(metaclass=abc.ABCMeta): return None return vs32_instruction_set_args.get(instruction_set, None) - def get_toolset_version(self): - if self.id == 'clang-cl': - # I have no idea - return '14.1' - - # See boost/config/compiler/visualc.cpp for up to date mapping - try: - version = int(''.join(self.version.split('.')[0:2])) - except ValueError: - return None + def _calculate_toolset_version(self, version: int) -> Optional[str]: if version < 1310: return '7.0' elif version < 1400: @@ -1821,6 +1808,18 @@ class VisualStudioLikeCompiler(metaclass=abc.ABCMeta): 
mlog.warning('Could not find toolset for version {!r}'.format(self.version)) return None + def get_toolset_version(self): + if self.id == 'clang-cl': + # I have no idea + return '14.1' + + # See boost/config/compiler/visualc.cpp for up to date mapping + try: + version = int(''.join(self.version.split('.')[0:2])) + except ValueError: + return None + return self._calculate_toolset_version(version) + def get_default_include_dirs(self): if 'INCLUDE' not in os.environ: return [] @@ -2285,7 +2284,7 @@ class ArmclangCompiler: # Tested on linux for ICC 14.0.3, 15.0.6, 16.0.4, 17.0.1, 19.0.0 -class IntelCompiler(GnuLikeCompiler): +class IntelGnuLikeCompiler(GnuLikeCompiler): def __init__(self, compiler_type): super().__init__(compiler_type) @@ -2341,6 +2340,48 @@ class IntelCompiler(GnuLikeCompiler): return ['-prof-use'] +class IntelVisualStudioLikeCompiler(VisualStudioLikeCompiler): + + """Abstractions for ICL, the Intel compiler on Windows.""" + + def __init__(self, target: str): + super().__init__(target) + self.compiler_type = CompilerType.ICC_WIN + self.id = 'intel-cl' + + def compile(self, code, *, extra_args=None, **kwargs): + # This covers a case that .get('foo', []) doesn't, that extra_args is + if kwargs.get('mode', 'compile') != 'link': + extra_args = extra_args.copy() if extra_args is not None else [] + extra_args.extend([ + '/Qdiag-error:10006', # ignoring unknown option + '/Qdiag-error:10148', # Option not supported + '/Qdiag-error:10155', # ignoring argument required + '/Qdiag-error:10156', # ignoring not argument allowed + '/Qdiag-error:10157', # Ignoring argument of the wrong type + '/Qdiag-error:10158', # Argument must be separate. Can be hit by trying an option like -foo-bar=foo when -foo=bar is a valid option but -foo-bar isn't + ]) + return super().compile(code, extra_args, **kwargs) + + def get_toolset_version(self) -> Optional[str]: + # Avoid circular dependencies.... 
+ from ..environment import search_version + + # ICL provides a cl.exe that returns the version of MSVC it tries to + # emulate, so we'll get the version from that and pass it to the same + # function the real MSVC uses to calculate the toolset version. + _, _, err = Popen_safe(['cl.exe']) + v1, v2, *_ = search_version(err).split('.') + version = int(v1 + v2) + return self._calculate_toolset_version(version) + + def get_linker_exelist(self): + return ['xilink'] + + def openmp_flags(self): + return ['/Qopenmp'] + + class ArmCompiler: # Functionality that is common to all ARM family compilers. def __init__(self, compiler_type): diff --git a/mesonbuild/compilers/cpp.py b/mesonbuild/compilers/cpp.py index 12644a2..253525a 100644 --- a/mesonbuild/compilers/cpp.py +++ b/mesonbuild/compilers/cpp.py @@ -12,6 +12,7 @@ # See the License for the specific language governing permissions and # limitations under the License. +import copy import functools import os.path import typing @@ -26,7 +27,8 @@ from .compilers import ( ClangCompiler, GnuCompiler, ElbrusCompiler, - IntelCompiler, + IntelGnuLikeCompiler, + IntelVisualStudioLikeCompiler, PGICompiler, ArmCompiler, ArmclangCompiler, @@ -37,6 +39,13 @@ from .compilers import ( from .c_function_attributes import CXX_FUNC_ATTRIBUTES, C_FUNC_ATTRIBUTES from .clike import CLikeCompiler +def non_msvc_eh_options(eh, args): + if eh == 'none': + args.append('-fno-exceptions') + elif eh in {'a', 's', 'sc'}: + mlog.warning('non-MSVC compilers do not support ' + eh + ' exception handling. ' + + 'You may want to set eh to \'default\'.') + class CPPCompiler(CLikeCompiler, Compiler): @classmethod @@ -97,7 +106,7 @@ class CPPCompiler(CLikeCompiler, Compiler): # 2. even if it did have an env object, that might contain another more # recent -std= argument, which might lead to a cascaded failure.
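The `/Qdiag-error` promotion in the `IntelVisualStudioLikeCompiler.compile` override above exists because ICL merely warns about unknown options by default, so compile-mode flag probes would falsely succeed. A minimal sketch of that pattern (the helper name and shortened flag list are invented for illustration, not Meson's API):

```python
# Hypothetical helper mirroring the ICL compile() override: promote
# "unknown option" diagnostics to hard errors for compile-mode checks.
ICL_DIAG_ERRORS = [
    '/Qdiag-error:10006',  # ignoring unknown option
    '/Qdiag-error:10148',  # option not supported
]

def probe_args(extra_args, mode='compile'):
    # Copy the caller's list so the promotion flags never leak back into
    # it, mirroring the extra_args.copy() call in the override above.
    args = list(extra_args) if extra_args is not None else []
    if mode != 'link':
        args.extend(ICL_DIAG_ERRORS)
    return args
```

Without the copy, the diagnostic flags would accumulate in the caller's argument list across repeated probes.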
CPP_TEST = 'int i = static_cast<int>(0);' - with self.compile(code=CPP_TEST, extra_args=[cpp_std_value], mode='compile') as p: + with self.compile(CPP_TEST, extra_args=[cpp_std_value], mode='compile') as p: if p.returncode == 0: mlog.debug('Compiler accepts {}:'.format(cpp_std_value), 'YES') return True @@ -149,11 +158,10 @@ class ClangCPPCompiler(ClangCompiler, CPPCompiler): def get_options(self): opts = CPPCompiler.get_options(self) - opts.update({'cpp_eh': coredata.UserComboOption('cpp_eh', - 'C++ exception handling type.', - ['none', 'default'], + opts.update({'cpp_eh': coredata.UserComboOption('C++ exception handling type.', + ['none', 'default', 'a', 's', 'sc'], 'default'), - 'cpp_std': coredata.UserComboOption('cpp_std', 'C++ language standard to use', + 'cpp_std': coredata.UserComboOption('C++ language standard to use', ['none', 'c++98', 'c++03', 'c++11', 'c++14', 'c++17', 'c++1z', 'c++2a', 'gnu++11', 'gnu++14', 'gnu++17', 'gnu++1z', 'gnu++2a'], 'none')}) @@ -164,8 +172,9 @@ class ClangCPPCompiler(ClangCompiler, CPPCompiler): std = options['cpp_std'] if std.value != 'none': args.append(self._find_best_cpp_std(std.value)) - if options['cpp_eh'].value == 'none': - args.append('-fno-exceptions') + + non_msvc_eh_options(options['cpp_eh'].value, args) + return args def get_option_link_args(self, options): @@ -189,9 +198,9 @@ class ArmclangCPPCompiler(ArmclangCompiler, CPPCompiler): opts = CPPCompiler.get_options(self) opts.update({'cpp_eh': coredata.UserComboOption('cpp_eh', 'C++ exception handling type.', - ['none', 'default'], + ['none', 'default', 'a', 's', 'sc'], 'default'), - 'cpp_std': coredata.UserComboOption('cpp_std', 'C++ language standard to use', + 'cpp_std': coredata.UserComboOption('C++ language standard to use', ['none', 'c++98', 'c++03', 'c++11', 'c++14', 'c++17', 'gnu++98', 'gnu++03', 'gnu++11', 'gnu++14', 'gnu++17'], 'none')}) @@ -202,8 +211,9 @@ class ArmclangCPPCompiler(ArmclangCompiler, CPPCompiler): std = options['cpp_std'] if std.value != 
'none': args.append('-std=' + std.value) - if options['cpp_eh'].value == 'none': - args.append('-fno-exceptions') + + non_msvc_eh_options(options['cpp_eh'].value, args) + return args def get_option_link_args(self, options): @@ -222,20 +232,18 @@ class GnuCPPCompiler(GnuCompiler, CPPCompiler): def get_options(self): opts = CPPCompiler.get_options(self) - opts.update({'cpp_eh': coredata.UserComboOption('cpp_eh', - 'C++ exception handling type.', - ['none', 'default'], + opts.update({'cpp_eh': coredata.UserComboOption('C++ exception handling type.', + ['none', 'default', 'a', 's', 'sc'], 'default'), - 'cpp_std': coredata.UserComboOption('cpp_std', 'C++ language standard to use', + 'cpp_std': coredata.UserComboOption('C++ language standard to use', ['none', 'c++98', 'c++03', 'c++11', 'c++14', 'c++17', 'c++1z', 'c++2a', 'gnu++03', 'gnu++11', 'gnu++14', 'gnu++17', 'gnu++1z', 'gnu++2a'], 'none'), - 'cpp_debugstl': coredata.UserBooleanOption('cpp_debugstl', - 'STL debug mode', + 'cpp_debugstl': coredata.UserBooleanOption('STL debug mode', False)}) if self.compiler_type.is_windows_compiler: opts.update({ - 'cpp_winlibs': coredata.UserArrayOption('cpp_winlibs', 'Standard Win libraries to link against', + 'cpp_winlibs': coredata.UserArrayOption('Standard Win libraries to link against', gnu_winlibs), }) return opts @@ -244,8 +252,9 @@ class GnuCPPCompiler(GnuCompiler, CPPCompiler): std = options['cpp_std'] if std.value != 'none': args.append(self._find_best_cpp_std(std.value)) - if options['cpp_eh'].value == 'none': - args.append('-fno-exceptions') + + non_msvc_eh_options(options['cpp_eh'].value, args) + if options['cpp_debugstl'].value: args.append('-D_GLIBCXX_DEBUG=1') return args @@ -276,16 +285,14 @@ class ElbrusCPPCompiler(GnuCPPCompiler, ElbrusCompiler): # It does not support c++/gnu++ 17 and 1z, but still does support 0x, 1y, and gnu++98. 
def get_options(self): opts = CPPCompiler.get_options(self) - opts.update({'cpp_eh': coredata.UserComboOption('cpp_eh', - 'C++ exception handling type.', - ['none', 'default'], + opts.update({'cpp_eh': coredata.UserComboOption('C++ exception handling type.', + ['none', 'default', 'a', 's', 'sc'], 'default'), - 'cpp_std': coredata.UserComboOption('cpp_std', 'C++ language standard to use', + 'cpp_std': coredata.UserComboOption('C++ language standard to use', ['none', 'c++98', 'c++03', 'c++0x', 'c++11', 'c++14', 'c++1y', 'gnu++98', 'gnu++03', 'gnu++0x', 'gnu++11', 'gnu++14', 'gnu++1y'], 'none'), - 'cpp_debugstl': coredata.UserBooleanOption('cpp_debugstl', - 'STL debug mode', + 'cpp_debugstl': coredata.UserBooleanOption('STL debug mode', False)}) return opts @@ -300,10 +307,10 @@ class ElbrusCPPCompiler(GnuCPPCompiler, ElbrusCompiler): dependencies=dependencies) -class IntelCPPCompiler(IntelCompiler, CPPCompiler): +class IntelCPPCompiler(IntelGnuLikeCompiler, CPPCompiler): def __init__(self, exelist, version, compiler_type, is_cross, exe_wrap, **kwargs): CPPCompiler.__init__(self, exelist, version, is_cross, exe_wrap, **kwargs) - IntelCompiler.__init__(self, compiler_type) + IntelGnuLikeCompiler.__init__(self, compiler_type) self.lang_header = 'c++-header' default_warn_args = ['-Wall', '-w3', '-diag-disable:remark', '-Wpch-messages', '-Wnon-virtual-dtor'] @@ -326,15 +333,13 @@ class IntelCPPCompiler(IntelCompiler, CPPCompiler): c_stds += ['c++17'] if version_compare(self.version, '>=17.0.0'): g_stds += ['gnu++14'] - opts.update({'cpp_eh': coredata.UserComboOption('cpp_eh', - 'C++ exception handling type.', - ['none', 'default'], + opts.update({'cpp_eh': coredata.UserComboOption('C++ exception handling type.', + ['none', 'default', 'a', 's', 'sc'], 'default'), - 'cpp_std': coredata.UserComboOption('cpp_std', 'C++ language standard to use', + 'cpp_std': coredata.UserComboOption('C++ language standard to use', ['none'] + c_stds + g_stds, 'none'), - 'cpp_debugstl': 
coredata.UserBooleanOption('cpp_debugstl', - 'STL debug mode', + 'cpp_debugstl': coredata.UserBooleanOption('STL debug mode', False)}) return opts @@ -361,20 +366,27 @@ class VisualStudioLikeCPPCompilerMixin: """Mixin for C++ specific method overrides in MSVC-like compilers.""" + VC_VERSION_MAP = { + 'none': (True, None), + 'vc++11': (True, 11), + 'vc++14': (True, 14), + 'vc++17': (True, 17), + 'c++11': (False, 11), + 'c++14': (False, 14), + 'c++17': (False, 17), + } + def get_option_link_args(self, options): return options['cpp_winlibs'].value[:] def _get_options_impl(self, opts, cpp_stds: typing.List[str]): - opts.update({'cpp_eh': coredata.UserComboOption('cpp_eh', - 'C++ exception handling type.', - ['none', 'a', 's', 'sc', 'default'], + opts.update({'cpp_eh': coredata.UserComboOption('C++ exception handling type.', + ['none', 'default', 'a', 's', 'sc'], 'default'), - 'cpp_std': coredata.UserComboOption('cpp_std', - 'C++ language standard to use', + 'cpp_std': coredata.UserComboOption('C++ language standard to use', cpp_stds, 'none'), - 'cpp_winlibs': coredata.UserArrayOption('cpp_winlibs', - 'Windows libs to link against.', + 'cpp_winlibs': coredata.UserArrayOption('Windows libs to link against.', msvc_winlibs)}) return opts @@ -387,34 +399,12 @@ class VisualStudioLikeCPPCompilerMixin: elif eh.value != 'none': args.append('/EH' + eh.value) - vc_version_map = { - 'none': (True, None), - 'vc++11': (True, 11), - 'vc++14': (True, 14), - 'vc++17': (True, 17), - 'c++11': (False, 11), - 'c++14': (False, 14), - 'c++17': (False, 17)} - - permissive, ver = vc_version_map[options['cpp_std'].value] - - if ver is None: - pass - elif ver == 11: - # Note: there is no explicit flag for supporting C++11; we attempt to do the best we can - # which means setting the C++ standard version to C++14, in compilers that support it - # (i.e., after VS2015U3) - # if one is using anything before that point, one cannot set the standard. 
- if self.id == 'clang-cl' or version_compare(self.version, '>=19.00.24210'): - mlog.warning('MSVC does not support C++11; ' - 'attempting best effort; setting the standard to C++14') - args.append('/std:c++14') - else: - mlog.warning('This version of MSVC does not support cpp_std arguments') - else: + permissive, ver = self.VC_VERSION_MAP[options['cpp_std'].value] + + if ver is not None: args.append('/std:c++{}'.format(ver)) - if not permissive and version_compare(self.version, '>=19.11'): + if not permissive: args.append('/permissive-') return args @@ -424,7 +414,33 @@ class VisualStudioLikeCPPCompilerMixin: return CLikeCompiler.get_compiler_check_args(self) -class VisualStudioCPPCompiler(VisualStudioLikeCPPCompilerMixin, VisualStudioLikeCompiler, CPPCompiler): +class CPP11AsCPP14Mixin: + + """Mixin class for VisualStudio and ClangCl to replace C++11 std with C++14. + + This is a limitation of Clang and MSVC that ICL doesn't share. + """ + + def get_option_compile_args(self, options): + # Note: there is no explicit flag for supporting C++11; we attempt to do the best we can + # which means setting the C++ standard version to C++14, in compilers that support it + # (i.e., after VS2015U3) + # if one is using anything before that point, one cannot set the standard. + if options['cpp_std'].value in {'vc++11', 'c++11'}: + mlog.warning(self.id, 'does not support C++11;', + 'attempting best effort; setting the standard to C++14') + # Don't mutate anything we're going to change, we need to use + # deepcopy since we're messing with members, and we can't simply + # copy the members because the option proxy doesn't support it. 
+ options = copy.deepcopy(options) + if options['cpp_std'].value == 'vc++11': + options['cpp_std'].value = 'vc++14' + else: + options['cpp_std'].value = 'c++14' + return super().get_option_compile_args(options) + + +class VisualStudioCPPCompiler(CPP11AsCPP14Mixin, VisualStudioLikeCPPCompilerMixin, VisualStudioLikeCompiler, CPPCompiler): def __init__(self, exelist, version, is_cross, exe_wrap, target): CPPCompiler.__init__(self, exelist, version, is_cross, exe_wrap) VisualStudioLikeCompiler.__init__(self, target) @@ -435,14 +451,29 @@ class VisualStudioCPPCompiler(VisualStudioLikeCPPCompilerMixin, VisualStudioLike cpp_stds = ['none', 'c++11', 'vc++11'] # Visual Studio 2015 and later if version_compare(self.version, '>=19'): - cpp_stds.extend(['c++14', 'vc++14', 'c++latest', 'vc++latest']) + cpp_stds.extend(['c++14', 'c++latest', 'vc++latest']) # Visual Studio 2017 and later if version_compare(self.version, '>=19.11'): - cpp_stds.extend(['c++17', 'vc++17']) + cpp_stds.extend(['vc++14', 'c++17', 'vc++17']) return self._get_options_impl(super().get_options(), cpp_stds) + def get_option_compile_args(self, options): + if options['cpp_std'].value != 'none' and version_compare(self.version, '<19.00.24210'): + mlog.warning('This version of MSVC does not support cpp_std arguments') + options = copy.copy(options) + options['cpp_std'].value = 'none' + + args = super().get_option_compile_args(options) + + if version_compare(self.version, '<19.11'): + try: + i = args.index('/permissive-') + except ValueError: + return args + del args[i] + return args -class ClangClCPPCompiler(VisualStudioLikeCPPCompilerMixin, VisualStudioLikeCompiler, CPPCompiler): +class ClangClCPPCompiler(CPP11AsCPP14Mixin, VisualStudioLikeCPPCompilerMixin, VisualStudioLikeCompiler, CPPCompiler): def __init__(self, exelist, version, is_cross, exe_wrap, target): CPPCompiler.__init__(self, exelist, version, is_cross, exe_wrap) VisualStudioLikeCompiler.__init__(self, target) @@ -453,6 +484,18 @@ class 
ClangClCPPCompiler(VisualStudioLikeCPPCompilerMixin, VisualStudioLikeCompi return self._get_options_impl(super().get_options(), cpp_stds) +class IntelClCPPCompiler(VisualStudioLikeCPPCompilerMixin, IntelVisualStudioLikeCompiler, CPPCompiler): + + def __init__(self, exelist, version, is_cross, exe_wrap, target): + CPPCompiler.__init__(self, exelist, version, is_cross, exe_wrap) + IntelVisualStudioLikeCompiler.__init__(self, target) + + def get_options(self): + # This has only been tested with version 19.0. + cpp_stds = ['none', 'c++11', 'vc++11', 'c++14', 'vc++14', 'c++17', 'vc++17', 'c++latest'] + return self._get_options_impl(super().get_options(), cpp_stds) + + class ArmCPPCompiler(ArmCompiler, CPPCompiler): def __init__(self, exelist, version, compiler_type, is_cross, exe_wrap=None, **kwargs): CPPCompiler.__init__(self, exelist, version, is_cross, exe_wrap, **kwargs) @@ -460,7 +503,7 @@ class ArmCPPCompiler(ArmCompiler, CPPCompiler): def get_options(self): opts = CPPCompiler.get_options(self) - opts.update({'cpp_std': coredata.UserComboOption('cpp_std', 'C++ language standard to use', + opts.update({'cpp_std': coredata.UserComboOption('C++ language standard to use', ['none', 'c++03', 'c++11'], 'none')}) return opts diff --git a/mesonbuild/compilers/fortran.py b/mesonbuild/compilers/fortran.py index 3fee43b..5de1de4 100644 --- a/mesonbuild/compilers/fortran.py +++ b/mesonbuild/compilers/fortran.py @@ -26,10 +26,12 @@ from .compilers import ( GnuCompiler, ClangCompiler, ElbrusCompiler, - IntelCompiler, + IntelGnuLikeCompiler, PGICompiler, + IntelVisualStudioLikeCompiler, ) from .clike import CLikeCompiler +from ..
import mlog from mesonbuild.mesonlib import ( EnvironmentException, MachineChoice, is_osx, LibType ) @@ -66,6 +68,7 @@ class FortranCompiler(CLikeCompiler, Compiler): for_machine = MachineChoice.HOST extra_flags = environment.coredata.get_external_args(for_machine, self.language) extra_flags += environment.coredata.get_external_link_args(for_machine, self.language) + extra_flags += self.get_always_args() # %% build the test executable pc = subprocess.Popen(self.exelist + extra_flags + [str(source_name), '-o', str(binary_name)]) pc.wait() @@ -143,6 +146,25 @@ class FortranCompiler(CLikeCompiler, Compiler): end program main''' return self.find_library_impl(libname, env, extra_dirs, code, libtype) + def has_multi_arguments(self, args, env): + for arg in args[:]: + # some compilers, e.g. GCC, don't warn for unsupported warning-disable + # flags, so when we are testing a flag like "-Wno-forgotten-towel", also + # check the equivalent enable flag too "-Wforgotten-towel" + if arg.startswith('-Wno-'): + args.append('-W' + arg[5:]) + if arg.startswith('-Wl,'): + mlog.warning('{} looks like a linker argument, ' + 'but has_argument and other similar methods only ' + 'support checking compiler arguments. Using them ' + 'to check linker arguments is never supported, ' + 'and results are likely to be wrong regardless of ' + 'the compiler you are using. has_link_argument or ' + 'other similar methods can be used instead.'
+ .format(arg)) + code = 'program main\ncall exit(0)\nend program main' + return self.has_arguments(args, env, code, mode='compile') + class GnuFortranCompiler(GnuCompiler, FortranCompiler): def __init__(self, exelist, version, compiler_type, is_cross, exe_wrapper=None, defines=None, **kwargs): @@ -213,13 +235,13 @@ class SunFortranCompiler(FortranCompiler): return ['-xopenmp'] -class IntelFortranCompiler(IntelCompiler, FortranCompiler): +class IntelFortranCompiler(IntelGnuLikeCompiler, FortranCompiler): def __init__(self, exelist, version, is_cross, exe_wrapper=None, **kwags): self.file_suffixes = ('f90', 'f', 'for', 'ftn', 'fpp') FortranCompiler.__init__(self, exelist, version, is_cross, exe_wrapper, **kwags) # FIXME: Add support for OS X and Windows in detect_fortran_compiler so # we are sent the type of compiler - IntelCompiler.__init__(self, CompilerType.ICC_STANDARD) + IntelGnuLikeCompiler.__init__(self, CompilerType.ICC_STANDARD) self.id = 'intel' default_warn_args = ['-warn', 'general', '-warn', 'truncated_source'] self.warn_args = {'0': [], @@ -239,6 +261,36 @@ class IntelFortranCompiler(IntelCompiler, FortranCompiler): def language_stdlib_only_link_flags(self): return ['-lifcore', '-limf'] +class IntelClFortranCompiler(IntelVisualStudioLikeCompiler, FortranCompiler): + + file_suffixes = ['f90', 'f', 'for', 'ftn', 'fpp'] + always_args = ['/nologo'] + + BUILD_ARGS = { + 'plain': [], + 'debug': ["/Zi", "/Od"], + 'debugoptimized': ["/Zi", "/O1"], + 'release': ["/O2"], + 'minsize': ["/Os"], + 'custom': [], + } + + def __init__(self, exelist, version, is_cross, target: str, exe_wrapper=None): + FortranCompiler.__init__(self, exelist, version, is_cross, exe_wrapper) + IntelVisualStudioLikeCompiler.__init__(self, target) + + default_warn_args = ['/warn:general', '/warn:truncated_source'] + self.warn_args = {'0': [], + '1': default_warn_args, + '2': default_warn_args + ['/warn:unused'], + '3': ['/warn:all']} + + def get_module_outdir_args(self, path) -> List[str]: 
+ return ['/module:' + path] + + def get_buildtype_args(self, buildtype: str) -> List[str]: + return self.BUILD_ARGS[buildtype] + class PathScaleFortranCompiler(FortranCompiler): def __init__(self, exelist, version, is_cross, exe_wrapper=None, **kwags): diff --git a/mesonbuild/coredata.py b/mesonbuild/coredata.py index 183b333..ac620d7 100644 --- a/mesonbuild/coredata.py +++ b/mesonbuild/coredata.py @@ -26,17 +26,24 @@ from .wrap import WrapMode import ast import argparse import configparser -from typing import Optional, Any, TypeVar, Generic, Type, List +from typing import Optional, Any, TypeVar, Generic, Type, List, Union +import typing +import enum + +if typing.TYPE_CHECKING: + from . import dependencies version = '0.50.999' backendlist = ['ninja', 'vs', 'vs2010', 'vs2015', 'vs2017', 'vs2019', 'xcode'] default_yielding = False -class UserOption: - def __init__(self, name, description, choices, yielding): +# Can't bind this near the class method it seems, sadly. +_T = TypeVar('_T') + +class UserOption(Generic[_T]): + def __init__(self, description, choices, yielding): super().__init__() - self.name = name self.choices = choices self.description = description if yielding is None: @@ -51,31 +58,31 @@ class UserOption: # Check that the input is a valid value and return the # "cleaned" or "native" version. For example the Boolean # option could take the string "true" and return True. 
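The `UserOption(Generic[_T])` refactor above lets each subclass advertise the concrete type its `validate_value` returns. A simplified sketch of the idea (class bodies abbreviated, not the full Meson hierarchy):

```python
from typing import Generic, List, TypeVar

_T = TypeVar('_T')

# Parameterizing the base class means a type checker knows that a
# UserArrayOption validates to List[str], a UserBooleanOption to bool,
# and so on, without repeating annotations at every call site.
class UserOption(Generic[_T]):
    def __init__(self, description: str):
        self.description = description

    def validate_value(self, value) -> _T:
        raise NotImplementedError

class UserArrayOption(UserOption[List[str]]):
    def validate_value(self, value) -> List[str]:
        if isinstance(value, str):
            return value.split(',')
        return list(value)
```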
- def validate_value(self, value): + def validate_value(self, value: Any) -> _T: raise RuntimeError('Derived option class did not override validate_value.') def set_value(self, newvalue): self.value = self.validate_value(newvalue) -class UserStringOption(UserOption): - def __init__(self, name, description, value, choices=None, yielding=None): - super().__init__(name, description, choices, yielding) +class UserStringOption(UserOption[str]): + def __init__(self, description, value, choices=None, yielding=None): + super().__init__(description, choices, yielding) self.set_value(value) def validate_value(self, value): if not isinstance(value, str): - raise MesonException('Value "%s" for string option "%s" is not a string.' % (str(value), self.name)) + raise MesonException('Value "%s" for string option is not a string.' % str(value)) return value -class UserBooleanOption(UserOption): - def __init__(self, name, description, value, yielding=None): - super().__init__(name, description, [True, False], yielding) +class UserBooleanOption(UserOption[bool]): + def __init__(self, description, value, yielding=None): + super().__init__(description, [True, False], yielding) self.set_value(value) - def __bool__(self): + def __bool__(self) -> bool: return self.value - def validate_value(self, value): + def validate_value(self, value) -> bool: if isinstance(value, bool): return value if value.lower() == 'true': @@ -84,9 +91,9 @@ class UserBooleanOption(UserOption): return False raise MesonException('Value %s is not boolean (true or false).' 
% value) -class UserIntegerOption(UserOption): - def __init__(self, name, description, min_value, max_value, value, yielding=None): - super().__init__(name, description, [True, False], yielding) +class UserIntegerOption(UserOption[int]): + def __init__(self, description, min_value, max_value, value, yielding=None): + super().__init__(description, [True, False], yielding) self.min_value = min_value self.max_value = max_value self.set_value(value) @@ -97,7 +104,7 @@ class UserIntegerOption(UserOption): c.append('<=' + str(max_value)) self.choices = ', '.join(c) - def validate_value(self, value): + def validate_value(self, value) -> int: if isinstance(value, str): value = self.toint(value) if not isinstance(value, int): @@ -108,15 +115,15 @@ class UserIntegerOption(UserOption): raise MesonException('New value %d is more than maximum value %d.' % (value, self.max_value)) return value - def toint(self, valuestring): + def toint(self, valuestring) -> int: try: return int(valuestring) except ValueError: raise MesonException('Value string "%s" is not convertable to an integer.' 
% valuestring) -class UserUmaskOption(UserIntegerOption): - def __init__(self, name, description, value, yielding=None): - super().__init__(name, description, 0, 0o777, value, yielding) +class UserUmaskOption(UserIntegerOption, UserOption[Union[str, int]]): + def __init__(self, description, value, yielding=None): + super().__init__(description, 0, 0o777, value, yielding) self.choices = ['preserve', '0000-0777'] def printable_value(self): @@ -135,9 +142,9 @@ class UserUmaskOption(UserIntegerOption): except ValueError as e: raise MesonException('Invalid mode: {}'.format(e)) -class UserComboOption(UserOption): - def __init__(self, name, description, choices, value, yielding=None): - super().__init__(name, description, choices, yielding) +class UserComboOption(UserOption[str]): + def __init__(self, description, choices: List[str], value, yielding=None): + super().__init__(description, choices, yielding) if not isinstance(self.choices, list): raise MesonException('Combo choices must be an array.') for i in self.choices: @@ -148,17 +155,17 @@ class UserComboOption(UserOption): def validate_value(self, value): if value not in self.choices: optionsstring = ', '.join(['"%s"' % (item,) for item in self.choices]) - raise MesonException('Value "%s" for combo option "%s" is not one of the choices. Possible choices are: %s.' % (value, self.name, optionsstring)) + raise MesonException('Value "%s" for combo option is not one of the choices. Possible choices are: %s.' 
% (value, optionsstring)) return value -class UserArrayOption(UserOption): - def __init__(self, name, description, value, shlex_split=False, user_input=False, allow_dups=False, **kwargs): - super().__init__(name, description, kwargs.get('choices', []), yielding=kwargs.get('yielding', None)) +class UserArrayOption(UserOption[List[str]]): + def __init__(self, description, value, shlex_split=False, user_input=False, allow_dups=False, **kwargs): + super().__init__(description, kwargs.get('choices', []), yielding=kwargs.get('yielding', None)) self.shlex_split = shlex_split self.allow_dups = allow_dups self.value = self.validate_value(value, user_input=user_input) - def validate_value(self, value, user_input=True): + def validate_value(self, value, user_input=True) -> List[str]: # User input is for options defined on the command line (via -D # options). Users can put their input in as a comma separated # string, but for defining options in meson_options.txt the format @@ -182,8 +189,8 @@ class UserArrayOption(UserOption): raise MesonException('"{0}" should be a string array, but it is not'.format(str(newvalue))) if not self.allow_dups and len(set(newvalue)) != len(newvalue): - msg = 'Duplicated values in array option "%s" is deprecated. ' \ - 'This will become a hard error in the future.' % (self.name) + msg = 'Duplicated values in array option is deprecated. ' \ + 'This will become a hard error in the future.' 
                mlog.deprecation(msg)
        for i in newvalue:
            if not isinstance(i, str):
@@ -199,8 +206,8 @@ class UserArrayOption(UserOption):
 class UserFeatureOption(UserComboOption):
     static_choices = ['enabled', 'disabled', 'auto']
 
-    def __init__(self, name, description, value, yielding=None):
-        super().__init__(name, description, self.static_choices, value, yielding)
+    def __init__(self, description, value, yielding=None):
+        super().__init__(description, self.static_choices, value, yielding)
 
     def is_enabled(self):
         return self.value == 'enabled'
@@ -219,6 +226,112 @@ def load_configs(filenames: List[str]) -> configparser.ConfigParser:
     return config
 
 
+if typing.TYPE_CHECKING:
+    CacheKeyType = typing.Tuple[typing.Tuple[typing.Any, ...], ...]
+    SubCacheKeyType = typing.Tuple[typing.Any, ...]
+
+
+class DependencyCacheType(enum.Enum):
+
+    OTHER = 0
+    PKG_CONFIG = 1
+
+    @classmethod
+    def from_type(cls, dep: 'dependencies.Dependency') -> 'DependencyCacheType':
+        from . import dependencies
+        # As more types gain search overrides they'll need to be added here
+        if isinstance(dep, dependencies.PkgConfigDependency):
+            return cls.PKG_CONFIG
+        return cls.OTHER
+
+
+class DependencySubCache:
+
+    def __init__(self, type_: DependencyCacheType):
+        self.types = [type_]
+        self.__cache = {}  # type: typing.Dict[SubCacheKeyType, dependencies.Dependency]
+
+    def __getitem__(self, key: 'SubCacheKeyType') -> 'dependencies.Dependency':
+        return self.__cache[key]
+
+    def __setitem__(self, key: 'SubCacheKeyType', value: 'dependencies.Dependency') -> None:
+        self.__cache[key] = value
+
+    def __contains__(self, key: 'SubCacheKeyType') -> bool:
+        return key in self.__cache
+
+    def values(self) -> typing.Iterable['dependencies.Dependency']:
+        return self.__cache.values()
+
+
+class DependencyCache:
+
+    """Class that stores a cache of dependencies.
+
+    This class is meant to encapsulate the fact that we need multiple keys to
+    successfully lookup by providing a simple get/put interface.
+    """
+
+    def __init__(self, builtins: typing.Dict[str, UserOption[typing.Any]], cross: bool):
+        self.__cache = OrderedDict()  # type: typing.MutableMapping[CacheKeyType, DependencySubCache]
+        self.__builtins = builtins
+        self.__is_cross = cross
+
+    def __calculate_subkey(self, type_: DependencyCacheType) -> typing.Tuple[typing.Any, ...]:
+        if type_ is DependencyCacheType.PKG_CONFIG:
+            if self.__is_cross:
+                return tuple(self.__builtins['cross_pkg_config_path'].value)
+            return tuple(self.__builtins['pkg_config_path'].value)
+        assert type_ is DependencyCacheType.OTHER, 'Someone forgot to update subkey calculations for a new type'
+        return tuple()
+
+    def __iter__(self) -> typing.Iterator['CacheKeyType']:
+        return self.keys()
+
+    def put(self, key: 'CacheKeyType', dep: 'dependencies.Dependency') -> None:
+        t = DependencyCacheType.from_type(dep)
+        if key not in self.__cache:
+            self.__cache[key] = DependencySubCache(t)
+        subkey = self.__calculate_subkey(t)
+        self.__cache[key][subkey] = dep
+
+    def get(self, key: 'CacheKeyType') -> typing.Optional['dependencies.Dependency']:
+        """Get a value from the cache.
+
+        If there is no cache entry then None will be returned.
+        """
+        try:
+            val = self.__cache[key]
+        except KeyError:
+            return None
+
+        for t in val.types:
+            subkey = self.__calculate_subkey(t)
+            try:
+                return val[subkey]
+            except KeyError:
+                pass
+        return None
+
+    def values(self) -> typing.Iterator['dependencies.Dependency']:
+        for c in self.__cache.values():
+            yield from c.values()
+
+    def keys(self) -> typing.Iterator['CacheKeyType']:
+        return iter(self.__cache.keys())
+
+    def items(self) -> typing.Iterator[typing.Tuple['CacheKeyType', typing.List['dependencies.Dependency']]]:
+        for k, v in self.__cache.items():
+            vs = []
+            for t in v.types:
+                subkey = self.__calculate_subkey(t)
+                if subkey in v:
+                    vs.append(v[subkey])
+            yield k, vs
+
+    def clear(self) -> None:
+        self.__cache.clear()
+
 # This class contains all data that must persist over multiple
 # invocations of Meson. It is roughly the same thing as
 # cmakecache.
@@ -241,12 +354,19 @@ class CoreData:
         self.init_builtins()
         self.backend_options = {}
         self.user_options = {}
-        self.compiler_options = PerMachine({}, {}, {})
+        self.compiler_options = PerMachine({}, {})
         self.base_options = {}
         self.cross_files = self.__load_config_files(options.cross_file, 'cross')
         self.compilers = OrderedDict()
         self.cross_compilers = OrderedDict()
-        self.deps = OrderedDict()
+
+        build_cache = DependencyCache(self.builtins, False)
+        if self.cross_files:
+            host_cache = DependencyCache(self.builtins, True)
+        else:
+            host_cache = build_cache
+        self.deps = PerMachine(build_cache, host_cache)  # type: PerMachine[DependencyCache]
+
         self.compiler_check_cache = OrderedDict()
         # Only to print a warning if it changes between Meson invocations.
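[Review note] The `DependencyCache` added above keys each lookup twice: first by the dependency identifier, then by a per-type subkey (for pkg-config dependencies, the current `pkg_config_path` value), so a dependency found under one search path is not wrongly reused when the path option changes. A minimal standalone sketch of that idea (hypothetical names, not Meson's actual API):

```python
from collections import OrderedDict

class TwoLevelCache:
    """Cache keyed first by an identifier, then by a context subkey.

    Mirrors the idea behind DependencyCache: the subkey captures state
    (e.g. the pkg-config search path) that changes what a lookup means.
    """
    def __init__(self):
        self._cache = OrderedDict()

    def put(self, key, subkey, value):
        # Create the sub-cache on first use, then store under the subkey.
        self._cache.setdefault(key, {})[subkey] = value

    def get(self, key, subkey):
        # A miss on either level is a miss overall.
        return self._cache.get(key, {}).get(subkey)

cache = TwoLevelCache()
cache.put(('zlib',), ('/usr/lib/pkgconfig',), 'dep-A')
cache.put(('zlib',), ('/opt/lib/pkgconfig',), 'dep-B')
assert cache.get(('zlib',), ('/usr/lib/pkgconfig',)) == 'dep-A'
assert cache.get(('zlib',), ('/tmp',)) is None  # different context, no reuse
```

The same dependency identifier can thus hold several cached results at once, one per search-path configuration.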
         self.config_files = self.__load_config_files(options.native_file, 'native')
@@ -334,22 +454,20 @@ class CoreData:
         # Create builtin options with default values
         self.builtins = {}
         for key, opt in builtin_options.items():
-            self.builtins[key] = opt.init_option(key)
+            self.builtins[key] = opt.init_option()
             if opt.separate_cross:
-                self.builtins['cross_' + key] = opt.init_option(key)
+                self.builtins['cross_' + key] = opt.init_option()
 
     def init_backend_options(self, backend_name):
         if backend_name == 'ninja':
             self.backend_options['backend_max_links'] = \
                 UserIntegerOption(
-                    'backend_max_links',
                     'Maximum number of linker processes to run or 0 for no '
                     'limit',
                     0, None, 0)
         elif backend_name.startswith('vs'):
             self.backend_options['backend_startup_project'] = \
                 UserStringOption(
-                    'backend_startup_project',
                     'Default project to execute in Visual Studio',
                     '')
@@ -433,7 +551,11 @@ class CoreData:
         for opts in self.get_all_options():
             if option_name in opts:
                 opt = opts[option_name]
-                return opt.validate_value(override_value)
+                try:
+                    return opt.validate_value(override_value)
+                except MesonException as e:
+                    raise type(e)(('Validation failed for option %s: ' % option_name) + str(e)) \
+                        .with_traceback(sys.exc_info()[2])
         raise MesonException('Tried to validate unknown option %s.' % option_name)
 
     def get_external_args(self, for_machine: MachineChoice, lang):
@@ -506,8 +628,11 @@ class CoreData:
 
         # Some options default to environment variables if they are
         # unset, set those now. These will either be overwritten
-        # below, or they won't.
-        options['pkg_config_path'] = os.environ.get('PKG_CONFIG_PATH', '').split(':')
+        # below, or they won't. These should only be set on the first run.
+        if env.first_invocation:
+            p_env = os.environ.get('PKG_CONFIG_PATH')
+            if p_env:
+                options['pkg_config_path'] = p_env.split(':')
 
         for k, v in env.cmd_line_options.items():
             if subproject:
@@ -695,9 +820,9 @@ def parse_cmd_line_options(args):
             delattr(args, name)
 
 
-_U = TypeVar('_U', bound=UserOption)
+_U = TypeVar('_U', bound=UserOption[_T])
 
-class BuiltinOption(Generic[_U]):
+class BuiltinOption(Generic[_T, _U]):
 
     """Class for a builtin option type.
 
@@ -713,12 +838,12 @@ class BuiltinOption(Generic[_U]):
         self.yielding = yielding
         self.separate_cross = separate_cross
 
-    def init_option(self, name: str) -> _U:
+    def init_option(self) -> _U:
         """Create an instance of opt_type and return it."""
         keywords = {'yielding': self.yielding, 'value': self.default}
         if self.choices:
             keywords['choices'] = self.choices
-        return self.opt_type(name, self.description, **keywords)
+        return self.opt_type(self.description, **keywords)
 
     def _argparse_action(self) -> Optional[str]:
         if self.default is True:
diff --git a/mesonbuild/dependencies/base.py b/mesonbuild/dependencies/base.py
index 65ec9c6..fd2a219 100644
--- a/mesonbuild/dependencies/base.py
+++ b/mesonbuild/dependencies/base.py
@@ -549,7 +549,7 @@ class ConfigToolDependency(ExternalDependency):
 class PkgConfigDependency(ExternalDependency):
     # The class's copy of the pkg-config path. Avoids having to search for it
     # multiple times in the same Meson invocation.
-    class_pkgbin = PerMachine(None, None, None)
+    class_pkgbin = PerMachine(None, None)
     # We cache all pkg-config subprocess invocations to avoid redundant calls
     pkgbin_cache = {}
 
@@ -1001,9 +1001,9 @@ class CMakeTarget:
 class CMakeDependency(ExternalDependency):
     # The class's copy of the CMake path. Avoids having to search for it
     # multiple times in the same Meson invocation.
-    class_cmakebin = PerMachine(None, None, None)
-    class_cmakevers = PerMachine(None, None, None)
-    class_cmakeinfo = PerMachine(None, None, None)
+    class_cmakebin = PerMachine(None, None)
+    class_cmakevers = PerMachine(None, None)
+    class_cmakeinfo = PerMachine(None, None)
     # We cache all pkg-config subprocess invocations to avoid redundant calls
     cmake_cache = {}
     # Version string for the minimum CMake version
@@ -2043,7 +2043,8 @@ class DubDependency(ExternalDependency):
 class ExternalProgram:
     windows_exts = ('exe', 'msc', 'com', 'bat', 'cmd')
 
-    def __init__(self, name, command=None, silent=False, search_dir=None):
+    def __init__(self, name: str, command: typing.Optional[typing.List[str]] = None,
+                 silent: bool = False, search_dir: typing.Optional[str] = None):
         self.name = name
         if command is not None:
             self.command = listify(command)
@@ -2067,11 +2068,11 @@ class ExternalProgram:
         else:
             mlog.log('Program', mlog.bold(name), 'found:', mlog.red('NO'))
 
-    def __repr__(self):
+    def __repr__(self) -> str:
         r = '<{} {!r} -> {!r}>'
         return r.format(self.__class__.__name__, self.name, self.command)
 
-    def description(self):
+    def description(self) -> str:
         '''Human friendly description of the command'''
         return ' '.join(self.command)
 
@@ -2230,7 +2231,7 @@ class ExternalProgram:
         # all executables whether in PATH or with an absolute path
         return [command]
 
-    def found(self):
+    def found(self) -> bool:
         return self.command[0] is not None
 
     def get_command(self):
@@ -2418,8 +2419,8 @@ class ExtraFrameworkDependency(ExternalDependency):
         return 'framework'
 
 
-def get_dep_identifier(name, kwargs, want_cross: bool) -> Tuple:
-    identifier = (name, want_cross)
+def get_dep_identifier(name, kwargs) -> Tuple:
+    identifier = (name, )
     for key, value in kwargs.items():
         # 'version' is irrelevant for caching; the caller must check version matches
         # 'native' is handled above with `want_cross`
diff --git a/mesonbuild/envconfig.py b/mesonbuild/envconfig.py
index 70f964e..0cdb4c4 100644
--- a/mesonbuild/envconfig.py
+++ b/mesonbuild/envconfig.py
@@ -16,7 +16,7 @@
 import configparser, os, shlex, subprocess
 import typing
 
 from . import mesonlib
-from .mesonlib import EnvironmentException, MachineChoice, PerMachine
+from .mesonlib import EnvironmentException
 from . import mlog
 
 _T = typing.TypeVar('_T')
@@ -255,40 +255,6 @@ class MachineInfo:
     def libdir_layout_is_win(self) -> bool:
         return self.is_windows() or self.is_cygwin()
 
-class PerMachineDefaultable(PerMachine[typing.Optional[_T]]):
-    """Extends `PerMachine` with the ability to default from `None`s.
-    """
-    def __init__(self) -> None:
-        super().__init__(None, None, None)
-
-    def default_missing(self) -> None:
-        """Default host to buid and target to host.
-
-        This allows just specifying nothing in the native case, just host in the
-        cross non-compiler case, and just target in the native-built
-        cross-compiler case.
-        """
-        if self.host is None:
-            self.host = self.build
-        if self.target is None:
-            self.target = self.host
-
-    def miss_defaulting(self) -> None:
-        """Unset definition duplicated from their previous to None
-
-        This is the inverse of ''default_missing''. By removing defaulted
-        machines, we can elaborate the original and then redefault them and thus
-        avoid repeating the elaboration explicitly.
-        """
-        if self.target == self.host:
-            self.target = None
-        if self.host == self.build:
-            self.host = None
-
-class MachineInfos(PerMachineDefaultable[MachineInfo]):
-    def matches_build_machine(self, machine: MachineChoice) -> bool:
-        return self.build == self[machine]
-
 class BinaryTable(HasEnvVarFallback):
     def __init__(
             self,
diff --git a/mesonbuild/environment.py b/mesonbuild/environment.py
index dd1d4cf..6b536a4 100644
--- a/mesonbuild/environment.py
+++ b/mesonbuild/environment.py
@@ -15,16 +15,17 @@
 import os, platform, re, sys, shlex, shutil, subprocess, typing
 
 from . import coredata
-from .linkers import ArLinker, ArmarLinker, VisualStudioLinker, DLinker, CcrxLinker
+from .linkers import ArLinker, ArmarLinker, VisualStudioLinker, DLinker, CcrxLinker, IntelVisualStudioLinker
 from . import mesonlib
 from .mesonlib import (
     MesonException, EnvironmentException, MachineChoice, Popen_safe,
+    PerMachineDefaultable, PerThreeMachineDefaultable
 )
 from . import mlog
 from .envconfig import (
-    BinaryTable, Directories, MachineInfo, MachineInfos, MesonConfigFile,
-    PerMachineDefaultable, Properties, known_cpu_families,
+    BinaryTable, Directories, MachineInfo, MesonConfigFile,
+    Properties, known_cpu_families,
 )
 from . import compilers
 from .compilers import (
@@ -60,7 +61,10 @@
     ElbrusFortranCompiler,
     IntelCCompiler,
     IntelCPPCompiler,
+    IntelClCCompiler,
+    IntelClCPPCompiler,
     IntelFortranCompiler,
+    IntelClFortranCompiler,
     JavaCompiler,
     MonoCompiler,
     CudaCompiler,
@@ -111,7 +115,7 @@ def find_coverage_tools():
     return gcovr_exe, gcovr_new_rootdir, lcov_exe, genhtml_exe
 
-def detect_ninja(version='1.5', log=False):
+def detect_ninja(version: str = '1.5', log: bool = False) -> str:
     env_ninja = os.environ.get('NINJA', None)
     for n in [env_ninja] if env_ninja else ['ninja', 'ninja-build', 'samu']:
         try:
@@ -400,48 +404,62 @@ class Environment:
             # Just create a fresh coredata in this case
             self.create_new_coredata(options)
 
-        self.machines = MachineInfos()
-        # Will be fully initialized later using compilers later.
-        self.detect_build_machine()
+        ## locally bind some unfrozen configuration
 
-        # Similar to coredata.compilers and build.compilers, but lower level in
-        # that there is no meta data, only names/paths.
-        self.binaries = PerMachineDefaultable()
+        # Stores machine infos. This is the only *three*-machine value because
+        # we keep target machine info for the user (Meson itself never cares
+        # about the target machine).
+        machines = PerThreeMachineDefaultable()
 
-        # Misc other properties about each machine.
-        self.properties = PerMachineDefaultable()
+        # Similar to coredata.compilers, but lower level in that there is no
+        # meta data, only names/paths.
+        binaries = PerMachineDefaultable()
 
-        # Just uses hard-coded defaults and environment variables. Might be
-        # overwritten by a native file.
-        self.binaries.build = BinaryTable()
-        self.properties.build = Properties()
+        # Misc other properties about each machine.
+        properties = PerMachineDefaultable()
 
         # Store paths for native and cross build files. There is no target
         # machine information here because nothing is installed for the target
         # architecture, just the build and host architectures
-        self.paths = PerMachineDefaultable()
+        paths = PerMachineDefaultable()
+
+        ## Setup build machine defaults
+
+        # Will be fully initialized later using compilers.
+        machines.build = detect_machine_info()
+
+        # Just uses hard-coded defaults and environment variables. Might be
+        # overwritten by a native file.
+        binaries.build = BinaryTable()
+        properties.build = Properties()
+
+        ## Read in native file(s) to override build machine configuration
 
         if self.coredata.config_files is not None:
             config = MesonConfigFile.from_config_parser(
                 coredata.load_configs(self.coredata.config_files))
-            self.binaries.build = BinaryTable(config.get('binaries', {}))
-            self.paths.build = Directories(**config.get('paths', {}))
+            binaries.build = BinaryTable(config.get('binaries', {}))
+            paths.build = Directories(**config.get('paths', {}))
+
+        ## Read in cross file(s) to override host machine configuration
 
         if self.coredata.cross_files:
             config = MesonConfigFile.from_config_parser(
                 coredata.load_configs(self.coredata.cross_files))
-            self.properties.host = Properties(config.get('properties', {}), False)
-            self.binaries.host = BinaryTable(config.get('binaries', {}), False)
+            properties.host = Properties(config.get('properties', {}), False)
+            binaries.host = BinaryTable(config.get('binaries', {}), False)
             if 'host_machine' in config:
-                self.machines.host = MachineInfo.from_literal(config['host_machine'])
+                machines.host = MachineInfo.from_literal(config['host_machine'])
             if 'target_machine' in config:
-                self.machines.target = MachineInfo.from_literal(config['target_machine'])
-            self.paths.host = Directories(**config.get('paths', {}))
+                machines.target = MachineInfo.from_literal(config['target_machine'])
+            paths.host = Directories(**config.get('paths', {}))
 
-        self.machines.default_missing()
-        self.binaries.default_missing()
-        self.properties.default_missing()
-        self.paths.default_missing()
+        ## "freeze" the now-initialized configuration, and "save" it to the class.
+
+        self.machines = machines.default_missing()
+        self.binaries = binaries.default_missing()
+        self.properties = properties.default_missing()
+        self.paths = paths.default_missing()
 
         exe_wrapper = self.binaries.host.lookup_entry('exe_wrapper')
         if exe_wrapper is not None:
@@ -457,12 +475,18 @@ class Environment:
         # List of potential compilers.
         if mesonlib.is_windows():
             # Intel C and C++ compiler is icl on Windows, but icc and icpc elsewhere.
-            self.default_c = ['cl', 'cc', 'gcc', 'clang', 'clang-cl', 'pgcc', 'icl']
+            # Search for icl before cl, since Intel "helpfully" provides a
+            # cl.exe that returns *exactly the same thing* that Microsoft's
+            # cl.exe does, and if icl is present, it's almost certainly what
+            # you want.
+            self.default_c = ['icl', 'cl', 'cc', 'gcc', 'clang', 'clang-cl', 'pgcc']
             # There is currently no pgc++ for Windows, only for Mac and Linux.
-            self.default_cpp = ['cl', 'c++', 'g++', 'clang++', 'clang-cl', 'icl']
+            self.default_cpp = ['icl', 'cl', 'c++', 'g++', 'clang++', 'clang-cl']
+            self.default_fortran = ['ifort', 'gfortran', 'flang', 'pgfortran', 'g95']
         else:
             self.default_c = ['cc', 'gcc', 'clang', 'pgcc', 'icc']
             self.default_cpp = ['c++', 'g++', 'clang++', 'pgc++', 'icpc']
+            self.default_fortran = ['gfortran', 'flang', 'pgfortran', 'ifort', 'g95']
         if mesonlib.is_windows():
             self.default_cs = ['csc', 'mcs']
         else:
@@ -470,7 +494,6 @@ class Environment:
         self.default_objc = ['cc']
         self.default_objcpp = ['c++']
         self.default_d = ['ldc2', 'ldc', 'gdc', 'dmd']
-        self.default_fortran = ['gfortran', 'flang', 'pgfortran', 'ifort', 'g95']
         self.default_java = ['javac']
         self.default_cuda = ['nvcc']
         self.default_rust = ['rustc']
@@ -512,7 +535,7 @@ class Environment:
         self.coredata.meson_command = mesonlib.meson_command
         self.first_invocation = True
 
-    def is_cross_build(self):
+    def is_cross_build(self) -> bool:
         return not self.machines.matches_build_machine(MachineChoice.HOST)
 
     def dump_coredata(self):
@@ -677,8 +700,12 @@ class Environment:
             arg = '--vsn'
         elif 'ccrx' in compiler[0]:
             arg = '-v'
+        elif 'icl' in compiler[0]:
+            # if you pass anything to icl you get stuck in a pager
+            arg = ''
         else:
             arg = '--version'
+
         try:
             p, out, err = Popen_safe(compiler + [arg])
         except OSError as e:
@@ -753,6 +780,11 @@ class Environment:
                 compiler_type = CompilerType.CLANG_STANDARD
             cls = ClangCCompiler if lang == 'c' else ClangCPPCompiler
             return cls(ccache + compiler, version, compiler_type, is_cross, exe_wrap, full_version=full_version)
+        if 'Intel(R) C++ Intel(R)' in err:
+            version = search_version(err)
+            target = 'x86' if 'IA-32' in err else 'x86_64'
+            cls = IntelClCCompiler if lang == 'c' else IntelClCPPCompiler
+            return cls(compiler, version, is_cross, exe_wrap, target)
         if 'Microsoft' in out or 'Microsoft' in err:
             # Latest versions of Visual Studio print version
             # number to stderr but earlier ones print version
@@ -886,6 +918,11 @@ Environment:
                 version = search_version(err)
                 return SunFortranCompiler(compiler, version, is_cross, exe_wrap, full_version=full_version)
 
+            if 'Intel(R) Visual Fortran' in err:
+                version = search_version(err)
+                target = 'x86' if 'IA-32' in err else 'x86_64'
+                return IntelClFortranCompiler(compiler, version, is_cross, target, exe_wrap)
+
             if 'ifort (IFORT)' in out:
                 return IntelFortranCompiler(compiler, version, is_cross, exe_wrap, full_version=full_version)
 
@@ -1172,11 +1209,14 @@ class Environment:
                 linkers = [self.vs_static_linker, self.clang_cl_static_linker, compiler.get_linker_exelist()]
             else:
                 linkers = [self.default_static_linker, compiler.get_linker_exelist()]
+        elif isinstance(compiler, IntelClCCompiler):
+            # Intel has its own linker that acts like Microsoft's lib
+            linkers = ['xilib']
         else:
             linkers = [self.default_static_linker]
         popen_exceptions = {}
         for linker in linkers:
-            if not set(['lib', 'lib.exe', 'llvm-lib', 'llvm-lib.exe']).isdisjoint(linker):
+            if not {'lib', 'lib.exe', 'llvm-lib', 'llvm-lib.exe', 'xilib', 'xilib.exe'}.isdisjoint(linker):
                 arg = '/?'
             else:
                 arg = '--version'
@@ -1185,6 +1225,8 @@ class Environment:
             except OSError as e:
                 popen_exceptions[' '.join(linker + [arg])] = e
                 continue
+            if "xilib: executing 'lib'" in err:
+                return IntelVisualStudioLinker(linker, getattr(compiler, 'machine', None))
             if '/OUT:' in out.upper() or '/OUT:' in err.upper():
                 return VisualStudioLinker(linker, getattr(compiler, 'machine', None))
             if p.returncode == 0 and ('armar' in linker or 'armar.exe' in linker):
@@ -1206,9 +1248,6 @@ class Environment:
         self._handle_exceptions(popen_exceptions, linkers, 'linker')
         raise EnvironmentException('Unknown static linker "%s"' % ' '.join(linkers))
 
-    def detect_build_machine(self, compilers = None):
-        self.machines.build = detect_machine_info(compilers)
-
     def get_source_dir(self):
         return self.source_dir
diff --git a/mesonbuild/interpreter.py b/mesonbuild/interpreter.py
index 6ae1b27..30be5ae 100644
--- a/mesonbuild/interpreter.py
+++ b/mesonbuild/interpreter.py
@@ -39,6 +39,7 @@
 from collections import namedtuple
 from itertools import chain
 from pathlib import PurePath
 import functools
+import typing
 import importlib
 
@@ -60,12 +61,12 @@ def stringifyUserArguments(args):
 
 class FeatureOptionHolder(InterpreterObject, ObjectHolder):
-    def __init__(self, env, option):
+    def __init__(self, env, name, option):
         InterpreterObject.__init__(self)
         ObjectHolder.__init__(self, option)
         if option.is_auto():
             self.held_object = env.coredata.builtins['auto_features']
-        self.name = option.name
+        self.name = name
         self.methods.update({'enabled': self.enabled_method,
                              'disabled': self.disabled_method,
                              'auto': self.auto_method,
@@ -872,8 +873,10 @@ class RunTargetHolder(InterpreterObject, ObjectHolder):
         return r.format(self.__class__.__name__, h.get_id(), h.command)
 
 class Test(InterpreterObject):
-    def __init__(self, name, project, suite, exe, depends, is_parallel,
-                 cmd_args, env, should_fail, timeout, workdir, protocol):
+    def __init__(self, name: str, project: str, suite: typing.List[str], exe: build.Executable,
+                 depends: typing.List[typing.Union[build.CustomTarget, build.BuildTarget]],
+                 is_parallel: bool, cmd_args: typing.List[str], env: build.EnvironmentVariables,
+                 should_fail: bool, timeout: int, workdir: typing.Optional[str], protocol: str):
         InterpreterObject.__init__(self)
         self.name = name
         self.suite = suite
@@ -1646,7 +1649,7 @@ class CompilerHolder(InterpreterObject):
 
 ModuleState = namedtuple('ModuleState', [
     'build_to_src', 'subproject', 'subdir', 'current_lineno', 'environment',
-    'project_name', 'project_version', 'backend', 'compilers', 'targets',
+    'project_name', 'project_version', 'backend', 'targets',
     'data', 'headers', 'man', 'global_args', 'project_args', 'build_machine',
     'host_machine', 'target_machine', 'current_node'])
 
@@ -1681,7 +1684,6 @@ class ModuleHolder(InterpreterObject, ObjectHolder):
             # The backend object is under-used right now, but we will need it:
             # https://github.com/mesonbuild/meson/issues/1419
             backend=self.interpreter.backend,
-            compilers=self.interpreter.build.compilers,
             targets=self.interpreter.build.targets,
             data=self.interpreter.build.data,
             headers=self.interpreter.build.get_headers(),
@@ -1838,9 +1840,9 @@ class MesonMain(InterpreterObject):
         if not isinstance(native, bool):
             raise InterpreterException('Type of "native" must be a boolean.')
         if native:
-            clist = self.build.compilers
+            clist = self.interpreter.coredata.compilers
         else:
-            clist = self.build.cross_compilers
+            clist = self.interpreter.coredata.cross_compilers
         if cname in clist:
             return CompilerHolder(clist[cname], self.build.environment, self.interpreter.subproject)
         raise InterpreterException('Tried to access compiler for unspecified language "%s".'
                                    % cname)
@@ -1997,7 +1999,12 @@ permitted_kwargs = {'add_global_arguments': {'language', 'native'},
                     },
                     'executable': build.known_exe_kwargs,
                     'find_program': {'required', 'native'},
-                    'generator': {'arguments', 'output', 'depfile', 'capture', 'preserve_path_from'},
+                    'generator': {'arguments',
+                                  'output',
+                                  'depends',
+                                  'depfile',
+                                  'capture',
+                                  'preserve_path_from'},
                     'include_directories': {'is_system'},
                     'install_data': {'install_dir', 'install_mode', 'rename', 'sources'},
                     'install_headers': {'install_dir', 'install_mode', 'subdir'},
@@ -2063,12 +2070,12 @@ class Interpreter(InterpreterBase):
         if not mock:
             self.parse_project()
 
-        # Initialize machine descriptions. We can do a better job now because we
+        # Re-initialize machine descriptions. We can do a better job now because we
         # have the compilers needed to gain more knowledge, so wipe out old
        # inferrence and start over.
-        self.build.environment.machines.miss_defaulting()
-        self.build.environment.detect_build_machine(self.coredata.compilers)
-        self.build.environment.machines.default_missing()
+        machines = self.build.environment.machines.miss_defaulting()
+        machines.build = environment.detect_machine_info(self.coredata.compilers)
+        self.build.environment.machines = machines.default_missing()
         assert self.build.environment.machines.build.cpu is not None
         assert self.build.environment.machines.host.cpu is not None
         assert self.build.environment.machines.target.cpu is not None
@@ -2222,7 +2229,7 @@ class Interpreter(InterpreterBase):
     def check_cross_stdlibs(self):
         if self.build.environment.is_cross_build():
             props = self.build.environment.properties.host
-            for l in self.build.cross_compilers.keys():
+            for l in self.coredata.cross_compilers.keys():
                 try:
                     di = mesonlib.stringlistify(props.get_stdlib(l))
                     if len(di) != 2:
@@ -2552,7 +2559,7 @@ external dependencies (including libraries) must go to "dependencies".''')
                                        'options of other subprojects.')
             opt = self.get_option_internal(optname)
             if isinstance(opt, coredata.UserFeatureOption):
-                return FeatureOptionHolder(self.environment, opt)
+                return FeatureOptionHolder(self.environment, optname, opt)
             elif isinstance(opt, coredata.UserOption):
                 return opt.value
             return opt
@@ -2873,14 +2880,13 @@ external dependencies (including libraries) must go to "dependencies".''')
         # FIXME: Not all dependencies support such a distinction right now,
         # and we repeat this check inside dependencies that do. We need to
         # consolidate this somehow.
-        is_cross = self.environment.is_cross_build()
-        if 'native' in kwargs and is_cross:
-            want_cross = not kwargs['native']
+        if self.environment.is_cross_build() and kwargs.get('native', False):
+            for_machine = MachineChoice.BUILD
         else:
-            want_cross = is_cross
+            for_machine = MachineChoice.HOST
 
-        identifier = dependencies.get_dep_identifier(name, kwargs, want_cross)
-        cached_dep = self.coredata.deps.get(identifier)
+        identifier = dependencies.get_dep_identifier(name, kwargs)
+        cached_dep = self.coredata.deps[for_machine].get(identifier)
         if cached_dep:
             if not cached_dep.found():
                 mlog.log('Dependency', mlog.bold(name),
@@ -3032,7 +3038,11 @@ external dependencies (including libraries) must go to "dependencies".''')
         # cannot cache them. They must always be evaluated else
         # we won't actually read all the build files.
         if dep.found():
-            self.coredata.deps[identifier] = dep
+            if self.environment.is_cross_build() and kwargs.get('native', False):
+                for_machine = MachineChoice.BUILD
+            else:
+                for_machine = MachineChoice.HOST
+            self.coredata.deps[for_machine].put(identifier, dep)
             return DependencyHolder(dep, self.subproject)
 
         if has_fallback:
@@ -3270,7 +3280,7 @@ This will become a hard error in the future.''' % kwargs['input'], location=self
     def func_test(self, node, args, kwargs):
         self.add_test(node, args, kwargs, True)
 
-    def unpack_env_kwarg(self, kwargs):
+    def unpack_env_kwarg(self, kwargs) -> build.EnvironmentVariables:
         envlist = kwargs.get('env', EnvironmentVariablesHolder())
         if isinstance(envlist, EnvironmentVariablesHolder):
             env = envlist.held_object
@@ -3778,10 +3788,7 @@ different subdirectory.
                                        'is_default can be set to true only once' % self.build.test_setup_default_name)
             self.build.test_setup_default_name = setup_name
         env = self.unpack_env_kwarg(kwargs)
-        self.build.test_setups[setup_name] = build.TestSetup(exe_wrapper=exe_wrapper,
-                                                             gdb=gdb,
-                                                             timeout_multiplier=timeout_multiplier,
-                                                             env=env)
+        self.build.test_setups[setup_name] = build.TestSetup(exe_wrapper, gdb, timeout_multiplier, env)
 
     def get_argdict_on_crossness(self, native_dict, cross_dict, kwargs):
         for_native = kwargs.get('native', not self.environment.is_cross_build())
@@ -3874,7 +3881,7 @@ different subdirectory.
         self.print_extra_warnings()
 
     def print_extra_warnings(self):
-        for c in self.build.compilers.values():
+        for c in self.coredata.compilers.values():
             if c.get_id() == 'clang':
                 self.check_clang_asan_lundef()
                 break
@@ -4091,7 +4098,7 @@ This will become a hard error in the future.''', location=self.current_node)
     def get_used_languages(self, target):
         result = {}
         for i in target.sources:
-            for lang, c in self.build.compilers.items():
+            for lang, c in self.coredata.compilers.items():
                 if c.can_compile(i):
                     result[lang] = True
                     break
diff --git a/mesonbuild/linkers.py b/mesonbuild/linkers.py
index c6302bf..648d1ef 100644
--- a/mesonbuild/linkers.py
+++ b/mesonbuild/linkers.py
@@ -23,7 +23,7 @@ class StaticLinker:
         return mesonlib.is_windows()
 
 
-class VisualStudioLinker(StaticLinker):
+class VisualStudioLikeLinker:
     always_args = ['/NOLOGO']
 
     def __init__(self, exelist, machine):
@@ -31,7 +31,7 @@
         self.machine = machine
 
     def get_exelist(self):
-        return self.exelist[:]
+        return self.exelist.copy()
 
     def get_std_link_args(self):
         return []
@@ -50,10 +50,10 @@
         return []
 
     def get_always_args(self):
-        return VisualStudioLinker.always_args[:]
+        return self.always_args.copy()
 
     def get_linker_always_args(self):
-        return VisualStudioLinker.always_args[:]
+        return self.always_args.copy()
 
     def build_rpath_args(self, build_dir, from_dir, rpath_paths, build_rpath, install_rpath):
         return []
@@ -77,6 +77,16 @@
         return []
 
 
+class VisualStudioLinker(VisualStudioLikeLinker, StaticLinker):
+
+    """Microsoft's lib static linker."""
+
+
+class IntelVisualStudioLinker(VisualStudioLikeLinker, StaticLinker):
+
+    """Intel's xilib static linker."""
+
+
 class ArLinker(StaticLinker):
 
     def __init__(self, exelist):
diff --git a/mesonbuild/mconf.py b/mesonbuild/mconf.py
index cd9d35a..6e0d2d0 100644
--- a/mesonbuild/mconf.py
+++ b/mesonbuild/mconf.py
@@ -14,6 +14,7 @@
 import os
 from . import coredata, environment, mesonlib, build, mintro, mlog
+from .ast import AstIDGenerator
 
 def add_arguments(parser):
     coredata.register_builtin_arguments(parser)
@@ -52,7 +53,7 @@ class Conf:
             # Make sure that log entries in other parts of meson don't interfere with the JSON output
             mlog.disable()
             self.source_dir = os.path.abspath(os.path.realpath(self.build_dir))
-            intr = mintro.IntrospectionInterpreter(self.source_dir, '', 'ninja')
+            intr = mintro.IntrospectionInterpreter(self.source_dir, '', 'ninja', visitors = [AstIDGenerator()])
             intr.analyze()
             # Reenable logging just in case
             mlog.enable()
@@ -62,7 +63,8 @@ class Conf:
             raise ConfException('Directory {} is neither a Meson build directory nor a project source directory.'.format(build_dir))
 
     def clear_cache(self):
-        self.coredata.deps = {}
+        self.coredata.deps.host.clear()
+        self.coredata.deps.build.clear()
 
     def set_options(self, options):
         self.coredata.set_options(options)
diff --git a/mesonbuild/mesonlib.py b/mesonbuild/mesonlib.py
index f78fa35..eb59a1c 100644
--- a/mesonbuild/mesonlib.py
+++ b/mesonbuild/mesonlib.py
@@ -313,37 +313,123 @@ class OrderedEnum(Enum):
             return self.value < other.value
         return NotImplemented
 
+
 class MachineChoice(OrderedEnum):
 
-    """Enum class representing one of the three possible values for binaries,
-    the build, host, and target machines.
+    """Enum class representing one of the two abstract machine names used in
+    most places: the build, and host, machines.
     """
 
     BUILD = 0
     HOST = 1
-    TARGET = 2
+
 
 class PerMachine(typing.Generic[_T]):
-    def __init__(self, build: _T, host: _T, target: _T):
+    def __init__(self, build: _T, host: _T):
         self.build = build
         self.host = host
-        self.target = target
 
     def __getitem__(self, machine: MachineChoice) -> _T:
         return {
             MachineChoice.BUILD:  self.build,
             MachineChoice.HOST:   self.host,
-            MachineChoice.TARGET: self.target
         }[machine]
 
     def __setitem__(self, machine: MachineChoice, val: _T) -> None:
         key = {
             MachineChoice.BUILD:  'build',
             MachineChoice.HOST:   'host',
-            MachineChoice.TARGET: 'target'
         }[machine]
         setattr(self, key, val)
 
+    def miss_defaulting(self) -> "PerMachineDefaultable[typing.Optional[_T]]":
+        """Unset definitions duplicated from their previous machine to None.
+
+        This is the inverse of ''default_missing''. By removing defaulted
+        machines, we can elaborate the original and then redefault them and thus
+        avoid repeating the elaboration explicitly.
+        """
+        unfreeze = PerMachineDefaultable()  # type: PerMachineDefaultable[typing.Optional[_T]]
+        unfreeze.build = self.build
+        unfreeze.host = self.host
+        if unfreeze.host == unfreeze.build:
+            unfreeze.host = None
+        return unfreeze
+
+
+class PerThreeMachine(PerMachine[_T]):
+    """Like `PerMachine` but includes `target` too.
+
+    It turns out the target machine is needed to track just one thing. There is
+    no need to compute the `target` field, so we don't bother overriding the
+    `__getitem__`/`__setitem__` methods.
+    """
+    def __init__(self, build: _T, host: _T, target: _T):
+        super().__init__(build, host)
+        self.target = target
+
+    def miss_defaulting(self) -> "PerThreeMachineDefaultable[typing.Optional[_T]]":
+        """Unset definitions duplicated from their previous machine to None.
+
+        This is the inverse of ''default_missing''. By removing defaulted
+        machines, we can elaborate the original and then redefault them and thus
+        avoid repeating the elaboration explicitly.
+        """
+        unfreeze = PerThreeMachineDefaultable()  # type: PerThreeMachineDefaultable[typing.Optional[_T]]
+        unfreeze.build = self.build
+        unfreeze.host = self.host
+        unfreeze.target = self.target
+        if unfreeze.target == unfreeze.host:
+            unfreeze.target = None
+        if unfreeze.host == unfreeze.build:
+            unfreeze.host = None
+        return unfreeze
+
+    def matches_build_machine(self, machine: MachineChoice) -> bool:
+        return self.build == self[machine]
+
+
+class PerMachineDefaultable(PerMachine[typing.Optional[_T]]):
+    """Extends `PerMachine` with the ability to default from `None`s.
+    """
+    def __init__(self) -> None:
+        super().__init__(None, None)
+
+    def default_missing(self) -> "PerMachine[typing.Optional[_T]]":
+        """Default host to build.
+
+        This allows just specifying nothing in the native case, and just host
+        in the cross non-compiler case.
+        """
+        freeze = PerMachine(self.build, self.host)
+        if freeze.host is None:
+            freeze.host = freeze.build
+        return freeze
+
+
+class PerThreeMachineDefaultable(PerMachineDefaultable, PerThreeMachine[typing.Optional[_T]]):
+    """Extends `PerThreeMachine` with the ability to default from `None`s.
+    """
+    def __init__(self) -> None:
+        PerThreeMachine.__init__(self, None, None, None)
+
+    def default_missing(self) -> "PerThreeMachine[typing.Optional[_T]]":
+        """Default host to build and target to host.
+
+        This allows just specifying nothing in the native case, just host in the
+        cross non-compiler case, and just target in the native-built
+        cross-compiler case.
+        """
+        freeze = PerThreeMachine(self.build, self.host, self.target)
+        if freeze.host is None:
+            freeze.host = freeze.build
+        if freeze.target is None:
+            freeze.target = freeze.host
+        return freeze
+
+
+def is_sunos() -> bool:
+    return platform.system().lower() == 'sunos'
+
+
 def is_osx() -> bool:
     return platform.system().lower() == 'darwin'
 
@@ -1216,7 +1302,7 @@ def detect_subprojects(spdir_name, current_dir='', result=None):
 
 def get_error_location_string(fname: str, lineno: str) -> str:
     return '{}:{}:'.format(fname, lineno)
 
-def substring_is_in_list(substr, strlist):
+def substring_is_in_list(substr: str, strlist: typing.List[str]) -> bool:
     for s in strlist:
         if substr in s:
             return True
diff --git a/mesonbuild/mintro.py b/mesonbuild/mintro.py
index 8c8aa15..1d716ef 100644
--- a/mesonbuild/mintro.py
+++ b/mesonbuild/mintro.py
@@ -26,7 +26,7 @@
 from .ast import IntrospectionInterpreter, build_target_functions, AstConditionLevel
 from . import mlog
 from .backend import backends
 from .mparser import FunctionNode, ArrayNode, ArgumentNode, StringNode
-from typing import List, Optional
+from typing import Dict, List, Optional
 import os
 import pathlib
 
@@ -236,7 +236,7 @@ def list_buildoptions(coredata: cdata.CoreData) -> List[dict]:
     add_keys(optlist, test_options, 'test')
     return optlist
 
-def add_keys(optlist, options, section):
+def add_keys(optlist, options: Dict[str, cdata.UserOption], section):
     keys = list(options.keys())
     keys.sort()
     for key in keys:
@@ -282,7 +282,7 @@ def list_deps_from_source(intr: IntrospectionInterpreter):
 
 def list_deps(coredata: cdata.CoreData):
     result = []
-    for d in coredata.deps.values():
+    for d in coredata.deps.host.values():
         if d.found():
             result += [{'name': d.name,
                         'compile_args': d.get_compile_args(),
diff --git a/mesonbuild/modules/cmake.py b/mesonbuild/modules/cmake.py
index 6af4adb..d72ceca 100644
--- a/mesonbuild/modules/cmake.py
+++ b/mesonbuild/modules/cmake.py
@@ -52,7 +52,8 @@ class CmakeModule(ExtensionModule):
         super().__init__(interpreter)
         self.snippets.add('configure_package_config_file')
 
-    def detect_voidp_size(self, compilers, env):
+    def detect_voidp_size(self, env):
+        compilers = env.coredata.compilers
         compiler = compilers.get('c', None)
         if not compiler:
             compiler = compilers.get('cpp', None)
@@ -115,7 +116,7 @@ class CmakeModule(ExtensionModule):
 
         conf = {
             'CVF_VERSION': (version, ''),
-            'CMAKE_SIZEOF_VOID_P': (str(self.detect_voidp_size(state.compilers, state.environment)), '')
+            'CMAKE_SIZEOF_VOID_P': (str(self.detect_voidp_size(state.environment)), '')
         }
 
         mesonlib.do_conf_file(template_file, version_file, conf, 'meson')
diff --git a/mesonbuild/modules/gnome.py b/mesonbuild/modules/gnome.py
index 2d16956..e20bae6 100644
--- a/mesonbuild/modules/gnome.py
+++ b/mesonbuild/modules/gnome.py
@@ -1027,8 +1027,8 @@ This will become a hard error in the future.''')
     def _get_build_args(self, kwargs, state, depends):
         args = []
         deps = extract_as_list(kwargs, 'dependencies', unholder=True)
-        cflags = OrderedSet()
-        cflags.update(mesonlib.stringlistify(kwargs.pop('c_args', [])))
+        cflags = []
+        cflags.extend(mesonlib.stringlistify(kwargs.pop('c_args', [])))
         deps_cflags, internal_ldflags, external_ldflags, gi_includes = \
             self._get_dependencies_flags(deps, state, depends, include_rpath=True)
         inc_dirs = mesonlib.extract_as_list(kwargs, 'include_directories')
@@ -1037,23 +1037,23 @@ This will become a hard error in the future.''')
             raise MesonException(
                 'Gir include dirs should be include_directories().')
 
-        cflags.update(deps_cflags)
-        cflags.update(get_include_args(inc_dirs))
-        ldflags = OrderedSet()
-        ldflags.update(internal_ldflags)
-        ldflags.update(external_ldflags)
+        cflags.extend(deps_cflags)
+        cflags.extend(get_include_args(inc_dirs))
+        ldflags = []
+        ldflags.extend(internal_ldflags)
+        ldflags.extend(external_ldflags)
 
-        cflags.update(state.environment.coredata.get_external_args(MachineChoice.HOST, 'c'))
-
ldflags.update(state.environment.coredata.get_external_link_args(MachineChoice.HOST, 'c')) + cflags.extend(state.environment.coredata.get_external_args(MachineChoice.HOST, 'c')) + ldflags.extend(state.environment.coredata.get_external_link_args(MachineChoice.HOST, 'c')) if state.environment.is_cross_build(): compiler = state.environment.coredata.cross_compilers.get('c') else: compiler = state.environment.coredata.compilers.get('c') compiler_flags = self._get_langs_compilers_flags(state, [('c', compiler)]) - cflags.update(compiler_flags[0]) - ldflags.update(compiler_flags[1]) - ldflags.update(compiler_flags[2]) + cflags.extend(compiler_flags[0]) + ldflags.extend(compiler_flags[1]) + ldflags.extend(compiler_flags[2]) if compiler: args += ['--cc=%s' % ' '.join([shlex.quote(x) for x in compiler.get_exelist()])] args += ['--ld=%s' % ' '.join([shlex.quote(x) for x in compiler.get_linker_exelist()])] @@ -1415,7 +1415,7 @@ GType c_file_kwargs['vtail'] = ''' { 0, NULL, NULL } }; if (g_once_init_enter (>ype_id)) { - GType new_type = g_@type@_register_static ("@EnumName@", values); + GType new_type = g_@type@_register_static (g_intern_static_string ("@EnumName@"), values); g_once_init_leave (>ype_id, new_type); } return (GType) gtype_id; diff --git a/mesonbuild/modules/pkgconfig.py b/mesonbuild/modules/pkgconfig.py index 2f8b533..d4d0625 100644 --- a/mesonbuild/modules/pkgconfig.py +++ b/mesonbuild/modules/pkgconfig.py @@ -267,7 +267,7 @@ class PkgConfigModule(ExtensionModule): # These always return paths relative to prefix libdir = PurePath(coredata.get_builtin_option('libdir')) incdir = PurePath(coredata.get_builtin_option('includedir')) - with open(fname, 'w') as ofile: + with open(fname, 'w', encoding='utf-8') as ofile: ofile.write('prefix={}\n'.format(self._escape(prefix))) ofile.write('libdir={}\n'.format(self._escape('${prefix}' / libdir))) ofile.write('includedir={}\n'.format(self._escape('${prefix}' / incdir))) diff --git a/mesonbuild/modules/rpm.py 
b/mesonbuild/modules/rpm.py index 7c1cefb..b99ae8d 100644 --- a/mesonbuild/modules/rpm.py +++ b/mesonbuild/modules/rpm.py @@ -83,7 +83,7 @@ class RPMModule(ExtensionModule): fn.write('BuildRequires: meson\n') for compiler in required_compilers: fn.write('BuildRequires: %s\n' % compiler) - for dep in coredata.environment.coredata.deps: + for dep in coredata.environment.coredata.deps.host: fn.write('BuildRequires: pkgconfig(%s)\n' % dep[0]) # ext_libs and ext_progs have been removed from coredata so the following code # no longer works. It is kept as a reminder of the idea should anyone wish diff --git a/mesonbuild/modules/windows.py b/mesonbuild/modules/windows.py index 87a83fe..3b4eb15 100644 --- a/mesonbuild/modules/windows.py +++ b/mesonbuild/modules/windows.py @@ -50,8 +50,8 @@ class WindowsModule(ExtensionModule): rescomp = ExternalProgram.from_bin_list(state.environment.binaries.host, 'windres') if not rescomp or not rescomp.found(): - comp = self.detect_compiler(state.compilers) - if comp.id == 'msvc' or comp.id == 'clang-cl': + comp = self.detect_compiler(state.environment.coredata.compilers) + if comp.id in {'msvc', 'clang-cl', 'intel-cl'}: rescomp = ExternalProgram('rc', silent=True) else: rescomp = ExternalProgram('windres', silent=True) diff --git a/mesonbuild/mtest.py b/mesonbuild/mtest.py index 8df8f48..b09de16 100644 --- a/mesonbuild/mtest.py +++ b/mesonbuild/mtest.py @@ -14,26 +14,36 @@ # A tool to run tests in many different ways. 
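The reorganized imports in the mtest.py hunk below guard the `TestSerialisation` import behind `typing.TYPE_CHECKING`, so it is available to the type checker without a runtime import (and without a runtime cycle). A minimal sketch of that pattern; the `Decimal` import here is an illustrative stand-in, not Meson's code:

```python
import typing

if typing.TYPE_CHECKING:
    # Only evaluated by static type checkers, never at runtime,
    # which is why the annotation below must be a quoted string.
    from decimal import Decimal  # stand-in for TestSerialisation

def describe(value: 'Decimal') -> str:
    # The quoted annotation stays a plain string at runtime.
    return str(value)

print(describe.__annotations__['value'])  # Decimal
```

Because the guarded import never runs, every annotation referring to the guarded name must be quoted, exactly as mtest.py does with `'TestSerialisation'` throughout the patch.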
-import shlex -import subprocess, sys, os, argparse -import pickle -from mesonbuild import build -from mesonbuild import environment -from mesonbuild.dependencies import ExternalProgram -from mesonbuild.mesonlib import substring_is_in_list, MesonException -from mesonbuild import mlog - from collections import namedtuple -import io -import re -import tempfile -import time, datetime, multiprocessing, json +from copy import deepcopy +import argparse import concurrent.futures as conc +import datetime +import enum +import io +import json +import multiprocessing +import os +import pickle import platform -import signal import random -from copy import deepcopy -import enum +import re +import shlex +import signal +import subprocess +import sys +import tempfile +import time +import typing + +from . import build +from . import environment +from . import mlog +from .dependencies import ExternalProgram +from .mesonlib import substring_is_in_list, MesonException + +if typing.TYPE_CHECKING: + from .backend.backends import TestSerialisation # GNU autotools interprets a return code of 77 from tests it executes to # mean that the test should be skipped. @@ -43,15 +53,15 @@ GNU_SKIP_RETURNCODE = 77 # mean that the test failed even before testing what it is supposed to test. 
GNU_ERROR_RETURNCODE = 99 -def is_windows(): +def is_windows() -> bool: platname = platform.system().lower() return platname == 'windows' or 'mingw' in platname -def is_cygwin(): +def is_cygwin() -> bool: platname = platform.system().lower() return 'cygwin' in platname -def determine_worker_count(): +def determine_worker_count() -> int: varname = 'MESON_TESTTHREADS' if varname in os.environ: try: @@ -68,7 +78,7 @@ def determine_worker_count(): num_workers = 1 return num_workers -def add_arguments(parser): +def add_arguments(parser: argparse.ArgumentParser) -> None: parser.add_argument('--repeat', default=1, dest='repeat', type=int, help='Number of times to run the tests.') parser.add_argument('--no-rebuild', default=False, action='store_true', @@ -111,7 +121,7 @@ def add_arguments(parser): help='Optional list of tests to run') -def returncode_to_status(retcode): +def returncode_to_status(retcode: int) -> str: # Note: We can't use `os.WIFSIGNALED(result.returncode)` and the related # functions here because the status returned by subprocess is munged. It # returns a negative value if the process was killed by a signal rather than @@ -136,7 +146,7 @@ def returncode_to_status(retcode): signame = 'SIGinvalid' return '(exit status %d or signal %d %s)' % (retcode, signum, signame) -def env_tuple_to_str(env): +def env_tuple_to_str(env: typing.Iterable[typing.Tuple[str, str]]) -> str: return ''.join(["%s='%s' " % (k, v) for k, v in env]) @@ -156,7 +166,7 @@ class TestResult(enum.Enum): ERROR = 'ERROR' -class TAPParser(object): +class TAPParser: Plan = namedtuple('Plan', ['count', 'late', 'skipped', 'explanation']) Bailout = namedtuple('Bailout', ['message']) Test = namedtuple('Test', ['number', 'name', 'result', 'explanation']) @@ -167,18 +177,19 @@ class TAPParser(object): _AFTER_TEST = 2 _YAML = 3 - _RE_BAILOUT = r'Bail out!\s*(.*)' - _RE_DIRECTIVE = r'(?:\s*\#\s*([Ss][Kk][Ii][Pp]\S*|[Tt][Oo][Dd][Oo])\b\s*(.*))?' 
- _RE_PLAN = r'1\.\.([0-9]+)' + _RE_DIRECTIVE - _RE_TEST = r'((?:not )?ok)\s*(?:([0-9]+)\s*)?([^#]*)' + _RE_DIRECTIVE - _RE_VERSION = r'TAP version ([0-9]+)' - _RE_YAML_START = r'(\s+)---.*' - _RE_YAML_END = r'\s+\.\.\.\s*' + _RE_BAILOUT = re.compile(r'Bail out!\s*(.*)') + _RE_DIRECTIVE = re.compile(r'(?:\s*\#\s*([Ss][Kk][Ii][Pp]\S*|[Tt][Oo][Dd][Oo])\b\s*(.*))?') + _RE_PLAN = re.compile(r'1\.\.([0-9]+)' + _RE_DIRECTIVE.pattern) + _RE_TEST = re.compile(r'((?:not )?ok)\s*(?:([0-9]+)\s*)?([^#]*)' + _RE_DIRECTIVE.pattern) + _RE_VERSION = re.compile(r'TAP version ([0-9]+)') + _RE_YAML_START = re.compile(r'(\s+)---.*') + _RE_YAML_END = re.compile(r'\s+\.\.\.\s*') - def __init__(self, io): + def __init__(self, io: typing.Iterator[str]): self.io = io - def parse_test(self, ok, num, name, directive, explanation): + def parse_test(self, ok: bool, num: int, name: str, directive: typing.Optional[str], explanation: typing.Optional[str]) -> \ + typing.Generator[typing.Union['TAPParser.Test', 'TAPParser.Error'], None, None]: name = name.strip() explanation = explanation.strip() if explanation else None if directive is not None: @@ -195,14 +206,14 @@ class TAPParser(object): yield self.Test(num, name, TestResult.OK if ok else TestResult.FAIL, explanation) - def parse(self): + def parse(self) -> typing.Generator[typing.Union['TAPParser.Test', 'TAPParser.Error', 'TAPParser.Version', 'TAPParser.Plan', 'TAPParser.Bailout'], None, None]: found_late_test = False bailed_out = False plan = None lineno = 0 num_tests = 0 yaml_lineno = None - yaml_indent = None + yaml_indent = '' state = self._MAIN version = 12 while True: @@ -215,7 +226,7 @@ class TAPParser(object): # YAML blocks are only accepted after a test if state == self._AFTER_TEST: if version >= 13: - m = re.match(self._RE_YAML_START, line) + m = self._RE_YAML_START.match(line) if m: state = self._YAML yaml_lineno = lineno @@ -224,19 +235,19 @@ class TAPParser(object): state = self._MAIN elif state == self._YAML: - if 
re.match(self._RE_YAML_END, line): + if self._RE_YAML_END.match(line): state = self._MAIN continue if line.startswith(yaml_indent): continue - yield self.Error('YAML block not terminated (started on line %d)' % (yaml_lineno,)) + yield self.Error('YAML block not terminated (started on line {})'.format(yaml_lineno)) state = self._MAIN assert state == self._MAIN if line.startswith('#'): continue - m = re.match(self._RE_TEST, line) + m = self._RE_TEST.match(line) if m: if plan and plan.late and not found_late_test: yield self.Error('unexpected test after late plan') @@ -250,7 +261,7 @@ class TAPParser(object): state = self._AFTER_TEST continue - m = re.match(self._RE_PLAN, line) + m = self._RE_PLAN.match(line) if m: if plan: yield self.Error('more than one plan found') @@ -269,13 +280,13 @@ class TAPParser(object): yield plan continue - m = re.match(self._RE_BAILOUT, line) + m = self._RE_BAILOUT.match(line) if m: yield self.Bailout(m.group(1)) bailed_out = True continue - m = re.match(self._RE_VERSION, line) + m = self._RE_VERSION.match(line) if m: # The TAP version is only accepted as the first line if lineno != 1: @@ -291,7 +302,7 @@ class TAPParser(object): yield self.Error('unexpected input at line %d' % (lineno,)) if state == self._YAML: - yield self.Error('YAML block not terminated (started on line %d)' % (yaml_lineno,)) + yield self.Error('YAML block not terminated (started on line {})'.format(yaml_lineno)) if not bailed_out and plan and num_tests != plan.count: if num_tests < plan.count: @@ -301,8 +312,12 @@ class TAPParser(object): class TestRun: - @staticmethod - def make_exitcode(test, returncode, duration, stdo, stde, cmd): + + @classmethod + def make_exitcode(cls, test: 'TestSerialisation', test_env: typing.Dict[str, str], + returncode: int, duration: float, stdo: typing.Optional[str], + stde: typing.Optional[str], + cmd: typing.Optional[typing.List[str]]) -> 'TestRun': if returncode == GNU_SKIP_RETURNCODE: res = TestResult.SKIP elif returncode == 
GNU_ERROR_RETURNCODE: @@ -311,9 +326,12 @@ class TestRun: res = TestResult.EXPECTEDFAIL if bool(returncode) else TestResult.UNEXPECTEDPASS else: res = TestResult.FAIL if bool(returncode) else TestResult.OK - return TestRun(test, res, returncode, duration, stdo, stde, cmd) + return cls(test, test_env, res, returncode, duration, stdo, stde, cmd) - def make_tap(test, returncode, duration, stdo, stde, cmd): + @classmethod + def make_tap(cls, test: 'TestSerialisation', test_env: typing.Dict[str, str], + returncode: int, duration: float, stdo: str, stde: str, + cmd: typing.Optional[typing.List[str]]) -> 'TestRun': res = None num_tests = 0 failed = False @@ -346,9 +364,12 @@ class TestRun: else: res = TestResult.FAIL if failed else TestResult.OK - return TestRun(test, res, returncode, duration, stdo, stde, cmd) + return cls(test, test_env, res, returncode, duration, stdo, stde, cmd) - def __init__(self, test, res, returncode, duration, stdo, stde, cmd): + def __init__(self, test: 'TestSerialisation', test_env: typing.Dict[str, str], + res: TestResult, returncode: int, duration: float, + stdo: typing.Optional[str], stde: typing.Optional[str], + cmd: typing.Optional[typing.List[str]]): assert isinstance(res, TestResult) self.res = res self.returncode = returncode @@ -356,10 +377,10 @@ class TestRun: self.stdo = stdo self.stde = stde self.cmd = cmd - self.env = test.env + self.env = test_env self.should_fail = test.should_fail - def get_log(self): + def get_log(self) -> str: res = '--- command ---\n' if self.cmd is None: res += 'NONE\n' @@ -379,7 +400,7 @@ class TestRun: res += '-------\n\n' return res -def decode(stream): +def decode(stream: typing.Union[None, bytes]) -> str: if stream is None: return '' try: @@ -387,51 +408,50 @@ def decode(stream): except UnicodeDecodeError: return stream.decode('iso-8859-1', errors='ignore') -def write_json_log(jsonlogfile, test_name, result): +def write_json_log(jsonlogfile: typing.TextIO, test_name: str, result: TestRun) -> None: 
jresult = {'name': test_name, 'stdout': result.stdo, 'result': result.res.value, 'duration': result.duration, 'returncode': result.returncode, - 'command': result.cmd} - if isinstance(result.env, dict): - jresult['env'] = result.env - else: - jresult['env'] = result.env.get_env(os.environ) + 'env': result.env, + 'command': result.cmd} # type: typing.Dict[str, typing.Any] if result.stde: jresult['stderr'] = result.stde jsonlogfile.write(json.dumps(jresult) + '\n') -def run_with_mono(fname): +def run_with_mono(fname: str) -> bool: if fname.endswith('.exe') and not (is_windows() or is_cygwin()): return True return False -def load_benchmarks(build_dir): +def load_benchmarks(build_dir: str) -> typing.List['TestSerialisation']: datafile = os.path.join(build_dir, 'meson-private', 'meson_benchmark_setup.dat') if not os.path.isfile(datafile): raise TestException('Directory ${!r} does not seem to be a Meson build directory.'.format(build_dir)) with open(datafile, 'rb') as f: - obj = pickle.load(f) + obj = typing.cast(typing.List['TestSerialisation'], pickle.load(f)) return obj -def load_tests(build_dir): +def load_tests(build_dir: str) -> typing.List['TestSerialisation']: datafile = os.path.join(build_dir, 'meson-private', 'meson_test_setup.dat') if not os.path.isfile(datafile): raise TestException('Directory ${!r} does not seem to be a Meson build directory.'.format(build_dir)) with open(datafile, 'rb') as f: - obj = pickle.load(f) + obj = typing.cast(typing.List['TestSerialisation'], pickle.load(f)) return obj class SingleTestRunner: - def __init__(self, test, env, options): + def __init__(self, test: 'TestSerialisation', test_env: typing.Dict[str, str], + env: typing.Dict[str, str], options: argparse.Namespace): self.test = test + self.test_env = test_env self.env = env self.options = options - def _get_cmd(self): + def _get_cmd(self) -> typing.Optional[typing.List[str]]: if self.test.fname[0].endswith('.jar'): return ['java', '-jar'] + self.test.fname elif not 
self.test.is_cross_built and run_with_mono(self.test.fname[0]): @@ -451,19 +471,18 @@ class SingleTestRunner: else: return self.test.fname - def run(self): + def run(self) -> TestRun: cmd = self._get_cmd() if cmd is None: skip_stdout = 'Not run because can not execute cross compiled binaries.' - return TestRun(test=self.test, res=TestResult.SKIP, returncode=GNU_SKIP_RETURNCODE, - duration=0.0, stdo=skip_stdout, stde=None, cmd=None) + return TestRun(self.test, self.test_env, TestResult.SKIP, GNU_SKIP_RETURNCODE, 0.0, skip_stdout, None, None) else: wrap = TestHarness.get_wrapper(self.options) if self.options.gdb: self.test.timeout = None return self._run_cmd(wrap + cmd + self.test.cmd_args + self.options.test_args) - def _run_cmd(self, cmd): + def _run_cmd(self, cmd: typing.List[str]) -> TestRun: starttime = time.time() if len(self.test.extra_paths) > 0: @@ -500,7 +519,7 @@ class SingleTestRunner: # Make the meson executable ignore SIGINT while gdb is running. signal.signal(signal.SIGINT, signal.SIG_IGN) - def preexec_fn(): + def preexec_fn() -> None: if self.options.gdb: # Restore the SIGINT handler for the child process to # ensure it can handle it. @@ -529,7 +548,7 @@ class SingleTestRunner: p.communicate(timeout=timeout) except subprocess.TimeoutExpired: if self.options.verbose: - print('%s time out (After %d seconds)' % (self.test.name, timeout)) + print('{} time out (After {} seconds)'.format(self.test.name, timeout)) timed_out = True except KeyboardInterrupt: mlog.warning('CTRL-C detected while running %s' % (self.test.name)) @@ -566,9 +585,9 @@ class SingleTestRunner: try: p.communicate(timeout=1) except subprocess.TimeoutExpired: - additional_error = b'Test process could not be killed.' + additional_error = 'Test process could not be killed.' except ValueError: - additional_error = b'Could not read output. Maybe the process has redirected its stdout/stderr?' + additional_error = 'Could not read output. Maybe the process has redirected its stdout/stderr?' 
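The `TAPParser` hunks earlier in this patch replace per-call `re.match(pattern_string, ...)` with patterns compiled once at class-definition time, composing larger patterns from `_RE_DIRECTIVE.pattern`. The idea in isolation, using the patch's own TAP regexes on a made-up input line:

```python
import re

# Compile once, reuse for every parsed line; composite patterns are
# built by concatenating the raw .pattern text of a compiled regex.
RE_DIRECTIVE = re.compile(r'(?:\s*\#\s*([Ss][Kk][Ii][Pp]\S*|[Tt][Oo][Dd][Oo])\b\s*(.*))?')
RE_TEST = re.compile(r'((?:not )?ok)\s*(?:([0-9]+)\s*)?([^#]*)' + RE_DIRECTIVE.pattern)

m = RE_TEST.match('ok 1 parses plans # TODO later')
print(m.group(1), m.group(2), m.group(4))  # ok 1 TODO
```

Precompiling moves the (cached but still non-zero) pattern-compilation and cache-lookup cost out of the per-line parsing loop and catches malformed regexes at import time rather than first use.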
endtime = time.time() duration = endtime - starttime if additional_error is None: @@ -586,20 +605,20 @@ class SingleTestRunner: stdo = "" stde = additional_error if timed_out: - return TestRun(self.test, TestResult.TIMEOUT, p.returncode, duration, stdo, stde, cmd) + return TestRun(self.test, self.test_env, TestResult.TIMEOUT, p.returncode, duration, stdo, stde, cmd) else: if self.test.protocol == 'exitcode': - return TestRun.make_exitcode(self.test, p.returncode, duration, stdo, stde, cmd) + return TestRun.make_exitcode(self.test, self.test_env, p.returncode, duration, stdo, stde, cmd) else: if self.options.verbose: print(stdo, end='') - return TestRun.make_tap(self.test, p.returncode, duration, stdo, stde, cmd) + return TestRun.make_tap(self.test, self.test_env, p.returncode, duration, stdo, stde, cmd) class TestHarness: - def __init__(self, options): + def __init__(self, options: argparse.Namespace): self.options = options - self.collected_logs = [] + self.collected_logs = [] # type: typing.List[str] self.fail_count = 0 self.expectedfail_count = 0 self.unexpectedpass_count = 0 @@ -608,23 +627,26 @@ class TestHarness: self.timeout_count = 0 self.is_run = False self.tests = None - self.suites = None - self.logfilename = None - self.logfile = None - self.jsonlogfile = None + self.logfilename = None # type: typing.Optional[str] + self.logfile = None # type: typing.Optional[typing.TextIO] + self.jsonlogfile = None # type: typing.Optional[typing.TextIO] if self.options.benchmark: self.tests = load_benchmarks(options.wd) else: self.tests = load_tests(options.wd) - self.load_suites() + ss = set() + for t in self.tests: + for s in t.suite: + ss.add(s) + self.suites = list(ss) - def __del__(self): + def __del__(self) -> None: if self.logfile: self.logfile.close() if self.jsonlogfile: self.jsonlogfile.close() - def merge_suite_options(self, options, test): + def merge_suite_options(self, options: argparse.Namespace, test: 'TestSerialisation') -> typing.Dict[str, str]: if 
':' in options.setup: if options.setup not in self.build_data.test_setups: sys.exit("Unknown test setup '%s'." % options.setup) @@ -648,7 +670,7 @@ class TestHarness: options.wrapper = current.exe_wrapper return current.env.get_env(os.environ.copy()) - def get_test_runner(self, test): + def get_test_runner(self, test: 'TestSerialisation') -> SingleTestRunner: options = deepcopy(self.options) if not options.setup: options.setup = self.build_data.test_setup_default_name @@ -656,12 +678,11 @@ class TestHarness: env = self.merge_suite_options(options, test) else: env = os.environ.copy() - if isinstance(test.env, build.EnvironmentVariables): - test.env = test.env.get_env(env) - env.update(test.env) - return SingleTestRunner(test, env, options) + test_env = test.env.get_env(env) + env.update(test_env) + return SingleTestRunner(test, test_env, env, options) - def process_test_result(self, result): + def process_test_result(self, result: TestRun) -> None: if result.res is TestResult.TIMEOUT: self.timeout_count += 1 elif result.res is TestResult.SKIP: @@ -677,7 +698,8 @@ class TestHarness: else: sys.exit('Unknown test result encountered: {}'.format(result.res)) - def print_stats(self, numlen, tests, name, result, i): + def print_stats(self, numlen: int, tests: typing.List['TestSerialisation'], + name: str, result: TestRun, i: int) -> None: startpad = ' ' * (numlen - len('%d' % (i + 1))) num = '%s%d/%d' % (startpad, i + 1, len(tests)) padding1 = ' ' * (38 - len(name)) @@ -712,7 +734,7 @@ class TestHarness: if self.jsonlogfile: write_json_log(self.jsonlogfile, name, result) - def print_summary(self): + def print_summary(self) -> None: msg = ''' Ok: %4d Expected Fail: %4d @@ -726,7 +748,7 @@ Timeout: %4d if self.logfile: self.logfile.write(msg) - def print_collected_logs(self): + def print_collected_logs(self) -> None: if len(self.collected_logs) > 0: if len(self.collected_logs) > 10: print('\nThe output from 10 first failed tests:\n') @@ -745,10 +767,10 @@ Timeout: %4d line = 
line.encode('ascii', errors='replace').decode() print(line) - def total_failure_count(self): + def total_failure_count(self) -> int: return self.fail_count + self.unexpectedpass_count + self.timeout_count - def doit(self): + def doit(self) -> int: if self.is_run: raise RuntimeError('Test harness object can only be used once.') self.is_run = True @@ -759,14 +781,16 @@ Timeout: %4d return self.total_failure_count() @staticmethod - def split_suite_string(suite): + def split_suite_string(suite: str) -> typing.Tuple[str, str]: if ':' in suite: - return suite.split(':', 1) + # mypy can't figure out that str.split(n, 1) will return a list of + # length 2, so we have to help it. + return typing.cast(typing.Tuple[str, str], tuple(suite.split(':', 1))) else: return suite, "" @staticmethod - def test_in_suites(test, suites): + def test_in_suites(test: 'TestSerialisation', suites: typing.List[str]) -> bool: for suite in suites: (prj_match, st_match) = TestHarness.split_suite_string(suite) for prjst in test.suite: @@ -797,18 +821,11 @@ Timeout: %4d return True return False - def test_suitable(self, test): + def test_suitable(self, test: 'TestSerialisation') -> bool: return (not self.options.include_suites or TestHarness.test_in_suites(test, self.options.include_suites)) \ and not TestHarness.test_in_suites(test, self.options.exclude_suites) - def load_suites(self): - ss = set() - for t in self.tests: - for s in t.suite: - ss.add(s) - self.suites = list(ss) - - def get_tests(self): + def get_tests(self) -> typing.List['TestSerialisation']: if not self.tests: print('No tests defined.') return [] @@ -828,14 +845,11 @@ Timeout: %4d print('No suitable tests defined.') return [] - for test in tests: - test.rebuilt = False - return tests - def open_log_files(self): + def open_log_files(self) -> None: if not self.options.logbase or self.options.verbose: - return None, None, None, None + return namebase = None logfile_base = os.path.join(self.options.wd, 'meson-logs', 
self.options.logbase) @@ -859,8 +873,8 @@ Timeout: %4d self.logfile.write('Inherited environment: {}\n\n'.format(inherit_env)) @staticmethod - def get_wrapper(options): - wrap = [] + def get_wrapper(options: argparse.Namespace) -> typing.List[str]: + wrap = [] # type: typing.List[str] if options.gdb: wrap = ['gdb', '--quiet', '--nh'] if options.repeat > 1: @@ -869,10 +883,9 @@ Timeout: %4d wrap += ['--args'] if options.wrapper: wrap += options.wrapper - assert(isinstance(wrap, list)) return wrap - def get_pretty_suite(self, test): + def get_pretty_suite(self, test: 'TestSerialisation') -> str: if len(self.suites) > 1 and test.suite: rv = TestHarness.split_suite_string(test.suite[0])[0] s = "+".join(TestHarness.split_suite_string(s)[1] for s in test.suite) @@ -882,9 +895,9 @@ Timeout: %4d else: return test.name - def run_tests(self, tests): + def run_tests(self, tests: typing.List['TestSerialisation']) -> None: executor = None - futures = [] + futures = [] # type: typing.List[typing.Tuple[conc.Future[TestRun], int, typing.List[TestSerialisation], str, int]] numlen = len('%d' % len(tests)) self.open_log_files() startdir = os.getcwd() @@ -923,9 +936,9 @@ Timeout: %4d finally: os.chdir(startdir) - def drain_futures(self, futures): - for i in futures: - (result, numlen, tests, name, i) = i + def drain_futures(self, futures: typing.List[typing.Tuple['conc.Future[TestRun]', int, typing.List['TestSerialisation'], str, int]]) -> None: + for x in futures: + (result, numlen, tests, name, i) = x if self.options.repeat > 1 and self.fail_count: result.cancel() if self.options.verbose: @@ -933,7 +946,7 @@ Timeout: %4d self.process_test_result(result.result()) self.print_stats(numlen, tests, name, result.result(), i) - def run_special(self): + def run_special(self) -> int: '''Tests run by the user, usually something like "under gdb 1000 times".''' if self.is_run: raise RuntimeError('Can not use run_special after a full run.') @@ -944,13 +957,13 @@ Timeout: %4d return 
self.total_failure_count() -def list_tests(th): +def list_tests(th: TestHarness) -> bool: tests = th.get_tests() for t in tests: print(th.get_pretty_suite(t)) return not tests -def rebuild_all(wd): +def rebuild_all(wd: str) -> bool: if not os.path.isfile(os.path.join(wd, 'build.ninja')): print('Only ninja backend is supported to rebuild tests before running them.') return True @@ -969,7 +982,7 @@ def rebuild_all(wd): return True -def run(options): +def run(options: argparse.Namespace) -> int: if options.benchmark: options.num_processes = 1 @@ -1014,7 +1027,7 @@ def run(options): print(e) return 1 -def run_with_args(args): +def run_with_args(args: typing.List[str]) -> int: parser = argparse.ArgumentParser(prog='meson test') add_arguments(parser) options = parser.parse_args(args) diff --git a/mesonbuild/munstable_coredata.py b/mesonbuild/munstable_coredata.py index f16468c..864df04 100644 --- a/mesonbuild/munstable_coredata.py +++ b/mesonbuild/munstable_coredata.py @@ -97,13 +97,11 @@ def run(options): print('Cached cross compilers:') dump_compilers(v) elif k == 'deps': - native = [] - cross = [] - for dep_key, dep in sorted(v.items()): - if dep_key[1]: - cross.append((dep_key, dep)) - else: - native.append((dep_key, dep)) + native = list(sorted(v.build.items())) + if v.host is not v.build: + cross = list(sorted(v.host.items())) + else: + cross = [] def print_dep(dep_key, dep): print(' ' + dep_key[0] + ": ") @@ -115,12 +113,14 @@ def run(options): if native: print('Cached native dependencies:') - for dep_key, dep in native: - print_dep(dep_key, dep) + for dep_key, deps in native: + for dep in deps: + print_dep(dep_key, dep) if cross: print('Cached dependencies:') - for dep_key, dep in cross: - print_dep(dep_key, dep) + for dep_key, deps in cross: + for dep in deps: + print_dep(dep_key, dep) else: print(k + ':') print(textwrap.indent(pprint.pformat(v), ' ')) diff --git a/mesonbuild/optinterpreter.py b/mesonbuild/optinterpreter.py index e64ed4e..1d0533e 100644 --- 
a/mesonbuild/optinterpreter.py +++ b/mesonbuild/optinterpreter.py @@ -14,6 +14,7 @@ import os, re import functools +import typing from . import mparser from . import coredata @@ -22,7 +23,7 @@ from . import compilers forbidden_option_names = set(coredata.builtin_options.keys()) forbidden_prefixes = [lang + '_' for lang in compilers.all_languages] + ['b_', 'backend_'] -reserved_prefixes = ['cross_'] +reserved_prefixes = ['cross_', 'build_'] def is_invalid_name(name: str, *, log: bool = True) -> bool: if name in forbidden_option_names: @@ -49,7 +50,7 @@ def permitted_kwargs(permitted): if bad: raise OptionException('Invalid kwargs for option "{}": "{}"'.format( name, ' '.join(bad))) - return func(name, description, kwargs) + return func(description, kwargs) return _inner return _wraps @@ -57,21 +58,20 @@ def permitted_kwargs(permitted): optname_regex = re.compile('[^a-zA-Z0-9_-]') @permitted_kwargs({'value', 'yield'}) -def StringParser(name, description, kwargs): - return coredata.UserStringOption(name, - description, +def StringParser(description, kwargs): + return coredata.UserStringOption(description, kwargs.get('value', ''), kwargs.get('choices', []), kwargs.get('yield', coredata.default_yielding)) @permitted_kwargs({'value', 'yield'}) -def BooleanParser(name, description, kwargs): - return coredata.UserBooleanOption(name, description, +def BooleanParser(description, kwargs): + return coredata.UserBooleanOption(description, kwargs.get('value', True), kwargs.get('yield', coredata.default_yielding)) @permitted_kwargs({'value', 'yield', 'choices'}) -def ComboParser(name, description, kwargs): +def ComboParser(description, kwargs): if 'choices' not in kwargs: raise OptionException('Combo option missing "choices" keyword.') choices = kwargs['choices'] @@ -80,19 +80,17 @@ def ComboParser(name, description, kwargs): for i in choices: if not isinstance(i, str): raise OptionException('Combo choice elements must be strings.') - return coredata.UserComboOption(name, - 
description, + return coredata.UserComboOption(description, choices, kwargs.get('value', choices[0]), kwargs.get('yield', coredata.default_yielding),) @permitted_kwargs({'value', 'min', 'max', 'yield'}) -def IntegerParser(name, description, kwargs): +def IntegerParser(description, kwargs): if 'value' not in kwargs: raise OptionException('Integer option must contain value argument.') - return coredata.UserIntegerOption(name, - description, + return coredata.UserIntegerOption(description, kwargs.get('min', None), kwargs.get('max', None), kwargs['value'], @@ -102,7 +100,7 @@ def IntegerParser(name, description, kwargs): # reading options in project(). See func_project() in interpreter.py #@FeatureNew('array type option()', '0.44.0') @permitted_kwargs({'value', 'yield', 'choices'}) -def string_array_parser(name, description, kwargs): +def string_array_parser(description, kwargs): if 'choices' in kwargs: choices = kwargs['choices'] if not isinstance(choices, list): @@ -116,16 +114,14 @@ def string_array_parser(name, description, kwargs): value = kwargs.get('value', []) if not isinstance(value, list): raise OptionException('Array choices must be passed as an array.') - return coredata.UserArrayOption(name, - description, + return coredata.UserArrayOption(description, value, choices=choices, yielding=kwargs.get('yield', coredata.default_yielding)) @permitted_kwargs({'value', 'yield'}) -def FeatureParser(name, description, kwargs): - return coredata.UserFeatureOption(name, - description, +def FeatureParser(description, kwargs): + return coredata.UserFeatureOption(description, kwargs.get('value', 'auto'), yielding=kwargs.get('yield', coredata.default_yielding)) @@ -135,7 +131,7 @@ option_types = {'string': StringParser, 'integer': IntegerParser, 'array': string_array_parser, 'feature': FeatureParser, - } + } # type: typing.Dict[str, typing.Callable[[str, typing.Dict], coredata.UserOption]] class OptionInterpreter: def __init__(self, subproject): diff --git 
a/run_project_tests.py b/run_project_tests.py index b9077c9..02ceb04 100755 --- a/run_project_tests.py +++ b/run_project_tests.py @@ -120,7 +120,7 @@ def get_relative_files_list_from_dir(fromdir): def platform_fix_name(fname, compiler, env): # canonicalize compiler - if compiler == 'clang-cl': + if compiler in {'clang-cl', 'intel-cl'}: canonical_compiler = 'msvc' else: canonical_compiler = compiler diff --git a/run_tests.py b/run_tests.py index a4b0fa2..f427736 100755 --- a/run_tests.py +++ b/run_tests.py @@ -220,7 +220,7 @@ def clear_meson_configure_class_caches(): mesonbuild.compilers.CCompiler.find_library_cache = {} mesonbuild.compilers.CCompiler.find_framework_cache = {} mesonbuild.dependencies.PkgConfigDependency.pkgbin_cache = {} - mesonbuild.dependencies.PkgConfigDependency.class_pkgbin = mesonlib.PerMachine(None, None, None) + mesonbuild.dependencies.PkgConfigDependency.class_pkgbin = mesonlib.PerMachine(None, None) def run_configure_inprocess(commandlist): old_stdout = sys.stdout diff --git a/run_unittests.py b/run_unittests.py index 7a7c006..d64e60e 100755 --- a/run_unittests.py +++ b/run_unittests.py @@ -848,7 +848,7 @@ class InternalTests(unittest.TestCase): PkgConfigDependency.check_pkgconfig = old_check # Reset dependency class to ensure that in-process configure doesn't mess up PkgConfigDependency.pkgbin_cache = {} - PkgConfigDependency.class_pkgbin = PerMachine(None, None, None) + PkgConfigDependency.class_pkgbin = PerMachine(None, None) def test_version_compare(self): comparefunc = mesonbuild.mesonlib.version_compare_many @@ -981,7 +981,8 @@ class InternalTests(unittest.TestCase): toolset_ver = cc.get_toolset_version() self.assertIsNotNone(toolset_ver) # Visual Studio 2015 and older versions do not define VCToolsVersion - if int(''.join(cc.version.split('.')[0:2])) < 1910: + # TODO: ICL doesn't set this in the VSC2015 profile either + if cc.id == 'msvc' and int(''.join(cc.version.split('.')[0:2])) < 1910: return self.assertIn('VCToolsVersion', 
os.environ) vctools_ver = os.environ['VCToolsVersion'] @@ -1586,6 +1587,14 @@ class AllPlatformTests(BasePlatformTests): self.assertEqual(value, expected[args][name]) self.wipe() + def test_clike_get_library_dirs(self): + env = get_fake_env() + cc = env.detect_c_compiler(False) + for d in cc.get_library_dirs(env): + self.assertTrue(os.path.exists(d)) + self.assertTrue(os.path.isdir(d)) + self.assertTrue(os.path.isabs(d)) + def test_static_library_overwrite(self): ''' Tests that static libraries are never appended to, always overwritten. @@ -1938,7 +1947,7 @@ class AllPlatformTests(BasePlatformTests): ''' gnu = mesonbuild.compilers.GnuCompiler clang = mesonbuild.compilers.ClangCompiler - intel = mesonbuild.compilers.IntelCompiler + intel = mesonbuild.compilers.IntelGnuLikeCompiler msvc = (mesonbuild.compilers.VisualStudioCCompiler, mesonbuild.compilers.VisualStudioCPPCompiler) clangcl = (mesonbuild.compilers.ClangClCCompiler, mesonbuild.compilers.ClangClCPPCompiler) ar = mesonbuild.linkers.ArLinker @@ -2742,9 +2751,14 @@ int main(int argc, char **argv) { f.write(cross_content) name = os.path.basename(f.name) - with mock.patch('mesonbuild.coredata.os.path.expanduser', lambda x: x.replace('~', d)): - self.init(testdir, ['--cross-file=' + name], inprocess=True) - self.wipe() + # If XDG_DATA_HOME is set in the environment running the + # tests this test will fail, so mock the environment, pop + # it, then test + with mock.patch.dict(os.environ): + os.environ.pop('XDG_DATA_HOME', None) + with mock.patch('mesonbuild.coredata.os.path.expanduser', lambda x: x.replace('~', d)): + self.init(testdir, ['--cross-file=' + name], inprocess=True) + self.wipe() def test_compiler_run_command(self): ''' @@ -2821,7 +2835,7 @@ recommended as it is not supported on some platforms''') testdirlib = os.path.join(testdirbase, 'lib') extra_args = None env = get_fake_env(testdirlib, self.builddir, self.prefix) - if env.detect_c_compiler(False).get_id() not in ['msvc', 'clang-cl']: + if 
env.detect_c_compiler(False).get_id() not in {'msvc', 'clang-cl', 'intel-cl'}: # static libraries are not linkable with -l with msvc because meson installs them # as .a files which unix_args_to_native will not know as it expects libraries to use # .lib as extension. For a DLL the import library is installed as .lib. Thus for msvc @@ -3994,7 +4008,7 @@ class WindowsTests(BasePlatformTests): # resource compiler depfile generation is not yet implemented for msvc env = get_fake_env(testdir, self.builddir, self.prefix) - depfile_works = env.detect_c_compiler(False).get_id() not in ['msvc', 'clang-cl'] + depfile_works = env.detect_c_compiler(False).get_id() not in {'msvc', 'clang-cl', 'intel-cl'} self.init(testdir) self.build() @@ -4249,11 +4263,17 @@ class LinuxlikeTests(BasePlatformTests): cmd = ['pkg-config', 'requires-test'] out = self._run(cmd + ['--print-requires']).strip().split('\n') - self.assertEqual(sorted(out), sorted(['libexposed', 'libfoo >= 1.0', 'libhello'])) + if not is_openbsd(): + self.assertEqual(sorted(out), sorted(['libexposed', 'libfoo >= 1.0', 'libhello'])) + else: + self.assertEqual(sorted(out), sorted(['libexposed', 'libfoo>=1.0', 'libhello'])) cmd = ['pkg-config', 'requires-private-test'] out = self._run(cmd + ['--print-requires-private']).strip().split('\n') - self.assertEqual(sorted(out), sorted(['libexposed', 'libfoo >= 1.0', 'libhello'])) + if not is_openbsd(): + self.assertEqual(sorted(out), sorted(['libexposed', 'libfoo >= 1.0', 'libhello'])) + else: + self.assertEqual(sorted(out), sorted(['libexposed', 'libfoo>=1.0', 'libhello'])) def test_pkg_unfound(self): testdir = os.path.join(self.unit_test_dir, '23 unfound pkgconfig') @@ -5866,6 +5886,10 @@ class NativeFileTests(BasePlatformTests): raise unittest.SkipTest('No alternate Fortran implementation.') elif comp.id == 'gcc': if shutil.which('ifort'): + # There is an ICC for windows (windows build, linux host), + # but we don't support that ATM so let's not worry about it. 
+ if is_windows(): + return 'ifort', 'intel-cl' return 'ifort', 'intel' elif shutil.which('flang'): return 'flang', 'flang' diff --git a/test cases/common/1 trivial/meson.build b/test cases/common/1 trivial/meson.build index 67d6ed6..c71d9b0 100644 --- a/test cases/common/1 trivial/meson.build +++ b/test cases/common/1 trivial/meson.build @@ -6,9 +6,12 @@ project('trivial test', #this is a comment sources = 'trivial.c' -if meson.get_compiler('c').get_id() == 'intel' +cc = meson.get_compiler('c') +if cc.get_id() == 'intel' # Error out if the -std=xxx option is incorrect add_project_arguments('-diag-error', '10159', language : 'c') +elif cc.get_id() == 'intel-cl' + add_project_arguments('/Qdiag-error:10159', language : 'c') endif if meson.is_cross_build() diff --git a/test cases/common/123 llvm ir and assembly/meson.build b/test cases/common/123 llvm ir and assembly/meson.build index a67c6c6..3cc7d5e 100644 --- a/test cases/common/123 llvm ir and assembly/meson.build +++ b/test cases/common/123 llvm ir and assembly/meson.build @@ -41,16 +41,17 @@ foreach lang : ['c', 'cpp'] error('MESON_SKIP_TEST: ML (masm) not found') endif # Preprocess file (ml doesn't support pre-processing) + # Force the input to be C (/Tc) because ICL otherwise assumes it's an object (.obj) file preproc_name = lang + square_base + '.i' square_preproc = custom_target(lang + square_impl + 'preproc', input : square_impl, output : preproc_name, - command : [cl, '/EP', '/P', '/Fi' + preproc_name, '@INPUT@'] + uscore_args) + command : [cl, '/nologo', '/EP', '/P', '/Fi' + preproc_name, '/Tc', '@INPUT@'] + uscore_args) # Use assembled object file instead of the original .S assembly source square_impl = custom_target(lang + square_impl, input : square_preproc, output : lang + square_base + '.obj', - command : [ml, '/safeseh', '/Fo', '@OUTPUT@', '/c', '@INPUT@']) + command : [ml, '/nologo', '/safeseh', '/Fo', '@OUTPUT@', '/c', '@INPUT@']) endif if supported_cpus.contains(cpu) e = executable('square_asm_' 
+ lang, square_impl, 'main.' + lang, diff --git a/test cases/common/124 cpp and asm/meson.build b/test cases/common/124 cpp and asm/meson.build index f097084..cf064d0 100644 --- a/test cases/common/124 cpp and asm/meson.build +++ b/test cases/common/124 cpp and asm/meson.build @@ -15,7 +15,7 @@ endif sources = ['trivial.cc'] # If the compiler cannot compile assembly, don't use it -if not ['msvc', 'clang-cl'].contains(meson.get_compiler('cpp').get_id()) +if not ['msvc', 'clang-cl', 'intel-cl'].contains(meson.get_compiler('cpp').get_id()) sources += ['retval-' + cpu + '.S'] cpp_args = ['-DUSE_ASM'] message('Using ASM') diff --git a/test cases/common/132 generated assembly/meson.build b/test cases/common/132 generated assembly/meson.build index 5fb7429..2837747 100644 --- a/test cases/common/132 generated assembly/meson.build +++ b/test cases/common/132 generated assembly/meson.build @@ -2,7 +2,7 @@ project('generated assembly', 'c') cc = meson.get_compiler('c') -if ['msvc', 'clang-cl'].contains(cc.get_id()) +if ['msvc', 'clang-cl', 'intel-cl'].contains(cc.get_id()) error('MESON_SKIP_TEST: assembly files cannot be compiled directly by the compiler') endif diff --git a/test cases/common/143 C and CPP link/meson.build b/test cases/common/143 C and CPP link/meson.build index 75281de..a93a981 100644 --- a/test cases/common/143 C and CPP link/meson.build +++ b/test cases/common/143 C and CPP link/meson.build @@ -30,6 +30,8 @@ if cxx.get_argument_syntax() == 'msvc' static_linker = find_program('lib') elif cxx.get_id() == 'clang-cl' static_linker = find_program('llvm-lib') + elif cxx.get_id() == 'intel-cl' + static_linker = find_program('xilib') else error('unable to determine static linker to use with this compiler') endif diff --git a/test cases/common/2 cpp/meson.build b/test cases/common/2 cpp/meson.build index 6398382..27c4321 100644 --- a/test cases/common/2 cpp/meson.build +++ b/test cases/common/2 cpp/meson.build @@ -1,8 +1,11 @@ project('c++ test', 'cpp') -if 
meson.get_compiler('cpp').get_id() == 'intel' +cpp = meson.get_compiler('cpp') +if cpp.get_id() == 'intel' # Error out if the -std=xxx option is incorrect add_project_arguments('-diag-error', '10159', language : 'cpp') +elif cpp.get_id() == 'intel-cl' + add_project_arguments('/Qdiag-error:10159', language : 'cpp') endif exe = executable('trivialprog', 'trivial.cc', extra_files : 'something.txt') diff --git a/test cases/common/204 function attributes/meson.build b/test cases/common/204 function attributes/meson.build index 1e93803..58ac7c8 100644 --- a/test cases/common/204 function attributes/meson.build +++ b/test cases/common/204 function attributes/meson.build @@ -19,7 +19,7 @@ project('gcc func attributes', ['c', 'cpp']) c = meson.get_compiler('c') cpp = meson.get_compiler('cpp') -expected_result = not ['msvc', 'clang-cl'].contains(c.get_id()) +expected_result = not ['msvc', 'clang-cl', 'intel-cl'].contains(c.get_id()) # Q: Why is ifunc not in this list or any of the below lists? # A: It's too damn hard to figure out if you actually support it, since it @@ -95,7 +95,7 @@ foreach a : ['dllexport', 'dllimport'] endforeach message('checking get_supported_function_attributes') -if not ['msvc', 'clang-cl'].contains(c.get_id()) +if not ['msvc', 'clang-cl', 'intel-cl'].contains(c.get_id()) multi_expected = attributes else multi_expected = [] diff --git a/test cases/common/206 argument syntax/meson.build b/test cases/common/206 argument syntax/meson.build index 216da45..b97ca74 100644 --- a/test cases/common/206 argument syntax/meson.build +++ b/test cases/common/206 argument syntax/meson.build @@ -5,16 +5,10 @@ project( cc = meson.get_compiler('c') -if ['gcc', 'lcc', 'clang'].contains(cc.get_id()) +if ['gcc', 'lcc', 'clang', 'intel'].contains(cc.get_id()) expected = 'gcc' -elif ['msvc', 'clang-cl'].contains(cc.get_id()) +elif ['msvc', 'clang-cl', 'intel-cl'].contains(cc.get_id()) expected = 'msvc' -elif cc.get_id() == 'intel' - if host_machine.system() == 'windows' - 
expected = 'msvc' - else - expected = 'gcc' - endif else # It's possible that other compilers end up here that shouldn't expected = 'other' diff --git a/test cases/common/27 pipeline/depends/copyrunner.py b/test cases/common/27 pipeline/depends/copyrunner.py new file mode 100755 index 0000000..0ef6a6d --- /dev/null +++ b/test cases/common/27 pipeline/depends/copyrunner.py @@ -0,0 +1,7 @@ +#!/usr/bin/env python3 + +import sys, subprocess + +prog, infile, outfile = sys.argv[1:] + +subprocess.check_call([prog, infile, outfile]) diff --git a/test cases/common/27 pipeline/depends/filecopier.c b/test cases/common/27 pipeline/depends/filecopier.c new file mode 100644 index 0000000..9001cf3 --- /dev/null +++ b/test cases/common/27 pipeline/depends/filecopier.c @@ -0,0 +1,22 @@ +#include<stdio.h> +#include<assert.h> + +#define BUFSIZE 1024 + +int main(int argc, char **argv) { + char buffer[BUFSIZE]; + size_t num_read; + size_t num_written; + FILE *fin = fopen(argv[1], "rb"); + FILE *fout; + assert(fin); + num_read = fread(buffer, 1, BUFSIZE, fin); + assert(num_read > 0); + fclose(fin); + fout = fopen(argv[2], "wb"); + assert(fout); + num_written = fwrite(buffer, 1, num_read, fout); + assert(num_written == num_read); + fclose(fout); + return 0; +} diff --git a/test cases/common/27 pipeline/depends/libsrc.c.in b/test cases/common/27 pipeline/depends/libsrc.c.in new file mode 100644 index 0000000..652f4eb --- /dev/null +++ b/test cases/common/27 pipeline/depends/libsrc.c.in @@ -0,0 +1,3 @@ +int func() { + return 42; +} diff --git a/test cases/common/27 pipeline/depends/meson.build b/test cases/common/27 pipeline/depends/meson.build new file mode 100644 index 0000000..5111fee --- /dev/null +++ b/test cases/common/27 pipeline/depends/meson.build @@ -0,0 +1,11 @@ +runner = find_program('copyrunner.py') + +copier = executable('copier', 'filecopier.c', native: true) + +cg = generator(runner, + output: ['@BASENAME@.c'], + arguments: [copier.full_path(), '@INPUT@', '@OUTPUT@'], + 
depends: copier) + +test('generatordep', + executable('gd', 'prog.c', cg.process('libsrc.c.in'))) diff --git a/test cases/common/27 pipeline/depends/prog.c b/test cases/common/27 pipeline/depends/prog.c new file mode 100644 index 0000000..f4a7dd3 --- /dev/null +++ b/test cases/common/27 pipeline/depends/prog.c @@ -0,0 +1,5 @@ +int func(); + +int main(int argc, char **argv) { + return func() != 42; +} diff --git a/test cases/common/27 pipeline/meson.build b/test cases/common/27 pipeline/meson.build index 200a6d8..e12cb7b 100644 --- a/test cases/common/27 pipeline/meson.build +++ b/test cases/common/27 pipeline/meson.build @@ -15,3 +15,9 @@ generated = gen.process(['input_src.dat']) e2 = executable('prog', 'prog.c', generated) test('pipelined', e2) + +# This is in a subdirectory to make sure +# we write proper subdir paths to output. +subdir('src') + +subdir('depends') diff --git a/test cases/common/28 pipeline/src/input_src.dat b/test cases/common/27 pipeline/src/input_src.dat index 354499a..354499a 100644 --- a/test cases/common/28 pipeline/src/input_src.dat +++ b/test cases/common/27 pipeline/src/input_src.dat diff --git a/test cases/common/28 pipeline/src/meson.build b/test cases/common/27 pipeline/src/meson.build index 4e9ac11..4e9ac11 100644 --- a/test cases/common/28 pipeline/src/meson.build +++ b/test cases/common/27 pipeline/src/meson.build diff --git a/test cases/common/28 pipeline/src/prog.c b/test cases/common/27 pipeline/src/prog.c index 29396b9..29396b9 100644 --- a/test cases/common/28 pipeline/src/prog.c +++ b/test cases/common/27 pipeline/src/prog.c diff --git a/test cases/common/28 pipeline/src/srcgen.c b/test cases/common/27 pipeline/src/srcgen.c index 26761d2..26761d2 100644 --- a/test cases/common/28 pipeline/src/srcgen.c +++ b/test cases/common/27 pipeline/src/srcgen.c diff --git a/test cases/common/28 pipeline/meson.build b/test cases/common/28 pipeline/meson.build deleted file mode 100644 index 0a430bd..0000000 --- a/test cases/common/28 
pipeline/meson.build +++ /dev/null @@ -1,5 +0,0 @@ -project('pipeline test', 'c') - -# This is in a subdirectory to make sure -# we write proper subdir paths to output. -subdir('src') diff --git a/test cases/common/40 has function/meson.build b/test cases/common/40 has function/meson.build index eb30acd..539f313 100644 --- a/test cases/common/40 has function/meson.build +++ b/test cases/common/40 has function/meson.build @@ -21,7 +21,7 @@ foreach cc : compilers # not taken from a cache (ie. the check above) # On MSVC fprintf is defined as an inline function in the header, so it cannot # be found without the include. - if cc.get_id() != 'msvc' + if not ['msvc', 'intel-cl'].contains(cc.get_id()) assert(cc.has_function('fprintf', args : unit_test_args), '"fprintf" function not found without include (on !msvc).') else diff --git a/test cases/fortran/1 basic/meson.build b/test cases/fortran/1 basic/meson.build index 042902f..52e2d6f 100644 --- a/test cases/fortran/1 basic/meson.build +++ b/test cases/fortran/1 basic/meson.build @@ -1,10 +1,13 @@ project('simple fortran', 'fortran') fc = meson.get_compiler('fortran') -if fc == 'gcc' +if fc.get_id() == 'gcc' add_global_arguments('-fbounds-check', language : 'fortran') endif +args = fc.first_supported_argument(['-ffree-form', '-free', '/free']) +assert(args != [], 'No arguments found?') + e = executable('simple', 'simple.f90', - fortran_args : '-ffree-form') + fortran_args : args) test('Simple Fortran', e) diff --git a/test cases/fortran/14 fortran links c/clib.def b/test cases/fortran/14 fortran links c/clib.def new file mode 100644 index 0000000..4caeb24 --- /dev/null +++ b/test cases/fortran/14 fortran links c/clib.def @@ -0,0 +1,2 @@ +EXPORTS + hello diff --git a/test cases/fortran/14 fortran links c/meson.build b/test cases/fortran/14 fortran links c/meson.build index 163aec6..cd1369d 100644 --- a/test cases/fortran/14 fortran links c/meson.build +++ b/test cases/fortran/14 fortran links c/meson.build @@ -5,7 +5,7 @@ 
if ccid == 'msvc' or ccid == 'clang-cl' error('MESON_SKIP_TEST: MSVC and GCC do not interoperate like this.') endif -c_lib = library('clib', 'clib.c') +c_lib = library('clib', 'clib.c', vs_module_defs : 'clib.def') f_call_c = executable('f_call_c', 'f_call_c.f90', link_with: c_lib, diff --git a/test cases/fortran/9 cpp/meson.build b/test cases/fortran/9 cpp/meson.build index ad7d4b2..7f73985 100644 --- a/test cases/fortran/9 cpp/meson.build +++ b/test cases/fortran/9 cpp/meson.build @@ -7,7 +7,7 @@ if cpp.get_id() == 'clang' error('MESON_SKIP_TEST Clang C++ does not find -lgfortran for some reason.') endif -if build_machine.system() == 'windows' and cpp.get_id() != 'gnu' +if build_machine.system() == 'windows' and cpp.get_id() != fc.get_id() error('MESON_SKIP_TEST mixing gfortran with non-GNU C++ does not work.') endif diff --git a/test cases/unit/55 introspection/meson.build b/test cases/unit/55 introspection/meson.build index f11d64d..3f013aa 100644 --- a/test cases/unit/55 introspection/meson.build +++ b/test cases/unit/55 introspection/meson.build @@ -34,6 +34,13 @@ systype = '@0@, @1@, @2@'.format(systype, host_machine.cpu_family(), host_machin message(systype) ### END: Test inspired by taisei +# Minimal code version to produce bug #5376 +# Code inspired by https://github.com/mesa3d/mesa/blob/974c4d679c23373dbed386c696e3e3bc1bfa23ae/meson.build#L1341-L1347 +osmesa_lib_name = 'OSMesa' +osmesa_bits = '8' +osmesa_lib_name = osmesa_lib_name + osmesa_bits +message(osmesa_lib_name) # Infinite recursion gets triggered here when the parameter osmesa_lib_name is resolved + test('test case 1', t1) test('test case 2', t2) benchmark('benchmark 1', t3)
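The recurring change from `PerMachine(None, None, None)` to `PerMachine(None, None)` in the tests above matches the design note this change adds to Contributing.md: a build directory has at most two toolchains, one native and one cross, so the third (target) slot is dropped. The following is an illustrative sketch only, not Meson's actual `mesonlib.PerMachine` class; the two-slot shape and the `host is build` aliasing are the points being demonstrated.

```python
# Illustrative sketch -- a simplified stand-in for Meson's PerMachine,
# reduced to the two machines a build directory can now have.
class PerMachine:
    def __init__(self, build, host):
        self.build = build  # values for the native (build machine) toolchain
        self.host = host    # values for the cross (host machine) toolchain

    def __getitem__(self, machine):
        # machine is the string 'build' or 'host'
        return {'build': self.build, 'host': self.host}[machine]

# Caches start out empty for both machines, as in the updated tests:
class_pkgbin = PerMachine(None, None)
```

In a native-only build the host entry can simply alias the build entry, which is what the `v.host is not v.build` check in the `munstable_coredata.py` hunk above relies on to decide whether any cross dependencies exist at all.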