author | Jean Perier <jperier@nvidia.com> | 2022-03-09 09:41:55 +0100
committer | Jean Perier <jperier@nvidia.com> | 2022-03-09 09:42:07 +0100
commit | 3b7ec85a1e2cd3ba581e88543986fa5031f270b7 (patch)
tree | 66b3f1f662acf82d96a5a434ed5e63ce4b52d8eb /clang/lib/Frontend/CompilerInvocation.cpp
parent | ba8ee4a43e39218f7bdfd7627db09543c33d9792 (diff)
[flang] Use unix logical representation for fir.logical
The front end and the runtime already use the Unix logical
representation, but lowering did not. This inconsistency could have
caused issues.
The only place in lowering that defines the logical representation is
the translation from FIR to LLVM (FIR itself is agnostic to the actual
representation). More precisely, the LLVM codegen of `fir.convert`
between `i1` and `fir.logical` is what defines the representation:
- `fir.convert` from `i1` to `fir.logical` defines the canonical `.true.` and `.false.` representations
- `fir.convert` from `fir.logical` to `i1` decides what the test for truth is.
The Unix representation (sketched in C after the list) is:
- .true. canonical integer representation is 1
- .false. canonical integer representation is 0
- the test for truth is "integer representation != 0"
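Not part of the patch: a minimal C sketch of the semantics the `fir.convert` lowering gives under the Unix representation. The helper names and the use of `int32_t` for the logical storage are illustrative assumptions, not flang code.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch only: fir.convert i1 -> fir.logical picks the canonical representation. */
static int32_t logical_from_i1(_Bool b) {
  return b ? 1 : 0; /* .true. -> 1, .false. -> 0 */
}

/* Sketch only: fir.convert fir.logical -> i1 is the test for truth. */
static _Bool i1_from_logical(int32_t l) {
  return l != 0; /* truth is "integer representation != 0" */
}

int main(void) {
  printf("%d %d\n", logical_from_i1(1), (int)i1_from_logical(2)); /* prints "1 1" */
  return 0;
}
```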
For the record, the representation previously used in codegen was as
follows (the sketch after the list shows where the two truth tests disagree):
- .true. canonical integer representation is -1 (all bits 1)
- .false. canonical integer representation is 0
- the test for truth is "integer representation lowest bit == 1"
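Again an illustrative C sketch, not flang code: the two truth tests can disagree on the same stored bits, e.g. a logical whose storage holds 2 is true under the Unix test but false under the previous lowest-bit test.

```c
#include <stdint.h>
#include <stdio.h>

static int unix_truth(int32_t l)       { return l != 0; }        /* new test */
static int lowest_bit_truth(int32_t l) { return (l & 1) == 1; }  /* old test */

int main(void) {
  int32_t l = 2; /* hypothetical value written by code using a different convention */
  printf("unix: %d, lowest-bit: %d\n", unix_truth(l), lowest_bit_truth(l));
  /* prints: unix: 1, lowest-bit: 0 */
  return 0;
}
```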
Differential Revision: https://reviews.llvm.org/D121200