| author | Bill Schmidt <wschmidt@linux.vnet.ibm.com> | 2015-07-21 21:40:17 +0000 |
|---|---|---|
| committer | Bill Schmidt <wschmidt@linux.vnet.ibm.com> | 2015-07-21 21:40:17 +0000 |
| commit | 2be8054b49c42a6bd7d1e94f2b9ca24d92ae7311 (patch) | |
| tree | b7a050ac1060d2a5c5a44d011033bf7a7a2e2bef | |
| parent | c1fbb3540a22c64b0afcfb3c2e99171ae7b13414 (diff) | |
[PPC64LE] More vector swap optimization TLC
This makes one substantive change and a few stylistic changes to the
VSX swap optimization pass.
The substantive change is to permit LXSDX and LXSSPX instructions to
participate in swap optimization computations. The previous change to
insert a swap following a SUBREG_TO_REG widening operation makes this
almost trivial.
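The underlying idea the pass relies on can be illustrated outside of LLVM. On PPC64LE, VSX vector loads and stores (e.g. lxvd2x/stxvd2x) produce or consume doubleword-swapped values, so swaps (xxswapd) are inserted to restore element order; since a swap is its own inverse and commutes with lane-insensitive operations, whole webs of paired swaps can be removed. A minimal conceptual sketch (plain Python, not LLVM code; the function names are illustrative only):

```python
def xxswapd(v):
    """Model of xxswapd: swap the two doubleword halves of a vector.

    Here a 128-bit register is modeled as a 4-element list, so the
    halves are v[0:2] and v[2:4].
    """
    return v[2:] + v[:2]

def vadd(a, b):
    """A lane-insensitive operation: elementwise add."""
    return [x + y for x, y in zip(a, b)]

v = [0, 1, 2, 3]
w = [4, 5, 6, 7]

# A swap is its own inverse, so back-to-back swaps cancel:
assert xxswapd(xxswapd(v)) == v

# Lane-insensitive ops commute with the swap, which is what lets the
# pass push swaps through a web of computations and cancel them in pairs:
assert xxswapd(vadd(v, w)) == vadd(xxswapd(v), xxswapd(w))
```

Scalar loads such as LXSDX place their result in one half of the register, so once a swap is inserted after the widening SUBREG_TO_REG, the value behaves like any other swapped vector in the web.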
I experimented with also permitting STXSDX and STXSSPX instructions.
This can be done with a similar technique: insert a swap prior to the
narrowing COPY operation, and then permit these stores to
participate. I prototyped this, but discovered that the pattern of a
narrowing COPY followed by an STXSDX does not occur in any of our
test-suite code. So instead, I added commentary indicating that this
could be done in the future.
Other TLC:
- I changed SH_COPYSCALAR to SH_COPYWIDEN to more clearly indicate
the direction of the copy.
- I factored the insertion of swap instructions into a separate
function.
Finally, I added a new test case to check that the scalar-to-vector
loads are working properly with swap optimization.
llvm-svn: 242838