| author | Sanjay Patel <spatel@rotateright.com> | 2020-05-25 07:50:45 -0400 |
|---|---|---|
| committer | Sanjay Patel <spatel@rotateright.com> | 2020-05-25 08:01:48 -0400 |
| commit | fa038e03504c7d0dfd438b1dfdd6da7081e75617 (patch) | |
| tree | 4a4fd1fb92a3a0bc0496a1782ffb3a78c2291218 | /lldb/test/API/python_api/process |
| parent | 8f48814879c06bbf9f211fa5d959419f0d2d38b6 (diff) | |
[x86] favor vector constant load to avoid GPR to XMM transfer, part 2
This replaces the build_vector lowering code that was just added in
D80131's predecessor, D80013, and instead matches the pattern later via the
x86-specific "vzext_movl" node. That seems to result in the same or
better improvements and gets rid of the TODO items from that patch.
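To illustrate the kind of pattern involved, here is a sketch of mine
(not code from the patch; the function name and constant are
hypothetical, and the asm in the comments is indicative rather than
exact):

```c
#include <immintrin.h>

/* A build_vector whose only non-zero lane is a constant in lane 0.
 * Materializing it through a GPR costs a transfer:
 *     movabsq $0x1122334455667788, %rax
 *     vmovq   %rax, %xmm0              # GPR -> XMM transfer
 * Favoring a constant-pool load avoids the GPR round-trip:
 *     vmovsd  .LCPI0_0(%rip), %xmm0    # load zero-extends above lane 0
 * Exact output depends on compiler version and target flags. */
__m128i lane0_constant(void) {
    return _mm_set_epi64x(0, 0x1122334455667788LL);
}
```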
AFAICT, we always shrink wider constant vectors to 128-bit on these
patterns, so we still get the implicit zero-extension to ymm/zmm
without wasting space on larger vector constants. There's a trade-off,
though: shrinking the constant means we miss potential load-folding.
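A hedged sketch of the shrink-to-128-bit point (names and values are
hypothetical; the comments describe AVX semantics, not output captured
from this patch):

```c
#include <immintrin.h>

/* A 256-bit constant whose upper 128 bits are zero. Only the low
 * 128 bits need a constant-pool entry: a VEX-encoded 128-bit load,
 *     vmovaps .LCPI0_0(%rip), %xmm0    # implicitly zeroes ymm0[255:128]
 * zero-extends into the full ymm register, halving constant-pool
 * space. The trade-off: a full 256-bit entry could instead be folded
 * as a memory operand into a consuming op, e.g.
 *     vaddpd  .LCPI0_0(%rip), %ymm1, %ymm1 */
__m256d upper_zero_constant(void) {
    return _mm256_set_pd(0.0, 0.0, 2.0, 1.0);
}
```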
Similarly, we could load scalar constants here with implicit
zero-extension even to 128-bit. That saves constant space, but it
means we forgo load-folding, and so it increases register pressure.
Using 128-bit constants seems like a good middle ground between those
two options.
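For contrast, a sketch of the scalar-constant alternative described
above (again with hypothetical names; the asm comments are
indicative):

```c
#include <immintrin.h>

/* <42.0, 0.0>: an 8-byte constant plus a scalar load with implicit
 * zero-extension,
 *     vmovsd .LCPI0_0(%rip), %xmm0     # zeroes xmm0[127:64]
 * halves the constant-pool footprint, but packed users cannot fold
 * the 8-byte memory operand (e.g. vaddpd needs a full-width m128), so
 * the value must stay live in a register, raising register pressure. */
__m128d scalar_in_lane0(void) {
    return _mm_set_sd(42.0);
}
```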
Differential Revision: https://reviews.llvm.org/D80131
