author     Peter Maydell <peter.maydell@linaro.org>    2019-06-11 16:39:44 +0100
committer  Peter Maydell <peter.maydell@linaro.org>    2019-06-13 15:14:04 +0100
commit     160f3b64c5cc4c8a09a1859edc764882ce6ad6bf
tree       8adf02874d4142680d7a87e8831da4ea2f1a4386
parent     f7bbb8f31f0761edbf0c64b7ab3c3f49c13612ea
target/arm: Add helpers for VFP register loads and stores
The current VFP code has two different idioms for
loading values from and storing values to the VFP register file:
1. using the gen_mov_F0_vreg() and similar functions, which load
   from and store to a fixed set of TCG globals (cpu_F0s, cpu_F0d, etc.);
2. by direct calls to tcg_gen_ld_f64() and friends.
We want to phase out idiom 1 (because the use of the
fixed globals is a relic of a much older version of TCG),
but idiom 2 is quite long-winded:
tcg_gen_ld_f64(tmp, cpu_env, vfp_reg_offset(true, reg))
requires us to specify the 64-bitness twice, once in
the function name and once by passing 'true' to
vfp_reg_offset(). There's no guard against accidentally
passing the wrong flag.
Instead, let's move to a convention of accessing 64-bit
registers via the existing neon_load_reg64() and
neon_store_reg64(), and provide new neon_load_reg32()
and neon_store_reg32() for the 32-bit equivalents.
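To make the intent concrete, here is a minimal sketch of what the
new 32-bit accessors could look like, modelled directly on the
existing 64-bit helpers; the exact bodies are illustrative rather
than a quote of the committed code:

    /* Sketch only, modelled on neon_load_reg64()/neon_store_reg64();
     * the committed implementation may differ in detail.
     */
    static inline void neon_load_reg32(TCGv_i32 var, int reg)
    {
        /* 'false' selects the single-precision register layout, so
         * the bitness is stated exactly once, by the helper's name.
         */
        tcg_gen_ld_i32(var, cpu_env, vfp_reg_offset(false, reg));
    }

    static inline void neon_store_reg32(TCGv_i32 var, int reg)
    {
        tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
    }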
Implement the new functions and use them in the code in
translate-vfp.inc.c. We will convert the rest of the VFP
code as we do the decodetree conversion in subsequent
commits.
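As a hypothetical before/after illustration of a load in
translate-vfp.inc.c (the variable names tmp and tmp32 are invented
for the example):

    /* old idiom: the 64-bitness is specified twice, once in the
     * function name and once via the 'true' flag, with no guard
     * against passing the wrong flag
     */
    tcg_gen_ld_f64(tmp, cpu_env, vfp_reg_offset(true, reg));

    /* new convention: the helper name alone says which width is meant */
    neon_load_reg64(tmp, reg);
    neon_load_reg32(tmp32, reg);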
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>