author     Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>   2019-03-07 18:05:20 +0000
committer  David Gibson <david@gibson.dropbear.id.au>         2019-03-12 14:33:04 +1100
commit     d59d1182b14fcdad350108012fb015e6c2d355f0
tree       a53994758e0ee72308d682ae18fe4068566f708e /target/ppc/translate
parent     8a14d31b00ae82ed430806bac96962b73fe6967f
target/ppc: introduce vsr64_offset() to simplify get_cpu_vsr{l,h}() and set_cpu_vsr{l,h}()
Now that all VSX registers are stored in host endian order, there is no need
to go via different accessors depending upon the register number. Instead we
introduce vsr64_offset() and use it directly from within get_cpu_vsr{l,h}() and
set_cpu_vsr{l,h}().
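
The vsr64_offset() helper itself is added to target/ppc/cpu.h rather than to the
file shown in the diff below. As a rough sketch of the shape it takes (not the
verbatim patch: the VsrD() host-endian element macro, the vsr_full_offset()
helper and the ppc_vsr_t field names are assumptions drawn from the surrounding
series), selecting one 64-bit half of a VSR reduces to a single offset
computation:

    /* Sketch only, modelled on target/ppc/cpu.h: pick the most- or
     * least-significant doubleword of VSR i, independent of host byte
     * order.  VsrD(0) always names the most-significant half. */
    #if defined(HOST_WORDS_BIGENDIAN)
    #define VsrD(i) u64[(i)]
    #else
    #define VsrD(i) u64[1 - (i)]
    #endif

    static inline int vsr64_offset(int i, bool high)
    {
        return vsr_full_offset(i) + offsetof(ppc_vsr_t, VsrD(high ? 0 : 1));
    }

where vsr_full_offset(i) is assumed to return the offset of the full 128-bit
vsr[i] within CPUPPCState.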
This also allows us to rewrite avr64_offset() and fpr_offset() in terms of the
new vsr64_offset() function, more clearly expressing the relationship between
the VSX, FPR and VMX registers, and to remove vsrl_offset(), which is no longer
required.
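
With vsr64_offset() in place, the FPR and VMX accessors become thin wrappers.
The mapping is the one visible in the deleted code below: FPRs alias the
most-significant doubleword of VSRs 0-31, and VMX register n is VSR n + 32. A
sketch of that relationship (again assuming the cpu.h helpers named above, not
quoting the patch):

    static inline int fpr_offset(int i)
    {
        /* FPR i lives in the high half of VSR i */
        return vsr64_offset(i, true);
    }

    static inline int avr64_offset(int i, bool high)
    {
        /* VMX (Altivec) register i is VSR i + 32 */
        return vsr64_offset(i + 32, high);
    }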
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20190307180520.13868-8-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Diffstat (limited to 'target/ppc/translate')
 target/ppc/translate/vsx-impl.inc.c | 34 ++++------------------------------
 1 file changed, 4 insertions(+), 30 deletions(-)
diff --git a/target/ppc/translate/vsx-impl.inc.c b/target/ppc/translate/vsx-impl.inc.c
index 7d02a23..95a269f 100644
--- a/target/ppc/translate/vsx-impl.inc.c
+++ b/target/ppc/translate/vsx-impl.inc.c
@@ -1,49 +1,23 @@
 /*** VSX extension ***/
 
-static inline void get_vsrl(TCGv_i64 dst, int n)
-{
-    tcg_gen_ld_i64(dst, cpu_env, vsrl_offset(n));
-}
-
-static inline void set_vsrl(int n, TCGv_i64 src)
-{
-    tcg_gen_st_i64(src, cpu_env, vsrl_offset(n));
-}
-
 static inline void get_cpu_vsrh(TCGv_i64 dst, int n)
 {
-    if (n < 32) {
-        get_fpr(dst, n);
-    } else {
-        get_avr64(dst, n - 32, true);
-    }
+    tcg_gen_ld_i64(dst, cpu_env, vsr64_offset(n, true));
 }
 
 static inline void get_cpu_vsrl(TCGv_i64 dst, int n)
 {
-    if (n < 32) {
-        get_vsrl(dst, n);
-    } else {
-        get_avr64(dst, n - 32, false);
-    }
+    tcg_gen_ld_i64(dst, cpu_env, vsr64_offset(n, false));
 }
 
 static inline void set_cpu_vsrh(int n, TCGv_i64 src)
 {
-    if (n < 32) {
-        set_fpr(n, src);
-    } else {
-        set_avr64(n - 32, src, true);
-    }
+    tcg_gen_st_i64(src, cpu_env, vsr64_offset(n, true));
 }
 
 static inline void set_cpu_vsrl(int n, TCGv_i64 src)
 {
-    if (n < 32) {
-        set_vsrl(n, src);
-    } else {
-        set_avr64(n - 32, src, false);
-    }
+    tcg_gen_st_i64(src, cpu_env, vsr64_offset(n, false));
 }
 
 #define VSX_LOAD_SCALAR(name, operation)                      \