path: root/target/ppc/int_helper.c
2019-08-21  target/ppc: Optimize emulation of vclzw instruction  (Stefan Brankovic, 1 file, -3/+0)
Optimize the Altivec instruction vclzw (Vector Count Leading Zeros Word). This instruction counts the number of leading zeros of each word element in the source register and places the result in the corresponding word element of the destination register.

Counting is performed in four iterations of a for loop, one for each word element of source register vB. Each iteration consists of loading the appropriate word element from the source register, counting its leading zeros with tcg_gen_clzi_i32, and saving the result in the appropriate word element of the destination register.

Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1563200574-11098-7-git-send-email-stefan.brankovic@rt-rk.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
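A minimal sketch of the loop described above, in TCG translator style. The trans_vclzw_sketch() entry point and the avr_full_offset() helper are assumptions for illustration, not the exact QEMU code:

    static void trans_vclzw_sketch(int vD, int vB)
    {
        TCGv_i32 tmp = tcg_temp_new_i32();
        int i;

        /* One iteration per 32-bit word element of vB. */
        for (i = 0; i < 4; i++) {
            tcg_gen_ld_i32(tmp, cpu_env, avr_full_offset(vB) + i * 4);
            /* clzi returns its third operand (32) when the input is 0. */
            tcg_gen_clzi_i32(tmp, tmp, 32);
            tcg_gen_st_i32(tmp, cpu_env, avr_full_offset(vD) + i * 4);
        }
        tcg_temp_free_i32(tmp);
    }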
2019-08-21  target/ppc: Optimize emulation of vclzd instruction  (Stefan Brankovic, 1 file, -3/+0)
Optimize the Altivec instruction vclzd (Vector Count Leading Zeros Doubleword). This instruction counts the number of leading zeros of each doubleword element in the source register and places the result in the corresponding doubleword element of the destination register.

tcg's count-leading-zeros operation is used twice, once for each doubleword element of source register vB, and the results are placed in the appropriate doubleword elements of destination register vD.

Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1563200574-11098-6-git-send-email-stefan.brankovic@rt-rk.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-08-21  target/ppc: Optimize emulation of vgbbd instruction  (Stefan Brankovic, 1 file, -276/+0)
Optimize the Altivec instruction vgbbd (Vector Gather Bits by Bytes by Doubleword). All ith bits (i in range 1 to 8) of each byte of a doubleword element in the source register are concatenated and placed into the ith byte of the corresponding doubleword element in the destination register.

The following solution is applied to both doubleword elements of the source register in parallel, in order to reduce the number of instructions needed (which is why arrays are used):

First, both doubleword elements of source register vB are placed in the appropriate elements of array avr. The bits are gathered in 2x8 iterations (two for loops). In the first iteration, bit 1 of byte 1, bit 2 of byte 2, ..., bit 8 of byte 8 are already in their final spots, so avr[i], i={0,1}, can be AND-ed with tcg_mask. For every following iteration, avr[i] and tcg_mask have to be shifted right by 7 and 8 places respectively, in order to get bit 1 of byte 2, bit 2 of byte 3, ..., bit 7 of byte 8 into their final spots, so the shifted avr values (saved in tmp) can be AND-ed with the new value of tcg_mask. After the first 8 iterations (the first loop), all the first bits are in their final places, all the second bits except the second bit of the eighth byte are in their places, and so on; only the eighth bit of the eighth byte is in its place. In the second loop all operations are performed symmetrically, in order to get the other half of the bits into their final spots. The results for the first and second doubleword elements are saved in result[0] and result[1] respectively, and in the end are stored in the appropriate doubleword elements of destination register vD.

Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1563200574-11098-5-git-send-email-stefan.brankovic@rt-rk.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
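The per-doubleword effect described above is an 8x8 bit-matrix transpose. A plain C reference model of the result (for clarity only; this is not the TCG implementation) could be:

    #include <stdint.h>

    /* Byte i of the result collects bit i of each of the eight input
     * bytes, i.e. the 8x8 bit matrix held in one doubleword is transposed. */
    static uint64_t vgbbd_dword_model(uint64_t in)
    {
        uint64_t out = 0;
        int i, j;

        for (i = 0; i < 8; i++) {          /* result byte index */
            for (j = 0; j < 8; j++) {      /* source byte index */
                uint64_t bit = (in >> (j * 8 + i)) & 1;
                out |= bit << (i * 8 + j);
            }
        }
        return out;
    }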
2019-08-21  target/ppc: Optimize emulation of vsl and vsr instructions  (Stefan Brankovic, 1 file, -35/+0)
Optimize the Altivec instructions vsl and vsr (Vector Shift Left/Right). They perform a shift operation (left and right respectively) on the 128-bit value of register vA by the amount specified in bits 125-127 of register vB. The lowest 3 bits in each byte element of register vB must be identical, or the result is undefined.

For the vsl instruction, the first step is to save bits 125-127 of register vB in variable sh. Then the highest sh bits of the lower doubleword element of register vA are saved in variable shifted, in order not to lose those bits when the shift operation is performed on the lower doubleword element of register vA, which is the next step. After shifting the lower doubleword element, the shift operation is performed on the higher doubleword element of vA, replacing its lowest sh bits (which are now 0) with the bits saved in shifted.

For the vsr instruction, bits 125-127 of register vB are likewise saved in variable sh first. Then the lowest sh bits of the higher doubleword element of register vA are saved in variable shifted, in order not to lose those bits when the shift operation is performed on the higher doubleword element of register vA, which is the next step. After shifting the higher doubleword element, the shift operation is performed on the lower doubleword element of vA, replacing its highest sh bits (which are now 0) with the bits saved in shifted.

Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1563200574-11098-3-git-send-email-stefan.brankovic@rt-rk.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
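A hedged C model of the vsl case (vsr mirrors it in the other direction); hi/lo stand for the high/low doubleword elements of vA, and sh is bits 125-127 of vB:

    #include <stdint.h>

    /* 128-bit left shift by sh (0..7), built from two 64-bit halves. */
    static void vsl_model(uint64_t *hi, uint64_t *lo, unsigned sh)
    {
        if (sh != 0) {
            /* Save the highest sh bits of the low doubleword first... */
            uint64_t shifted = *lo >> (64 - sh);

            *lo <<= sh;
            /* ...then shift the high doubleword and splice them back in. */
            *hi = (*hi << sh) | shifted;
        }
    }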
2019-08-21  target/ppc: Optimize emulation of lvsl and lvsr instructions  (Stefan Brankovic, 1 file, -18/+0)
Add a simple macro that calls the tcg implementation of the appropriate instruction if altivec support is active.

Optimize the Altivec instruction lvsl (Load Vector for Shift Left). It places bytes sh:sh+15 of the value 0x00 || 0x01 || 0x02 || ... || 0x1E || 0x1F in the destination register. Sh is calculated by adding the two source registers and taking bits 60-63 of the result.

First, the low four bits of the EA (bits 60-63) are placed in variable sh. After that, the bytes are created in the following way: bytes sh:(sh+7) of X (from the description) by multiplying sh by 0x0101010101010101 followed by addition of 0x0001020304050607 to the result; the value obtained is placed in the higher doubleword element of vD. Bytes (sh+8):(sh+15) by adding 0x08090a0b0c0d0e0f to the result of the same multiplication; the value obtained is placed in the lower doubleword element of vD.

Optimize the Altivec instruction lvsr (Load Vector for Shift Right). It places bytes 16-sh:31-sh of the value 0x00 || 0x01 || 0x02 || ... || 0x1E || 0x1F in the destination register. Sh is calculated by adding the two source registers and taking bits 60-63 of the result.

First, the low four bits of the EA (bits 60-63) are placed in variable sh. After that, the bytes are created in the following way: bytes sh:(sh+7) by multiplying sh by 0x0101010101010101 followed by subtraction of the result from 0x1011121314151617; the value obtained is placed in the higher doubleword element of vD. Bytes (sh+8):(sh+15) by subtracting the result of the same multiplication from 0x18191a1b1c1d1e1f; the value obtained is placed in the lower doubleword element of vD.

Signed-off-by: Stefan Brankovic <stefan.brankovic@rt-rk.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1563200574-11098-2-git-send-email-stefan.brankovic@rt-rk.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
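A compact C model of the lvsl arithmetic described above (lvsr subtracts from the 0x1011... constants instead); this sketches the computation, not the TCG code:

    #include <stdint.h>

    /* sh is the low four bits of EA (rA + rB). */
    static void lvsl_model(uint64_t *vd_hi, uint64_t *vd_lo, unsigned sh)
    {
        uint64_t t = sh * 0x0101010101010101ULL;  /* replicate sh into all 8 bytes */

        *vd_hi = t + 0x0001020304050607ULL;       /* bytes sh .. sh+7    */
        *vd_lo = t + 0x08090a0b0c0d0e0fULL;       /* bytes sh+8 .. sh+15 */
    }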
2019-08-16  Include qemu/main-loop.h less  (Markus Armbruster, 1 file, -0/+2)
In my "build everything" tree, changing qemu/main-loop.h triggers a recompile of some 5600 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). It includes block/aio.h, which in turn includes qemu/event_notifier.h, qemu/notify.h, qemu/processor.h, qemu/qsp.h, qemu/queue.h, qemu/thread-posix.h, qemu/thread.h, qemu/timer.h, and a few more.

Include qemu/main-loop.h only where it's needed. Touching it now recompiles only some 1700 objects. For block/aio.h and qemu/event_notifier.h, these numbers drop from 5600 to 2800. For the others, they shrink only slightly.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190812052359.30071-21-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2019-07-02  target/ppc: decode target register in VSX_EXTRACT_INSERT at translation time  (Mark Cave-Ayland, 1 file, -8/+4)
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190616123751.781-15-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-07-02  target/ppc: remove getVSR()/putVSR() from int_helper.c  (Mark Cave-Ayland, 1 file, -12/+10)
Since commit 8a14d31b00 "target/ppc: switch fpr/vsrl registers so all VSX registers are in host endian order", the functions getVSR() and putVSR(), which used to convert the VSR registers into host endian order, are no longer required. Now that there are no more users of getVSR()/putVSR(), these functions can be completely removed.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190616123751.781-4-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-05-29  target/ppc: Use vector variable shifts for VSL, VSR, VSRA  (Richard Henderson, 1 file, -37/+0)
The gvec expanders take care of masking the shift amount against the element width.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190518191430.21686-2-richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
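A sketch of what the translator can emit instead of a helper call; avr_full_offset() and the register numbers are placeholders, but tcg_gen_gvec_shlv is the variable-shift expander this change relies on:

    /* Per-byte variable left shift: each shift amount is taken modulo
     * the element width by the gvec expander itself. */
    static void gen_vslb_sketch(int vD, int vA, int vB)
    {
        tcg_gen_gvec_shlv(MO_8, avr_full_offset(vD), avr_full_offset(vA),
                          avr_full_offset(vB), 16, 16);
    }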
2019-05-29  target/ppc: Fix vsum2sws  (Anton Blanchard, 1 file, -1/+1)
A recent cleanup changed the pre-zeroing of the result from 64 bit to 32 bit operations:

    -            result.u64[i] = 0;
    +            result.VsrW(i) = 0;

This corrupts the result.

Fixes: 60594fea298d ("target/ppc: remove various HOST_WORDS_BIGENDIAN hacks in int_helper.c")
Signed-off-by: Anton Blanchard <anton@ozlabs.org>
Message-Id: <20190507004811.29968-9-anton@ozlabs.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-05-29  target/ppc: Fix vslv and vsrv  (Anton Blanchard, 1 file, -7/+7)
vslv and vsrv are broken on little endian: we append 0x00 at the high byte rather than the low byte. Fix this by using the VsrB() accessor.

Signed-off-by: Anton Blanchard <anton@ozlabs.org>
Message-Id: <20190507004811.29968-6-anton@ozlabs.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
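A hedged sketch of the endian-safe vslv pattern after the fix; the loop body is reconstructed for illustration, but VsrB() is the accessor the message names (it indexes bytes in element order on any host):

    /* Each result byte is the corresponding byte of vA shifted left,
     * with fill bits taken from the next lower-order byte; the last
     * element appends 0x00. */
    for (i = 0; i < 16; i++) {
        unsigned shift = b->VsrB(i) & 0x7;
        uint16_t bytes = (a->VsrB(i) << 8) | (i + 1 < 16 ? a->VsrB(i + 1) : 0);

        r->VsrB(i) = (uint8_t)((bytes << shift) >> 8);
    }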
2019-05-22  target/ppc: Use qemu_guest_getrandom for DARN  (Richard Henderson, 1 file, -11/+26)
We now have an interface for guest-visible random numbers.

Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
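A minimal sketch of the resulting helper shape; qemu_guest_getrandom(buf, len, errp) is the real interface, while the helper name and error handling here are illustrative:

    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "qemu/guest-random.h"

    /* Fill a 64-bit result from the guest-visible RNG; DARN signals
     * failure to the guest with an all-ones value. */
    uint64_t helper_darn64_sketch(void)
    {
        Error *err = NULL;
        uint64_t ret;

        if (qemu_guest_getrandom(&ret, sizeof(ret), &err) < 0) {
            error_free(err);
            return UINT64_MAX;
        }
        return ret;
    }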
2019-04-26  target/ppc: Style fixes for int_helper.c  (David Gibson, 1 file, -31/+39)
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
2019-02-18  target/ppc: convert vmin* and vmax* to vector operations  (Richard Henderson, 1 file, -27/+0)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-18-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: convert vadd*s and vsub*s to vector operations  (Richard Henderson, 1 file, -14/+4)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-17-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: Split out VSCR_SAT to a vector field  (Richard Henderson, 1 file, -3/+8)
Change the representation of VSCR_SAT such that it is easy to set from vector code.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-16-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: Add set_vscr_sat  (Richard Henderson, 1 file, -12/+17)
This is required before changing the representation of the register.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-15-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: Add helper_mfvscr  (Richard Henderson, 1 file, -0/+5)
This is required before changing the representation of the register.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-13-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: Pass integer to helper_mtvscr  (Richard Henderson, 1 file, -3/+3)
We can re-use this helper elsewhere if we're not passing in an entire vector register.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-10-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: convert vsplt[bhw] to use vector operations  (Richard Henderson, 1 file, -19/+0)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-5-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: convert vspltis[bhw] to use vector operations  (Richard Henderson, 1 file, -15/+0)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190215100058.20015-4-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-18  target/ppc: convert vaddu[b,h,w,d] and vsubu[b,h,w,d] over to use vector operations  (Mark Cave-Ayland, 1 file, -7/+0)
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-3-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-04  target/ppc: remove various HOST_WORDS_BIGENDIAN hacks in int_helper.c  (Mark Cave-Ayland, 1 file, -110/+45)
Following on from the previous work, there are numerous endian-related hacks in int_helper.c that can now be replaced with Vsr* macros.

There are also a few places where the VECTOR_FOR_INORDER_I macro can be replaced with a normal iterator since the processing order is irrelevant.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-04  target/ppc: remove ROTRu32 and ROTRu64 macros from int_helper.c  (Mark Cave-Ayland, 1 file, -28/+20)
Richard points out that these macros suffer from a -fsanitize=shift bug in that they improperly handle n == 0, turning it into a shift by 32/64 respectively. Replace them with QEMU's existing ror32() and ror64() functions instead.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
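The hazard and the fix, sketched; the original macro body is assumed from the description:

    #include <stdint.h>

    /* Buggy pattern: for n == 0 the right-hand side shifts a 32-bit
     * value by 32, which is undefined behaviour and exactly what
     * -fsanitize=shift flags. */
    #define ROTRu32_BUGGY(v, n)  (((v) >> (n)) | ((v) << (32 - (n))))

    /* ror32()-style rotate: masking the complementary shift count
     * keeps n == 0 well-defined (shift must still be < 32). */
    static inline uint32_t ror32_like(uint32_t word, unsigned int shift)
    {
        return (word >> shift) | (word << ((32 - shift) & 31));
    }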
2019-02-04  target/ppc: simplify VEXT_SIGNED macro in int_helper.c  (Mark Cave-Ayland, 1 file, -7/+7)
As pointed out by Richard: it does not need the mask argument, nor does it need the recast argument. The masking is implied by the cast argument, and the recast is implied by the assignment.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
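An illustrative before/after of the simplification; the macro bodies are reconstructed from the description (the old one renamed here only so both can be shown side by side):

    /* Before: mask and recast are redundant. */
    #define VEXT_SIGNED_OLD(name, element, mask, cast, recast)            \
        void helper_##name(ppc_avr_t *r, ppc_avr_t *b)                    \
        {                                                                 \
            int i;                                                        \
            for (i = 0; i < ARRAY_SIZE(r->element); i++) {                \
                r->element[i] = (recast)((cast)(b->element[i] & mask));   \
            }                                                             \
        }

    /* After: the cast implies the masking, the assignment the recast. */
    #define VEXT_SIGNED(name, element, cast)                              \
        void helper_##name(ppc_avr_t *r, ppc_avr_t *b)                    \
        {                                                                 \
            int i;                                                        \
            for (i = 0; i < ARRAY_SIZE(r->element); i++) {                \
                r->element[i] = (cast)b->element[i];                      \
            }                                                             \
        }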
2019-02-04  target/ppc: eliminate use of EL_IDX macros from int_helper.c  (Mark Cave-Ayland, 1 file, -39/+27)
These macros can be eliminated by instead using the relevant Vsr* macros in the few locations where they appear.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-04  target/ppc: eliminate use of HI_IDX and LO_IDX macros from int_helper.c  (Mark Cave-Ayland, 1 file, -95/+85)
The original purpose of these macros was to correctly reference the high and low parts of the VSRs regardless of the host endianness. Replace these direct references to high and low parts with the relevant VsrD macro instead, and completely remove the now-unused HI_IDX and LO_IDX macros.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
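For context, a sketch of the pattern being removed (the definitions follow the usual QEMU shape):

    #if defined(HOST_WORDS_BIGENDIAN)
    #define HI_IDX 0
    #define LO_IDX 1
    #else
    #define HI_IDX 1
    #define LO_IDX 0
    #endif

    /* r->u64[HI_IDX] becomes r->VsrD(0), and r->u64[LO_IDX] becomes
     * r->VsrD(1): VsrD() numbers doublewords in element order on any host. */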
2019-02-04  target/ppc: rework vmul{e,o}{s,u}{b,h,w} instructions to use Vsr* macros  (Mark Cave-Ayland, 1 file, -21/+27)
The current implementations make use of the endian-specific macros HI_IDX and LO_IDX directly to calculate array offsets. Rework the implementation to use the Vsr* macros so that these per-endian references can be removed.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-02-04  target/ppc: rework vmrg{l,h}{b,h,w} instructions to use Vsr* macros  (Mark Cave-Ayland, 1 file, -35/+19)
The current implementations make use of the endian-specific macros MRGLO/MRGHI and also reference HI_IDX and LO_IDX directly to calculate array offsets. Rework the implementation to use the Vsr* macros so that these per-endian references can be removed.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-01-09  target/ppc: replace AVR* macros with Vsr* macros  (Mark Cave-Ayland, 1 file, -17/+13)
Now that the VMX and VSR register sets have been combined, the same macros can be used to access both AVR and VSR field members.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2019-01-09  target/ppc: merge ppc_vsr_t and ppc_avr_t union types  (Mark Cave-Ayland, 1 file, -27/+29)
Since the VSX registers are actually a superset of the VMX registers, they can be represented by the same type. Merge ppc_avr_t into ppc_vsr_t and change ppc_avr_t to be a simple typedef alias.

Note that due to a difference in the naming of the float32 member between ppc_avr_t and ppc_vsr_t, references to the ppc_avr_t f member must be replaced with f32 instead.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
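A sketch of the resulting shape; the field list is abridged and reconstructed from the description rather than copied from cpu.h:

    typedef union ppc_vsr_t {
        uint8_t  u8[16];
        uint16_t u16[8];
        uint32_t u32[4];
        uint64_t u64[2];
        float32  f32[4];   /* was the 'f' member in the old ppc_avr_t */
        float64  f64[2];
        /* ... signed variants, Int128, etc. ... */
    } ppc_vsr_t;

    typedef ppc_vsr_t ppc_avr_t;   /* VMX registers now alias the VSX type */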
2018-08-21  target/ppc: simplify bcdadd/sub functions  (Yasmin Beatriz, 1 file, -31/+18)
After solving a corner case in bcdsub, this patch simplifies the logic of both the bcdadd and bcdsub instructions by removing some unnecessary local flags. This commit also rearranges some if-else conditions in bcdadd to make it easier to read.

Signed-off-by: Yasmin Beatriz <yasmins@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-08-21  target/ppc: bcdsub fix sign when result is zero  (Yasmin Beatriz, 1 file, -0/+3)
When the result of bcdsub is equal to zero, the result sign may be set to negative in some cases, and this does not follow the Power ISA specification for decimal integer arithmetic instructions.

Signed-off-by: Yasmin Beatriz <yasmins@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
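A hedged sketch of the shape of such a fix; bcd_put_digit() and BCD_PLUS_PREF_1 are the existing sign-nibble helpers in int_helper.c, while the zero test shown here is illustrative:

    /* If the magnitude is zero, force the preferred positive sign code
     * instead of inheriting a negative sign from the operands. */
    if (result.VsrD(0) == 0 && (result.VsrD(1) & ~0xFULL) == 0) {
        bcd_put_digit(&result, BCD_PLUS_PREF_1, 0);
    }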
2018-07-07  target/ppc: fix build on ppc64 host  (Laurent Vivier, 1 file, -1/+1)
When I try to build a ppc64 target on a ppc64 host (gcc 8.1.1), I get:

    .../target/ppc/int_helper.c: In function 'helper_vinsertb':
    .../target/ppc/int_helper.c:1954:32: error: array subscript 18446744073709551608 is above array bounds of 'uint8_t[16]' {aka 'unsigned char[16]'} [-Werror=array-bounds]
         memmove(&r->u8[index], &b->u8[8 - sizeof(r->element)], \
                                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    .../target/ppc/int_helper.c:1965:1: note: in expansion of macro 'VINSERT'

If we compare with the macro for ppc64le, we can see that sizeof(r->element[0]) should be used instead of sizeof(r->element). And VINSERT uses only u8, u16, u32 and u64, so the maximum value of sizeof(r->element[0]) is 8.

Suggested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-06-01  target: Do not include "exec/exec-all.h" if it is not necessary  (Philippe Mathieu-Daudé, 1 file, -1/+0)
Code change produced with:

    $ git grep '#include "exec/exec-all.h"' | \
      cut -d: -f-1 | \
      xargs egrep -L "(cpu_address_space_init|cpu_loop_|tlb_|tb_|GETPC|singlestep|TranslationBlock)" | \
      xargs sed -i.bak '/#include "exec\/exec-all.h"/d'

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180528232719.4721-10-f4bug@amsat.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-05-11  rename included C files to foo.inc.c, remove osdep.h  (Paolo Bonzini, 1 file, -1/+1)
osdep.h is only needed for files that are compiled directly. Remove it from included C source files, and rename them to *.inc.c so that scripts/clean-includes knows to skip them.

Cc: Eric Blake <eblake@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-02-21  target/*/cpu.h: remove softfloat.h  (Alex Bennée, 1 file, -0/+1)
As cpu.h is another typically widely included file which doesn't need full access to the softfloat API, we can remove the includes from here as well. Where types are needed, it is typically for float_status and the rounding modes, so we move those to softfloat-types.h as well. As a result of not having softfloat in every cpu.h inclusion, we now need to add it to the various helpers that do need the full softfloat.h definitions.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[For PPC parts]
Acked-by: David Gibson <david@gibson.dropbear.id.au>
2018-01-10  target/ppc: more use of the PPC_*() macros  (Cédric Le Goater, 1 file, -1/+1)
Also introduce utilities to manipulate bitmasks (originally from OPAL), which will be used in the model of the XIVE interrupt controller.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-10-17  target/ppc: Fix carry flag setting for shift algebraic instructions  (Sandipan Das, 1 file, -8/+8)
For POWER ISA v3.0, the XER bit CA32 needs to be set by the shift right algebraic instructions whenever the CA bit is to be set. This change affects the following instructions:

* Shift Right Algebraic Word (sraw[.])
* Shift Right Algebraic Word Immediate (srawi[.])
* Shift Right Algebraic Doubleword (srad[.])
* Shift Right Algebraic Doubleword Immediate (sradi[.])

Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
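A minimal sketch of the pattern, assuming env->ca / env->ca32 flag fields and an ISA v3.0 check along the lines QEMU uses elsewhere; the predicate shown is an assumption:

    /* After computing the algebraic shift, record the carry; on ISA
     * v3.0 implementations CA32 must mirror CA for these instructions. */
    env->ca = carry;
    if (is_isa300) {   /* assumed v3.0-or-later flag */
        env->ca32 = carry;
    }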
2017-03-01  target/ppc: introduce helper_update_ov_legacy  (Nikunj A Dadhania, 1 file, -21/+13)
Removes duplicate code and will be useful for consolidating flags.

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-01-31  ppc: Implement bcdutrunc. instruction  (Jose Ricardo Ziviani, 1 file, -0/+51)
bcdutrunc.: Decimal unsigned truncate. Works like bcdtrunc. with unsigned BCD numbers.

Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-01-31  ppc: Implement bcdtrunc. instruction  (Jose Ricardo Ziviani, 1 file, -0/+37)
bcdtrunc.: Decimal integer truncate. Given a BCD number in vrb and the number of bytes to truncate in vra, the result register will have vrb with those bits truncated.

Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-01-31  ppc: Implement bcdsr. instruction  (Jose Ricardo Ziviani, 1 file, -0/+48)
bcdsr.: Decimal shift and round. This instruction works like bcds.; however, when performing a right shift, 1 will be added to the result if the last digit was >= 5.

Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-01-31  ppc: Implement bcdus. instruction  (Jose Ricardo Ziviani, 1 file, -0/+41)
bcdus.: Decimal unsigned shift. This instruction works like bcds. but considers only unsigned BCDs (no sign in the least significant 4 bits).

Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-01-31  ppc: Implement bcds. instruction  (Jose Ricardo Ziviani, 1 file, -0/+40)
bcds.: Decimal shift. Given two registers vra and vrb, this instruction shifts the vrb value by vra bits into the result register.

Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-01-31  ppc: Fix a warning in bcdcfz code and improve BCD_DIG_BYTE macro  (Jose Ricardo Ziviani, 1 file, -3/+3)
This commit fixes a warning in the code "(i * 2) ? .. : ..", which is better written as "i ? .. : ..", and improves the BCD_DIG_BYTE macro by placing parentheses around its argument to avoid possible expansion issues like BCD_DIG_BYTE(i + j).

Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
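An illustrative before/after of the macro hardening; the exact definition is reconstructed, not quoted, and the old form is renamed so both can be shown:

    /* Before: a composite argument expands with the wrong precedence,
     * e.g. BCD_DIG_BYTE_OLD(i + j) -> (15 - (i + j / 2)). */
    #define BCD_DIG_BYTE_OLD(n)  (15 - (n / 2))

    /* After: parenthesizing the argument makes expansion safe. */
    #define BCD_DIG_BYTE(n)      (15 - ((n) / 2))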
2017-01-31  target-ppc: Add xxinsertw instruction  (Nikunj A Dadhania, 1 file, -0/+25)
xxinsertw: VSX Vector Insert Word

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-01-31  target-ppc: Add xxextractuw instruction  (Nikunj A Dadhania, 1 file, -0/+26)
xxextractuw: VSX Vector Extract Unsigned Word

Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-01-31  target-ppc: Implement bcd_is_valid function  (Jose Ricardo Ziviani, 1 file, -7/+20)
A function that checks whether all digits of a given BCD number are valid is introduced here, because more instructions will need to reuse the same code.

Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
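A hedged sketch of such a check, built on the file's existing per-digit accessor bcd_get_digit(); treating digit 0 as the sign nibble is an assumption of this sketch:

    static bool bcd_is_valid(ppc_avr_t *bcd)
    {
        int i;
        int invalid = 0;

        /* Digits 1..31 must each be a valid decimal digit (0..9);
         * bcd_get_digit() raises 'invalid' otherwise. */
        for (i = 1; i < 32; i++) {
            bcd_get_digit(bcd, i, &invalid);
            if (unlikely(invalid)) {
                return false;
            }
        }
        return true;
    }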
2017-01-31  target-ppc: add vextu[bhw][lr]x instructions  (Avinesh Kumar, 1 file, -0/+36)
vextublx: Vector Extract Unsigned Byte Left
vextuhlx: Vector Extract Unsigned Halfword Left
vextuwlx: Vector Extract Unsigned Word Left
vextubrx: Vector Extract Unsigned Byte Right-Indexed VX-form
vextuhrx: Vector Extract Unsigned Halfword Right-Indexed VX-form
vextuwrx: Vector Extract Unsigned Word Right-Indexed VX-form

Signed-off-by: Avinesh Kumar <avinesku@linux.vnet.ibm.com>
Signed-off-by: Hariharan T.S. <hari@linux.vnet.ibm.com>
[ implement using int128_rshift ]
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>