path: root/target/s390x
2019-05-16  Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20190510' into staging  [Peter Maydell, 4 files, -44/+49]

Add CPUClass::tlb_fill.
Improve tlb_vaddr_to_host for use by ARM SVE no-fault loads.

# gpg: Signature made Fri 10 May 2019 19:48:37 BST
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F

* remotes/rth/tags/pull-tcg-20190510: (27 commits)
  tcg: Use tlb_fill probe from tlb_vaddr_to_host
  tcg: Remove CPUClass::handle_mmu_fault
  tcg: Use CPUClass::tlb_fill in cputlb.c
  target/xtensa: Convert to CPUClass::tlb_fill
  target/unicore32: Convert to CPUClass::tlb_fill
  target/tricore: Convert to CPUClass::tlb_fill
  target/tilegx: Convert to CPUClass::tlb_fill
  target/sparc: Convert to CPUClass::tlb_fill
  target/sh4: Convert to CPUClass::tlb_fill
  target/s390x: Convert to CPUClass::tlb_fill
  target/riscv: Convert to CPUClass::tlb_fill
  target/ppc: Convert to CPUClass::tlb_fill
  target/openrisc: Convert to CPUClass::tlb_fill
  target/nios2: Convert to CPUClass::tlb_fill
  target/moxie: Convert to CPUClass::tlb_fill
  target/mips: Convert to CPUClass::tlb_fill
  target/mips: Tidy control flow in mips_cpu_handle_mmu_fault
  target/mips: Pass a valid error to raise_mmu_exception for user-only
  target/microblaze: Convert to CPUClass::tlb_fill
  target/m68k: Convert to CPUClass::tlb_fill
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-05-13  target/s390x: Use tcg_gen_abs_i64  [Richard Henderson, 1 file, -7/+1]

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-05-10  tcg: Use CPUClass::tlb_fill in cputlb.c  [Richard Henderson, 1 file, -6/+0]

We can now use the CPUClass hook instead of a named function. Create a static tlb_fill function to avoid other changes within cputlb.c. This also isolates the asserts within. Remove the named tlb_fill function from all of the targets.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
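As a rough illustration of the pattern this series applies (not QEMU's actual code; FooClass, Foo, fill and my_fill are invented names), a per-class function pointer replaces a globally named function, and a static wrapper keeps the existing call sites and the assert local to one file:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct FooClass {
        bool (*fill)(void *obj, unsigned long addr);   /* the class hook */
    } FooClass;

    typedef struct Foo {
        const FooClass *klass;
    } Foo;

    /* static wrapper: existing call sites in this file keep calling fill() */
    static bool fill(Foo *obj, unsigned long addr)
    {
        assert(obj->klass->fill);      /* the assert stays isolated here */
        return obj->klass->fill(obj, addr);
    }

    static bool my_fill(void *obj, unsigned long addr)
    {
        (void)obj;
        printf("handling access to 0x%lx\n", addr);
        return true;
    }

    int main(void)
    {
        FooClass klass = { .fill = my_fill };
        Foo obj = { .klass = &klass };
        return fill(&obj, 0x1000) ? 0 : 1;
    }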
2019-05-10  target/s390x: Convert to CPUClass::tlb_fill  [Richard Henderson, 4 files, -44/+55]

Cc: qemu-s390x@nongnu.org
Cc: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-04-28  Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20190426' into staging  [Peter Maydell, 1 file, -2/+2]

Add tcg_gen_extract2_*.
Deal with overflow of TranslationBlocks.
Respect access_type in io_readx.

# gpg: Signature made Fri 26 Apr 2019 18:17:01 BST
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F

* remotes/rth/tags/pull-tcg-20190426:
  cputlb: Fix io_readx() to respect the access_type
  tcg/arm: Restrict constant pool displacement to 12 bits
  tcg/ppc: Allow the constant pool to overflow at 32k
  tcg: Restart TB generation after out-of-line ldst overflow
  tcg: Restart TB generation after constant pool overflow
  tcg: Restart TB generation after relocation overflow
  tcg: Restart after TB code generation overflow
  tcg: Hoist max_insns computation to tb_gen_code
  tcg/aarch64: Support INDEX_op_extract2_{i32,i64}
  tcg/arm: Support INDEX_op_extract2_i32
  tcg/i386: Support INDEX_op_extract2_{i32,i64}
  tcg: Use extract2 in tcg_gen_deposit_{i32,i64}
  tcg: Use deposit and extract2 in tcg_gen_shifti_i64
  tcg: Add INDEX_op_extract2_{i32,i64}
  tcg: Implement tcg_gen_extract2_{i32,i64}

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-04-25  s390x/kvm: Configure page size after memory has actually been initialized  [David Hildenbrand, 5 files, -21/+27]

Right now we configure the pagesize quite early, when initializing KVM. This is long before system memory is actually allocated via memory_region_allocate_system_memory(), and therefore before memory backends are marked as mapped. Instead, let's configure the maximum page size after initializing memory in s390_memory_init(). cap_hpage_1m is still properly configured before creating any CPUs, and therefore before configuring the CPU model and eventually enabling CMMA.

This is not a fix but rather a preparation for the future, when initial memory might reside on memory backends (not the case for s390x right now). We will soon replace qemu_getrampagesize() with a function that will always return the maximum page size (not the minimum page size, which only works by pure luck so far, as there are no memory backends).

Acked-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190417113143.5551-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-04-24  tcg: Hoist max_insns computation to tb_gen_code  [Richard Henderson, 1 file, -2/+2]

In order to handle TBs that translate to too much code, we need to place control of the length of the translation in the hands of the code gen master loop.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-04-18  qom/cpu: Simplify how CPUClass:cpu_dump_state() prints  [Markus Armbruster, 2 files, -23/+22]

CPUClass method dump_statistics() takes an fprintf()-like callback and a FILE * to pass to it. Most callers pass fprintf() and stderr. log_cpu_state() passes fprintf() and qemu_log_file. hmp_info_registers() passes monitor_fprintf() and the current monitor cast to FILE *. monitor_fprintf() casts it right back, and is otherwise identical to monitor_printf(). The callback gets passed around a lot, which is tiresome. The type-punning around monitor_fprintf() is ugly.

Drop the callback, and call qemu_fprintf() instead. Also gets rid of the type-punning, since qemu_fprintf() takes NULL instead of the current monitor cast to FILE *.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190417191805.28198-15-armbru@redhat.com>
2019-04-18  target: Simplify how the TARGET_cpu_list() print  [Markus Armbruster, 2 files, -14/+9]

The various TARGET_cpu_list() take an fprintf()-like callback and a FILE * to pass to it. Their callers (vl.c's main() via list_cpus(), bsd-user/main.c's main(), linux-user/main.c's main()) all pass fprintf() and stdout. Thus, the flexibility provided by the (rather tiresome) indirection isn't actually used.

Drop the callback, and call qemu_printf() instead. Calling printf() would also work, but would make the code unsuitable for monitor context without making it simpler.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190417191805.28198-10-armbru@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
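A minimal sketch of the simplification, assuming nothing about QEMU's internals: qemu_printf() is only mimicked here with a vprintf()-based stand-in, and the CPU names are invented. The point is that one printf-like helper removes the need to thread a callback and a FILE * through every list function:

    #include <stdarg.h>
    #include <stdio.h>

    /* stand-in for qemu_printf(): prints to stdout; the real helper can
     * also route output to the monitor */
    static void my_printf(const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        vprintf(fmt, ap);
        va_end(ap);
    }

    /* before: cpu_list(FILE *f, fprintf_function cb); after: no arguments */
    static void cpu_list(void)
    {
        static const char *const names[] = { "model-a", "model-b" };  /* invented */
        for (unsigned i = 0; i < 2; i++) {
            my_printf("  %s\n", names[i]);
        }
    }

    int main(void)
    {
        cpu_list();
        return 0;
    }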
2019-04-18  s390x/kvm: Report warnings with warn_report(), not error_printf()  [Markus Armbruster, 1 file, -1/+1]

kvm_s390_mem_op() can fail in two ways: when !cap_mem_op, it returns -ENOSYS, and when kvm_vcpu_ioctl() fails, it returns -errno set by ioctl(). Its caller s390_cpu_virt_mem_rw() recovers from both failures.

kvm_s390_mem_op() prints "KVM_S390_MEM_OP failed" with error_printf() in the latter failure mode. Since this is obviously a warning, use warn_report(). Perhaps the reporting should be left to the caller. It could warn on failure other than -ENOSYS.

Cc: Thomas Huth <thuth@redhat.com>
Cc: qemu-s390x@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20190417190641.26814-9-armbru@redhat.com>
2019-03-22  trace-events: Shorten file names in comments  [Markus Armbruster, 1 file, -5/+5]

We spell out sub/dir/ in sub/dir/trace-events' comments pointing to source files. That's because when trace-events got split up, the comments were moved verbatim. Delete the sub/dir/ part from these comments. Gets rid of several misspellings.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190314180929.27722-3-armbru@redhat.com
Message-Id: <20190314180929.27722-3-armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR UNPACK *  [David Hildenbrand, 2 files, -0/+46]

Combine all variants in a single handler. As source and destination have different element sizes, we can't use gvec expansion. Expand manually. Also watch out for overlapping source and destination registers. Use a safe evaluation order depending on the operation.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-33-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR STORE WITH LENGTH  [David Hildenbrand, 4 files, -0/+40]

Very similar to VECTOR LOAD WITH LENGTH, just the opposite direction. Properly probe write access before modifying memory.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-32-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR STORE MULTIPLE  [David Hildenbrand, 2 files, -0/+32]

Similar to VECTOR LOAD MULTIPLE, just the opposite direction. Probe write access first.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-31-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR STORE ELEMENT  [David Hildenbrand, 2 files, -0/+23]

As we only store one element, there is nothing to consider regarding exceptions.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-30-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR STORE  [David Hildenbrand, 2 files, -0/+19]

Properly probe the whole access first.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-29-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Provide probe_write_access helper  [David Hildenbrand, 3 files, -0/+29]

Instead of checking e.g. the first access on every touched page, we should check the actual access, otherwise we might get false positives when Low Address Protection (LAP) is active. As probe_write() can only deal with accesses to one page, we have to loop.

Use i64 for the length although it is not strictly needed; this makes it easier to reuse TCG temps we already have in the translation functions where this will be used. Also allow it to be used from other helpers.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-28-david@redhat.com>
[CH: add missing page_check_range()]
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
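A standalone sketch of the per-page loop described above, assuming a 4 KiB page size; check_one_page() stands in for the real single-page probe and simply prints what would be checked:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u

    /* stand-in for the single-page probe (e.g. probe_write()) */
    static void check_one_page(uint64_t addr, uint64_t len)
    {
        printf("probe 0x%" PRIx64 ", %" PRIu64 " bytes\n", addr, len);
    }

    static void probe_write_access(uint64_t addr, uint64_t len)
    {
        while (len) {
            /* bytes left on the current page */
            uint64_t chunk = PAGE_SIZE - (addr & (PAGE_SIZE - 1));
            if (chunk > len) {
                chunk = len;
            }
            check_one_page(addr, chunk);
            addr += chunk;
            len -= chunk;
        }
    }

    int main(void)
    {
        probe_write_access(0x0ffc, 8);   /* crosses a 4 KiB page boundary */
        return 0;
    }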
2019-03-11  s390x/tcg: Implement VECTOR SIGN EXTEND TO DOUBLEWORD  [David Hildenbrand, 2 files, -0/+35]

Load both elements signed and store them into the two 64-bit elements.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-27-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR SELECT  [David Hildenbrand, 2 files, -0/+43]

Provide an implementation based on i64 and on real host vectors.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-26-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR SCATTER ELEMENT  [David Hildenbrand, 2 files, -0/+25]

Similar to VECTOR GATHER ELEMENT, but the other direction.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-25-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR REPLICATE IMMEDIATE  [David Hildenbrand, 2 files, -0/+16]

Like VECTOR REPLICATE, but the element to be replicated comes from an immediate.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-24-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR REPLICATE  [David Hildenbrand, 2 files, -0/+18]

Replicate via the special gvec helper.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-23-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR PERMUTE DOUBLEWORD IMMEDIATE  [David Hildenbrand, 2 files, -0/+18]

Read the whole input before modifying the destination vector.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-22-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR PERMUTE  [David Hildenbrand, 4 files, -0/+34]

Take care of overlapping inputs and outputs by using a temporary vector.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-21-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR PACK *  [David Hildenbrand, 4 files, -0/+215]

This is a big one. Luckily we only have a limited set of such nasty instructions. We'll implement all variants with helpers, except when sources and the destination don't overlap for VECTOR PACK. Provide different helpers when the cc is to be modified; we'll then return the cc via env->cc_op.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-20-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR MERGE (HIGH|LOW)  [David Hildenbrand, 2 files, -0/+46]

We cannot use gvec expansion as source and destination elements have different element numbers. So we'll expand using a fancy loop. Also, we have to take care of overlapping source and destination registers, therefore use a safe evaluation order depending on the operation.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-19-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR LOAD WITH LENGTH  [David Hildenbrand, 3 files, -0/+22]

We can reuse the helper introduced along with VECTOR LOAD TO BLOCK BOUNDARY. We just have to take care of converting the highest index into a length.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-18-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR LOAD VR FROM GRS DISJOINT  [David Hildenbrand, 2 files, -0/+9]

Fairly easy, just load from the two gprs into a single vector.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-17-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR LOAD VR ELEMENT FROM GR  [David Hildenbrand, 2 files, -0/+43]

Very similar to VECTOR LOAD GR FROM VR ELEMENT, just the opposite direction. Also provide a fast path in case we don't care about the register content.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-16-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR LOAD TO BLOCK BOUNDARY  [David Hildenbrand, 5 files, -0/+75]

Very similar to LOAD COUNT TO BLOCK BOUNDARY, but instead of only calculating, the actual vector is loaded. Use a temporary vector to not modify the real vector on exceptions. Initialize that one to zero, to not leak any data. Provide a fast path if we're loading a full vector. As we don't have gvec ool handlers for single vectors, just calculate the vector address manually.

We can reuse the helper later on for VECTOR LOAD WITH LENGTH. In fact, we are going to name it "vll" right from the beginning, because that's a better match.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-15-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
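The "bytes up to the block boundary" calculation that both LOAD COUNT TO BLOCK BOUNDARY and this load rely on can be sketched in plain C; the block size and address below are illustrative, and the 16-byte cap reflects that at most one vector is loaded:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* block_size must be a power of two (the instruction allows 64..4096) */
    static uint64_t bytes_to_block_boundary(uint64_t addr, uint64_t block_size)
    {
        uint64_t remaining = block_size - (addr & (block_size - 1));
        return remaining < 16 ? remaining : 16;   /* at most one 16-byte vector */
    }

    int main(void)
    {
        /* 7 bytes remain before the next 64-byte boundary */
        printf("%" PRIu64 "\n", bytes_to_block_boundary(0x1ff9, 64));
        return 0;
    }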
2019-03-11  s390x/tcg: Implement VECTOR LOAD MULTIPLE  [David Hildenbrand, 2 files, -0/+42]

Try to load the last element first. Access to the first element will be checked afterwards. This way, we can guarantee that the vector is not modified before we have checked for all possible exceptions. (16 vectors cannot cross more than two pages.)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-14-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
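A simplified sketch of the "touch the last element first" idea, with guest memory replaced by a small bounds-checked buffer and 64-bit stand-ins for the 128-bit vector registers; the names and sizes are invented for illustration:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    enum { NREGS = 4 };                 /* illustrative; the real range is larger */
    static uint8_t guest_mem[64];       /* pretend guest memory */

    static int read_u64(uint64_t addr, uint64_t *val)
    {
        if (addr + 8 > sizeof(guest_mem)) {
            return -1;                  /* simulated access exception */
        }
        memcpy(val, guest_mem + addr, 8);
        return 0;
    }

    static int load_multiple(uint64_t regs[NREGS], uint64_t addr)
    {
        uint64_t last;

        /* probe the highest address first ... */
        if (read_u64(addr + (NREGS - 1) * 8, &last) < 0) {
            return -1;                  /* fault before regs[] was touched */
        }
        /* ... then fill the registers; the first access is checked here too */
        for (int i = 0; i < NREGS - 1; i++) {
            if (read_u64(addr + i * 8, &regs[i]) < 0) {
                return -1;
            }
        }
        regs[NREGS - 1] = last;
        return 0;
    }

    int main(void)
    {
        uint64_t regs[NREGS] = { 0 };
        printf("%d\n", load_multiple(regs, 40));  /* faults, regs[] unchanged */
        printf("%d\n", load_multiple(regs, 0));   /* succeeds */
        return 0;
    }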
2019-03-11  s390x/tcg: Implement VECTOR LOAD LOGICAL ELEMENT AND ZERO  [David Hildenbrand, 2 files, -0/+48]

Fairly easy: zero out the vector, then insert the desired element. Load the element from memory before touching the vector.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-13-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR LOAD GR FROM VR ELEMENT  [David Hildenbrand, 2 files, -0/+65]

To avoid a helper, we have to do the actual calculation of the element address (offset in cpu_env + cpu_env) manually. Factor that out into get_vec_element_ptr_i64(). The same logic will be reused for "VECTOR LOAD VR ELEMENT FROM GR".

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-12-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR LOAD ELEMENT IMMEDIATE  [David Hildenbrand, 2 files, -0/+22]

Take care of properly sign-extending the immediate.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-11-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR LOAD ELEMENT  [David Hildenbrand, 2 files, -0/+23]

Fairly easy, load with desired size and store it into the right element.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-10-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR LOAD AND REPLICATE  [David Hildenbrand, 2 files, -0/+21]

We can use tcg_gen_gvec_dup_i64() to carry out the duplication.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-9-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR LOAD  [David Hildenbrand, 2 files, -0/+27]

When loading from memory, load both elements into temps first before modifying the target vector. Loading with strange alignment from the end of the address space will not properly wrap; we can ignore that for now.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-8-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR GENERATE MASK  [David Hildenbrand, 2 files, -0/+49]

Add gen_gvec_dupi() for handling duplication of immediates, so it can be reused later.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-7-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Implement VECTOR GENERATE BYTE MASK  [David Hildenbrand, 3 files, -0/+42]

Let's optimize it for the common cases (setting a vector to zero or all ones) - courtesy of Richard.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
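A plain-C sketch of the byte-mask expansion with the fast path mentioned above; the bit-to-byte mapping follows the instruction's left-to-right bit numbering, and this is not the gvec/TCG code itself:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void vgbm(uint8_t vec[16], uint16_t imm)
    {
        if (imm == 0x0000 || imm == 0xffff) {       /* common cases: plain fill */
            memset(vec, imm ? 0xff : 0x00, 16);
            return;
        }
        for (int i = 0; i < 16; i++) {
            /* bit 0 (leftmost) of the immediate controls byte 0, and so on */
            vec[i] = (imm & (0x8000 >> i)) ? 0xff : 0x00;
        }
    }

    int main(void)
    {
        uint8_t v[16];

        vgbm(v, 0xa001);
        for (int i = 0; i < 16; i++) {
            printf("%02x ", (unsigned)v[i]);
        }
        printf("\n");
        return 0;
    }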
2019-03-11  s390x/tcg: Implement VECTOR GATHER ELEMENT  [David Hildenbrand, 3 files, -0/+143]

Let's start with a more involved one, but it is the first in the list of vector support instructions (introduced with the vector facility). The good thing is that we need a lot of basic infrastructure for this: reading and writing vector elements as well as checking element validity.

All vector-instruction-related translation functions will reside in translate_vx.inc.c, to be included in translate.c - similar to how other architectures handle it. While at it, directly add some documentation (which contains parts about things added in follow-up patches, but splitting this up does not make too much sense). Also add the ES_* defines, which are heavily used later.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Utilities for vector instruction helpers  [David Hildenbrand, 1 file, -0/+101]

We'll have to read/write vector elements quite frequently from helpers. The tricky bit is properly taking care of endianness. Handle it similarly to aarch64.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-4-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
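A simplified, host-endian-independent sketch of "read element N of size 2^es from a 128-bit vector". The real helpers avoid the per-byte loop by exploiting the host byte order, similar to aarch64; here the vector is simply kept as 16 bytes in big-endian element order, and the ES_* constants only mirror the element-size idea mentioned in the series:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    enum { ES_8 = 0, ES_16 = 1, ES_32 = 2, ES_64 = 3 };   /* log2(element size) */

    static uint64_t read_element(const uint8_t vec[16], int enr, int es)
    {
        int bytes = 1 << es;
        uint64_t val = 0;

        /* element enr starts at byte enr * bytes; bytes are big-endian */
        for (int i = 0; i < bytes; i++) {
            val = (val << 8) | vec[enr * bytes + i];
        }
        return val;
    }

    int main(void)
    {
        uint8_t v[16];

        for (int i = 0; i < 16; i++) {
            v[i] = (uint8_t)i;
        }
        /* element 1 of size 32 bits covers bytes 4..7 */
        printf("0x%" PRIx64 "\n", read_element(v, 1, ES_32));   /* 0x4050607 */
        return 0;
    }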
2019-03-11  s390x/tcg: Check vector register instructions at central point  [David Hildenbrand, 2 files, -0/+19]

Check them at a central point. We'll use a new instruction flag (IF_VEC) to flag all vector instructions and handle it very similarly to AFP, whereby we use another unused position in the PSW mask to store the state of vector register enablement per translation block.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-11  s390x/tcg: Define vector instruction formats  [David Hildenbrand, 2 files, -1/+63]

These are the new instruction formats related to vector instructions, up to the z14 (a.k.a. latest PoP). As v2 appears (like x2 in VRX) with d2/b2 in VRV, we have to assign it a higher field number to avoid collisions.

Properly take care of the MSB (to be able to address 32 registers) for each vector register field stored in the RXB field (bits 36-39 for all vector instructions). As we have 32 vector registers and the "v" fields are only 4 bits in size, the 5th bit is stored in the RXB. We use a new type to indicate that the MSB has to be fetched from the RXB.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190307121539.12842-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
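A small sketch of combining a 4-bit "v" field with its RXB bit to form the full 5-bit register number (0..31). The assignment of RXB bits to operand positions follows the description above and should be treated as illustrative:

    #include <stdio.h>

    /* rxb holds instruction bits 36-39; the leftmost bit extends the first
     * vector-register field, the next bit the second field, and so on */
    static unsigned vreg_number(unsigned v_field, unsigned rxb, unsigned pos)
    {
        unsigned msb = (rxb >> (3 - pos)) & 1;    /* pos: 0..3 */
        return (msb << 4) | (v_field & 0xf);
    }

    int main(void)
    {
        /* a v field of 0x2 whose RXB bit is set selects register 18 */
        printf("%u\n", vreg_number(0x2, 0x8, 0));
        return 0;
    }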
2019-03-11  target/s390x: Remove non-architected entries from struct LowCore  [Thomas Huth, 1 file, -39/+2]

There are some fields in our struct LowCore which apparently have been copied from a very old version of the Linux kernel. These fields are not architected in the "Principles of Operation", and only used on these memory locations in Linux kernels older than 2.6.29. Newer Linux kernels moved the entries to different locations or are not using them at all anymore. Thus we should never access these fields from the QEMU side, so they should be removed.

While we're at it, also add a QEMU_BUILD_BUG_ON() statement to assert that struct LowCore has the right size.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1551775581-27989-1-git-send-email-thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
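The size assertion can be illustrated with C11's _Static_assert, which is the kind of compile-time check QEMU_BUILD_BUG_ON boils down to; the structure and its expected size below are made up for the example:

    #include <stdint.h>

    struct demo_lowcore {
        uint64_t regs[16];
        uint8_t  pad[8192 - 16 * 8];
    };

    /* fails to compile if someone changes the layout without noticing */
    _Static_assert(sizeof(struct demo_lowcore) == 8192,
                   "struct demo_lowcore has the wrong size");

    int main(void)
    {
        return 0;
    }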
2019-03-04  s390x: Add floating-point extension facility to "qemu" cpu model  [David Hildenbrand, 1 file, -0/+5]

The floating-point extension facility implemented certain changes to BFP, HFP and DFP instructions. As we don't implement HFP/DFP, we can ignore those completely. Related to BFP, the changes include:
- SET BFP ROUNDING MODE (SRNMB) instruction
- BFP-rounding-mode field in the FPC register is changed to 3 bits
- CONVERT FROM LOGICAL instructions
- CONVERT TO LOGICAL instructions
- Changes (rounding mode + XxC) added to
  -- CONVERT TO FIXED
  -- CONVERT FROM FIXED
  -- LOAD FP INTEGER
  -- LOAD ROUNDED
  -- DIVIDE TO INTEGER

For TCG, we don't implement DIVIDE TO INTEGER, and it is harder to implement, so skip that. Also, as we don't implement PFPO, we can skip changes to that as well. The other parts are now implemented, so we can indicate the facility.

z14 PoP mentions that "The floating-point extension facility is installed in the z/Architecture architectural mode. When bit 37 is one, bit 42 is also one.", meaning that the DFP (decimal-floating-point) facility also has to be indicated. We can ignore that for now.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-16-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04  s390x/tcg: Handle all rounding modes overwritten by BFP instructions  [David Hildenbrand, 1 file, -2/+11]

"round to nearest with ties away from 0" maps to float_round_ties_away. "round to prepare for shorter precision" maps to float_round_to_odd. As all instructions properly check for valid rounding modes in translate.c we can add an assert. Fix one missing empty line.

Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-15-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04  s390x/tcg: Implement rounding mode and XxC for LOAD ROUNDED  [David Hildenbrand, 4 files, -15/+44]

With the floating-point extension facility, LOAD ROUNDED has a rounding mode specification and the inexact-exception control (XxC). Handle them just like e.g. LOAD FP INTEGER.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-14-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-03-04  s390x/tcg: Implement XxC and checks for most FP instructions  [David Hildenbrand, 2 files, -126/+247]

With the floating-point extension facility,
- CONVERT FROM LOGICAL
- CONVERT TO LOGICAL
- CONVERT TO FIXED
- CONVERT FROM FIXED
- LOAD FP INTEGER
have both a rounding-mode specification and the inexact-exception control (XxC). Other instructions will be handled separately.

Check for valid rounding modes and also forward the XxC (via m4). To avoid a lot of boilerplate code and changes to the helpers, combine the m3 and m4 fields into a single 32-bit TCG variable. Perform checks at a central place, taking into account whether the m3 or m4 field was ignored before the floating-point extension facility was introduced.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-13-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
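A sketch of packing two 4-bit fields into one 32-bit value so that a single helper argument can carry both. The particular layout used here (m3 in bits 4-7, m4 in bits 0-3) is an assumption for illustration, not necessarily the layout QEMU picked:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t pack_m34(uint8_t m3, uint8_t m4)
    {
        return ((uint32_t)(m3 & 0xf) << 4) | (m4 & 0xf);
    }

    static uint8_t unpack_m3(uint32_t m34) { return (m34 >> 4) & 0xf; }
    static uint8_t unpack_m4(uint32_t m34) { return m34 & 0xf; }

    int main(void)
    {
        uint32_t m34 = pack_m34(5, 2);

        printf("m3=%d m4=%d\n", unpack_m3(m34), unpack_m4(m34));
        return 0;
    }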
2019-03-04  s390x/tcg: Prepare for IEEE-inexact-exception control (XxC)  [David Hildenbrand, 1 file, -57/+57]

Some instructions allow suppressing IEEE inexact exceptions.

z14 PoP, 9-23, "Suppression of Certain IEEE Exceptions":
  IEEE-inexact-exception control (XxC): Bit 1 of the M4 field is the XxC bit. If XxC is zero, recognition of IEEE-inexact exception is not suppressed; if XxC is one, recognition of IEEE-inexact exception is suppressed.

In particular, handling for overflow/underflow remains as is; inexact is reported along with them, as per z14 PoP, 9-23, "Suppression of Certain IEEE Exceptions":
  For example, the IEEE-inexact-exception control (XxC) has no effect on the DXC; that is, the DXC for IEEE-overflow or IEEE-underflow exceptions along with the detail for exact, inexact and truncated, or inexact and incremented, is reported according to the actual condition.

Follow-up patches will wire it up correctly for the applicable instructions.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-12-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
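Testing the XxC bit can be sketched as a simple bit check: the M4 field is 4 bits wide and the PoP numbers its bits 0..3 from the left, so "bit 1" corresponds to the value 4 of the field. The helper name below is invented:

    #include <stdbool.h>
    #include <stdio.h>

    static bool xxc_suppresses_inexact(unsigned m4)
    {
        /* PoP bit 1 of a 4-bit field is the second bit from the left */
        return (m4 & 0x4) != 0;
    }

    int main(void)
    {
        printf("%d %d\n", xxc_suppresses_inexact(0x4), xxc_suppresses_inexact(0x1));
        return 0;
    }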
2019-03-04  s390x/tcg: Refactor saving/restoring the bfp rounding mode  [David Hildenbrand, 2 files, -43/+71]

We want to reuse this in the context of vector instructions. So use better-matching names and introduce s390_restore_bfp_rounding_mode(). While at it, add proper newlines.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-11-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>