path: root/exec.c
Age | Commit message | Author | Files | Lines
2012-12-08 | exec: Advise huge pages for the TCG code gen buffer | Richard Henderson | 1 | -0/+2
After allocating 32MB or more contiguous memory, huge pages would seem to be ideal. Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
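The mechanism is a plain madvise() hint on the mapping that backs the buffer. A minimal, self-contained sketch (buffer size, flags and function name are illustrative, not QEMU's code):

    #include <stddef.h>
    #include <sys/mman.h>

    /* Illustrative sketch: map a large executable buffer and hint that the
     * kernel may back it with transparent huge pages. */
    static void *alloc_code_buffer(size_t size)
    {
        void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            return NULL;
        }
    #ifdef MADV_HUGEPAGE
        /* Best effort: ignore failure on kernels without THP support. */
        madvise(buf, size, MADV_HUGEPAGE);
    #endif
        return buf;
    }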
2012-11-12 | dma: Define dma_context_memory and use in sysbus-ohci | Peter Maydell | 1 | -0/+5
Define a new global dma_context_memory which is a DMAContext corresponding to the global address_space_memory AddressSpace. This can be used by sysbus peripherals like sysbus-ohci which need to do DMA. In particular, use it in the sysbus-ohci device, which fixes a segfault when attempting to use that device. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
2012-11-03 | Merge branch 'trivial-patches' of git://github.com/stefanha/qemu | Blue Swirl | 1 | -6/+9
* 'trivial-patches' of git://github.com/stefanha/qemu:
  pc: Drop redundant test for ROM memory region
  exec: make some functions static
  target-ppc: make some functions static
  ppc: add missing static
  vnc: add missing static
  vl.c: add missing static
  target-sparc: make do_unaligned_access static
  m68k: Return semihosting errno values correctly
  cadence_uart: More debug information
Conflicts:
  target-m68k/m68k-semi.c
2012-11-03 | tcg: Add extended GETPC mechanism for MMU helpers with ldst optimization | Yeongkyoon Lee | 1 | -0/+11
Add GETPC_EXT, which is used by MMU helpers to selectively calculate the code address of the guest memory access when called from qemu_ld/st optimized code or from a C function. Currently, it supports only i386 and x86-64 hosts. Signed-off-by: Yeongkyoon Lee <yeongkyoon.lee@samsung.com> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-11-01 | exec: make some functions static | Blue Swirl | 1 | -6/+9
Signed-off-by: Blue Swirl <blauwirbel@gmail.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2012-10-31 | cpu: Move thread_id to CPUState | Andreas Färber | 1 | -1/+4
Signed-off-by: Andreas Färber <afaerber@suse.de>
2012-10-31 | cpus: Pass CPUState to qemu_cpu_kick() | Andreas Färber | 1 | -1/+1
CPUArchState is no longer needed there. Signed-off-by: Andreas Färber <afaerber@suse.de>
2012-10-31 | cpus: Pass CPUState to qemu_cpu_is_self() | Andreas Färber | 1 | -1/+2
Change return type to bool, move to include/qemu/cpu.h and add documentation. Signed-off-by: Andreas Färber <afaerber@suse.de> Reviewed-by: Igor Mammedov <imammedo@redhat.com> [AF: Updated new caller qemu_in_vcpu_thread()]
2012-10-23 | Rename target_phys_addr_t to hwaddr | Avi Kivity | 1 | -81/+81
target_phys_addr_t is unwieldy, violates the C standard (_t suffixes are reserved), and its purpose doesn't match the name (most target_phys_addr_t addresses are not target specific). Replace it with a finger-friendly, standards-conformant hwaddr. Outstanding patchsets can be fixed up with the command git rebase -i --exec 'find -name "*.[ch]" | xargs sed -i s/target_phys_addr_t/hwaddr/g' origin Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-10-22 | Call MADV_HUGEPAGE for guest RAM allocations | Luiz Capitulino | 1 | -0/+1
This makes it possible for QEMU to use transparent huge pages (THP) when transparent_hugepage/enabled=madvise. Otherwise THP is only used when it's enabled system wide. Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com> Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-10-22 | Merge remote-tracking branch 'quintela/migration-next-20121017' into staging | Anthony Liguori | 1 | -1/+1
* quintela/migration-next-20121017: (41 commits)
  cpus: create qemu_in_vcpu_thread()
  savevm: make qemu_file_put_notify() return errors
  savevm: un-export qemu_file_set_error()
  block-migration: handle errors with the return codes correctly
  block-migration: Switch meaning of return value
  block-migration: make flush_blks() return errors
  buffered_file: buffered_put_buffer() don't need to set last_error
  savevm: Only qemu_fflush() can generate errors
  savevm: make qemu_fill_buffer() be consistent
  savevm: unexport qemu_ftell()
  savevm: unfold qemu_fclose_internal()
  savevm: make qemu_fflush() return an error code
  savevm: Remove qemu_fseek()
  virtio-net: use qemu_get_buffer() in a temp buffer
  savevm: unexport qemu_fflush
  migration: make migrate_fd_wait_for_unfreeze() return errors
  buffered_file: make buffered_flush return the error code
  buffered_file: callers of buffered_flush() already check for errors
  buffered_file: We can access directly to bandwidth_limit
  buffered_file: unfold migrate_fd_close
  ...
2012-10-22 | Merge remote-tracking branch 'qemu-kvm/memory/dma' into staging | Anthony Liguori | 1 | -184/+133
* qemu-kvm/memory/dma: (23 commits)
  pci: honor PCI_COMMAND_MASTER
  pci: give each device its own address space
  memory: add address_space_destroy()
  dma: make dma access its own address space
  memory: per-AddressSpace dispatch
  s390: avoid reaching into memory core internals
  memory: use AddressSpace for MemoryListener filtering
  memory: move tcg flush into a tcg memory listener
  memory: move address_space_memory and address_space_io out of memory core
  memory: manage coalesced mmio via a MemoryListener
  xen: drop no-op MemoryListener callbacks
  kvm: drop no-op MemoryListener callbacks
  xen_pt: drop no-op MemoryListener callbacks
  vfio: drop no-op MemoryListener callbacks
  memory: drop no-op MemoryListener callbacks
  memory: provide defaults for MemoryListener operations
  memory: maintain a list of address spaces
  memory: export AddressSpace
  memory: prepare AddressSpace for exporting
  xen_pt: use separate MemoryListeners for memory and I/O
  ...
2012-10-22 | memory: add address_space_destroy() | Avi Kivity | 1 | -0/+10
Since address spaces can be created dynamically by device hotplug, they can also be destroyed dynamically. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-10-22 | memory: per-AddressSpace dispatch | Avi Kivity | 1 | -67/+107
Currently we use a global radix tree to dispatch memory access. This only works with a single address space; to support multiple address spaces we make the radix tree a member of AddressSpace (via an intermediate structure AddressSpaceDispatch to avoid exposing too many internals). A side effect is that address_space_io also gains a dispatch table. When we remove all the pre-memory-API I/O registrations, we can use that for dispatching I/O and get rid of the original I/O dispatch. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-10-22 | memory: use AddressSpace for MemoryListener filtering | Avi Kivity | 1 | -5/+5
Using the AddressSpace type reduces confusion, as you can't accidentally supply the MemoryRegion you're interested in. Reviewed-by: Anthony Liguori <aliguori@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-10-22 | memory: move tcg flush into a tcg memory listener | Avi Kivity | 1 | -2/+6
We plan to make the core listener listen to all address spaces; this will cause many more flushes than necessary. Prepare for that by moving the flush into a tcg-specific listener. Later we can avoid registering the listener if tcg is disabled. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-10-22 | memory: move address_space_memory and address_space_io out of memory core | Avi Kivity | 1 | -2/+7
With this change, memory.c no longer knows anything about special address spaces, so it is prepared for AddressSpace based DMA. Reviewed-by: Anthony Liguori <aliguori@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-10-22 | memory: manage coalesced mmio via a MemoryListener | Avi Kivity | 1 | -13/+0
Instead of calling a global function on coalesced mmio changes, which routes the call to kvm if enabled, add coalesced mmio hooks to MemoryListener and make kvm use that instead. The motivation is support for multiple address spaces (which means we need to filter the call on the right address space), but the result is cleaner as well. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-10-20 | exec: Make MIN_CODE_GEN_BUFFER_SIZE private to exec.c | Richard Henderson | 1 | -0/+4
It is used nowhere else, and the corresponding MAX_CODE_GEN_BUFFER_SIZE also lives there. Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-20 | exec: Allocate code_gen_prologue from code_gen_buffer | Richard Henderson | 1 | -19/+11
We had a hack for arm and sparc, allocating code_gen_prologue to a special section, which honestly does no good in certain cases. We've already got limits on code_gen_buffer_size to ensure that all TBs can use direct branches between themselves; reuse this limit to ensure the prologue is also reachable. As a bonus, we get to avoid marking a page of the main executable's data segment as executable. Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-20 | exec: Do not use absolute address hints for code_gen_buffer with -fpie | Richard Henderson | 1 | -1/+6
The hard-coded addresses inside alloc_code_gen_buffer only make sense if we're building an executable that will actually run at the address we've put into the linker scripts. When we're building with -fpie, the executable will run at some random location chosen by the kernel. We get better placement for the code_gen_buffer if we allow the kernel to place the memory, as it will tend to place it near the executable, based on the PROT_EXEC bit. Since code_gen_prologue is always inside the executable, this effect is easily seen at the end of most TBs, with the exit_tb opcode, and with any calls to helper functions. Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
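A hedged sketch of the placement difference described above (the fixed hint value is only an example, not the address QEMU uses):

    #include <stddef.h>
    #include <sys/mman.h>

    /* Illustrative only. With -fpie there is no meaningful compile-time hint,
     * so pass NULL and let the kernel place the executable mapping; without
     * -fpie a static hint can still be tried (no MAP_FIXED, so the kernel is
     * free to ignore it). */
    static void *map_code_buffer(size_t size)
    {
    #ifdef __PIE__
        void *hint = NULL;                 /* kernel chooses, near the binary */
    #else
        void *hint = (void *)0x40000000;   /* example hint only */
    #endif
        return mmap(hint, size, PROT_READ | PROT_WRITE | PROT_EXEC,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    }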
2012-10-20 | exec: Don't make DEFAULT_CODE_GEN_BUFFER_SIZE too large | Richard Henderson | 1 | -1/+5
For ARM we cap the buffer size to 16MB. Do not allocate 32MB in that case. Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-10-20 | exec: Split up and tidy code_gen_buffer | Richard Henderson | 1 | -92/+103
It now consists of:
  A macro definition of MAX_CODE_GEN_BUFFER_SIZE with host-specific values,
  A function size_code_gen_buffer that applies most of the reasoning for choosing a buffer size,
  Three variations of a function alloc_code_gen_buffer that contain all of the logic for allocating executable memory via a given allocation mechanism.
Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
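A rough sketch of what such a sizing helper looks like; the constants below are placeholders, not the values chosen in exec.c:

    #include <stddef.h>

    /* Placeholder limits for illustration only. */
    #define MIN_CODE_GEN_BUFFER_SIZE     (1 * 1024 * 1024)
    #define MAX_CODE_GEN_BUFFER_SIZE     (32 * 1024 * 1024)
    #define DEFAULT_CODE_GEN_BUFFER_SIZE (16 * 1024 * 1024)

    static size_t size_code_gen_buffer(size_t tb_size)
    {
        /* Fall back to a default when no size was requested, then clamp to
         * the range the host and TCG backend can support. */
        if (tb_size == 0) {
            tb_size = DEFAULT_CODE_GEN_BUFFER_SIZE;
        }
        if (tb_size < MIN_CODE_GEN_BUFFER_SIZE) {
            tb_size = MIN_CODE_GEN_BUFFER_SIZE;
        }
        if (tb_size > MAX_CODE_GEN_BUFFER_SIZE) {
            tb_size = MAX_CODE_GEN_BUFFER_SIZE;
        }
        return tb_size;
    }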
2012-10-17 | ram: Export last_ram_offset() | Juan Quintela | 1 | -1/+1
It is the only way of knowing the RAM size. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
2012-10-15 | memory: drop no-op MemoryListener callbacks | Avi Kivity | 1 | -96/+0
Removes quite a bit of useless code. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-10-15 | memory: rename 'exec-obsolete.h' | Avi Kivity | 1 | -2/+1
exec-obsolete.h used to hold pre-memory-API functions that were used from device code prior to the transition to the memory API. Now that the transition is complete, the name no longer describes the file. The functions still need to be merged better into the memory core, but there's no danger of anyone using them. Reviewed-by: Anthony Liguori <aliguori@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-10-05 | cpu_dump_state: move DUMP_FPU and DUMP_CCOP flags from x86-only to generic | Peter Maydell | 1 | -10/+2
Move the DUMP_FPU and DUMP_CCOP flags for cpu_dump_state() from being x86-specific flags to being generic ones. This allows us to drop some TARGET_I386 ifdefs in various places, and means that we can (potentially) be more consistent across architectures about which monitor commands or debug abort printouts include FPU register contents and info about QEMU's condition-code optimisations. Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2012-10-03 | exec, memory: Call to xen_modified_memory. | Anthony PERARD | 1 | -0/+1
This patch adds some calls to xen_modified_memory to notify Xen about dirty bits during migration. Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Reviewed-by: Avi Kivity <avi@redhat.com>
2012-10-03 | exec: Introduce helper to set dirty flags. | Anthony PERARD | 1 | -35/+17
This new helper/hook is used in the next patch to add an extra call in a single place. Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Reviewed-by: Avi Kivity <avi@redhat.com>
2012-09-21 | tcg-sparc: Assume v9 cpu always, i.e. force v8plus in 32-bit mode. | Richard Henderson | 1 | -3/+3
The current code doesn't actually work in 32-bit mode at all. Since no one really noticed, drop the complication of v7 and v8 cpus. Eliminate the --sparc_cpu configure option and standardize macro testing on TCG_TARGET_REG_BITS / HOST_LONG_BITS. Signed-off-by: Richard Henderson <rth@twiddle.net>
2012-09-21 | tcg-sparc: Don't MAP_FIXED on top of the program | Richard Henderson | 1 | -4/+2
The address we pick in sparc64.ld is also 0x60000000, so doing a fixed map on top of that is guaranteed to blow up. Choosing 0x40000000 is exactly right for the max of code_gen_buffer_size set below. No need to ever use MAP_FIXED. While getting our desired address helps optimize the generated code, we won't fail if we don't get it. Signed-off-by: Richard Henderson <rth@twiddle.net>
2012-09-17 | cpu_physical_memory_write_rom() needs to do TB invalidates | David Gibson | 1 | -0/+7
cpu_physical_memory_write_rom(), despite the name, can also be used to write images into RAM - and will often be used that way if the machine uses load_image_targphys() into RAM addresses. However, cpu_physical_memory_write_rom(), unlike cpu_physical_memory_rw(), doesn't invalidate any cached TBs which might be affected by the region written. This was breaking reset (under full emu) on the pseries machine - we loaded our firmware image into RAM, and while executing it we rewrote the code at the entry point (correctly causing a TB invalidate/refresh). When we reset, the firmware image was reloaded, but the TB from the rewrite was still active and caused us to get an illegal instruction trap. This patch fixes the bug by duplicating the tb invalidate code from cpu_physical_memory_rw() in cpu_physical_memory_write_rom(). Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
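The ordering the fix enforces can be shown with a self-contained sketch; tb_invalidate_range() below is a stand-in stub, not a real QEMU function:

    #include <stdint.h>
    #include <string.h>

    /* Stub standing in for dropping translated blocks over [addr, addr+len). */
    static void tb_invalidate_range(uint64_t addr, uint64_t len)
    {
        (void)addr;
        (void)len;
    }

    /* Sketch: copy an image into guest RAM, then invalidate cached
     * translations for the written range - the step write_rom() was missing. */
    static void rom_write(uint8_t *guest_ram, uint64_t addr,
                          const uint8_t *buf, uint64_t len)
    {
        memcpy(guest_ram + addr, buf, len);
        tb_invalidate_range(addr, len);
    }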
2012-09-17 | add -machine mem-merge=on|off option | Luiz Capitulino | 1 | -3/+16
It allows disabling memory merge support (KSM on Linux), which is otherwise enabled by default. Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com> Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-08-16 | memory: add -machine dump-guest-core=on|off | Jason Baron | 1 | -0/+21
Add a new '[,dump-guest-core=on|off]' option to the '-machine' option. When 'dump-guest-core=off' is specified, guest memory is omitted from the core dump. The default behavior continues to be to include guest memory when a core dump is triggered. In my testing, this brought the core dump size down from 384MB to 6MB on a 2GB guest. Is anything additional required to preserve this setting for migration or savevm? I don't believe so.
Changelog:
v3: Eliminate globals as per Anthony's suggestion; set no dump from qemu_ram_remap() as well
v2: move the option from -m to -machine, rename option dump -> dump-guest-core
Signed-off-by: Jason Baron <jbaron@redhat.com> Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
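The underlying mechanism is another per-mapping madvise() hint; a hedged sketch (the function name is illustrative, not QEMU's code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/mman.h>

    /* Illustrative sketch: when dump-guest-core=off, ask the kernel to leave
     * guest RAM out of core dumps. MADV_DONTDUMP needs Linux 3.4 or newer. */
    static void apply_dump_guest_core(void *ram, size_t size, bool dump_guest_core)
    {
    #ifdef MADV_DONTDUMP
        if (!dump_guest_core) {
            madvise(ram, size, MADV_DONTDUMP);
        }
    #else
        (void)ram;
        (void)size;
        (void)dump_guest_core;
    #endif
    }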
2012-08-11 | exec.c: fix dirty bitmap reallocation | Igor Mitsyanko | 1 | -0/+2
For each newly created RAM block, the dirty bitmap is reallocated with g_realloc, which doesn't make any promises about the initial content of the new extra data in the returned buffer. In theory, we initialize this new data with the cpu_physical_memory_set_dirty_range() call. The problem is, cpu_physical_memory_set_dirty_range() has a side effect of incrementing the ram_list.dirty_pages variable, but only for pages which are not already dirty. And page "cleanliness" is determined using the same not yet initialized dirty bitmap we've just reallocated. This results in an inconsistency between the real dirty page number and the value in the ram_list.dirty_pages variable, which in turn could (and will) result in errors during VM migration. Zero initialize new dirty bitmap bytes to fix this problem. Signed-off-by: Igor Mitsyanko <i.mitsyanko@samsung.com> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
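The fix amounts to zeroing the bytes that g_realloc() adds; a minimal sketch, with illustrative names rather than the exec.c variables:

    #include <glib.h>
    #include <stdint.h>
    #include <string.h>

    /* Grow a dirty bitmap and explicitly clear the newly added bytes, since
     * g_realloc() leaves them uninitialized. */
    static uint8_t *grow_dirty_bitmap(uint8_t *bitmap, size_t old_bytes,
                                      size_t new_bytes)
    {
        bitmap = g_realloc(bitmap, new_bytes);
        if (new_bytes > old_bytes) {
            memset(bitmap + old_bytes, 0, new_bytes - old_bytes);
        }
        return bitmap;
    }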
2012-08-03 | exec.c: Remove out of date comment | Peter Maydell | 1 | -8/+0
Remove an out of date comment: this comment used to be attached to cpu_register_physical_memory_log(), before commit 0f0cb164 accidentally inserted a couple of other functions between the comment and its function. It is in any case obsolete since (a) the function arguments it refers to have been replaced with a single MemoryRegionSection* argument and (b) the inability to handle regions whose offset_within_address_space and offset_within_region aren't equally aligned was fixed as part of the rewrite of this code. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2012-08-03 | exec.c: Use subpages for large unaligned mappings | Tyler Hall | 1 | -4/+9
Registering a multi-page memory region that is non-page-aligned results in a subpage from the start to the page boundary, some number of full pages, and possibly another subpage from the last page boundary to the end. The full pages will have a value for offset_within_region that is not a multiple of TARGET_PAGE_SIZE. Accesses through softmmu are unable to handle this and will segfault. Handling full pages through subpages is not optimal, but only non-page-aligned mappings take the penalty. Signed-off-by: Tyler Hall <tylerwhall@gmail.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Avi Kivity <avi@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
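The splitting described above is plain page arithmetic; a sketch with a stand-in page size (names and constants are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SZ 4096ULL   /* stand-in for TARGET_PAGE_SIZE */

    /* Illustrative: split [start, start+len) into an unaligned head subpage,
     * a run of fully covered pages, and an unaligned tail subpage. */
    static void split_region(uint64_t start, uint64_t len)
    {
        uint64_t end = start + len;
        uint64_t head_end = (start + PAGE_SZ - 1) & ~(PAGE_SZ - 1);
        uint64_t tail_start = end & ~(PAGE_SZ - 1);

        if (head_end > end) {             /* region fits within one page */
            head_end = tail_start = end;
        }
        if (start < head_end) {
            printf("head subpage: %#llx..%#llx\n",
                   (unsigned long long)start, (unsigned long long)head_end);
        }
        if (head_end < tail_start) {
            printf("full pages:   %#llx..%#llx\n",
                   (unsigned long long)head_end, (unsigned long long)tail_start);
        }
        if (tail_start < end) {
            printf("tail subpage: %#llx..%#llx\n",
                   (unsigned long long)tail_start, (unsigned long long)end);
        }
    }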
2012-08-03 | exec.c: Fix off-by-one error in register_subpage | Tyler Hall | 1 | -1/+1
subpage_register() expects "end" to be the last byte in the mapping. Registering a non-page-aligned memory region that extends up to or beyond a page boundary causes subpage_register() to silently fail through the (end >= PAGE_SIZE) check. This bug does not cause noticeable problems for mappings that do not extend to a page boundary, though they do register an extra byte. Signed-off-by: Tyler Hall <tylerwhall@gmail.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Avi Kivity <avi@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
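The convention at issue is inclusive versus exclusive end addresses; a small sketch (constants are stand-ins, not the exec.c code):

    #include <stdint.h>

    #define PAGE_SZ 4096ULL   /* stand-in for TARGET_PAGE_SIZE */

    /* Mirrors the (end >= PAGE_SIZE) guard: "end" must be the last byte
     * covered (inclusive), not one past it. A region reaching the end of the
     * page has an inclusive end of PAGE_SZ - 1 and passes the guard; passing
     * the exclusive end (PAGE_SZ) would be silently rejected. */
    static int subpage_end_ok(uint64_t start, uint64_t size)
    {
        uint64_t end_inclusive = start + size - 1;
        return end_inclusive < PAGE_SZ;
    }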
2012-07-18 | Merge remote-tracking branch 'qemu-kvm/uq/master' into staging | Anthony Liguori | 1 | -4/+4
* qemu-kvm/uq/master:
  virtio: move common irqfd handling out of virtio-pci
  virtio: move common ioeventfd handling out of virtio-pci
  event_notifier: add event_notifier_set_handler
  memory: pass EventNotifier, not eventfd
  ivshmem: wrap ivshmem_del_eventfd loops with transaction
  ivshmem: use EventNotifier and memory API
  event_notifier: add event_notifier_init_fd
  event_notifier: remove event_notifier_test
  event_notifier: add event_notifier_set
  apic: Defer interrupt updates to VCPU thread
  apic: Reevaluate pending interrupts on LVT_LINT0 changes
  apic: Resolve potential endless loop around apic_update_irq
  kvm: expose tsc deadline timer feature to guest
  kvm_pv_eoi: add flag support
  kvm: Don't abort on kvm_irqchip_add_msi_route()
2012-07-12 | memory: pass EventNotifier, not eventfd | Paolo Bonzini | 1 | -4/+4
Under Win32, EventNotifiers will not have event_notifier_get_fd, so we cannot call it in common code such as hw/virtio-pci.c. Pass a pointer to the notifier, and only retrieve the file descriptor in kvm-specific code. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-07-10 | s390: autodetect map private | Christian Borntraeger | 1 | -15/+3
By default qemu will use MAP_PRIVATE for guest pages. This will write protect pages and thus break on s390 systems that don't support this feature. Therefore qemu has a hack to always use MAP_SHARED for s390. But MAP_SHARED has other problems (no dirty page tracking, a lot more swap overhead, etc.). Newer systems allow the distinction via KVM_CAP_S390_COW. With this feature qemu can use the standard qemu alloc if available, otherwise it will use the old s390 hack. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com> Acked-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Alexander Graf <agraf@suse.de>
2012-06-29 | dirty bitmap: abstract its use | Juan Quintela | 1 | -2/+1
Always use accessors to read/set the dirty bitmap. Signed-off-by: Juan Quintela <quintela@redhat.com>
2012-06-29 | Only TCG needs TLB handling | Juan Quintela | 1 | -10/+21
Refactor the code that is only needed for tcg into a static function. Call it only when tcg is enabled. We can't refactor it to a dummy function in the kvm case, as qemu can be compiled with both tcg and kvm at the same time. Signed-off-by: Juan Quintela <quintela@redhat.com>
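A hedged sketch of the refactoring's shape; tcg_enabled() is QEMU's real runtime predicate (declared extern here only to keep the sketch short), while the other names are illustrative:

    #include <stdbool.h>

    extern bool tcg_enabled(void);   /* QEMU's runtime check for TCG */

    /* Illustrative static helper holding the TCG-only TLB work. */
    static void tlb_reset_dirty_ranges(void)
    {
        /* ...walk each vCPU's TLB and reset cached dirty information... */
    }

    static void dirty_log_sync(void)
    {
        /* KVM and TCG can be built into the same binary, so this has to be
         * a runtime check rather than an #ifdef. */
        if (tcg_enabled()) {
            tlb_reset_dirty_ranges();
        }
    }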
2012-06-21 | qemu-log: move logging to qemu-log.c | Blue Swirl | 1 | -122/+0
Move logging functions from exec.c to qemu-log.c, compile it only once. Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-06-18 | qdev: Use wrapper for qdev_get_path | Anthony Liguori | 1 | -2/+2
This makes it easier to remove it from BusInfo. Signed-off-by: Anthony Liguori <aliguori@us.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> [AF: Drop now unnecessary NULL initialization in scsibus_get_dev_path()] Signed-off-by: Andreas Färber <afaerber@suse.de>
2012-06-11 | Merge remote-tracking branch 'stefanha/trivial-patches' into staging | Anthony Liguori | 1 | -10/+12
* stefanha/trivial-patches:
  configure: report missing libraries for virtfs
  trace/simple.c: fix deprecated glib2 interface
  Clarify comments of tb_invalidate_phys_[page_]range
2012-06-09 | exec: fix TB invalidation after breakpoint insertion/deletion | Max Filippov | 1 | -1/+2
tb_invalidate_phys_addr has to be called with the exact physical address of the breakpoint we add/remove, not just the page's base address. Otherwise we easily fail to flush the right TB. This breakage was introduced by the commit f3705d5329 "memory: make phys_page_find() return an unadjusted". This appeared to work for some guest architectures because their cpu_get_phys_page_debug implementation returns full translated physical address, not just the base of the TARGET_PAGE_SIZE-sized page. Reported-by: TeLeMan <geleman@gmail.com> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-06-08 | Clarify comments of tb_invalidate_phys_[page_]range | Jan Kiszka | 1 | -10/+12
They could suggest that all TBs of the page containing the range would be invalidated. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2012-06-04 | Add API to check whether a physical address is I/O address | Wen Congyang | 1 | -0/+12
This API will be used in the following patch. Signed-off-by: Wen Congyang <wency@cn.fujitsu.com> Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
2012-05-19 | linux-user: Fix stale tbs after mmap | Alexander Graf | 1 | -0/+17
If we execute linux-user code that does the following: * A = mmap() * execute code in A * munmap(A) * B = mmap(), but mmap returns the same address as A * execute code in B we end up executing a stale cached tb that contains translated code from A, while we want new code from B. This patch adds a TB flush for mmap'ed regions, before we return them, avoiding the whole issue. It also adds a flush for munmap, so that we don't execute stale TBs instead of getting a segfault. Reported-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Alexander Graf <agraf@suse.de> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: Riku Voipio <riku.voipio@linaro.org> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>