path: root/cpus-common.c
2020-05-27  cpus-common: ensure auto-assigned cpu_indexes don't clash  (Alex Bennée, 1 file, -5/+5)

Basing the cpu_index on the number of currently allocated vCPUs fails when vCPUs aren't removed in a LIFO manner. This is especially true when we are allocating a cpu_index for each guest thread in linux-user, where there is no ordering constraint on their allocation and de-allocation.

[I've dropped the assert which is there to guard against out-of-order removal, as this should probably be caught higher up the stack. Maybe we could just ifdef CONFIG_SOFTMMU it?]

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Cc: Nikolay Igotti <igotti@gmail.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20200520140541.30256-13-alex.bennee@linaro.org>
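A minimal standalone model of the change this commit describes (the upstream fix adjusts the index allocator in this file; the list type and function names below are invented for illustration): instead of handing out the current vCPU count as the next cpu_index, which repeats a live index after out-of-order removal, scan the list and take max(cpu_index) + 1.

    #include <stdio.h>

    typedef struct CPU {
        int cpu_index;
        struct CPU *next;
    } CPU;

    /* Count-based allocation: returns the list length, which collides
     * with a live index once removal is not LIFO. */
    static int next_index_by_count(CPU *head)
    {
        int n = 0;
        for (CPU *c = head; c; c = c->next) {
            n++;
        }
        return n;
    }

    /* Fixed scheme: never reuses a live index. */
    static int next_index_by_max(CPU *head)
    {
        int max_index = 0;
        for (CPU *c = head; c; c = c->next) {
            if (c->cpu_index >= max_index) {
                max_index = c->cpu_index + 1;
            }
        }
        return max_index;
    }

    int main(void)
    {
        /* Guest threads 0, 1, 2 were created; thread 1 has exited. */
        CPU c2 = { .cpu_index = 2, .next = NULL };
        CPU c0 = { .cpu_index = 0, .next = &c2 };

        printf("by count: %d (clashes with live index 2)\n",
               next_index_by_count(&c0));   /* prints 2 */
        printf("by max:   %d (safe)\n",
               next_index_by_max(&c0));     /* prints 3 */
        return 0;
    }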
2020-05-04  lockable: replaced locks with lock guard macros where appropriate  (Daniel Brodsky, 1 file, -9/+5)

- ran regexp "qemu_mutex_lock\(.*\).*\n.*if" to find targets
- replaced result with QEMU_LOCK_GUARD if all unlocks at function end
- replaced result with WITH_QEMU_LOCK_GUARD if unlock not at end

Signed-off-by: Daniel Brodsky <dnbrdsky@gmail.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-id: 20200404042108.389635-3-dnbrdsky@gmail.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
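A sketch of the two guard forms after this change, assuming QEMU's "qemu/lockable.h"; the mutex and counter names here are made up for illustration. QEMU_LOCK_GUARD holds the lock until the enclosing function returns, while WITH_QEMU_LOCK_GUARD scopes it to the attached block.

    #include "qemu/osdep.h"
    #include "qemu/lockable.h"

    static QemuMutex list_lock;   /* assume qemu_mutex_init() ran elsewhere */
    static int list_len;

    /* Whole function is a critical section: the guard unlocks on every
     * return path, so early returns can no longer leak the lock. */
    static int list_len_locked(void)
    {
        QEMU_LOCK_GUARD(&list_lock);
        if (list_len < 0) {
            return 0;   /* unlocked here too */
        }
        return list_len;
    }

    /* Only part of the function needs the lock. */
    static void list_bump(void)
    {
        WITH_QEMU_LOCK_GUARD(&list_lock) {
            list_len++;
        }
        /* lock already released here */
    }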
2019-10-28  cpu: introduce cpu_in_exclusive_context()  (Emilio G. Cota, 1 file, -0/+4)

Suggested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
[AJB: moved inside start/end_exclusive fns + cleanup]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
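The shape of the predicate this commit adds, modeled standalone: a per-CPU flag toggled inside start_exclusive()/end_exclusive(), so callers can ask "am I running in an exclusive section?" without touching the lock. Field names and signatures are simplified for this sketch (QEMU's start_exclusive() operates on current_cpu rather than taking an argument).

    #include <stdbool.h>

    typedef struct CPUState {
        bool in_exclusive_context;   /* illustrative field name */
        /* ... */
    } CPUState;

    static inline bool cpu_in_exclusive_context(const CPUState *cpu)
    {
        return cpu->in_exclusive_context;
    }

    /* Set once the caller owns the exclusive section... */
    static void start_exclusive(CPUState *cpu)
    {
        /* ... wait for all other vCPUs to leave cpu_exec() ... */
        cpu->in_exclusive_context = true;
    }

    /* ...and cleared before the other vCPUs are released. */
    static void end_exclusive(CPUState *cpu)
    {
        cpu->in_exclusive_context = false;
        /* ... wake the waiting vCPUs ... */
    }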
2019-08-22  Merge remote-tracking branch 'remotes/armbru/tags/pull-monitor-2019-08-21' into staging  (Peter Maydell, 1 file, -1/+1)

Monitor patches for 2019-08-21

# gpg: Signature made Wed 21 Aug 2019 16:35:07 BST
# gpg:                using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg:                issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653

* remotes/armbru/tags/pull-monitor-2019-08-21:
  monitor/qmp: Update comment for commit 4eaca8de268
  qdev: Collect HMP command handlers in qdev-monitor.c
  qapi: Move query-target from misc.json to machine.json
  hw/core: Move cpu.c, cpu.h from qom/ to hw/core/

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-21  hw/core: Move cpu.c, cpu.h from qom/ to hw/core/  (Markus Armbruster, 1 file, -1/+1)

Suggested-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190709152053.16670-2-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
[Rebased onto merge commit 95a9457fd44; missed instances of qom/cpu.h in comments replaced]
2019-08-20  cpus-common: nuke finish_safe_work  (Roman Kagan, 1 file, -8/+0)

It was introduced in commit ab129972c8b41e15b0521895a46fd9c752b68a5e, with the following motivation:

  Because start_exclusive uses CPU_FOREACH, merge exclusive_lock with
  qemu_cpu_list_lock: together with a call to exclusive_idle (via
  cpu_exec_start/end) in cpu_list_add, this protects exclusive work
  against concurrent CPU addition and removal.

However, it seems to be redundant, because the cpu-exclusive infrastructure provides sufficient protection against the newly added CPU starting execution while the cpu-exclusive work is running, and the aforementioned traversing of the cpu list is protected by qemu_cpu_list_lock.

Besides, this appears to be the only place where the cpu-exclusive section is entered with the BQL taken, which has been found to trigger AB-BA deadlock as follows:

    vCPU thread                             main thread
    -----------                             -----------
    async_safe_run_on_cpu(self,
                          async_synic_update)
    ...                                     [cpu hot-add]
    process_queued_cpu_work()
      qemu_mutex_unlock_iothread()
                                            [grab BQL]
      start_exclusive()                     cpu_list_add()
      async_synic_update()                    finish_safe_work()
        qemu_mutex_lock_iothread()              cpu_exec_start()

So remove it. This paves the way to establishing a strict nesting rule of never entering the exclusive section with the BQL taken.

Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20190523105440.27045-2-rkagan@virtuozzo.com>
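The nesting rule the commit works toward, stated as code: a thread holding the BQL must drop it before entering the exclusive section, so the two are always acquired in the same order. The function names are QEMU's own; the surrounding wrapper is hypothetical.

    /* WRONG (the AB-BA pattern above): entering the exclusive section
     * with the BQL held, while a vCPU inside the section waits for the
     * BQL, deadlocks. */

    /* RIGHT: release the BQL first, reacquire it afterwards. */
    static void do_exclusive_work(void)
    {
        qemu_mutex_unlock_iothread();   /* drop the BQL ... */
        start_exclusive();              /* ... then serialize the vCPUs */
        /* ... work that must not race with any vCPU ... */
        end_exclusive();
        qemu_mutex_lock_iothread();
    }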
2019-01-11  qemu/queue.h: simplify reverse access to QTAILQ  (Paolo Bonzini, 1 file, -1/+1)

The new definition of QTAILQ does not require passing the headname; remove it.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
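What the simplification looks like at a call site (variable names illustrative; CPUTailQ is the tag of QEMU's CPU list head). Before this change the reverse-iteration macro needed the head structure's name as an extra argument; afterwards the head pointer alone suffices.

    /* before: the head struct's tag had to be spelled out */
    QTAILQ_FOREACH_REVERSE(cpu, &cpus, CPUTailQ, node) {
        /* visit CPUs newest-first */
    }

    /* after: the headname argument is gone */
    QTAILQ_FOREACH_REVERSE(cpu, &cpus, node) {
        /* visit CPUs newest-first */
    }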
2018-08-23  qom: convert the CPU list to RCU  (Emilio G. Cota, 1 file, -2/+2)

Iterating over the list without using atomics is undefined behaviour, since the list can be modified concurrently by other threads (e.g. every time a new thread is created in user-mode).

Fix it by implementing the CPU list as an RCU QTAILQ. This requires a little bit of extra work to traverse the list in reverse order (see previous patch), but other than that the conversion is trivial.

Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20180819091335.22863-12-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
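A sketch of the reader/writer split after the conversion, assuming QEMU's RCU primitives and RCU-aware queue macros (rcu_read_lock(), QTAILQ_INSERT_TAIL_RCU from rcu_queue.h); the loop body and the new_cpu variable are illustrative. Readers traverse inside an RCU read-side critical section instead of taking the list lock; writers still serialize on qemu_cpu_list_lock.

    /* Reader: no lock, just an RCU read section. */
    rcu_read_lock();
    CPU_FOREACH(cpu) {              /* RCU-safe QTAILQ walk */
        total += cpu->cpu_index;    /* illustrative read-only access */
    }
    rcu_read_unlock();

    /* Writer: still takes the list lock, and inserts with the RCU
     * variant so concurrent readers always see a consistent list. */
    qemu_mutex_lock(&qemu_cpu_list_lock);
    QTAILQ_INSERT_TAIL_RCU(&cpus, new_cpu, node);
    qemu_mutex_unlock(&qemu_cpu_list_lock);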
2016-10-31  *_run_on_cpu: introduce run_on_cpu_data type  (Paolo Bonzini, 1 file, -4/+5)

This changes the *_run_on_cpu APIs (and helpers) to pass data in a run_on_cpu_data type instead of a plain void *. This is because we sometimes want to pass a target address (target_ulong) and this fails on 32 bit hosts emulating 64 bit guests.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20161027151030.20863-24-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
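The idea in miniature: a union that always has a member wide enough for a 64-bit target address, so the value survives on a 32-bit host where a void * would truncate it. Member and constructor-macro names follow the pattern the API uses, but treat the exact layout of this standalone sketch as illustrative.

    #include <stdint.h>

    typedef uint64_t vaddr;          /* target address, fixed 64-bit */

    typedef union {
        int            host_int;
        unsigned long  host_ulong;
        void          *host_ptr;
        vaddr          target_ptr;
    } run_on_cpu_data;

    #define RUN_ON_CPU_HOST_PTR(p)   ((run_on_cpu_data){.host_ptr = (p)})
    #define RUN_ON_CPU_TARGET_PTR(v) ((run_on_cpu_data){.target_ptr = (v)})

    /* A work function unpacks the member it expects: */
    static void do_flush_page(run_on_cpu_data data)
    {
        vaddr addr = data.target_ptr;   /* full 64 bits, even on ILP32 */
        (void)addr;
    }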
2016-09-27  cpus-common: lock-free fast path for cpu_exec_start/end  (Paolo Bonzini, 1 file, -15/+80)

Set cpu->running without taking the cpu_list lock, only requiring it if there is a concurrent exclusive section. This requires adding a new field to CPUState, which records whether a running CPU is being counted in pending_cpus.

When an exclusive section is started concurrently with cpu_exec_start, cpu_exec_start can use the new field to determine if it has to wait for the end of the exclusive section. Likewise, cpu_exec_end can use it to see if start_exclusive is waiting for that CPU.

This is a separate patch for easier bisection of issues.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
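A heavily simplified rendering of the fast path described above, spelled in C11 atomics for self-containedness, with the slow-path details reduced to comments; the real code in this file uses QEMU's atomic helpers and memory barriers. The point is the ordering: publish cpu->running, then check pending_cpus, and only touch the lock when an exclusive section is actually pending.

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_int pending_cpus;   /* != 0: exclusive section pending */

    typedef struct CPUState {
        atomic_bool running;
        bool has_waiter;   /* the new field: counted in pending_cpus? */
    } CPUState;

    void cpu_exec_start(CPUState *cpu)
    {
        atomic_store(&cpu->running, true);
        /* The seq_cst store/load pair orders this against
         * start_exclusive()'s read of cpu->running and its write of
         * pending_cpus. */
        if (atomic_load(&pending_cpus)) {
            /* Slow path: take the cpu list lock; if start_exclusive()
             * already saw us (cpu->has_waiter), wait for the exclusive
             * section to end before entering cpu_exec(). */
        }
        /* Fast path: no exclusive section pending, no lock taken. */
    }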
2016-09-27  cpus-common: Introduce async_safe_run_on_cpu()  (Paolo Bonzini, 1 file, -2/+31)

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27  cpus-common: simplify locking for start_exclusive/end_exclusive  (Paolo Bonzini, 1 file, -3/+8)

It is not necessary to hold qemu_cpu_list_mutex throughout the exclusive section, because no other exclusive section can run while pending_cpus != 0.

exclusive_idle() is called in cpu_exec_start(), and that prevents any CPUs created after start_exclusive() from entering cpu_exec() during an exclusive section.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
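The resulting shape of the pair, as a simplified sketch: the mutex is held only while pending_cpus is updated and can be dropped for the exclusive work itself, because a nonzero pending_cpus keeps any second exclusive section out. Names follow this file; the wait loops are elided into comments.

    void start_exclusive(void)
    {
        qemu_mutex_lock(&qemu_cpu_list_lock);
        /* ... wait out any previous exclusive section, then count and
         *     kick the currently running vCPUs ... */
        pending_cpus = 1;   /* nonzero from here on blocks new sections */
        /* ... wait until all running vCPUs have stopped ... */
        qemu_mutex_unlock(&qemu_cpu_list_lock);   /* safe to drop now */
    }

    void end_exclusive(void)
    {
        qemu_mutex_lock(&qemu_cpu_list_lock);
        pending_cpus = 0;
        qemu_cond_broadcast(&exclusive_resume);   /* let waiters proceed */
        qemu_mutex_unlock(&qemu_cpu_list_lock);
    }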
2016-09-27  cpus-common: remove redundant call to exclusive_idle()  (Paolo Bonzini, 1 file, -1/+0)

No need to call exclusive_idle() from cpu_exec_end, since it is done immediately afterwards in cpu_exec_start. Any exclusive section could run as soon as cpu_exec_end leaves, because cpu->running is false and the mutex is not taken, so the call does not add any protection either.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27  cpus-common: always defer async_run_on_cpu work items  (Paolo Bonzini, 1 file, -5/+0)

async_run_on_cpu is only called from the I/O thread, not from CPU threads, so it doesn't make any difference. It will make a difference, however, for async_safe_run_on_cpu.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27  cpus-common: move exclusive work infrastructure from linux-user  (Paolo Bonzini, 1 file, -0/+82)

This will serve as the base for async_safe_run_on_cpu.

Because start_exclusive uses CPU_FOREACH, merge exclusive_lock with qemu_cpu_list_lock: together with a call to exclusive_idle (via cpu_exec_start/end) in cpu_list_add, this protects exclusive work against concurrent CPU addition and removal.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27  cpus-common: fix uninitialized variable use in run_on_cpu  (Paolo Bonzini, 1 file, -2/+2)

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27  cpus-common: move CPU work item management to common code  (Sergey Fedorov, 1 file, -0/+94)

Make CPU work core functions common between system and user-mode emulation. User-mode does not use run_on_cpu, so do not implement it.

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <1470158864-17651-10-git-send-email-alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-27  cpus-common: move CPU list management to common code  (Paolo Bonzini, 1 file, -0/+83)

Add a mutex for the CPU list to system emulation, as it will be used to manage safe work. Abstract manipulation of the CPU list in new functions cpu_list_add and cpu_list_remove.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
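The two helpers this commit introduces, in outline: all CPU-list manipulation funnels through them under the new qemu_cpu_list_lock. This is a sketch; error handling and the details of the index allocator are compressed into the cpu_get_free_index() call.

    void cpu_list_add(CPUState *cpu)
    {
        qemu_mutex_lock(&qemu_cpu_list_lock);
        if (cpu->cpu_index == UNASSIGNED_CPU_INDEX) {
            cpu->cpu_index = cpu_get_free_index();
        }
        QTAILQ_INSERT_TAIL(&cpus, cpu, node);
        qemu_mutex_unlock(&qemu_cpu_list_lock);
    }

    void cpu_list_remove(CPUState *cpu)
    {
        qemu_mutex_lock(&qemu_cpu_list_lock);
        QTAILQ_REMOVE(&cpus, cpu, node);
        cpu->cpu_index = UNASSIGNED_CPU_INDEX;
        qemu_mutex_unlock(&qemu_cpu_list_lock);
    }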