author    Alex Bennée <alex.bennee@linaro.org>    2017-02-23 18:29:12 +0000
committer Alex Bennée <alex.bennee@linaro.org>    2017-02-24 10:32:45 +0000
commit    e5143e30fb87fbf179029387f83f98a5a9b27f19 (patch)
tree      b328a8046041f35bc3d03611f064d5e7098c2b7f /cpu-exec.c
parent    8d04fb55dec381bc5105cb47f29d918e579e8cbd (diff)
tcg: remove global exit_request
There are now only two uses of the global exit_request left.

The first ensures we exit the run_loop when we first start to process pending work and in the kick handler. This is just as easily done by setting the first_cpu->exit_request flag.

The second use is in the round robin kick routine. The global exit_request ensured every vCPU would set its local exit_request and cause a full exit of the loop. Now that the iothread isn't being held while running, we can just rely on the kick handler to push us out as intended.

We lightly re-factor the main vCPU thread to ensure cpu->exit_request causes us to exit the main loop and process any IO requests that might come along. As a cpu->exit_request may legitimately get squashed while processing the EXCP_INTERRUPT exception, we also check cpu->queued_work_first to ensure queued work is expedited as soon as possible.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
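For context outside the tree, here is a minimal, self-contained C sketch of the behaviour the message above describes: a kick sets a per-vCPU exit_request flag instead of a single global one, and each vCPU loop checks only its own flag before dropping out to service pending work. The SketchCPUState type and the sketch_* functions are hypothetical names for illustration only; they are not QEMU's real cpu-exec/cpus.c API.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Per-vCPU state: the exit_request flag replaces the old global. */
typedef struct SketchCPUState {
    int index;
    atomic_bool exit_request;
} SketchCPUState;

/* The kick handler marks a single vCPU for exit rather than
 * broadcasting through a shared global flag. */
static void sketch_kick_vcpu(SketchCPUState *cpu)
{
    atomic_store_explicit(&cpu->exit_request, true, memory_order_release);
}

/* Outer execution loop: run guest code until this vCPU's own flag is
 * set, then clear it and return so pending work can be processed. */
static void sketch_vcpu_loop(SketchCPUState *cpu)
{
    while (!atomic_exchange_explicit(&cpu->exit_request, false,
                                     memory_order_acquire)) {
        /* ... execute one translated block for this vCPU ... */
    }
    printf("vCPU %d left the loop to process pending work\n", cpu->index);
}

int main(void)
{
    SketchCPUState cpu = { .index = 0 };
    sketch_kick_vcpu(&cpu);     /* simulate a kick from the I/O thread */
    sketch_vcpu_loop(&cpu);
    return 0;
}

The design point mirrored here is simply that exit handling becomes per-vCPU state, which is what lets the diff below drop the global exit_request check at the top of cpu_exec().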
Diffstat (limited to 'cpu-exec.c')
-rw-r--r--  cpu-exec.c  20
1 file changed, 7 insertions(+), 13 deletions(-)
diff --git a/cpu-exec.c b/cpu-exec.c
index 1bd3d72..85f14d4 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -568,15 +568,13 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
*tb_exit = ret & TB_EXIT_MASK;
switch (*tb_exit) {
case TB_EXIT_REQUESTED:
- /* Something asked us to stop executing
- * chained TBs; just continue round the main
- * loop. Whatever requested the exit will also
- * have set something else (eg exit_request or
- * interrupt_request) which we will handle
- * next time around the loop. But we need to
- * ensure the zeroing of tcg_exit_req (see cpu_tb_exec)
- * comes before the next read of cpu->exit_request
- * or cpu->interrupt_request.
+ /* Something asked us to stop executing chained TBs; just
+ * continue round the main loop. Whatever requested the exit
+ * will also have set something else (eg interrupt_request)
+ * which we will handle next time around the loop. But we
+ * need to ensure the tcg_exit_req read in generated code
+ * comes before the next read of cpu->exit_request or
+ * cpu->interrupt_request.
*/
smp_mb();
*last_tb = NULL;
@@ -630,10 +628,6 @@ int cpu_exec(CPUState *cpu)
rcu_read_lock();
- if (unlikely(atomic_mb_read(&exit_request))) {
- cpu->exit_request = 1;
- }
-
cc->cpu_exec_enter(cpu);
/* Calculate difference between guest clock and host clock.
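As a footnote on the smp_mb() kept in the TB_EXIT_REQUESTED case above: the sketch below, using hypothetical struct and function names and C11 atomics in place of QEMU's barrier macros, shows the shape of the ordering the rewritten comment asks for, i.e. the access to tcg_exit_req made when generated code exits must be ordered before the loop's next reads of cpu->exit_request and cpu->interrupt_request, so a request raised around that point is not missed.

#include <stdatomic.h>

/* Hypothetical per-vCPU request flags, standing in for the real fields. */
struct sketch_cpu {
    atomic_uint tcg_exit_req;       /* consumed when generated code exits  */
    atomic_uint exit_request;       /* per-vCPU "leave the exec loop" flag */
    atomic_uint interrupt_request;  /* pending interrupt bits              */
};

/* Returns non-zero if the loop should break out and handle requests. */
int sketch_handle_tb_exit(struct sketch_cpu *cpu)
{
    /* Model the generated code's access to tcg_exit_req as a relaxed
     * load; in QEMU this happens on the translated block's exit path. */
    unsigned req = atomic_load_explicit(&cpu->tcg_exit_req,
                                        memory_order_relaxed);
    (void)req;

    /* Full barrier playing the role of smp_mb(): the access above must be
     * ordered before the reads of the per-vCPU request flags below. */
    atomic_thread_fence(memory_order_seq_cst);

    return atomic_load_explicit(&cpu->exit_request, memory_order_relaxed)
        || atomic_load_explicit(&cpu->interrupt_request, memory_order_relaxed);
}

int main(void)
{
    struct sketch_cpu cpu = { 0 };
    atomic_store_explicit(&cpu.interrupt_request, 1, memory_order_relaxed);
    return sketch_handle_tb_exit(&cpu) ? 0 : 1;
}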