author    Paolo Bonzini <pbonzini@redhat.com>    2017-01-29 12:15:15 +0100
committer Paolo Bonzini <pbonzini@redhat.com>    2017-02-16 14:06:56 +0100
commit    a70fe14b7dddcb944fbd6c9f3739cd3a22089af5 (patch)
tree      37ff276d712d82f5e8c46d72cd181c9bd906ecfc /cpu-exec.c
parent    43d70ddf9f96b3ad037abe4d5f9f2768196b8c92 (diff)
cpu-exec: tighten barrier on TCG_EXIT_REQUESTED
This seems to have worked just fine so far on weakly-ordered architectures,
but I don't see anything that prevents the reordering from:

    store 1 to exit_request
    store 1 to tcg_exit_req
    load tcg_exit_req
    store 0 to tcg_exit_req
    load exit_request
    store 0 to exit_request
    store 1 to exit_request
    store 1 to tcg_exit_req

to this:

    store 1 to exit_request
    store 1 to tcg_exit_req
    load tcg_exit_req
    load exit_request
    store 1 to exit_request
    store 1 to tcg_exit_req
    store 0 to tcg_exit_req
    store 0 to exit_request

therefore losing a request.  It's possible that other memory barriers
(e.g. in rcu_read_unlock) are hiding it, but better safe than sorry.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
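The reordering matters because a read barrier only orders loads: the store of 0 to
tcg_exit_req can still drift past the later load of exit_request, so a request that
lands in between has its tcg_exit_req = 1 wiped out by the stale 0.  Below is a
minimal C11 sketch of that ordering requirement.  It is not QEMU's actual code: the
flag names follow the commit message, but the requester/executor split, the function
names, and the use of C11 atomics in place of QEMU's own atomic/barrier helpers are
illustrative assumptions.

    /* Illustrative sketch only; QEMU uses its own atomic and barrier macros. */
    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_int exit_request;   /* "leave the execution loop"    */
    static atomic_int tcg_exit_req;   /* "leave the current TB early"  */

    /* Requesting side: raise both flags (hypothetical helper). */
    static void request_exit(void)
    {
        atomic_store_explicit(&exit_request, 1, memory_order_relaxed);
        atomic_store_explicit(&tcg_exit_req, 1, memory_order_relaxed);
    }

    /* Executing side, after the generated code observed tcg_exit_req == 1. */
    static bool handle_tb_exit_requested(void)
    {
        /* Acknowledge the TB-level request by clearing the flag. */
        atomic_store_explicit(&tcg_exit_req, 0, memory_order_relaxed);

        /*
         * An smp_rmb()-style fence would only order the surrounding loads and
         * would not keep the store of 0 above from being reordered past the
         * load below.  A full barrier, like smp_mb(), orders the store against
         * the following load, so a request that arrives in between cannot have
         * its freshly written tcg_exit_req = 1 overwritten by our stale 0.
         */
        atomic_thread_fence(memory_order_seq_cst);

        return atomic_load_explicit(&exit_request, memory_order_relaxed) != 0;
    }

This mirrors the distinction the patch below makes between smp_rmb() and smp_mb():
only a full barrier orders a prior store against a subsequent load on weakly-ordered
architectures.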
Diffstat (limited to 'cpu-exec.c')
-rw-r--r--  cpu-exec.c  |  4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/cpu-exec.c b/cpu-exec.c
index 1f7d217..d50625b 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -552,11 +552,11 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
* have set something else (eg exit_request or
* interrupt_request) which we will handle
* next time around the loop. But we need to
- * ensure the tcg_exit_req read in generated code
+ * ensure the zeroing of tcg_exit_req (see cpu_tb_exec)
* comes before the next read of cpu->exit_request
* or cpu->interrupt_request.
*/
- smp_rmb();
+ smp_mb();
*last_tb = NULL;
break;
case TB_EXIT_ICOUNT_EXPIRED: