commit a70fe14b7dddcb944fbd6c9f3739cd3a22089af5
Author:    Paolo Bonzini <pbonzini@redhat.com>  2017-01-29 12:15:15 +0100
Committer: Paolo Bonzini <pbonzini@redhat.com>  2017-02-16 14:06:56 +0100
Tree:      37ff276d712d82f5e8c46d72cd181c9bd906ecfc /cpu-exec.c
Parent:    43d70ddf9f96b3ad037abe4d5f9f2768196b8c92
cpu-exec: tighten barrier on TCG_EXIT_REQUESTED
This seems to have worked just fine so far on weakly-ordered
architectures, but I don't see anything that prevents the
reordering from:
    requesting thread                  vCPU (TCG) thread
    -----------------                  -----------------
    store 1 to exit_request
    store 1 to tcg_exit_req            load tcg_exit_req
                                       store 0 to tcg_exit_req
                                       load exit_request
                                       store 0 to exit_request
    store 1 to exit_request
    store 1 to tcg_exit_req
to this:
    requesting thread                  vCPU (TCG) thread
    -----------------                  -----------------
    store 1 to exit_request
    store 1 to tcg_exit_req            load tcg_exit_req
                                       load exit_request
    store 1 to exit_request
    store 1 to tcg_exit_req
                                       store 0 to tcg_exit_req
                                       store 0 to exit_request
therefore losing a request. It's possible that other memory barriers
(e.g. in rcu_read_unlock) are hiding it, but better safe than
sorry.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
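As an illustration (not part of the patch), here is a minimal C11 model of
the vCPU-thread side of this race.  The function names are invented for the
sketch, and atomic_thread_fence(memory_order_seq_cst) stands in for QEMU's
smp_mb():

    #include <stdatomic.h>

    static atomic_int exit_request;  /* raised by the requesting thread */
    static atomic_int tcg_exit_req;  /* raised by requester, cleared by vCPU */

    /* Requesting thread: raise both flags (roughly what cpu_exit() does). */
    static void request_exit(void)
    {
        atomic_store_explicit(&exit_request, 1, memory_order_relaxed);
        atomic_store_explicit(&tcg_exit_req, 1, memory_order_relaxed);
    }

    /* vCPU thread, after a TB returns TB_EXIT_REQUESTED. */
    static void handle_tcg_exit(void)
    {
        /* Zero the flag (done in cpu_tb_exec in QEMU)... */
        atomic_store_explicit(&tcg_exit_req, 0, memory_order_relaxed);

        /* ...then look for further work.  A read barrier (smp_rmb) only
         * orders this thread's loads against each other; it does not keep
         * the store above from being delayed past the load below.  A store
         * followed by a load needs a full barrier, hence smp_mb(). */
        atomic_thread_fence(memory_order_seq_cst);

        if (atomic_load_explicit(&exit_request, memory_order_relaxed)) {
            /* handle the pending request */
        }
    }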
Diffstat (limited to 'cpu-exec.c')
 cpu-exec.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
@@ -552,11 +552,11 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
          * have set something else (eg exit_request or
          * interrupt_request) which we will handle
          * next time around the loop.  But we need to
-         * ensure the tcg_exit_req read in generated code
+         * ensure the zeroing of tcg_exit_req (see cpu_tb_exec)
          * comes before the next read of cpu->exit_request
          * or cpu->interrupt_request.
          */
-        smp_rmb();
+        smp_mb();
         *last_tb = NULL;
         break;
     case TB_EXIT_ICOUNT_EXPIRED:
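The hazard the full barrier addresses is the classic store-buffering
pattern: a store followed by a load of a different location can complete
out of order even on x86, and a read-only barrier does not forbid it.  A
standalone litmus test (added here for illustration, not QEMU code) that
can exhibit the outcome; build with -pthread:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int x, y;
    static int r0, r1;

    static void *thread0(void *arg)
    {
        (void)arg;
        atomic_store_explicit(&x, 1, memory_order_relaxed);
        /* Uncommenting the fence (the smp_mb() analogue) in both
         * threads forbids the r0 == r1 == 0 outcome:
         * atomic_thread_fence(memory_order_seq_cst); */
        r0 = atomic_load_explicit(&y, memory_order_relaxed);
        return NULL;
    }

    static void *thread1(void *arg)
    {
        (void)arg;
        atomic_store_explicit(&y, 1, memory_order_relaxed);
        /* atomic_thread_fence(memory_order_seq_cst); */
        r1 = atomic_load_explicit(&x, memory_order_relaxed);
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < 100000; i++) {
            atomic_store(&x, 0);
            atomic_store(&y, 0);
            pthread_t a, b;
            pthread_create(&a, NULL, thread0, NULL);
            pthread_create(&b, NULL, thread1, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            if (r0 == 0 && r1 == 0) {   /* neither thread saw the other's store */
                printf("store->load reordering at iteration %d\n", i);
                return 0;
            }
        }
        printf("no reordering observed (try more iterations)\n");
        return 0;
    }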