author | Jordan Niethe <jniethe5@gmail.com> | 2019-04-02 10:43:17 +1100
---|---|---
committer | Stewart Smith <stewart@linux.ibm.com> | 2019-05-20 14:20:29 +1000
commit | bc04bf1eef38a11b9d90f379d7be180f98e2b58d | (patch)
tree | 2dba2b79d778a9e79746a07ef1e1000c06953aa3 | /external
parent | d56b151d7f87d18f7b6333d8ec4d5c5cf04b538c | (diff)
core/trace: Change mask/and to modulo for buffer offset
We would like to be able to mmap the trace buffers so that the
dump_trace tool can make use of the existing functions for reading
traces in external/trace. Mmapping is done in pages, which means the
buffers should be aligned to page size. This is not as simple as
setting the buffer length to a page-aligned value, because each buffer
has a header and leaves space for an extra entry at the end. These must
be taken into account so that the entire buffer is page aligned.
The current method of calculating buffer offsets is to use a mask and
bitwise 'and'. This limits the possible sizes of the buffer to powers
of two. The initial justification for using the mask was that the
buffers had different sizes, so the offset needed to be based on
information the buffers carried with them; otherwise they could
overflow.
Being limited to powers of two makes it impossible to page align the
entire buffer. Change to using modulo for calculating the buffer offset
to make a much larger range of buffer sizes possible. Instead of the
mask, make each buffer carry around its own length, to be used for
calculating the offset and so avoid overflows.
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
Diffstat (limited to 'external')
-rw-r--r-- | external/trace/trace.c | 4 |
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/external/trace/trace.c b/external/trace/trace.c
index 745da53..bb5a9bf 100644
--- a/external/trace/trace.c
+++ b/external/trace/trace.c
@@ -32,7 +32,7 @@ bool trace_empty(const struct tracebuf *tb)
	 * we've already seen every repeat for (yet which may be
	 * incremented in future), we're also empty.
	 */
-	rep = (void *)tb->buf + be64_to_cpu(tb->rpos & tb->mask);
+	rep = (void *)tb->buf + be64_to_cpu(tb->rpos) % be64_to_cpu(tb->buf_size);
	if (be64_to_cpu(tb->end) != be64_to_cpu(tb->rpos) + sizeof(*rep))
		return false;
@@ -62,7 +62,7 @@ again:
	 * The actual buffer is slightly larger than tbsize, so this
	 * memcpy is always valid.
	 */
-	memcpy(t, tb->buf + be64_to_cpu(tb->rpos & tb->mask), len);
+	memcpy(t, tb->buf + be64_to_cpu(tb->rpos) % be64_to_cpu(tb->buf_size), len);
	rmb(); /* read barrier, so we read tb->start after copying record. */