author    Jakub Jelinek <jakub@redhat.com>  2021-11-18 09:10:40 +0100
committer Jakub Jelinek <jakub@redhat.com>  2021-11-18 09:10:40 +0100
commit    17da2c7425ea1f5bf417b954f444dbe1f1618a1c
tree      9ce2dd3deabea300a014828072e33f3abfa6c277 /libgomp/libgomp.h
parent    7a2aa63fad06a72d9770b08491f1a7809eac7c50
libgomp: Ensure that gomp_team is properly aligned [PR102838]
struct gomp_team has a struct gomp_work_share array inside of it.
If that latter structure has a 64-byte aligned member in the middle,
the whole struct gomp_team needs to be 64-byte aligned, but we weren't
allocating it using gomp_aligned_alloc.
This patch fixes that, except that on gcn team_malloc is special, so
at least for now I've instead decided to avoid the aligned member
and use the padding instead on gcn.
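As a hedged illustration of the bug described above (names here are invented for the sketch, not libgomp's): a 64-byte aligned member anywhere inside a struct raises the alignment of the whole type, including any struct that embeds it, and plain malloc only guarantees max_align_t alignment (often 16 bytes), so an aligned allocator such as C11 aligned_alloc is needed.

#include <assert.h>
#include <stdalign.h>
#include <stdint.h>
#include <stdlib.h>

/* A 64-byte aligned member in the middle raises the struct's alignment.  */
struct work_share_like {
  int a;
  alignas (64) int lock;
};

/* Embedding that struct propagates the 64-byte requirement outward,
   just as struct gomp_work_share inside struct gomp_team does.  */
struct team_like {
  long counter;
  struct work_share_like ws[8];
};

/* Allocate a team_like with the alignment its type requires.  Plain
   malloc may return storage that violates alignof (struct team_like).  */
struct team_like *
team_alloc (void)
{
  /* aligned_alloc (C11) requires size to be a multiple of the alignment.  */
  size_t align = alignof (struct team_like);
  size_t size = (sizeof (struct team_like) + align - 1) / align * align;
  return aligned_alloc (align, size);
}
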
2021-11-18 Jakub Jelinek <jakub@redhat.com>
PR libgomp/102838
* libgomp.h (GOMP_USE_ALIGNED_WORK_SHARES): Define if
GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC is defined and __AMDGCN__ is not.
(struct gomp_work_share): Use GOMP_USE_ALIGNED_WORK_SHARES instead of
GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC.
* work.c (alloc_work_share, gomp_work_share_start): Likewise.
* team.c (gomp_new_team): If GOMP_USE_ALIGNED_WORK_SHARES, use
gomp_aligned_alloc instead of team_malloc.
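The team.c change the ChangeLog describes can be sketched roughly as follows; team_alloc_sketch and TEAM_ALIGN are illustrative names, not libgomp's, and the real code calls gomp_aligned_alloc / team_malloc rather than the standard allocators used here.

/* When GOMP_USE_ALIGNED_WORK_SHARES is in effect, obtain the team from
   an aligned allocator; otherwise fall back to the plain allocator.  */
#include <stdlib.h>

#define TEAM_ALIGN 64

void *
team_alloc_sketch (size_t size)
{
#ifdef GOMP_USE_ALIGNED_WORK_SHARES
  /* aligned_alloc requires size to be a multiple of the alignment.  */
  size = (size + TEAM_ALIGN - 1) / TEAM_ALIGN * TEAM_ALIGN;
  return aligned_alloc (TEAM_ALIGN, size);
#else
  return malloc (size);
#endif
}
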
Diffstat (limited to 'libgomp/libgomp.h')
-rw-r--r--  libgomp/libgomp.h | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/libgomp/libgomp.h b/libgomp/libgomp.h
index ceef643..299cf42 100644
--- a/libgomp/libgomp.h
+++ b/libgomp/libgomp.h
@@ -95,6 +95,10 @@ enum memmodel
 #define GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC 1
 #endif
 
+#if defined(GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC) && !defined(__AMDGCN__)
+#define GOMP_USE_ALIGNED_WORK_SHARES 1
+#endif
+
 extern void *gomp_malloc (size_t) __attribute__((malloc));
 extern void *gomp_malloc_cleared (size_t) __attribute__((malloc));
 extern void *gomp_realloc (void *, size_t);
@@ -348,7 +352,7 @@ struct gomp_work_share
      are in a different cache line.  */
 
   /* This lock protects the update of the following members.  */
-#ifdef GOMP_HAVE_EFFICIENT_ALIGNED_ALLOC
+#ifdef GOMP_USE_ALIGNED_WORK_SHARES
   gomp_mutex_t lock __attribute__((aligned (64)));
 #else
   char pad[64 - offsetof (struct gomp_work_share_1st_cacheline, pad)];
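The #else branch in the hunk above keeps the padding idiom the commit message mentions for gcn: instead of demanding a 64-byte aligned member (which would force aligned allocation of the whole team), a pad array fills out the first 64 bytes so the lock simply starts at offset 64. A hedged sketch of that idiom, with illustrative struct names rather than libgomp's:

#include <stddef.h>

/* Mirror of the struct's leading members; the flexible array member
   only marks the offset where padding would begin.  */
struct ws_1st_cacheline {
  int ordered_owner;
  long end;
  char pad[];
};

struct ws_padded {
  int ordered_owner;
  long end;
  /* Fill the remainder of the first 64 bytes, so the lock starts at
     offset 64 without raising the alignment of the whole type.  */
  char pad[64 - offsetof (struct ws_1st_cacheline, pad)];
  int lock;
};

Because both structs share the same leading members, pad always ends exactly at byte 64, whatever the target's type sizes are. The trade-off is that this only fixes the lock's offset within the struct; keeping it on a separate cache line from the leading members still depends on how the allocation itself is aligned.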