path: root/nptl
authorAdhemerval Zanella Netto <adhemerval.zanella@linaro.org>2022-07-21 10:04:59 -0300
committerAdhemerval Zanella <adhemerval.zanella@linaro.org>2022-07-22 11:58:27 -0300
commit6f4e0fcfa2d2b0915816a3a3a1d48b4763a7dee2 (patch)
tree6b1a61c1ccc7e265998db647729411dcb8826901 /nptl
parent6c4ed247bf5aee6416c8c81a394cf692e068a579 (diff)
stdlib: Add arc4random, arc4random_buf, and arc4random_uniform (BZ #4417)
The implementation is based on scalar ChaCha20 with a per-thread cache. It uses getrandom or /dev/urandom as a fallback to get the initial entropy, and reseeds the internal state after every 16 MB of consumed buffer.

To improve performance and lower memory consumption, the per-thread cache is allocated lazily on the first arc4random call, and if the memory allocation fails getentropy or /dev/urandom is used as a fallback. The cache is also cleared on thread exit if and only if it was initialized (so if arc4random is not called it is not touched).

Although it is lock-free, arc4random is still not async-signal-safe (the per-thread state is not updated atomically).

The ChaCha20 implementation is based on RFC 8439 [1], omitting the final XOR of the keystream with the plaintext because the plaintext is a stream of zeros. This strategy is similar to what the OpenBSD arc4random does.

arc4random_uniform is based on previous work by Florian Weimer, where the algorithm follows Jérémie Lumbroso's paper "Optimal Discrete Uniform Generation from Coin Flips, and Applications" (2013) [2], which credits Donald E. Knuth and Andrew C. Yao, "The complexity of nonuniform random number generation" (1976), for solving the general case. The main advantage of this method is that the unit of randomness is not the uniform random variable (uint32_t), but a random bit. It optimizes the internal buffer sampling by initially consuming a 32-bit random variable and then sampling byte by byte. Depending on the upper bound requested, this might lead to better CPU utilization.

Checked on x86_64-linux-gnu, aarch64-linux, and powerpc64le-linux-gnu.

Co-authored-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Yann Droneaud <ydroneaud@opteya.com>

[1] https://datatracker.ietf.org/doc/html/rfc8439
[2] https://arxiv.org/pdf/1304.1916.pdf
Diffstat (limited to 'nptl')
-rw-r--r--nptl/allocatestack.c3
1 file changed, 2 insertions, 1 deletion
diff --git a/nptl/allocatestack.c b/nptl/allocatestack.c
index 98f5f6d..219854f 100644
--- a/nptl/allocatestack.c
+++ b/nptl/allocatestack.c
@@ -32,6 +32,7 @@
#include <kernel-features.h>
#include <nptl-stack.h>
#include <libc-lock.h>
+#include <tls-internal.h>
/* Default alignment of stack. */
#ifndef STACK_ALIGN
@@ -127,7 +128,7 @@ get_cached_stack (size_t *sizep, void **memp)
result->exiting = false;
__libc_lock_init (result->exit_lock);
- result->tls_state = (struct tls_internal_t) { 0 };
+ memset (&result->tls_state, 0, sizeof result->tls_state);
/* Clear the DTV. */
dtv_t *dtv = GET_DTV (TLS_TPADJ (result));