author	Adhemerval Zanella <adhemerval.zanella@linaro.org>	2018-10-11 15:18:40 -0300
committer	Adhemerval Zanella <adhemerval.zanella@linaro.org>	2018-10-23 14:57:02 -0300
commit	c3d8dc45c9df199b8334599a6cbd98c9950dba62 (patch)
tree	723bf34e95345abb6fe13f4afa6f4821a317b92e
parent	f1034472e21d77b978464b73adbb0f9f1f032c91 (diff)
x86: Fix Haswell strong flags (BZ#23709)
The commit 'Disable TSX on some Haswell processors.' (2702856bf4) changed the
default flags for Haswell models.  Previously, new models were handled by the
default switch path, which assumed a Core i3/i5/i7 if AVX is available.

After the patch, Haswell models (0x3f, 0x3c, 0x45, 0x46) do not set the flags
Fast_Rep_String, Fast_Unaligned_Load, Fast_Unaligned_Copy, and
Prefer_PMINUB_for_stringop (only the TSX one).

This patch fixes it by disentangling the TSX flag handling from the memory
optimization ones.  The strstr case cited in the bug report now selects
__strstr_sse2_unaligned as expected for the Haswell CPU.

Checked on x86_64-linux-gnu.

	[BZ #23709]
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set TSX bits
	independently of other flags.
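For illustration, here is a minimal standalone sketch of the structure the fix
produces.  It is not the actual cpu-features.c code: the flag macros, the
DISABLE_TSX bit, and the select_flags helper are made up for this sketch, which
only mirrors the control flow of keeping the memory-optimization defaults and
the TSX handling in two independent switch statements.

#include <stdio.h>

/* Stand-in bits; the real code uses glibc's bit_arch_* macros and the
   cpu_features feature array instead.  */
#define FAST_REP_STRING            (1u << 0)
#define FAST_UNALIGNED_LOAD        (1u << 1)
#define FAST_UNALIGNED_COPY        (1u << 2)
#define PREFER_PMINUB_FOR_STRINGOP (1u << 3)
#define DISABLE_TSX                (1u << 4)

static unsigned int
select_flags (unsigned int model, unsigned int stepping)
{
  unsigned int flags = 0;

  /* First switch: memory/string optimization flags.  */
  switch (model)
    {
      /* ... model-specific cases elided ...  */
    default:
      /* Unknown models with AVX are treated as Core i3/i5/i7; Haswell
         models (0x3c, 0x3f, 0x45, 0x46) take this path again after the
         fix and keep the fast string/unaligned defaults.  */
      flags |= (FAST_REP_STRING | FAST_UNALIGNED_LOAD
                | FAST_UNALIGNED_COPY | PREFER_PMINUB_FOR_STRINGOP);
      break;
    }

  /* Second switch: TSX handling only, kept separate from the flag
     selection above.  */
  switch (model)
    {
    case 0x3f:
      /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
      if (stepping >= 4)
        break;
      /* Fall through.  */
    case 0x3c:
    case 0x45:
    case 0x46:
      flags |= DISABLE_TSX;
      break;
    }

  return flags;
}

int
main (void)
{
  /* Haswell client (model 0x3c): string flags stay set and TSX is
     disabled, instead of losing the string flags as before the fix.  */
  printf ("0x%02x\n", select_flags (0x3c, 1));
  /* Xeon E7 v3 (model 0x3f), stepping 4: string flags set, TSX kept.  */
  printf ("0x%02x\n", select_flags (0x3f, 4));
  return 0;
}

With this structure a Haswell model falls into the default path of the first
switch, keeping Fast_Unaligned_Load and friends, and only then has TSX
disabled by the second switch, which is the behaviour the diff below restores.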
-rw-r--r--	ChangeLog	6
-rw-r--r--	sysdeps/x86/cpu-features.c	6
2 files changed, 12 insertions, 0 deletions
diff --git a/ChangeLog b/ChangeLog
index c0fbf75..c5fe2a8 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,9 @@
+2018-10-23 Adhemerval Zanella <adhemerval.zanella@linaro.org>
+
+ [BZ #23709]
+ * sysdeps/x86/cpu-features.c (init_cpu_features): Set TSX bits
+ independently of other flags.
+
2018-10-23  Florian Weimer  <fweimer@redhat.com>

	* time/tst-mktime2.c (N_STRINGS): Remove.
diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index f4e0f5a..80b3054 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -316,7 +316,13 @@ init_cpu_features (struct cpu_features *cpu_features)
| bit_arch_Fast_Unaligned_Copy
| bit_arch_Prefer_PMINUB_for_stringop);
break;
+ }
+ /* Disable TSX on some Haswell processors to avoid TSX on kernels that
+ weren't updated with the latest microcode package (which disables
+ broken feature by default). */
+ switch (model)
+ {
case 0x3f:
/* Xeon E7 v3 with stepping >= 4 has working TSX. */
if (stepping >= 4)