Age | Commit message | Author | Files | Lines |
|
These functions were needed during the transition of the runtime from
C to Go, but are no longer necessary.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/179879
From-SVN: r271890
|
|
Revise the gccgo version of memory/block/mutex profiling to reduce
runtime overhead. The main change is to collect raw stack traces while
the profile is being collected, then post-process the stacks just before the
point where we are ready to use the final product. Memory profiling
(at a very low sampling rate) is enabled by default, and the overhead
of the symbolization / DWARF-reading from backtrace_full was slowing
things down relative to the main Go runtime.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/171497
From-SVN: r271172
|
|
Based on a patch by Rainer Orth.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/176938
From-SVN: r271135
|
|
runtime.throw needs a g to work properly. Set up g early, to
ensure that if something goes wrong in the runtime startup (e.g.
runtime.check fails), the program terminates in a reasonable way.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/176657
From-SVN: r271088
|
|
We can use the intrinsic memmove directly, without going through
C.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/170004
* go-gcc.cc (Gcc_backend::Gcc_backend): Define memmove builtin.
From-SVN: r271016
|
|
Recognize
for i := range a { a[i] = zero }
for array or slice a, and rewrite it to call memclr, as the gc
compiler does.
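For reference, a minimal standalone example of the pattern the
compiler now recognizes (the rewrite to memclr is an internal
optimization; the loop's behavior is unchanged):

    package main

    import "fmt"

    func main() {
        a := []int{1, 2, 3, 4}
        // This whole loop is rewritten into a single memory-clearing
        // call instead of an element-by-element store.
        for i := range a {
            a[i] = 0
        }
        fmt.Println(a) // [0 0 0 0]
    }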
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/169398
From-SVN: r270862
|
|
Recognize
for k := range m { delete(m, k) }
for map m, and rewrite it to runtime.mapclear, as the gc compiler
does.
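For reference, a standalone example of the recognized form (the
rewrite to runtime.mapclear is internal; observable behavior is the
same):

    package main

    import "fmt"

    func main() {
        m := map[string]int{"a": 1, "b": 2}
        // This exact loop form is rewritten into a single runtime
        // map-clearing call rather than per-key deletes.
        for k := range m {
            delete(m, k)
        }
        fmt.Println(len(m)) // 0
    }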
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/169397
From-SVN: r270780
|
|
A direct interface is an interface whose data word contains the
actual data value, instead of a pointer to it. The gc toolchain
creates a direct interface if the value is pointer-shaped, which
includes pointers (including unsafe.Pointer), functions, channels,
maps, and structs and arrays containing a single pointer-shaped
field. In gccgo, we only do this for pointers. This CL unifies
direct interface types with gc. This reduces allocations when
converting such types to interfaces.
Our method functions used to always take pointer receivers, to
make interface calls easy. Now for direct interface types, their
value methods will take value receivers. For a pointer to those
types, when converted to interface, the interface data contains
the pointer. For that interface to call a value method, it will
need a wrapper method that dereferences the pointer and invokes
the value method. The wrapper method, instead of the actual one,
is put into the itable of the pointer type.
In the runtime, adjust funcPC for the new layout of interfaces of
functions.
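As an illustration (not compiler code), these are the kinds of values
that are pointer-shaped and therefore stored directly in the interface
data word; the wrapped struct type below is invented for the example:

    package main

    import "unsafe"

    type wrapped struct{ p *int } // struct with a single pointer-shaped field

    func main() {
        n := 42
        vals := []interface{}{
            &n,                 // pointer
            unsafe.Pointer(&n), // unsafe.Pointer
            make(chan int),     // channel
            map[string]int{},   // map
            func() {},          // function
            wrapped{&n},        // single-field struct holding a pointer
            [1]*int{&n},        // one-element array of a pointer
        }
        // Each conversion above can store the value itself in the
        // interface data word instead of allocating a separate box.
        _ = vals
    }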
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/168409
From-SVN: r270779
|
|
Previously, each time we did an interface conversion for which the
method table was not known at compile time, we allocated a new
method table.
This CL ports the mechanism of itab caching from the gc runtime,
adapted to our itab representation and method finding mechanism.
With the cache, we reuse the same itab for the same (interface,
concrete) type pair. This reduces allocations in interface
conversions.
Unlike the gc runtime, we don't prepopulate the cache with
statically allocated itabs, as currently we don't have a way to
find them. This means we don't deduplicate run-time allocated
itabs with compile-time allocated ones. But that is not too bad
-- it is just a cache anyway.
As itabs are now never freed, it is also possible to drop the
write barrier for writing the first word of an interface header.
I'll leave this optimization for the future.
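A rough, hypothetical sketch of the caching idea; the real runtime
uses its own itab layout, type descriptors and locking, and all names
below are invented for illustration:

    package main

    import "sync"

    // itab stands in for a per-(interface, concrete) method table.
    type itab struct {
        inter, typ string // placeholders for the real type descriptors
        methods    []uintptr
    }

    type itabKey struct{ inter, typ string }

    var (
        itabLock  sync.Mutex
        itabCache = map[itabKey]*itab{}
    )

    // getitab returns a cached itab for the pair, building it only once.
    func getitab(inter, typ string) *itab {
        itabLock.Lock()
        defer itabLock.Unlock()
        key := itabKey{inter, typ}
        if tab, ok := itabCache[key]; ok {
            return tab // reuse: no allocation on repeated conversions
        }
        tab := &itab{inter: inter, typ: typ} // plus method lookup in reality
        itabCache[key] = tab
        return tab
    }

    func main() {
        a := getitab("io.Reader", "*os.File")
        b := getitab("io.Reader", "*os.File")
        println(a == b) // true: the same itab is reused
    }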
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/171617
From-SVN: r270778
|
|
AIX doesn't allow mmapping an address range that is already mmapped.
Therefore, once the region has been allocated, it must be munmapped
before it can be used.
The corresponding Go Toolchain patch is CL 174059.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/174138
From-SVN: r270615
|
|
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/170706
From-SVN: r270214
|
|
In the C calling convention, on AMD64, and probably a number of
other architectures, a 3-word struct argument is passed on the stack.
This is less efficient than passing it in three registers. Further,
this may affect the code generation in other parts of the program,
even if the function is not actually called.
Slices are common in Go and append is a common slice operation,
which calls growslice in the growing path. To improve the code
generation, pass the slice header's three fields as separate
values, instead of a struct, to growslice.
The drawback is that this makes the runtime implementation
diverge slightly from the gc runtime.
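A self-contained sketch of the calling-convention difference; the
types and function names are illustrative, not the runtime's actual
declarations:

    package main

    import "unsafe"

    type sliceHeader struct {
        data unsafe.Pointer
        len  int
        cap  int
    }

    // Before: the whole 3-word header is a single struct argument,
    // which the C ABI on AMD64 passes on the stack.
    func growsliceStruct(et unsafe.Pointer, old sliceHeader, newcap int) sliceHeader {
        return sliceHeader{old.data, old.len, newcap}
    }

    // After: the three words are separate arguments, so they can be
    // passed in registers.
    func growsliceWords(et, data unsafe.Pointer, length, capacity, newcap int) sliceHeader {
        return sliceHeader{data, length, newcap}
    }

    func main() {
        var h sliceHeader
        _ = growsliceStruct(nil, h, 8)
        _ = growsliceWords(nil, h.data, h.len, h.cap, 8)
    }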
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/168277
From-SVN: r269811
|
|
Since aix/ppc64 was added to the gc toolchain, a mix of new and
old files was created in the gcc toolchain.
This commit corrects the merge for aix/ppc64 and aix/ppc.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/167658
From-SVN: r269797
|
|
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/167749
From-SVN: r269780
|
|
In the runtime there are bad-pointer checks that currently don't
work with the conservative collector. With stack maps, the GC is
precise and the checks should work. Enable them.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/153871
From-SVN: r269406
|
|
Interpreting auxv as []uintptr is incorrect on 64-bit big-endian,
as auxv alternates a 32-bit int with a 64-bit pointer.
Patch from Rainer Orth.
Reviewed-on: https://go-review.googlesource.com/c/164739
From-SVN: r269315
|
|
PR go/89172
internal/cpu, runtime, runtime/pprof: handle function descriptors
When using PPC64 ELF ABI v1 a function address is not a PC, but is the
address of a function descriptor. The first field in the function
descriptor is the actual PC (see
http://refspecs.linuxfoundation.org/ELF/ppc64/PPC-elf64abi.html#FUNC-DES).
The libbacktrace library knows about this, and libgo uses actual PC
values consistently except for the helper function funcPC that appears
in both runtime and runtime/pprof.
This patch fixes funcPC by recording, in the internal/cpu package,
whether function descriptors are being used. We have to check for
function descriptors using a C compiler check, because GCC can be
configured using --with-abi to select the ELF ABI to use.
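The gist of the fix, as a hedged sketch; the flag and helper names
below are assumed for illustration and are not the exact libgo
declarations:

    package main

    import "unsafe"

    // functionDescriptors would be set (via internal/cpu) when the C
    // compiler reports that the target uses ELF ABI v1 descriptors.
    var functionDescriptors = false

    // realPC converts a "function address" into an actual PC. With
    // function descriptors, the address points at a descriptor whose
    // first word is the entry PC.
    func realPC(fn uintptr) uintptr {
        if functionDescriptors {
            return *(*uintptr)(unsafe.Pointer(fn)) // first descriptor field
        }
        return fn // already a PC on other targets
    }

    func main() {
        _ = realPC(uintptr(0x1000)) // illustrative address only
    }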
Fixes https://gcc.gnu.org/PR89172
Reviewed-on: https://go-review.googlesource.com/c/162978
From-SVN: r269266
|
|
Backport of upstream https://golang.org/cl/163859.
This fixes various failures on 32-bit SPARC.
Patch from Eric Botcazou.
Reviewed-on: https://go-review.googlesource.com/c/163860
From-SVN: r269258
|
|
Reviewed-on: https://go-review.googlesource.com/c/163742
From-SVN: r269216
|
|
PR go/86535
runtime: always declare nanotime in Go
For libgo it's always defined in C.
Updates https://gcc.gnu.org/PR86535
Reviewed-on: https://go-review.googlesource.com/c/163743
From-SVN: r269214
|
|
Reviewed-on: https://go-review.googlesource.com/c/162881
From-SVN: r269202
|
|
In signal-triggered stack scan, if the signal is delivered at
a bad time (e.g. in the vdso, or in the middle of setcontext?),
the unwinder may not be able to unwind the whole stack, while it
still reports _URC_END_OF_STACK. So we cannot rely on _URC_END_OF_STACK
to tell whether it successfully scanned the stack. Instead, we check
the last Go frame to see whether it actually reached the end of the
stack. For a Go-created stack, this is runtime.kickoff. For a
C-created stack, we need to record the outermost Go frame when it
enters the Go side.
Also we cannot unwind the stack if the signal is delivered in the
middle of runtime.gogo, halfway through a goroutine switch, where
the g and the stack don't match. Give up in this case as well.
Reviewed-on: https://go-review.googlesource.com/c/159098
From-SVN: r269018
|
|
PR go/89123
internal/cpu, runtime: add S/390 CPU capability support
Patch by Robin Dapp.
Updates https://gcc.gnu.org/PR89123
Reviewed-on: https://go-review.googlesource.com/c/162887
From-SVN: r268941
|
|
Compiling with LTO revealed a number of cases in the runtime and
standard library where C and Go disagreed about the type of an object or
function (or where Go and code generated by the compiler disagreed). In
all cases the underlying representation was the same (e.g., uintptr vs.
void*), so this wasn't causing actual problems, but it did result in a
number of annoying warnings when compiling with LTO.
Reviewed-on: https://go-review.googlesource.com/c/160700
From-SVN: r268923
|
|
PR go/89168
libgo: change gotest to run examples with output
Change the gotest script to act like "go test" and run examples that
have "output" comments. This is not done with full generality, but
just enough to run the libgo tests. Other packages should be tested
with "go test" as usual.
While we're here, clean up some old bits of gotest, and only run
TestXXX functions that are actually in *_test.go files. The latter
change should fix https://gcc.gnu.org/PR89168.
Reviewed-on: https://go-review.googlesource.com/c/162139
From-SVN: r268922
|
|
Patch by Svante Signell.
Reviewed-on: https://go-review.googlesource.com/c/160827
From-SVN: r268465
|
|
Patch by Svante Signell.
Reviewed-on: https://go-review.googlesource.com/c/160823
From-SVN: r268460
|
|
Patch by Svante Signell.
Reviewed-on: https://go-review.googlesource.com/c/160822
From-SVN: r268459
|
|
GCC has supported the __atomic intrinsics since 4.7. They are better
than the __sync intrinsics in that they specify a memory model and,
more importantly for our purposes, they are reliably implemented
either in the compiler or in libatomic.
Fixes https://gcc.gnu.org/PR52084
Reviewed-on: https://go-review.googlesource.com/c/160820
From-SVN: r268458
|
|
If sigtramp and sigtrampgo are both on the stack, n -= framesToDiscard
is executed twice when it should run only once.
Reviewed-on: https://go-review.googlesource.com/c/159238
From-SVN: r268366
|
|
If a panic happens in the runtime we turn that into a fatal error.
We use the caller's PC to determine if the panic call is inside
the runtime. getcallerpc returns the PC immediately after the
call instruction. If the call is the very last instruction of a
function, that PC may not be recognized as belonging to a runtime
function, giving a false result. We need to back the PC off by 1
to the call instruction.
The gc runtime doesn't do this because the gc compiler always
emits an instruction following a panic call, presumably an UNDEF
instruction which turns into an architecture-specific illegal
instruction. Our compiler doesn't do this.
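Conceptually, the check becomes the following sketch; the helper
names are invented, and the real code uses the runtime's function
lookup tables:

    package main

    import "fmt"

    // isRuntimePC stands in for "does this PC fall inside a runtime
    // function?"; pretend the runtime occupies [0x1000, 0x2000).
    func isRuntimePC(pc uintptr) bool { return pc >= 0x1000 && pc < 0x2000 }

    // callerIsRuntime reports whether the panic call site is in the
    // runtime. callerPC is the return address, i.e. the instruction
    // after the call, so back it off by one byte to land inside the
    // call instruction itself.
    func callerIsRuntime(callerPC uintptr) bool {
        return isRuntimePC(callerPC - 1)
    }

    func main() {
        // A return address exactly at the start of the next function
        // would otherwise be misattributed to that function.
        fmt.Println(callerIsRuntime(0x2000)) // true
    }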
Reviewed-on: https://go-review.googlesource.com/c/159437
From-SVN: r268358
|
|
Precise stack scan uses SIGURG to trigger a stack scan. We need
to have the Go signal handler installed for SIGURG.
Reviewed-on: https://go-review.googlesource.com/c/159097
From-SVN: r268230
|
|
PR go/88927
runtime, internal/cpu: fix build for ARM GNU/Linux
Was failing with
../../../libgo/go/internal/cpu/cpu.go:138:2: error: reference to undefined name 'doinit'
138 | doinit()
| ^
Fix it by adding the Go 1.12 internal/cpu/cpu_arm.go file, and the
code in the runtime that initializes the values.
Fixes https://gcc.gnu.org/PR88927.
Reviewed-on: https://go-review.googlesource.com/c/158717
From-SVN: r268131
|
|
Restore some of the fixes that were applied to golang_org/x/net/lif
but were lost when 1.12 moved the directory to internal/x/net/lif.
Add support for reading /proc to fetch argc/argv/env for c-archive mode.
Reviewed-on: https://go-review.googlesource.com/c/158640
From-SVN: r268130
|
|
Reviewed-on: https://go-review.googlesource.com/c/158019
gotools/:
* Makefile.am (go_cmd_vet_files): Update for Go1.12beta2 release.
(GOTOOLS_TEST_TIMEOUT): Increase to 600.
(check-runtime): Export LD_LIBRARY_PATH before computing GOARCH
and GOOS.
(check-vet): Copy golang.org/x/tools into check-vet-dir.
* Makefile.in: Regenerate.
gcc/testsuite/:
* go.go-torture/execute/names-1.go: Stop using debug/xcoff, which
is no longer externally visible.
From-SVN: r268084
|
|
PR go/88202
runtime: in sigprof, skip to sigtrampgo if we don't find sigtramp
Fixes https://gcc.gnu.org/PR88202
Reviewed-on: https://go-review.googlesource.com/c/158218
From-SVN: r268057
|
|
Currently, we dropg (which clears gp.m) after we CAS the g status
to _Grunnable or _Gwaiting. Immediately after CASing the g status,
another thread may CAS it to _Gscan status and scan its stack.
With precise stack scan, it accesses gp.m in order to switch to g
and back (in doscanstackswitch). This races with dropg. If
doscanstackswitch reads gp.m and then dropg runs, the m we restore
at the end of the scan will be a stale value. Worse,
if dropg runs after doscanstackswitch sets the new m, gp will be
running with a nil m.
To fix this, we do dropg before CASing the g status to _Grunnable or
_Gwaiting. We can do this safely if we are CASing from _Grunning,
as we own the g when it is in _Grunning. There is one case where
we CAS from _Gsyscall to _Grunnable. It is not safe to dropg when
it is in _Gsyscall, as precise stack scan needs to read gp.m in
order to signal the m. So we need to introduce a transient state,
_Gexitingsyscall, between _Gsyscall and _Grunnable, where the GC
should not scan its stack.
It is a little unfortunate that we have to add another g status.
We could reuse an existing one (e.g. _Gcopystack), but it is
clearer and safer to just use a new one, as Austin suggested.
Reviewed-on: https://go-review.googlesource.com/c/158157
From-SVN: r268001
|
|
CL 157557 changes the compiler to add one byte of padding to a
non-empty struct ending with a zero-sized field. Add the same
padding to the FFI type, so reflect.Call works.
This fixes test/fixedbugs/issue26335.go in the main repo.
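A small example of the layout issue; the printed sizes are
implementation details and vary by platform:

    package main

    import (
        "fmt"
        "unsafe"
    )

    type T struct {
        x int32
        z struct{} // zero-sized final field
    }

    func main() {
        var t T
        // Without trailing padding, &t.z would point one past the end of
        // the object; the padding keeps &t.z inside it, and the FFI type
        // used by reflect.Call must describe the same size.
        fmt.Println(unsafe.Sizeof(t), unsafe.Offsetof(t.z))
    }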
Reviewed-on: https://go-review.googlesource.com/c/158018
From-SVN: r267956
|
|
This ports https://golang.org/cl/155918 from the master repo.
runtime: panic on uncomparable map key, even if map is empty
Reorg map flags a bit so we don't need any extra space for the extra flag.
This is a pre-req for updating libgo to the Go 1.12beta2 release.
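For reference, a tiny program exercising the new behavior: with this
change the lookup panics even though the map is empty, because []int
is not a comparable (hashable) key:

    package main

    import "fmt"

    func main() {
        m := map[interface{}]bool{} // empty map with an interface key type
        defer func() {
            fmt.Println("recovered:", recover())
        }()
        _ = m[[]int{1}] // hash of unhashable type []int
    }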
Reviewed-on: https://go-review.googlesource.com/c/157858
From-SVN: r267950
|
|
This is the gccgo version of https://golang.org/cl/141822:
Only return a pointer p to the new slice's backing array from makeslice.
Makeslice callers then construct sliceheader{p, len, cap} explicitly
instead of makeslice returning the slice.
This change caused the GCC backend to break the runtime/pprof test by
merging together the identical functions allocateReflectTransient and
allocateTransient2M. This caused the traceback to be other than
expected. Fix that by making the functions not identical.
This is a step toward updating libgo to the Go1.12beta1 release.
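A rough illustration of the makeslice change described above, using
stand-in types and a simplified allocator; the real makeslice lives in
the runtime and is called by compiler-generated code:

    package main

    import (
        "fmt"
        "unsafe"
    )

    // sliceHeader mirrors the runtime's three-word slice layout.
    type sliceHeader struct {
        data unsafe.Pointer
        len  int
        cap  int
    }

    // makeslice is a stand-in: after the change it returns only the
    // pointer to the new backing array, not a whole slice header.
    func makeslice(elemSize uintptr, length, capacity int) unsafe.Pointer {
        buf := make([]byte, elemSize*uintptr(capacity)) // real code uses the GC allocator
        return unsafe.Pointer(&buf[0])
    }

    func main() {
        // The compiler-generated caller now builds the header itself:
        p := makeslice(unsafe.Sizeof(int(0)), 3, 8)
        s := sliceHeader{data: p, len: 3, cap: 8}
        fmt.Println(s.len, s.cap) // 3 8
    }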
Reviewed-on: https://go-review.googlesource.com/c/155937
From-SVN: r267660
|
|
Currently, when collecting a traceback for another goroutine,
getTraceback calls gogo(gp) switching to gp, which will resume in
mcall, which will call gtraceback, which will set up gp->m. There
is a gap between setting the current running g to gp and setting
gp->m. If a profiling signal arrives in between, sigtramp will
see a non-nil gp with a nil m, and will seg fault. Fix this by
setting up gp->m first.
Fixes golang/go#29448.
Reviewed-on: https://go-review.googlesource.com/c/156038
From-SVN: r267658
|
|
The only thing export_arm_test.go does is to export usplit,
which does not exist in gccgo.
Reviewed-on: https://go-review.googlesource.com/c/155760
From-SVN: r267435
|
|
Patch from Rainer Orth.
Reviewed-on: https://go-review.googlesource.com/c/153118
From-SVN: r266890
|
|
This CL adds support of precise stack scan using stack maps to
the runtime. The stack maps are generated by the compiler (if
supported). Each safepoint is associated with a (real or dummy)
landing pad, and its "type info" in the exception table is a
pointer to the stack map. When a stack is scanned, the stack map
is found by the stack unwinding code by inspecting the exception
table (LSDA).
For precise stack scan we need to unwind the stack. There are
three cases:
- If a goroutine is scanning its own stack, it can unwind the
stack and scan the frames.
- If a goroutine is scanning another, stopped, goroutine, it
cannot directly unwind the target stack. We handle this by
switching (runtime.gogo) to the target g, letting it unwind
and scan the stack, and switch back.
- If we are scanning a goroutine that is blocked in a syscall,
we send a signal to the target goroutine's thread, and let the
signal handler unwind and scan the stack. Extra care is needed
as this races with enter/exit syscall.
Currently this is only implemented on linux.
Reviewed-on: https://go-review.googlesource.com/c/140518
From-SVN: r266832
|
|
The current implementation of Gogo::pkgpath_for_symbol was written in
a way that allowed two distinct package paths to map to the same
symbol, which could cause collisions at link time or compile time.
Switch to a better mangling scheme to ensure that we get a unique
packagepath symbol for each package. In the new scheme instead of having
separate mangling schemes for identifiers and package paths, the
main identifier mangler ("go_encode_id") now handles mangling of
both packagepath characters and identifier characters.
The new mangling scheme is more intrusive: "foo/bar.Baz" is mangled as
"foo..z2fbar.Baz" instead of "foo_bar.Baz". To mitigate this, this
patch also adds a demangling capability so that function names
returned from runtime.CallersFrames are converted back to their
original unmangled form.
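As a small, illustrative example of the new mapping for one character
('/' becomes "..z2f"); this is not the full mangler, just the case
quoted above:

    package main

    import (
        "fmt"
        "strings"
    )

    // mangleSlash shows only the '/' rule of the new scheme; other
    // special characters have their own encodings.
    func mangleSlash(pkgpath string) string {
        return strings.ReplaceAll(pkgpath, "/", "..z2f")
    }

    func main() {
        fmt.Println(mangleSlash("foo/bar") + ".Baz") // foo..z2fbar.Baz
    }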
Changing the pkgpath_for_symbol scheme requires updating a number of
//go:linkname directives and C "__asm__" directives to match the new
scheme, as well as updating the 'gotest' driver (which makes
assumptions about the correct mapping from pkgpath symbol to package
name).
Fixes golang/go#27534.
Reviewed-on: https://go-review.googlesource.com/c/135455
From-SVN: r265510
|
|
PR go/87661
runtime: remove unused armArch, hwcap and hardDiv
After CL 140057 these are only written but never read in gccgo.
Reviewed-on: https://go-review.googlesource.com/c/141077
From-SVN: r265439
|
|
LLVM doesn't support non-call exceptions. This test was passing
more or less by luck: if the faulting instruction is between two
calls with the same landing pad (in instruction layout order,
not the program's logic order), it generates a merged PC range
that covers the faulting instruction. If the instruction layout
order changes, or if it uses two different (possibly degenerate)
landing pads, this doesn't work.
Reviewed-on: https://go-review.googlesource.com/c/140517
From-SVN: r264985
|
|
Reviewed-on: https://go-review.googlesource.com/c/140277
From-SVN: r264932
|
|
Nothing in libgo calls checkgoarm, and it relies on a variable, goarm,
that is not set.
Reviewed-on: https://go-review.googlesource.com/c/140057
From-SVN: r264872
|
|
This is enough to let libgo build when configured using
--with-multilib-list=m64,m32,mx32. I don't have an x32-enabled kernel
so I haven't tested whether it executes correctly.
For https://gcc.gnu.org/PR87470
Reviewed-on: https://go-review.googlesource.com/138817
From-SVN: r264772
|