This commit is the result of the following actions:
- Running gdb/copyright.py to update all of the copyright headers to
include 2024,
- Manually updating a few files the copyright.py script told me to
update, these files had copyright headers embedded within the
file,
- Regenerating gdbsupport/Makefile.in to refresh its copyright
date,
- Using grep to find other files that still mentioned 2023. If
these files were updated last year from 2022 to 2023 then I've
updated them this year to 2024.
I've probably missed some dates.  Feel free to fix them up as
you spot them.
|
|
multi-inferior-gpu.exp
The gdb.rocm/multi-inferior-gpu.exp testcase uses a "continue $thread"
command, but this is incorrect. If "continue" is given an argument, it
sets the ignore count of the breakpoint the thread stopped at.
For this testcase it does not really matter since the breakpoint is not
meant to be hit anymore, so whatever the ignore count is won't influence
the outcome of the test. It is worth fixing nevertheless.
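For reference, here is roughly what the argument actually does in a toy
session (illustrative, not from the testcase; the output wording is
approximate):
(gdb) continue 3
Will ignore next 2 crossings of breakpoint 1.  Continuing.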
Change-Id: I0eb674d5529cdeb9e808b74870a29b6077265737
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
Functions of the HIP runtime returning a hipError_t can be marked
nodiscard depending on the configuration[1] (when compiled with C++17).
This patch makes sure that we always check the value returned by
hipDeviceSynchronize and friends, and print an error message when
appropriate. This avoids a wall of warnings when running the testsuite
if the compiler defaults to using C++17.
It is always a good practice to check the return values anyway.
[1] https://github.com/ROCm-Developer-Tools/HIP/blob/docs/5.7.1/include/hip/hip_runtime_api.h#L203-L218
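To illustrate the kind of check involved, here is a minimal sketch (not the
testsuite's actual code; CHECK_HIP is a hypothetical helper) showing how a
hipError_t return value can be consumed and reported:

#include <hip/hip_runtime.h>
#include <cstdio>
#include <cstdlib>

/* Hypothetical helper: report and exit if a HIP call does not return
   hipSuccess.  Assigning the hipError_t also satisfies nodiscard.  */
#define CHECK_HIP(expr)                                           \
  do                                                              \
    {                                                             \
      hipError_t err_ = (expr);                                   \
      if (err_ != hipSuccess)                                     \
        {                                                         \
          std::fprintf (stderr, "%s failed: %s\n", #expr,         \
                        hipGetErrorString (err_));                \
          std::exit (EXIT_FAILURE);                               \
        }                                                         \
    }                                                             \
  while (0)

int
main ()
{
  CHECK_HIP (hipDeviceSynchronize ());
  return 0;
}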
Change-Id: I2a819a8ac45f4bcf814efe9a2ff12c6a7ad22f97
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
The amd-dbgapi library exposes a setting called "memory precision" for
AMD GPUs [1]. Here's a copy of the description of the setting:
The AMD GPU can overlap the execution of memory instructions with other
instructions. This can result in a wave stopping due to a memory violation
or hardware data watchpoint hit with a program counter beyond the
instruction that caused the wave to stop.
Some architectures allow the hardware to be configured to always wait for
memory operations to complete before continuing. This will result in the
wave stopping at the instruction immediately after the one that caused the
stop event. Enabling this mode can make execution of waves significantly
slower.
Expose this option through a new "amdgpu precise-memory" setting.
The precise memory setting is per inferior. The setting is transferred
from one inferior to another when using the clone-inferior command, or
when a new inferior is created following an exec or a fork.
It can be set before starting the inferior, in which case GDB will
attempt to apply what the user wants when attaching amd-dbgapi. If the
user has requested to enable precise memory, but it can't be enabled
(not all hardware supports it), GDB prints a warning.
If precise memory is disabled, GDB prints a warning when hitting a
memory exception (translated into GDB_SIGNAL_SEGV or GDB_SIGNAL_BUS),
saying that the stop location may not be precise.
Note that the precise memory setting also affects memory watchpoint
reporting, but the watchpoint support for AMD GPUs hasn't been
upstreamed to GDB yet. When we do upstream watchpoint support, GDB will
produce a similar warning message when stopping due to a watchpoint if
precise memory is disabled.
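For illustration, the setting is used like any other per-inferior GDB
setting (the commands shown are based on the setting name described above;
output is omitted since its exact wording may differ):
(gdb) set amdgpu precise-memory on
(gdb) show amdgpu precise-memory
(gdb) run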
Add a handful of tests. Add a util proc
"hip_devices_support_precise_memory", which indicates if all devices
used for testing support that feature.
[1] https://github.com/ROCm-Developer-Tools/ROCdbgapi/blob/687374258a27b5aab1309a7e8ded719e2f1ed3b1/include/amd-dbgapi.h.in#L6300-L6317
Change-Id: Ife1a99c0e960513da375ced8f8afaf8e47a61b3f
Approved-By: Lancelot Six <lancelot.six@amd.com>
|
|
The gdb/testsuite/gdb.rocm/multi-inferior-gpu.cpp testcase contains a
call to execl which does not have NULL as a last argument. This is
an invalid use of execl. This patch fixes this oversight.
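For reference, a minimal sketch of the difference (illustrative program and
arguments, not the testcase's actual call):

#include <unistd.h>

int
main ()
{
  /* Wrong: the variadic argument list is not terminated, which is
     undefined behavior:
       execl ("/bin/echo", "echo", "hello");  */

  /* Correct: execl's argument list must end with a null pointer,
     conventionally written (char *) NULL.  */
  execl ("/bin/echo", "echo", "hello", (char *) NULL);
  return 1;
}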
Change-Id: I03b60abe30468d71ba5089b240c6d00f9b8883b2
Approved-By: Tom Tromey <tom@tromey.com>
|
|
When debugging a multi-process application where a parent spawns
multiple child processes using the ROCm runtime, I see the following
assertion failure:
../../gdb/amd-dbgapi-target.c:1071: internal-error: process_one_event: Assertion `runtime_state == AMD_DBGAPI_RUNTIME_STATE_UNLOADED' failed.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
----- Backtrace -----
0x556e9a318540 gdb_internal_backtrace_1
../../gdb/bt-utils.c:122
0x556e9a318540 _Z22gdb_internal_backtracev
../../gdb/bt-utils.c:168
0x556e9a730224 internal_vproblem
../../gdb/utils.c:396
0x556e9a7304e0 _Z15internal_verrorPKciS0_P13__va_list_tag
../../gdb/utils.c:476
0x556e9a87aeb4 _Z18internal_error_locPKciS0_z
../../gdbsupport/errors.cc:58
0x556e9a29f446 process_one_event
../../gdb/amd-dbgapi-target.c:1071
0x556e9a29f446 process_event_queue
../../gdb/amd-dbgapi-target.c:1156
0x556e9a29faf2 _ZN17amd_dbgapi_target4waitE6ptid_tP17target_waitstatus10enum_flagsI16target_wait_flagE
../../gdb/amd-dbgapi-target.c:1262
0x556e9a6b0965 _Z11target_wait6ptid_tP17target_waitstatus10enum_flagsI16target_wait_flagE
../../gdb/target.c:2586
0x556e9a4c221f do_target_wait_1
../../gdb/infrun.c:3876
0x556e9a4d8489 operator()
../../gdb/infrun.c:3935
0x556e9a4d8489 do_target_wait
../../gdb/infrun.c:3964
0x556e9a4d8489 _Z20fetch_inferior_eventv
../../gdb/infrun.c:4365
0x556e9a87b915 gdb_wait_for_event
../../gdbsupport/event-loop.cc:694
0x556e9a87c3a9 gdb_wait_for_event
../../gdbsupport/event-loop.cc:593
0x556e9a87c3a9 _Z16gdb_do_one_eventi
../../gdbsupport/event-loop.cc:217
0x556e9a521689 start_event_loop
../../gdb/main.c:412
0x556e9a521689 captured_command_loop
../../gdb/main.c:476
0x556e9a523c04 captured_main
../../gdb/main.c:1320
0x556e9a523c04 _Z8gdb_mainP18captured_main_args
../../gdb/main.c:1339
0x556e9a24b1bf main
../../gdb/gdb.c:32
---------------------
Before diving into why this error appears, let's explore how things are
expected to work in normal circumstances. When a process being debugged
starts using the ROCm runtime, the following happens:
- The runtime registers itself to the driver.
- The driver creates a "runtime loaded" event and notifies the debugger
that a new event is available by writing to a file descriptor which is
registered in GDB's main event loop.
- GDB core calls the callback associated with this file descriptor
(dbgapi_notifier_handler). Because the amd-dbgapi-target is not
pushed at this point, the handler pulls the "runtime loaded" event
from the driver (this is the only event which can be available at this
point) and eventually pushes the amd-dbgapi-target on the inferior's
target stack.
In a nutshell, this is the expected AMDGPU runtime activation process.
From there, when new events are available regarding the GPU threads, the
same file descriptor is written to.  The callback sees that the
amd-dbgapi-target is pushed, so it marks the amd_dbgapi_async_event_handler.
This will later cause amd_dbgapi_target::wait to be called. The wait
method pulls all the available events from the driver and handles them.
The wait method returns the information conveyed by the first event; the
other events are cached for later calls of the wait method.
Note that because we are under the wait method, we know that the
amd-dbgapi-target is pushed on the inferior target stack. This implies
that the runtime activation event has been seen already. As a
consequence, we cannot receive another event indicating that the runtime
gets activated. This is what the failing assertion checks.
In the case where we have multiple inferiors, however, there is a flaw in
what has been described above.  If one inferior (let's call it inferior
1) already has the amd-dbgapi-target pushed to its target stack and
another inferior (inferior 2) activates the ROCm runtime, here is what
can happen:
- The driver creates the runtime activation event for inferior 2 and writes to
the associated file descriptor.
- GDB has inferior 1 selected and calls target_wait for some reason.
- This prompts amd_dbgapi_target::wait to be called. The method pulls
all events from the driver, including the runtime activation event for
inferior 2, leading to the assertion failure.
The fix for this problem is simple: make sure that amd_dbgapi_target::wait
only pulls events for the current inferior from the driver.  This is what
this patch implements.
This patch also includes a testcase which could fail before this change.
The fix has been tested on a system with multiple GPUs, which gives more
chances of reproducing the original bug.  It has also been tested on top
of the downstream ROCgdb port, which has more AMDGPU-related tests.  The
testcase has been tested with `make check check-read1 check-readmore`.
Approved-By: Pedro Alves <pedro@palves.net>
|
|
Prior to this patch, it's not possible for GDB to debug GPU code in fork
children or after an exec. The amd-dbgapi target attaches to processes
when an inferior appears due to a "run" or "attach" command, but not
after a fork or exec. This patch adds support for that, such that it's
possible for an inferior to fork and for GDB to debug the GPU code in
the child.
To achieve that, use the inferior_forked and inferior_execd observers.
In the case of fork, we have nothing to do if `child_inf` is nullptr,
meaning that GDB won't debug the child. We also don't attach if the
inferior has vforked. We are already attached to the parent's address
space, which is shared with the child, so trying to attach would cause
problems.  And anyway, the inferior can't do anything other than exec or
exit; it certainly won't start GPU kernels before exec'ing.
In the case of exec, we detach from the exec'ing inferior and attach to
the following inferior. This works regardless of whether they are the
same or not. If they are the same, meaning the execution continues in
the existing inferior, we need to do a detach/attach anyway, as
amd-dbgapi needs to be aware of the new address space created by the
exec.
Note that we use observers and not target_ops::follow_{fork,exec} here.
When the amd-dbgapi target is compiled in, it will attach (in the
amd_dbgapi_process_attach sense, not the ptrace sense) to native
inferiors when they appear, but won't push itself on the inferior's
target stack just yet. It only pushes itself if the inferior
initializes the ROCm runtime. So, if a non-GPU-using inferior calls
fork, an amd_dbgapi_target::follow_fork method would not get called.
Same for exec. A previous version of the code had the amd-dbgapi target
pushed all the time, in which case we could use the target methods. But
we prefer having the target pushed only when necessary, as it's less
intrusive when doing native debugging that doesn't involve the GPU.
Change-Id: I5819c151c371120da8bab2fa9cbfa8769ba1d6f9
Reviewed-By: Pedro Alves <pedro@palves.net>
|
|
The copyright years in the ROCm files (e.g. solib-rocm.c) are wrong:
they end in 2022 instead of 2023.  I suppose this is because I posted (or at
least prepared) the patches in 2022 but merged them in 2023, and forgot
to update the year. I found a bunch of other files that are in the same
situation. Fix them all up.
Change-Id: Ia55f5b563606c2ba6a89046f22bc0bf1c0ff2e10
Reviewed-By: Tom Tromey <tom@tromey.com>
|
|
Rename skip_hipcc_tests to allow_hipcc_tests so it can be used as a
"require" predicate in tests.
Use require in gdb.rocm/simple.exp.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
This patch adds the foundation for GDB to be able to debug programs
offloaded to AMD GPUs using the AMD ROCm platform [1]. The latest
public release of ROCm at the time of writing is 5.4, so
this is what this patch targets.
The ROCm platform allows host programs to schedule bits of code for
execution on GPUs or similar accelerators. The programs running on GPUs
are typically referred to as `kernels` (not related to operating system
kernels).
Programs offloaded with the AMD ROCm platform can be written in the HIP
language [2], OpenCL and OpenMP, but we're going to focus on HIP here.
The HIP language consists of a C++ Runtime API and kernel language.
Here's an example of a very simple HIP program:
#include "hip/hip_runtime.h"
#include <cassert>

__global__ void
do_an_addition (int a, int b, int *out)
{
  *out = a + b;
}

int
main ()
{
  int *result_ptr, result;

  /* Allocate memory for the device to write the result to.  */
  hipError_t error = hipMalloc (&result_ptr, sizeof (int));
  assert (error == hipSuccess);

  /* Run `do_an_addition` on one workgroup containing one work item.  */
  do_an_addition<<<dim3(1), dim3(1), 0, 0>>> (1, 2, result_ptr);

  /* Copy result from device to host.  Note that this acts as a
     synchronization point, waiting for the kernel dispatch to complete.  */
  error = hipMemcpyDtoH (&result, result_ptr, sizeof (int));
  assert (error == hipSuccess);

  printf ("result is %d\n", result);
  assert (result == 3);

  return 0;
}
This program can be compiled with:
$ hipcc simple.cpp -g -O0 -o simple
... where `hipcc` is the HIP compiler, shipped with ROCm releases. This
generates an ELF binary for the host architecture, containing another
ELF binary with the device code. The ELF for the device can be
inspected with:
$ roc-obj-ls simple
1 host-x86_64-unknown-linux file://simple#offset=8192&size=0
1 hipv4-amdgcn-amd-amdhsa--gfx906 file://simple#offset=8192&size=34216
$ roc-obj-extract 'file://simple#offset=8192&size=34216'
$ file simple-offset8192-size34216.co
simple-offset8192-size34216.co: ELF 64-bit LSB shared object, *unknown arch 0xe0* version 1, dynamically linked, with debug_info, not stripped
("unknown arch 0xe0" is the amdgcn architecture, which my `file` doesn't
know about)
Running the program gives the very unimpressive result:
$ ./simple
result is 3
While running, this host program has copied the device program into the
GPU's memory and spawned an execution thread on it. The goal of this
GDB port is to let the user debug host threads and these GPU threads
simultaneously. Here's a sample session using a GDB with this patch
applied:
$ ./gdb -q -nx --data-directory=data-directory ./simple
Reading symbols from ./simple...
(gdb) break do_an_addition
Function "do_an_addition" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (do_an_addition) pending.
(gdb) r
Starting program: /home/smarchi/build/binutils-gdb-amdgpu/gdb/simple
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff5db7640 (LWP 1082911)]
[New Thread 0x7ffef53ff640 (LWP 1082913)]
[Thread 0x7ffef53ff640 (LWP 1082913) exited]
[New Thread 0x7ffdecb53640 (LWP 1083185)]
[New Thread 0x7ffff54bf640 (LWP 1083186)]
[Thread 0x7ffdecb53640 (LWP 1083185) exited]
[Switching to AMDGPU Wave 2:2:1:1 (0,0,0)/0]
Thread 6 hit Breakpoint 1, do_an_addition (a=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>,
b=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>,
out=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>) at simple.cpp:24
24 *out = a + b;
(gdb) info inferiors
Num Description Connection Executable
* 1 process 1082907 1 (native) /home/smarchi/build/binutils-gdb-amdgpu/gdb/simple
(gdb) info threads
Id Target Id Frame
1 Thread 0x7ffff5dc9240 (LWP 1082907) "simple" 0x00007ffff5e9410b in ?? () from /opt/rocm-5.4.0/lib/libhsa-runtime64.so.1
2 Thread 0x7ffff5db7640 (LWP 1082911) "simple" __GI___ioctl (fd=3, request=3222817548) at ../sysdeps/unix/sysv/linux/ioctl.c:36
5 Thread 0x7ffff54bf640 (LWP 1083186) "simple" __GI___ioctl (fd=3, request=3222817548) at ../sysdeps/unix/sysv/linux/ioctl.c:36
* 6 AMDGPU Wave 2:2:1:1 (0,0,0)/0 do_an_addition (
a=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>,
b=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>,
out=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>) at simple.cpp:24
(gdb) bt
Python Exception <class 'gdb.error'>: Unhandled dwarf expression opcode 0xe1
#0 do_an_addition (a=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>,
b=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>,
out=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>) at simple.cpp:24
(gdb) continue
Continuing.
result is 3
warning: Temporarily disabling breakpoints for unloaded shared library "file:///home/smarchi/build/binutils-gdb-amdgpu/gdb/simple#offset=8192&size=67208"
[Thread 0x7ffff54bf640 (LWP 1083186) exited]
[Thread 0x7ffff5db7640 (LWP 1082911) exited]
[Inferior 1 (process 1082907) exited normally]
One thing to notice is the host and GPU threads appearing under
the same inferior. This is a design goal for us, as programmers tend to
think of the threads running on the GPU as part of the same program as
the host threads, so showing them in the same inferior in GDB seems
natural. Also, the host and GPU threads share a global memory space,
which fits the inferior model.
Another thing to notice is the error messages when trying to read
variables or printing a backtrace. This is expected for the moment,
since the AMD GPU compiler produces DWARF that uses some
non-standard extensions:
https://llvm.org/docs/AMDGPUDwarfExtensionsForHeterogeneousDebugging.html
There were already some patches posted by Zoran Zaric earlier to make
GDB support these extensions:
https://inbox.sourceware.org/gdb-patches/20211105113849.118800-1-zoran.zaric@amd.com/
We think it's better to get the basic support for AMD GPU in first,
which will then give a better justification for GDB to support these
extensions.
GPU threads are named `AMDGPU Wave`: a wave is essentially a hardware
thread using the SIMT (single-instruction, multiple-threads) [3]
execution model.
GDB uses the amd-dbgapi library [4], included in the ROCm platform, for
a few things related to AMD GPU threads debugging. Different components
talk to the library, as shown on the following diagram:
+---------------------------+     +-------------+     +------------------+
| GDB   | amd-dbgapi target | <-> |     AMD     |     |   Linux kernel   |
|       +-------------------+     |  Debugger   |     +--------+         |
|       | amdgcn gdbarch    | <-> |     API     | <=> | AMDGPU |         |
|       +-------------------+     |             |     | driver |         |
|       | solib-rocm        | <-> | (dbgapi.so) |     +--------+---------+
+---------------------------+     +-------------+
- The amd-dbgapi target is a target_ops implementation used to control
execution of GPU threads. While the debugging of host threads works
by using the ptrace / wait Linux kernel interface (as usual), control
of GPU threads is done through a special interface (dubbed `kfd`)
exposed by the `amdgpu` Linux kernel module. GDB doesn't interact
directly with `kfd`, but instead goes through the amd-dbgapi library
(AMD Debugger API on the diagram).
Since it provides execution control, the amd-dbgapi target should
normally be a process_stratum_target, not just a target_ops. More
on that later.
- The amdgcn gdbarch (describing the hardware architecture of the GPU
execution units) offloads some requests to the amd-dbgapi library,
so that knowledge about the various architectures doesn't need to be
duplicated and baked into GDB.  This is the case, for example, for
the list of registers.
- The solib-rocm component is an solib provider that fetches the list of
code objects loaded on the device from the amd-dbgapi library, and
makes GDB read their symbols. This is very similar to other solib
providers that handle shared libraries, except that here the shared
libraries are the pieces of code loaded on the device.
Given that Linux host threads are managed by the linux-nat target, and
the GPU threads are managed by the amd-dbgapi target, having all threads
appear in the same inferior requires the two targets to be in that
inferior's target stack. However, there can only be one
process_stratum_target in a given target stack, since there can be only
one target per slot. To achieve it, we therefore resort to the hack^W
solution of placing the amd-dbgapi target in the arch_stratum slot of
the target stack, on top of the linux-nat target. Doing so allows the
amd-dbgapi target to intercept target calls and handle them if they
concern GPU threads, and offload to beneath otherwise. See
amd_dbgapi_target::fetch_registers for a simple example:
void
amd_dbgapi_target::fetch_registers (struct regcache *regcache, int regno)
{
  if (!ptid_is_gpu (regcache->ptid ()))
    {
      beneath ()->fetch_registers (regcache, regno);
      return;
    }

  // handle it
}
ptids of GPU threads are crafted with the following pattern:
(pid, 1, wave id)
Where pid is the inferior's pid and "wave id" is the wave handle handed
to us by the amd-dbgapi library (in practice, a monotonically
incrementing integer). The idea is that on Linux systems, the
combination (pid != 1, lwp == 1) is not possible. lwp == 1 would always
belong to the init process, which would also have pid == 1 (and it's
improbable for the init process to offload work to the GPU, much less
for the user to debug it).  We can therefore differentiate GPU and
non-GPU ptids this way. See ptid_is_gpu for more details.
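As an illustration of the scheme, here is a simplified sketch (not the
actual ptid_is_gpu implementation; it assumes GDB's ptid_t from
gdbsupport/ptid.h is available):

#include "gdbsupport/ptid.h"

/* Simplified sketch: on Linux, lwp 1 can only belong to pid 1 (init), so a
   ptid crafted as (pid, 1, wave id) with pid != 1 cannot collide with a
   real host thread.  */
static bool
ptid_is_gpu_sketch (ptid_t ptid)
{
  return ptid.pid () != 1 && ptid.lwp () == 1;
}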
Note that we believe that this scheme could break down in the context of
containers, where the initial process executed in a container has pid 1
(in its own pid namespace). For instance, if you were to execute a ROCm
program in a container, then spawn a GDB in that container and attach to
the process, it will likely not work. This is a known limitation. A
workaround for this is to have a dummy process (like a shell) fork and
execute the program of interest.
The amd-dbgapi target watches native inferiors, and "attaches" to them
using amd_dbgapi_process_attach, which gives it a notifier fd that is
registered in the event loop (see enable_amd_dbgapi). Note that this
isn't the same "attach" as in PTRACE_ATTACH, but being ptrace-attached
is a precondition for amd_dbgapi_process_attach to work. When the
debugged process enables the ROCm runtime, the amd-dbgapi target gets
notified through that fd, and pushes itself on the target stack of the
inferior. The amd-dbgapi target is then able to intercept target_ops
calls. If the debugged process disables the ROCm runtime, the
amd-dbgapi target unpushes itself from the target stack.
This way, the amd-dbgapi target's footprint stays minimal when debugging
a process that doesn't use the AMD ROCm platform: it does not intercept
target calls.
The amd-dbgapi library is found using pkg-config. Since enabling
support for the amdgpu architecture (amdgpu-tdep.c) depends on the
amd-dbgapi library being present, we have the following logic for
the interaction with --target and --enable-targets:
- if the user explicitly asks for amdgcn support with
--target=amdgcn-*-* or --enable-targets=amdgcn-*-*, we probe for
the amd-dbgapi and fail if not found
- if the user uses --enable-targets=all, we probe for amd-dbgapi,
enable amdgcn support if found, disable amdgcn support if not found
- if the user uses --enable-targets=all and --with-amd-dbgapi=yes,
we probe for amd-dbgapi, enable amdgcn if found and fail if not found
- if the user uses --enable-targets=all and --with-amd-dbgapi=no,
we do not probe for amd-dbgapi, disable amdgcn support
- otherwise, amd-dbgapi is not probed for and support for amdgcn is not
enabled
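For example, a build that requires amdgcn support on top of all other
targets could be configured like this (illustrative invocation, using only
the flags described above):
$ ./configure --enable-targets=all --with-amd-dbgapi=yes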
Finally, a simple test is included. It only tests hitting a breakpoint
in device code and resuming execution, pretty much like the example
shown above.
[1] https://docs.amd.com/category/ROCm_v5.4
[2] https://docs.amd.com/bundle/HIP-Programming-Guide-v5.4
[3] https://en.wikipedia.org/wiki/Single_instruction,_multiple_threads
[4] https://docs.amd.com/bundle/ROCDebugger-API-Guide-v5.4
Change-Id: I591edca98b8927b1e49e4b0abe4e304765fed9ee
Co-Authored-By: Zoran Zaric <zoran.zaric@amd.com>
Co-Authored-By: Laurent Morichetti <laurent.morichetti@amd.com>
Co-Authored-By: Tony Tye <Tony.Tye@amd.com>
Co-Authored-By: Lancelot SIX <lancelot.six@amd.com>
Co-Authored-By: Pedro Alves <pedro@palves.net>
|