|
"pre-commit autoupdate" suggests a new version of black. This version
seems to want to change how destructuring assignments are formatted.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
|
|
This changes the DAP code so that constants will now be returned by a
DAP scopes request. This is perhaps slightly strange with Ada
enumerators, but on the other hand this is consistent with what the
CLI does.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
A co-worker reported that certain symbols weren't appearing in the DAP
'scopes' response. In particular, symbols with non-ASCII names didn't
appear, though further research showed that this was in fact because
the variable in question was actually a constant.
Unfortunately Ada still requires the user to set the Ada source
character set in order to properly display symbol names. For DAP, it
seemed to make sense to allow this as a launch parameter. This patch
implements this.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
This changes some DAP code to explicitly use a symbol's print name.
Some places were using '.name'; and while 'str' does use the print
name, it seems better to be both consistent and explicit.
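For illustration only (not part of the patch), the distinction looks
roughly like this, assuming 'sym' is a gdb.Symbol obtained from, say,
gdb.lookup_symbol:
# 'sym.name' is the search name, which for Ada can be the encoded form;
# 'sym.print_name' is the spelling gdb shows to the user, and str(sym)
# happens to produce the same text.
def display_name(sym):
    return sym.print_name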
|
|
This updates the copyright headers to include 2026. I did this by
running gdb/copyright.py and then manually modifying a few files as
noted by the script.
|
|
This adds a way for DAP clients to catch unhandled Ada exceptions,
similar to the "catch exception unhandled" CLI command.
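A client turns this on through a setExceptionBreakpoints request; as a
rough sketch, the arguments as seen by the Python handler would look
like this (the filter id below is illustrative -- the real ids are the
ones gdb advertises in its 'initialize' response):
# Hypothetical setExceptionBreakpoints arguments enabling the new filter.
args = {"filters": ["ada-unhandled"]}   # roughly: catch exception unhandled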
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
In PR dap/33228, we changed gdb to gracefully report an error if a DAP
'variables' request asked for more variables than had been reported.
This behavior was clarified in the spec, see
https://github.com/microsoft/debug-adapter-protocol/issues/571
This patch changes gdb to conform to the specified approach, namely
truncating the list rather than erroring.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33228
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
gdb's implementation of the DAP 'disconnect' request was incorrect in
a few ways.
First, the 'terminateDebuggee' field is optional, and has a special
meaning when not supplied: it should do whatever the default is.
Second, if the inferior was attached, it should detach rather than
terminate by default.
Finally, if the inferior was not started at all, it seems reasonable
for this request to simply succeed silently -- currently it returns
"success: false" with the reason being that the inferior isn't
running.
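Putting those rules together, the intended behavior is roughly the
following sketch (placeholder names, not the literal patch):
# 'args' is the request's arguments dict; 'attached' says whether gdb
# attached to the inferior rather than starting it.
def disconnect_action(args, attached, inferior_running):
    if not inferior_running:
        return "nothing"   # no inferior: just succeed silently
    # When terminateDebuggee is omitted, use the default: detach from
    # attached inferiors, kill the ones gdb started itself.
    terminate = args.get("terminateDebuggee", not attached)
    return "kill" if terminate else "detach"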
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
Some Debug Adapter Protocol clients like Helix set the optional
"arguments" field of the ConfigurationDone request to null, which is a
bit odd but seems to be allowed by the protocol specification. Before
this patch, Python exceptions would be raised on such requests. This
patch makes it so these requests are treated as if the "arguments"
field was absent.
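The normalization amounts to something like this (sketch; 'request' is
assumed to be the decoded JSON object):
args = request.get("arguments")
if args is None:
    # A literal null is treated the same as a missing field.
    args = {}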
|
|
This changes DAP to ignore the case where a pretty-printer returns a
negative number from the num_children method. It didn't seem worth
writing a test case for this.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33594
Reviewed-By: Ciaran Woodward <ciaranwoodward@xmos.com>
|
|
A user pointed out that if multiple breakpoints are set at the same
spot, in DAP mode, then changing the breakpoints won't reset all of
them.
The problem here is that the breakpoint map only stores a single
breakpoint, so if two breakpoints have the same key, only one will be
stored. Then, when breakpoints are changed, the "missing" breakpoint
will not be deleted.
The fix is to change the map to store a list of breakpoints.
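In outline, the fix looks something like this (sketch with placeholder
helper names):
from collections import defaultdict

# Several breakpoints can share one key, so remember all of them and
# delete whatever is no longer requested.
_bp_map = defaultdict(list)   # key -> [gdb.Breakpoint, ...]

def _remember(key, bp):
    _bp_map[key].append(bp)

def _clear(key):
    for bp in _bp_map.pop(key, []):
        bp.delete()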
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33467
Reviewed-By: Ciaran Woodward <ciaranwoodward@xmos.com>
|
|
This renames the variable 'breakpoint_map' in DAP's breakpoint.py,
adding an underscore to make it clear that it is private to the
module.
Reviewed-By: Ciaran Woodward <ciaranwoodward@xmos.com>
|
|
pre-commit (really flake8) points out that a recent change to
frame_filters.py left an unused import. This patch fixes the problem.
|
|
With the recent addition of the gdb.Style Python API, this commit goes
through the gdb.Command sub-classes which we ship with GDB and adds
some styling support.
This adds the 'title' style in a couple of places where we lay out
tables, and uses the 'filename' style where we are printing filenames.
While making these changes I also made a couple of related fixes.
In 'info frame-filter', 'info missing-objfile-handlers', 'info
pretty-printer', and 'info xmethod', we would sometimes print the
gdb.Progspace.filename unconditionally, even though this field can
sometimes be None. To better handle this case, I now check for None,
and print '<no-file>' instead. We already printed that same string
for the program space name in at least one other case, so this change
makes things a little more consistent.
The '<no-file>' string is not formatted with the filename style; only
an actual filename gets that styling.
The other fix I made was in 'maint info python-disassemblers'. Here
I've added an extra space between the two columns in the output
table. The two columns are 'Architecture' and 'Disassembler Name'.
Given that one column contains whitespace, it was rather confusing
having a single space between columns. Luckily the tests don't depend
on a single space, so nothing needs updating for this change.
Additionally, in 'info frame-filter' I've updated the exception
handling to use the gdb.warning function, rather than just printing a
line of output. This means that should this case occur we get the
neat little emoji. We have no tests that trigger this warning, and I
couldn't figure out how to write one. In the end, I just hacked the
Python code to raise an exception and checked the output looked
reasonable. I suspect this warning might be a hard one to trigger!
Approved-By: Tom Tromey <tom@tromey.com>
|
|
The DAP breakpoint code has some helper functions that don't really
provide much value any more. This patch removes them.
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
"pre-commit run --all" shows some flake8 warnings coming from a recent
patch. There was no real bug here, but this fix silences the
warnings.
|
|
Add a new helper class gdb.StyleParameterSet. This new class can be
used to simplify creation of new style parameter sets. A style
parameter set is the 'foreground', 'background', and (optionally), the
'intensity' settings, all grouped under a single prefix command.
An example usage is:
(gdb) python s = gdb.StyleParameterSet("my-style")
(gdb) show style my-style
style my-style background: The "my-style" style background color is: none
style my-style foreground: The "my-style" style foreground color is: none
style my-style intensity: The "my-style" style display intensity is: normal
(gdb)
Having created a gdb.StyleParameterSet, the object itself can be used
to access a named style corresponding to the setting group, like this:
(gdb) python print(s.style)
<gdb.Style name='my-style', fg=none, bg=none, intensity=normal>
(gdb)
Of course, having access to the gdb.Style makes it easy to change the
settings, or the settings can be adjusted via the normal CLI 'set'
commands.
As gdb.StyleParameterSet manages a set of parameters, and the
gdb.Parameter class uses Parameter.value as the attribute for reading a
parameter's value, there is also StyleParameterSet.value. This is just
an alias for StyleParameterSet.style; that is, it allows the gdb.Style
object to be read and written too.
It is worth noting that this class only creates a single level of
prefix command. As an example, GDB has the style 'disassembler mnemonic',
where the 'disassembler' part is a group of related styles. If a user
wanted to create:
style
  my-style-group
    style-1
    style-2
    style-3
Where each of 'style-1', 'style-2', and 'style-3' will have the full
set of 'foreground', 'background', and 'intensity', then the
gdb.StyleParameterSet can be used to create the 'style-N' part, but
the user will have to create the 'my-style-group' prefix themselves,
possibly using gdb.ParameterPrefix, e.g.:
gdb.ParameterPrefix("style my-style-group", gdb.COMMAND_NONE)
gdb.StyleParameterSet("my-style-group style-1")
gdb.StyleParameterSet("my-style-group style-2")
gdb.StyleParameterSet("my-style-group style-3")
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Some code in next.py avoids exec_and_log due to its exception
behavior. Now that exec_and_log always forwards exceptions, this is
easily fixed.
|
|
This changes the DAP exec_and_log function to always transform an
exception into a DAPException and propagate it.
As the bug points out, we haven't always wrapped calls when
appropriate. I think it's better to cause the request to fail by
default; if any spot truly needs to ignore errors, that is readily
done at the point of call.
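Conceptually the wrapper becomes something like this (a sketch, not the
exact code; DAPException is the module's exception class and 'log'
stands for its logging helper):
import gdb

def exec_and_log(cmd):
    log("+++ " + cmd)
    try:
        gdb.execute(cmd, from_tty=True, to_string=True)
    except gdb.error as e:
        # Turn any gdb error into a DAPException so the request fails.
        raise DAPException(str(e)) from e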
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33346
|
|
The Invoker used to be more convenient, before DAP requests were run
on the gdb thread by default. Now it is barely used and easily
replaced by a couple of lambdas.
|
|
Currently, the gdb DAP implementation doesn't provide a way to filter
based on the thrown Ada exception.
There isn't really an ideal way to handle this in DAP:
* Requiring an IDE to use an expression checking $_ada_exception
exposes the IDE to any workarounds needed to get this correct (see
ada-lang.c).
* The setExceptionBreakpoints "filterOptions" field doesn't allow a
special kind of condition to be set. (We could add one but we've
generally avoided gdb-specific extensions.)
* The "exceptionOptions" approach is under-documented. It could be
used but it would have to be in a somewhat gdb-specific way anyway
-- and this approach does not allow a separate condition that is an
expression.
So, after some internal discussion, we agreed that it isn't all that
useful to have conditions on Ada exception catchpoints. This patch
changes the implementation to treat the condition as an exception name
here.
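With that, a client that wants to stop only on a particular exception
sends something like the following (sketch; the filter id is
illustrative):
# The 'condition' here is interpreted as an Ada exception name, not as
# an expression.
args = {
    "filterOptions": [
        {"filterId": "exception", "condition": "Constraint_Error"}
    ]
}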
|
|
This changes gdb.printing.make_visualizer to treat an optimized-out
pointer as a scalar variable -- that is, one that does not advertise
any children. This makes sense because such a pointer cannot be
dereferenced.
The test case checks this case, plus it ensures that synthetic
pointers still continue to work.
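The check boils down to something like this (sketch; the surrounding
make_visualizer logic is omitted):
import gdb

def pointer_is_scalar(value):
    # An optimized-out pointer cannot be dereferenced, so it should not
    # advertise any children.
    return value.type.code == gdb.TYPE_CODE_PTR and value.is_optimized_out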
Approved-By: Andrew Burgess <aburgess@redhat.com>
|
|
In VariableReference.to_object, we try to convert a gdb.Value to an
int without checking if the value is actually available. This came to
light in PR gdb/33345, after the x86 CET shadow stack patches were
merged.
If the x86 CET shadow stack register is available on the machine,
but the shadow stack feature is not enabled at run time, then the
register will show as "<unavailable>".
As the register is of type 'void *', the DAP code tries to add a
'memoryReference' attribute with the value of the register formatted
as hex. This will fail if the register is unavailable.
To test this change you'll need:
(a) a machine which supports the shadow stack feature, and
(b) to revert the changes from commit 63b862be762e1e6e7 in the file
gdb.dap/scopes.exp.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33345
Reviewed-By: Christina Schimpe <christina.schimpe@intel.com>
|
|
PR dap/33228 points out a failure that occurs when the DAP client
requests more children of a variable than actually exist. Currently,
gdb throws a somewhat confusing exception. This patch changes this
code to throw a DAPException instead, resulting in a more ordinary and
readable failure.
The spec seems to be silent on what to do in this case. I chose an
exception on the theory that it's easier to be strict now and lift the
restriction later (if needed) than vice versa.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33228
|
|
While investigating a different bug, I noticed that the DAP code would
report a "void *"-typed register as having children -- however,
requesting the children of this register would fail.
The issue here is that a plain "void *" can't be dereferenced. This
patch changes the default visualizer to treat a "void *" as a scalar.
This adds a new test; but also arranges to examine all the returned
registers -- this was the first thing I attempted and it seemed
reasonable to have a test that double-checks that all the registers
really can be dereferenced as appropriate.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33228
|
|
A user pointed out that DAP allows the "threads" request to work when
the inferior is running. This is documented in the overview, not the
specification.
While looking into this, I found a few other issues:
* The _thread_name function was not marked @in_gdb_thread.
This isn't very important but is still an oversight.
* DAP requires all threads to have a name -- the field is not optional
in the "Thread" type.
* There was no test examining events resulting from the inferior
printing to stdout.
This patch fixes all these problems.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=33080
|
|
This makes it possible to set and remove other types of breakpoints
while the process is running, which makes debugging more convenient.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I noticed a minor grammar issue in a comment in DAP.
|
|
DAP requests have a "defer_stop_events" option that is intended to
defer the emission of any "stopped" event until after the current
request completes. This was needed to handle async continues like
"finish &".
However, I noticed that sometimes DAP tests can fail, because a stop
event does arrive before the response to the "stepOut" request. I've
only noticed this when the machine is fairly loaded -- for instance
when I'm regression-testing a series, it may occur in some of the
tests mid-series.
I believe the problem is that the implementation in the "request"
function is incorrect -- the flag is set when "request" is invoked,
but instead it must be deferred until the request itself is run. That
is, the setting must be captured in one of the wrapper functions.
Following up on this, Simon pointed out that introducing a delay
before sending a request's response will cause test case failures.
That is, there's a race here that is normally hidden.
Investigation showed that deferred requests can't force event
deferral; this patch implements that. But more testing showed many
more race failures. Some of these are due to how the test suite is
written.
Anyway, in the end I took the radical approach of deferring all events
by default. Most DAP requests are asynchronous by nature, so this
seemed ok. The only case I found that really requires an event to be
delivered before the response is pause.exp, where the test (rightly)
expects to see a 'continued' event
while performing an inferior function call.
I went through all events and all requests and tried to convince
myself that this patch will cause acceptable behavior in every case.
However, it's hard to be completely sure about this approach. Maybe
there are cases that do still need an event before the response, but
we just don't have tests for them.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32685
Acked-By: Simon Marchi <simon.marchi@efficios.com>
|
|
At commit 34b0776fd73^, flake8 reports the following F405 warnings:
...
$ pre-commit run flake8 --file gdb/python/lib/gdb/__init__.py
flake8...................................................................Failed
- hook id: flake8
- exit code: 1
F405 'flush' may be undefined, or defined from star imports: _gdb
F405 'write' may be undefined, or defined from star imports: _gdb
F405 'STDOUT' may be undefined, or defined from star imports: _gdb
F405 'STDERR' may be undefined, or defined from star imports: _gdb
...
F405 'selected_inferior' may be undefined, or defined from star imports: _gdb
F405 'execute' may be undefined, or defined from star imports: _gdb
F405 'parameter' may be undefined, or defined from star imports: _gdb
...
The F405s are addressed by commit 34b0776fd73 ('Suppress some "undefined"
warnings from flake8').
The problem indicated by the first F405 is that the use of flush here:
...
class _GdbFile(object):
    ...
    def flush(self):
        flush(stream=self.stream)
...
cannot be verified by flake8. It concludes that either flush is
undefined, or it is defined by this "star import":
...
from _gdb import * # noqa: F401,F403
...
In this particular case, indeed flush is defined by the star import.
This can be addressed by simply adding:
...
flush(stream=self.stream) # noqa: F405
...
but that only has an effect for flake8, so other analyzers may report
the same problem.
The commit 34b0776fd73 addresses it instead by adding an "import _gdb" and
adding a "_gdb." prefix:
...
_gdb.flush(stream=self.stream)
...
This introduces a second way to specify _gdb names, but the first one still
remains, and occasionally someone will use the first one, which then requires
fixing once flake8 is run [1].
While this works to silence the warnings, there is a problem: if a developer
makes a typo:
...
_gdb.flash(stream=self.stream)
...
this is not detected by flake8.
This matters because, although importing gdb already complains:
...
$ gdb -q -batch -ex "python import gdb"
Exception ignored in: <gdb._GdbFile object at 0x7f6186d4d7f0>
Traceback (most recent call last):
  File "__init__.py", line 63, in flush
    _gdb.flash(stream=self.stream)
AttributeError: module '_gdb' has no attribute 'flash'
...
that doesn't trigger if the code is hidden behind some control flow:
...
if _var_mostly_false:
    flash(stream=self.stream)
...
Instead, fix the F405s by reverting commit 34b0776fd73 and adding a second
import of _gdb alongside the star import which lists the names used locally:
...
from _gdb import * # noqa: F401,F403
+from _gdb import (
+ STDERR,
+ STDOUT,
+ Command,
+ execute,
+ flush,
+ parameter,
+ selected_inferior,
+ write,
+)
...
This gives the following warnings for the flash typo:
...
31:1: F401 '_gdb.flush' imported but unused
70:5: F811 redefinition of unused 'flush' from line 31
71:9: F405 'flash' may be undefined, or defined from star imports: _gdb
...
The benefits of this approach compared to the previous one are that:
- the typo is noticed, and
- when using a new name, the F405 fix needs to be done once (by adding it to
the explicit import list), while previously the fix had to be applied to
each use (by adding the "_gdb." prefix).
Tested on x86_64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
[1] Commit 475799b692e ("Fix some pre-commit nits in gdb/__init__.py")
|
|
I believe we previously agreed that the minimum supported Python
version should be 3.4. This patch makes this change, harmonizing the
documentation (which was inconsistent about the minimum version) and
the code.
New in v2: rebased, and removed a pre-3.4 workaround from __init__.py.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Approved-by: Kevin Buettner <kevinb@redhat.com>
Acked-By: Tom de Vries <tdevries@suse.de>
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31870
|
|
When a DAP completion request receives an empty string to complete,
the code crashes trying to access element -1 of the list returned by
`text.splitlines()` (which for `text == ""` evaluates to an empty
list).
This patch adds a simple check: if `text` is empty, the
transformations are skipped and the correct result is assigned
directly.
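The guard is along these lines (sketch with placeholder names):
if text:
    # text.splitlines() is [] for "", so indexing [-1] would throw.
    last_line = text.splitlines()[-1]
    # ... run the usual transformations on last_line ...
else:
    last_line = ""   # nothing to complete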
Approved-By: Tom Tromey <tom@tromey.com>
|
|
I noticed that pre-commit has some complaints (flake8 and codespell)
about gdb/__init__.py. This patch fixes these.
Approved-By: Tom de Vries <tdevries@suse.de>
|
|
This commit adds a new gdb.ParameterPrefix class to GDB's Python API.
When creating multiple gdb.Parameters, it is often desirable to group
these together under a sub-command, for example, 'set print' has lots
of parameters nested under it, like 'set print address', and 'set
print symbol'. In the Python API the 'print' part of these commands
is called a prefix command, and prefix commands are created using
gdb.Command objects.
However, as parameters are set via the 'set ....' command list, and
shown through the 'show ....' command list, creating a prefix for a
parameter usually requires two prefix commands to be created, one for
the 'set' command, and one for the 'show' command.
This often leads to some duplication, or at the very least, each user
will end up creating their own helper class to simplify creation of
the two prefix commands.
This commit adds a new gdb.ParameterPrefix class. Creating a single
instance of this class will create both the 'set' and 'show' prefix
commands, which can then be used while creating the gdb.Parameter.
Here is an example of it in use:
gdb.ParameterPrefix('my-prefix', gdb.COMMAND_NONE)
This adds 'set my-prefix' and 'show my-prefix', both of which are
prefix commands. The user can then add gdb.Parameter objects under
these prefixes.
The gdb.ParameterPrefix initialise method also supports documentation
strings, so we can write:
gdb.ParameterPrefix('my-prefix', gdb.COMMAND_NONE,
                    "Configuration setting relating to my special extension.")
which will set the documentation string for the prefix command.
Also, it is possible to support prefix commands that use the `invoke`
functionality to handle unknown sub-commands. This is done by
sub-classing gdb.ParameterPrefix and overriding either 'invoke_set' or
'invoke_show' to handle the 'set' or 'show' prefix command
respectively.
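For example, a sub-class handling unknown 'set' sub-commands might look
like this (sketch; the hook is assumed to take the argument string and
a from_tty flag, mirroring gdb.Command.invoke):
class MyPrefix(gdb.ParameterPrefix):
    def invoke_set(self, args, from_tty):
        # Called for 'set my-prefix SOMETHING-UNRECOGNIZED ...'.
        print("unknown setting: %s" % args)

MyPrefix("my-prefix", gdb.COMMAND_NONE)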
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
The documentation for the Source interface says
* The path of the source to be shown in the UI.
* It is only used to locate and load the content of the source if no
* `sourceReference` is specified (or its value is 0).
but the code used `path` first. I fixed it to use `sourceReference` first.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
flake8 7.2.0 appears to have this new warning:
F824: global name / nonlocal name is unused: name is never assigned in scope
It points out a few places in our code base where "global" is not
necessary, fix them.
Change-Id: Ia6fb08686977559726fefe2a5bb95d8dcb298bb0
Approved-By: Tom Tromey <tom@tromey.com>
|
|
This updates the copyright headers to include 2025. I did this by
running gdb/copyright.py and then manually modifying a few files as
noted by the script.
Approved-By: Eli Zaretskii <eliz@gnu.org>
|
|
Consider the following scenario:
...
$ cat hello
int
main (void)
{
printf ("hello\n");
return 0;
}
$ gcc -x c hello -g
$ gdb -q -iex "maint set gnu-source-highlight enabled off" a.out
Reading symbols from a.out...
(gdb) start
Temporary breakpoint 1 at 0x4005db: file hello, line 6.
Starting program: /data/vries/gdb/a.out
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Temporary breakpoint 1, main () at hello:6
6 printf ("hello\n");
...
This doesn't produce highlighting for line 6, because:
- pygments is used for highlighting instead of source-highlight, and
- pygments guesses the language for highlighting only based on the filename,
which in this case doesn't give a clue.
Fix this by:
- adding a language parameter to the extension_language_ops.colorize interface,
- passing the language as found in the debug info, and
- using it in gdb.styling.colorize to pick the pygments lexer.
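Picking the lexer then becomes roughly the following (sketch using the
public pygments API; the function name and signature here are
placeholders):
from pygments import formatters, highlight, lexers
from pygments.util import ClassNotFound

def colorize(filename, contents, lang):
    try:
        # Prefer the language recorded in the debug info.
        lexer = lexers.get_lexer_by_name(lang)
    except ClassNotFound:
        # Fall back to the old filename-based guess.
        lexer = lexers.guess_lexer_for_filename(filename, contents)
    return highlight(contents, lexer, formatters.TerminalFormatter())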
The new test-case gdb.python/py-source-styling-2.exp exercises a slightly
different scenario: it compiles a C++ file with a .c extension, and checks
that C++ highlighting is done instead of C highlighting.
Tested on x86_64-linux.
Approved-By: Tom Tromey <tom@tromey.com>
PR cli/30966
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30966
|
|
This cleans up the last codespell report in the Python directory and
adds gdb/python to pre-commit.
Approved-By: Tom de Vries <tdevries@suse.de>
|
|
Use the GDB/MI command "-complete" to implement the DAP completions request.
Co-authored-by: Simon Farre <simon.farre.cx@gmail.com>
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31140
Approved-By: Tom Tromey <tom@tromey.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
|
|
Fix typos:
...
gdb/python/lib/gdb/disassembler.py:84: dissables ==> disables
gdb/python/lib/gdb/command/xmethods.py:40: experession ==> expression
...
|
|
Fix typos:
...
overriden -> overridden
reate -> create
...
Tested on x86_64-linux.
|
|
I'm currently reading the DAP code, and I think this would help. This
is pretty much standard Python style; we do it in some places but not
others. I think it helps readability, by indicating that an attribute
isn't meant to be accessed outside the class.
A similar pass could be done for internal methods; I haven't done that.
Change-Id: I8e8789b39adafe62d14404d19f7fc75e2a364e01
Approved-By: Tom Tromey <tom@tromey.com>
|
|
Run `pre-commit autoupdate`.
This picks up a fresh Black version from 2025, and with it comes a small
but welcome formatting change.
There is a new version of isort as well, but no formatting change there.
Change-Id: Ie654a9c14c3a4096893011082668efb57c166fa4
|
|
A comment in bugzilla pointed out a bug in my earlier patch to handle
the DAP "linesStartAt1" setting. In particular, in the backtrace
code, "line" can be None, which would lead to an exception from
export_line.
This patch fixes the problem.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32468
|
|
The DAP initialize request has a "linesStartAt1" option, where the
client can indicate that it prefers whether line numbers be 0-based or
1-based.
This patch implements this. I audited all the line-related code in
the DAP implementation.
Note that while a similar option exists for column numbers, gdb
doesn't handle these yet, so nothing is done here.
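The conversions reduce to a pair of small helpers, roughly (sketch;
_lines_start_at_1 stands for whatever the client sent in 'initialize'):
def export_line(line):
    # gdb lines are 1-based internally; adjust for a 0-based client.
    return line if _lines_start_at_1 else line - 1

def import_line(line):
    # Convert a client-supplied line number back to gdb's convention.
    return line if _lines_start_at_1 else line + 1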
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32468
|
|
I noticed that an earlier commit caused a change in the isort output.
This patch repairs the problem.
|
|
With test-case gdb.dap/ada-arrays.exp, on Leap openSUSE 15.6 with python 3.6,
I run into:
...
Python Exception <class 'TypeError'>: 'type' object is not subscriptable
Error occurred in Python: 'type' object is not subscriptable
ERROR: tcl error sourcing ada-arrays.exp.
...
This is due to using a python 3.9 construct:
...
thread_ids: dict[int, int] = {}
...
Fix this by using typing.Dict instead.
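That is, the annotation becomes the Python 3.6-compatible spelling:
from typing import Dict

thread_ids: Dict[int, int] = {}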
Tested on x86_64-linux.
|
|
It is impossible to set a breakpoint when the process is running,
which I find annoying. LLDB does not have this restriction. I made
`setBreakpoints` and `breakpointLocations` work when the process is
running. Probably more requests can be changed, but I only need these
two at the moment.
Approved-By: Tom Tromey <tom@tromey.com>
|
|
When you try to use a frame on one thread but it was created on
another, you get an error. I fixed this by creating a map from frame
ID to thread ID. When a frame is created it is added to the map; when
a frame is looked up by ID, the code checks whether it is on the
correct thread and, if not, switches to that thread. I had to store
the frame ID instead of the frame itself in a "_ScopeReference".
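In outline the bookkeeping looks like this (sketch with placeholder
names):
import gdb

_frame_to_thread = {}   # frame id -> thread global number

def _remember_frame(frame_id):
    _frame_to_thread[frame_id] = gdb.selected_thread().global_num

def _select_frame_thread(frame_id):
    num = _frame_to_thread[frame_id]
    if gdb.selected_thread().global_num != num:
        for thread in gdb.selected_inferior().threads():
            if thread.global_num == num:
                thread.switch()
                break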
Signed-off-by: Oleg Tolmatcev <oleg.tolmatcev@gmail.com>
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=32133
Approved-By: Tom Tromey <tom@tromey.com>
|