\input texinfo @c -*- texinfo -*- @setfilename gdbint.info @include gdb-cfg.texi @settitle @value{GDBN} Internals @setchapternewpage off @dircategory Software development @direntry * Gdb-Internals: (gdbint). The GNU debugger's internals. @end direntry @copying Copyright @copyright{} 1990-1994, 1996, 1998-2006, 2008-2012 Free Software Foundation, Inc. Contributed by Cygnus Solutions. Written by John Gilmore. Second Edition by Stan Shebs. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included in the section entitled ``GNU Free Documentation License''. @end copying @ifnottex This file documents the internals of the GNU debugger @value{GDBN}. @insertcopying @end ifnottex @syncodeindex vr fn @titlepage @title @value{GDBN} Internals @subtitle A guide to the internals of the GNU debugger @author John Gilmore @author Cygnus Solutions @author Second Edition: @author Stan Shebs @author Cygnus Solutions @page @tex \def\$#1${{#1}} % Kluge: collect RCS revision info without $...$ \xdef\manvers{\$Revision$} % For use in headers, footers too {\parskip=0pt \hfill Cygnus Solutions\par \hfill \manvers\par \hfill \TeX{}info \texinfoversion\par } @end tex @vskip 0pt plus 1filll @insertcopying @end titlepage @contents @node Top @c Perhaps this should be the title of the document (but only for info, @c not for TeX). Existing GNU manuals seem inconsistent on this point. @top Scope of this Document This document documents the internals of the GNU debugger, @value{GDBN}. It includes description of @value{GDBN}'s key algorithms and operations, as well as the mechanisms that adapt @value{GDBN} to specific hosts and targets. @menu * Summary:: * Overall Structure:: * Algorithms:: * User Interface:: * libgdb:: * Values:: * Stack Frames:: * Symbol Handling:: * Language Support:: * Host Definition:: * Target Architecture Definition:: * Target Descriptions:: * Target Vector Definition:: * Native Debugging:: * Support Libraries:: * Coding Standards:: * Misc Guidelines:: * Porting GDB:: * Versions and Branches:: * Start of New Year Procedure:: * Releasing GDB:: * Testsuite:: * Hints:: * GDB Observers:: @value{GDBN} Currently available observers * GNU Free Documentation License:: The license for this documentation * Concept Index:: * Function and Variable Index:: @end menu @node Summary @chapter Summary @menu * Requirements:: * Contributors:: @end menu @node Requirements @section Requirements @cindex requirements for @value{GDBN} Before diving into the internals, you should understand the formal requirements and other expectations for @value{GDBN}. Although some of these may seem obvious, there have been proposals for @value{GDBN} that have run counter to these requirements. First of all, @value{GDBN} is a debugger. It's not designed to be a front panel for embedded systems. It's not a text editor. It's not a shell. It's not a programming environment. @value{GDBN} is an interactive tool. Although a batch mode is available, @value{GDBN}'s primary role is to interact with a human programmer. @value{GDBN} should be responsive to the user. A programmer hot on the trail of a nasty bug, and operating under a looming deadline, is going to be very impatient of everything, including the response time to debugger commands. 
@value{GDBN} should be relatively permissive, such as for expressions. While the compiler should be picky (or have the option to be made picky), since source code usually lives for a long time, the programmer doing debugging shouldn't be spending time figuring out how to mollify the debugger. @value{GDBN} will be called upon to deal with really large programs. Executable sizes of 50 to 100 megabytes occur regularly, and we've heard reports of programs approaching 1 gigabyte in size. @value{GDBN} should be able to run everywhere. No other debugger is available for even half as many configurations as @value{GDBN} supports. @node Contributors @section Contributors The first edition of this document was written by John Gilmore of Cygnus Solutions. The current second edition was written by Stan Shebs of Cygnus Solutions, who continues to update the manual. Over the years, many others have made additions and changes to this document. This section attempts to record the significant contributors to that effort. One of the virtues of free software is that everyone is free to contribute to it; with regret, we cannot actually acknowledge everyone here. @quotation @emph{Plea:} This section has only been added relatively recently (four years after publication of the second edition). Additions to this section are particularly welcome. If you or your friends (or enemies, to be evenhanded) have been unfairly omitted from this list, we would like to add your names! @end quotation A document such as this relies on being kept up to date by numerous small updates by contributing engineers as they make changes to the code base. The file @file{ChangeLog} in the @value{GDBN} distribution approximates a blow-by-blow account. The most prolific contributors to this important, but low profile task are Andrew Cagney (responsible for over half the entries), Daniel Jacobowitz, Mark Kettenis, Jim Blandy and Eli Zaretskii. Eli Zaretskii and Daniel Jacobowitz wrote the sections documenting watchpoints. Jeremy Bennett updated the sections on initializing a new architecture and register representation, and added the section on Frame Interpretation. @node Overall Structure @chapter Overall Structure @value{GDBN} consists of three major subsystems: user interface, symbol handling (the @dfn{symbol side}), and target system handling (the @dfn{target side}). The user interface consists of several actual interfaces, plus supporting code. The symbol side consists of object file readers, debugging info interpreters, symbol table management, source language expression parsing, and type and value printing. The target side consists of execution control, stack frame analysis, and physical target manipulation. The target side/symbol side division is not formal, and there are a number of exceptions. For instance, core file support involves symbolic elements (the basic core file reader is in BFD) and target elements (it supplies the contents of memory and the values of registers). Rather, this division is useful for understanding how the minor subsystems should fit together. @section The Symbol Side The symbolic side of @value{GDBN} can be thought of as ``everything you can do in @value{GDBN} without having a live program running''. For instance, you can look at the types of variables, and evaluate many kinds of expressions. @section The Target Side The target side of @value{GDBN} is the ``bits and bytes manipulator''.
Although it may make reference to symbolic info here and there, most of the target side will run with only a stripped executable available---or even no executable at all, in remote debugging cases. Operations such as disassembly, stack frame crawls, and register display, are able to work with no symbolic info at all. In some cases, such as disassembly, @value{GDBN} will use symbolic info to present addresses relative to symbols rather than as raw numbers, but it will work either way. @section Configurations @cindex host @cindex target @dfn{Host} refers to attributes of the system where @value{GDBN} runs. @dfn{Target} refers to the system where the program being debugged executes. In most cases they are the same machine, in which case a third type of @dfn{Native} attributes come into play. Defines and include files needed to build on the host are host support. Examples are tty support, system defined types, host byte order, host float format. These are all calculated by @code{autoconf} when the debugger is built. Defines and information needed to handle the target format are target dependent. Examples are the stack frame format, instruction set, breakpoint instruction, registers, and how to set up and tear down the stack to call a function. Information that is only needed when the host and target are the same, is native dependent. One example is Unix child process support; if the host and target are not the same, calling @code{fork} to start the target process is a bad idea. The various macros needed for finding the registers in the @code{upage}, running @code{ptrace}, and such are all in the native-dependent files. Another example of native-dependent code is support for features that are really part of the target environment, but which require @code{#include} files that are only available on the host system. Core file handling and @code{setjmp} handling are two common cases. When you want to make @value{GDBN} work as the traditional native debugger on a system, you will need to supply both target and native information. @section Source Tree Structure @cindex @value{GDBN} source tree structure The @value{GDBN} source directory has a mostly flat structure---there are only a few subdirectories. A file's name usually gives a hint as to what it does; for example, @file{stabsread.c} reads stabs, @file{dwarf2read.c} reads @sc{DWARF 2}, etc. Files that are related to some common task have names that share common substrings. For example, @file{*-thread.c} files deal with debugging threads on various platforms; @file{*read.c} files deal with reading various kinds of symbol and object files; @file{inf*.c} files deal with direct control of the @dfn{inferior program} (@value{GDBN} parlance for the program being debugged). There are several dozens of files in the @file{*-tdep.c} family. @samp{tdep} stands for @dfn{target-dependent code}---each of these files implements debug support for a specific target architecture (sparc, mips, etc). Usually, only one of these will be used in a specific @value{GDBN} configuration (sometimes two, closely related). Similarly, there are many @file{*-nat.c} files, each one for native debugging on a specific system (e.g., @file{sparc-linux-nat.c} is for native debugging of Sparc machines running the Linux kernel). The few subdirectories of the source tree are: @table @file @item cli Code that implements @dfn{CLI}, the @value{GDBN} Command-Line Interpreter. @xref{User Interface, Command Interpreter}. @item gdbserver Code for the @value{GDBN} remote server. 
@item gdbtk Code for Insight, the @value{GDBN} TK-based GUI front-end. @item mi The @dfn{GDB/MI}, the @value{GDBN} Machine Interface interpreter. @item signals Target signal translation code. @item tui Code for @dfn{TUI}, the @value{GDBN} Text-mode full-screen User Interface. @xref{User Interface, TUI}. @end table @node Algorithms @chapter Algorithms @cindex algorithms @value{GDBN} uses a number of debugging-specific algorithms. They are often not very complicated, but get lost in the thicket of special cases and real-world issues. This chapter describes the basic algorithms and mentions some of the specific target definitions that they use. @section Prologue Analysis @cindex prologue analysis @cindex call frame information @cindex CFI (call frame information) To produce a backtrace and allow the user to manipulate older frames' variables and arguments, @value{GDBN} needs to find the base addresses of older frames, and discover where those frames' registers have been saved. Since a frame's ``callee-saves'' registers get saved by younger frames if and when they're reused, a frame's registers may be scattered unpredictably across younger frames. This means that changing the value of a register-allocated variable in an older frame may actually entail writing to a save slot in some younger frame. Modern versions of GCC emit Dwarf call frame information (``CFI''), which describes how to find frame base addresses and saved registers. But CFI is not always available, so as a fallback @value{GDBN} uses a technique called @dfn{prologue analysis} to find frame sizes and saved registers. A prologue analyzer disassembles the function's machine code starting from its entry point, and looks for instructions that allocate frame space, save the stack pointer in a frame pointer register, save registers, and so on. Obviously, this can't be done accurately in general, but it's tractable to do well enough to be very helpful. Prologue analysis predates the GNU toolchain's support for CFI; at one time, prologue analysis was the only mechanism @value{GDBN} used for stack unwinding at all, when the function calling conventions didn't specify a fixed frame layout. In the olden days, function prologues were generated by hand-written, target-specific code in GCC, and treated as opaque and untouchable by optimizers. Looking at this code, it was usually straightforward to write a prologue analyzer for @value{GDBN} that would accurately understand all the prologues GCC would generate. However, over time GCC became more aggressive about instruction scheduling, and began to understand more about the semantics of the prologue instructions themselves; in response, @value{GDBN}'s analyzers became more complex and fragile. Keeping the prologue analyzers working as GCC (and the instruction sets themselves) evolved became a substantial task. @cindex @file{prologue-value.c} @cindex abstract interpretation of function prologues @cindex pseudo-evaluation of function prologues To try to address this problem, the code in @file{prologue-value.h} and @file{prologue-value.c} provides a general framework for writing prologue analyzers that are simpler and more robust than ad-hoc analyzers. When we analyze a prologue using the prologue-value framework, we're really doing ``abstract interpretation'' or ``pseudo-evaluation'': running the function's code in simulation, but using conservative approximations of the values registers and memory would hold when the code actually runs. 
For example, if our function starts with the instruction: @example addi r1, 42 # add 42 to r1 @end example @noindent we don't know exactly what value will be in @code{r1} after executing this instruction, but we do know it'll be 42 greater than its original value. If we then see an instruction like: @example addi r1, 22 # add 22 to r1 @end example @noindent we still don't know what @code{r1}'s value is, but again, we can say it is now 64 greater than its original value. If the next instruction were: @example mov r2, r1 # set r2 to r1's value @end example @noindent then we can say that @code{r2}'s value is now the original value of @code{r1} plus 64. It's common for prologues to save registers on the stack, so we'll need to track the values of stack frame slots, as well as the registers. So after an instruction like this: @example mov (fp+4), r2 @end example @noindent we'd know that the stack slot four bytes above the frame pointer holds the original value of @code{r1} plus 64. And so on. Of course, this can only go so far before it gets unreasonable. If we wanted to be able to say anything about the value of @code{r1} after the instruction: @example xor r1, r3 # exclusive-or r1 and r3, place result in r1 @end example @noindent then things would get pretty complex. But remember, we're just doing a conservative approximation; if exclusive-or instructions aren't relevant to prologues, we can just say @code{r1}'s value is now ``unknown''. We can ignore things that are too complex, if that loss of information is acceptable for our application. So when we say ``conservative approximation'' here, what we mean is an approximation that is either accurate, or marked ``unknown'', but never inaccurate. Using this framework, a prologue analyzer is simply an interpreter for machine code, but one that uses conservative approximations for the contents of registers and memory instead of actual values. Starting from the function's entry point, you simulate instructions up to the current PC, or an instruction that you don't know how to simulate. Now you can examine the state of the registers and stack slots you've kept track of. @itemize @bullet @item To see how large your stack frame is, just check the value of the stack pointer register; if it's the original value of the SP minus a constant, then that constant is the stack frame's size. If the SP's value has been marked as ``unknown'', then that means the prologue has done something too complex for us to track, and we don't know the frame size. @item To see where we've saved the previous frame's registers, we just search the values we've tracked --- stack slots, usually, but registers, too, if you want --- for something equal to the register's original value. If the calling conventions suggest a standard place to save a given register, then we can check there first, but really, anything that will get us back the original value will probably work. @end itemize This does take some work. But prologue analyzers aren't quick-and-simple pattern matchers that recognize a few fixed prologue forms any more; they're big, hairy functions. Along with inferior function calls, prologue analysis accounts for a substantial portion of the time needed to stabilize a @value{GDBN} port. So it's worthwhile to look for an approach that will be easier to understand and maintain.
In the approach described above: @itemize @bullet @item It's easier to see that the analyzer is correct: you just see whether the analyzer properly (albeit conservatively) simulates the effect of each instruction. @item It's easier to extend the analyzer: you can add support for new instructions, and know that you haven't broken anything that wasn't already broken before. @item It's orthogonal: to gather new information, you don't need to complicate the code for each instruction. As long as your domain of conservative values is already detailed enough to tell you what you need, then all the existing instruction simulations are already gathering the right data for you. @end itemize The file @file{prologue-value.h} contains detailed comments explaining the framework and how to use it. @section Breakpoint Handling @cindex breakpoints In general, a breakpoint is a user-designated location in the program where the user wants to regain control if program execution ever reaches that location. There are two main ways to implement breakpoints: either as ``hardware'' breakpoints or as ``software'' breakpoints. @cindex hardware breakpoints @cindex program counter Hardware breakpoints are sometimes available as a builtin debugging feature on some chips. Typically these work by having dedicated registers into which breakpoint addresses may be stored. If the PC (shorthand for @dfn{program counter}) ever matches a value in a breakpoint register, the CPU raises an exception and reports it to @value{GDBN}. Another possibility is when an emulator is in use; many emulators include circuitry that watches the address lines coming out from the processor, and forces it to stop if the address matches a breakpoint's address. A third possibility is that the target already has the ability to do breakpoints somehow; for instance, a ROM monitor may do its own software breakpoints. So although these are not literally ``hardware breakpoints'', from @value{GDBN}'s point of view they work the same; @value{GDBN} need not do anything more than set the breakpoint and wait for something to happen. Since they depend on hardware resources, hardware breakpoints may be limited in number; when the user asks for more, @value{GDBN} will start trying to set software breakpoints. (On some architectures, notably the 32-bit x86 platforms, @value{GDBN} cannot always know whether there are enough hardware resources to insert all the hardware breakpoints and watchpoints. On those platforms, @value{GDBN} prints an error message only when the program being debugged is continued.) @cindex software breakpoints Software breakpoints require @value{GDBN} to do somewhat more work. The basic theory is that @value{GDBN} will replace a program instruction with a trap, illegal divide, or some other instruction that will cause an exception, and then when it's encountered, @value{GDBN} will take the exception and stop the program. When the user says to continue, @value{GDBN} will restore the original instruction, single-step, re-insert the trap, and continue on. Since it literally overwrites the program being tested, the program area must be writable, so this technique won't work on programs in ROM. It can also distort the behavior of programs that examine themselves, although such a situation would be highly unusual.
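As a rough sketch (not the actual @file{infrun.c} logic), the continue-past-a-software-breakpoint cycle described above looks something like the following. The @code{single_step_inferior} and @code{resume_inferior} helpers are purely illustrative, while @code{target_remove_breakpoint} and @code{target_insert_breakpoint} are the target operations described later in this section:

@smallexample
/* Illustrative only: step the inferior past a software breakpoint it
   has just hit, then let it run again with the breakpoint re-armed.  */
static void
continue_past_sw_breakpoint (struct bp_target_info *bp_tgt)
@{
  target_remove_breakpoint (bp_tgt);  /* Restore the original instruction.  */
  single_step_inferior ();            /* Execute just that one instruction.  */
  target_insert_breakpoint (bp_tgt);  /* Re-insert the trap instruction.  */
  resume_inferior ();                 /* Continue normal execution.  */
@}
@end smallexample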
Also, the software breakpoint instruction should be the smallest size of instruction, so it doesn't overwrite an instruction that might be a jump target, and cause disaster when the program jumps into the middle of the breakpoint instruction. (Strictly speaking, the breakpoint must be no larger than the smallest interval between instructions that may be jump targets; perhaps there is an architecture where only even-numbered instructions may be jumped to.) Note that it's possible for an instruction set not to have any instructions usable for a software breakpoint, although in practice only the ARC has failed to define such an instruction. Basic breakpoint object handling is in @file{breakpoint.c}. However, much of the interesting breakpoint action is in @file{infrun.c}. @table @code @cindex insert or remove software breakpoint @findex target_remove_breakpoint @findex target_insert_breakpoint @item target_remove_breakpoint (@var{bp_tgt}) @itemx target_insert_breakpoint (@var{bp_tgt}) Insert or remove a software breakpoint at address @code{@var{bp_tgt}->placed_address}. Returns zero for success, non-zero for failure. On input, @var{bp_tgt} contains the address of the breakpoint, and is otherwise initialized to zero. The fields of the @code{struct bp_target_info} pointed to by @var{bp_tgt} are updated to contain other information about the breakpoint on output. The field @code{placed_address} may be updated if the breakpoint was placed at a related address; the field @code{shadow_contents} contains the real contents of the bytes where the breakpoint has been inserted, if reading memory would return the breakpoint instead of the underlying memory; the field @code{shadow_len} is the length of memory cached in @code{shadow_contents}, if any; and the field @code{placed_size} is optionally set and used by the target, if it could differ from @code{shadow_len}. For example, the remote target @samp{Z0} packet does not require shadowing memory, so @code{shadow_len} is left at zero. However, the length reported by @code{gdbarch_breakpoint_from_pc} is cached in @code{placed_size}, so that a matching @samp{z0} packet can be used to remove the breakpoint. @cindex insert or remove hardware breakpoint @findex target_remove_hw_breakpoint @findex target_insert_hw_breakpoint @item target_remove_hw_breakpoint (@var{bp_tgt}) @itemx target_insert_hw_breakpoint (@var{bp_tgt}) Insert or remove a hardware-assisted breakpoint at address @code{@var{bp_tgt}->placed_address}. Returns zero for success, non-zero for failure. See @code{target_insert_breakpoint} for a description of the @code{struct bp_target_info} pointed to by @var{bp_tgt}; the @code{shadow_contents} and @code{shadow_len} members are not used for hardware breakpoints, but @code{placed_size} may be. @end table @section Single Stepping @section Signal Handling @section Thread Handling @section Inferior Function Calls @section Longjmp Support @cindex @code{longjmp} debugging @value{GDBN} has support for figuring out that the target is doing a @code{longjmp} and for stopping at the target of the jump, if we are stepping. This is done with a few specialized internal breakpoints, which are visible in the output of the @samp{maint info breakpoints} command. @findex gdbarch_get_longjmp_target To make this work, you need to define a function called @code{gdbarch_get_longjmp_target}, which will examine the @code{jmp_buf} structure and extract the @code{longjmp} target address.
Since @code{jmp_buf} is target specific and typically defined in a target header not available to @value{GDBN}, you will need to determine the offset of the PC manually and return that; many targets define a @code{jb_pc_offset} field in the tdep structure to save the value once calculated. @section Watchpoints @cindex watchpoints Watchpoints are a special kind of breakpoints (@pxref{Algorithms, breakpoints}) which break when data is accessed rather than when some instruction is executed. When you have data which changes without your knowing what code does that, watchpoints are the silver bullet to hunt down and kill such bugs. @cindex hardware watchpoints @cindex software watchpoints Watchpoints can be either hardware-assisted or not; the latter type is known as ``software watchpoints.'' @value{GDBN} always uses hardware-assisted watchpoints if they are available, and falls back on software watchpoints otherwise. Typical situations where @value{GDBN} will use software watchpoints are: @itemize @bullet @item The watched memory region is too large for the underlying hardware watchpoint support. For example, each x86 debug register can watch up to 4 bytes of memory, so trying to watch data structures whose size is more than 16 bytes will cause @value{GDBN} to use software watchpoints. @item The value of the expression to be watched depends on data held in registers (as opposed to memory). @item Too many different watchpoints requested. (On some architectures, this situation is impossible to detect until the debugged program is resumed.) Note that x86 debug registers are used both for hardware breakpoints and for watchpoints, so setting too many hardware breakpoints might cause watchpoint insertion to fail. @item No hardware-assisted watchpoints provided by the target implementation. @end itemize Software watchpoints are very slow, since @value{GDBN} needs to single-step the program being debugged and test the value of the watched expression(s) after each instruction. The rest of this section is mostly irrelevant for software watchpoints. When the inferior stops, @value{GDBN} tries to establish, among other possible reasons, whether it stopped due to a watchpoint being hit. It first uses @code{STOPPED_BY_WATCHPOINT} to see if any watchpoint was hit. If not, all watchpoint checking is skipped. Then @value{GDBN} calls @code{target_stopped_data_address} exactly once. This method returns the address of the watchpoint which triggered, if the target can determine it. If the triggered address is available, @value{GDBN} compares the address returned by this method with each watched memory address in each active watchpoint. For data-read and data-access watchpoints, @value{GDBN} announces every watchpoint that watches the triggered address as being hit. For this reason, data-read and data-access watchpoints @emph{require} that the triggered address be available; if not, read and access watchpoints will never be considered hit. For data-write watchpoints, if the triggered address is available, @value{GDBN} considers only those watchpoints which match that address; otherwise, @value{GDBN} considers all data-write watchpoints. For each data-write watchpoint that @value{GDBN} considers, it evaluates the expression whose value is being watched, and tests whether the watched value has changed. Watchpoints whose watched values have changed are announced as hit. 
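The decision logic just described can be summarized in code form; the following is only a sketch of the flow, not the actual @value{GDBN} source, and the two @code{check_@dots{}} helpers are hypothetical:

@smallexample
/* Sketch of how a stop is classified as a watchpoint hit.  WS is the
   wait status returned for the stop.  */
CORE_ADDR addr;

if (STOPPED_BY_WATCHPOINT (ws))
  @{
    if (target_stopped_data_address (&addr))
      /* The target knows which address triggered: read, access and
         write watchpoints covering ADDR are candidates for a hit.  */
      check_watchpoints_covering (addr);
    else
      /* No triggered address: only data-write watchpoints can be
         checked, by re-evaluating each watched expression.  */
      check_all_write_watchpoints ();
  @}
@end smallexample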
@c FIXME move these to the main lists of target/native defns @value{GDBN} uses several macros and primitives to support hardware watchpoints: @table @code @findex TARGET_CAN_USE_HARDWARE_WATCHPOINT @item TARGET_CAN_USE_HARDWARE_WATCHPOINT (@var{type}, @var{count}, @var{other}) Return the number of hardware watchpoints of type @var{type} that are possible to be set. The value is positive if @var{count} watchpoints of this type can be set, zero if setting watchpoints of this type is not supported, and negative if @var{count} is more than the maximum number of watchpoints of type @var{type} that can be set. @var{other} is non-zero if other types of watchpoints are currently enabled (there are architectures which cannot set watchpoints of different types at the same time). @findex TARGET_REGION_OK_FOR_HW_WATCHPOINT @item TARGET_REGION_OK_FOR_HW_WATCHPOINT (@var{addr}, @var{len}) Return non-zero if hardware watchpoints can be used to watch a region whose address is @var{addr} and whose length in bytes is @var{len}. @cindex insert or remove hardware watchpoint @findex target_insert_watchpoint @findex target_remove_watchpoint @item target_insert_watchpoint (@var{addr}, @var{len}, @var{type}) @itemx target_remove_watchpoint (@var{addr}, @var{len}, @var{type}) Insert or remove a hardware watchpoint starting at @var{addr}, for @var{len} bytes. @var{type} is the watchpoint type, one of the possible values of the enumerated data type @code{target_hw_bp_type}, defined by @file{breakpoint.h} as follows: @smallexample enum target_hw_bp_type @{ hw_write = 0, /* Common (write) HW watchpoint */ hw_read = 1, /* Read HW watchpoint */ hw_access = 2, /* Access (read or write) HW watchpoint */ hw_execute = 3 /* Execute HW breakpoint */ @}; @end smallexample @noindent These two macros should return 0 for success, non-zero for failure. @findex target_stopped_data_address @item target_stopped_data_address (@var{addr_p}) If the inferior has some watchpoint that triggered, place the address associated with the watchpoint at the location pointed to by @var{addr_p} and return non-zero. Otherwise, return zero. This is required for data-read and data-access watchpoints. It is not required for data-write watchpoints, but @value{GDBN} uses it to improve handling of those also. @value{GDBN} will only call this method once per watchpoint stop, immediately after calling @code{STOPPED_BY_WATCHPOINT}. If the target's watchpoint indication is sticky, i.e., stays set after resuming, this method should clear it. For instance, the x86 debug control register has sticky triggered flags. @findex target_watchpoint_addr_within_range @item target_watchpoint_addr_within_range (@var{target}, @var{addr}, @var{start}, @var{length}) Check whether @var{addr} (as returned by @code{target_stopped_data_address}) lies within the hardware-defined watchpoint region described by @var{start} and @var{length}. This only needs to be provided if the granularity of a watchpoint is greater than one byte, i.e., if the watchpoint can also trigger on nearby addresses outside of the watched region. @findex HAVE_STEPPABLE_WATCHPOINT @item HAVE_STEPPABLE_WATCHPOINT If defined to a non-zero value, it is not necessary to disable a watchpoint to step over it. Like @code{gdbarch_have_nonsteppable_watchpoint}, this is usually set when watchpoints trigger at the instruction which will perform an interesting read or write. 
It should be set if there is a temporary disable bit which allows the processor to step over the interesting instruction without raising the watchpoint exception again. @findex gdbarch_have_nonsteppable_watchpoint @item int gdbarch_have_nonsteppable_watchpoint (@var{gdbarch}) If it returns a non-zero value, @value{GDBN} should disable a watchpoint to step the inferior over it. This is usually set when watchpoints trigger at the instruction which will perform an interesting read or write. @findex HAVE_CONTINUABLE_WATCHPOINT @item HAVE_CONTINUABLE_WATCHPOINT If defined to a non-zero value, it is possible to continue the inferior after a watchpoint has been hit. This is usually set when watchpoints trigger at the instruction following an interesting read or write. @findex STOPPED_BY_WATCHPOINT @item STOPPED_BY_WATCHPOINT (@var{wait_status}) Return non-zero if stopped by a watchpoint. @var{wait_status} is of the type @code{struct target_waitstatus}, defined by @file{target.h}. Normally, this macro is defined to invoke the function pointed to by the @code{to_stopped_by_watchpoint} member of the structure (of the type @code{target_ops}, defined on @file{target.h}) that describes the target-specific operations; @code{to_stopped_by_watchpoint} ignores the @var{wait_status} argument. @value{GDBN} does not require the non-zero value returned by @code{STOPPED_BY_WATCHPOINT} to be 100% correct, so if a target cannot determine for sure whether the inferior stopped due to a watchpoint, it could return non-zero ``just in case''. @end table @subsection Watchpoints and Threads @cindex watchpoints, with threads @value{GDBN} only supports process-wide watchpoints, which trigger in all threads. @value{GDBN} uses the thread ID to make watchpoints act as if they were thread-specific, but it cannot set hardware watchpoints that only trigger in a specific thread. Therefore, even if the target supports threads, per-thread debug registers, and watchpoints which only affect a single thread, it should set the per-thread debug registers for all threads to the same value. On @sc{gnu}/Linux native targets, this is accomplished by using @code{ALL_LWPS} in @code{target_insert_watchpoint} and @code{target_remove_watchpoint} and by using @code{linux_set_new_thread} to register a handler for newly created threads. @value{GDBN}'s @sc{gnu}/Linux support only reports a single event at a time, although multiple events can trigger simultaneously for multi-threaded programs. When multiple events occur, @file{linux-nat.c} queues subsequent events and returns them the next time the program is resumed. This means that @code{STOPPED_BY_WATCHPOINT} and @code{target_stopped_data_address} only need to consult the current thread's state---the thread indicated by @code{inferior_ptid}. If two threads have hit watchpoints simultaneously, those routines will be called a second time for the second thread. @subsection x86 Watchpoints @cindex x86 debug registers @cindex watchpoints, on x86 The 32-bit Intel x86 (a.k.a.@: ia32) processors feature special debug registers designed to facilitate debugging. @value{GDBN} provides a generic library of functions that x86-based ports can use to implement support for watchpoints and hardware-assisted breakpoints. This subsection documents the x86 watchpoint facilities in @value{GDBN}. (At present, the library functions read and write debug registers directly, and are thus only available for native configurations.) 
To use the generic x86 watchpoint support, a port should do the following: @itemize @bullet @findex I386_USE_GENERIC_WATCHPOINTS @item Define the macro @code{I386_USE_GENERIC_WATCHPOINTS} somewhere in the target-dependent headers. @item Include the @file{config/i386/nm-i386.h} header file @emph{after} defining @code{I386_USE_GENERIC_WATCHPOINTS}. @item Add @file{i386-nat.o} to the value of the Make variable @code{NATDEPFILES} (@pxref{Native Debugging, NATDEPFILES}). @item Provide implementations for the @code{I386_DR_LOW_*} macros described below. Typically, each macro should call a target-specific function which does the real work. @end itemize The x86 watchpoint support works by maintaining mirror images of the debug registers. Values are copied between the mirror images and the real debug registers via a set of macros which each target needs to provide: @table @code @findex I386_DR_LOW_SET_CONTROL @item I386_DR_LOW_SET_CONTROL (@var{val}) Set the Debug Control (DR7) register to the value @var{val}. @findex I386_DR_LOW_SET_ADDR @item I386_DR_LOW_SET_ADDR (@var{idx}, @var{addr}) Put the address @var{addr} into the debug register number @var{idx}. @findex I386_DR_LOW_RESET_ADDR @item I386_DR_LOW_RESET_ADDR (@var{idx}) Reset (i.e.@: zero out) the address stored in the debug register number @var{idx}. @findex I386_DR_LOW_GET_STATUS @item I386_DR_LOW_GET_STATUS Return the value of the Debug Status (DR6) register. This value is used immediately after it is returned by @code{I386_DR_LOW_GET_STATUS}, so as to support per-thread status register values. @end table For each one of the 4 debug registers (whose indices are from 0 to 3) that store addresses, a reference count is maintained by @value{GDBN}, to allow sharing of debug registers by several watchpoints. This allows users to define several watchpoints that watch the same expression, but with different conditions and/or commands, without wasting debug registers which are in short supply. @value{GDBN} maintains the reference counts internally, targets don't have to do anything to use this feature. The x86 debug registers can each watch a region that is 1, 2, or 4 bytes long. The ia32 architecture requires that each watched region be appropriately aligned: 2-byte region on 2-byte boundary, 4-byte region on 4-byte boundary. However, the x86 watchpoint support in @value{GDBN} can watch unaligned regions and regions larger than 4 bytes (up to 16 bytes) by allocating several debug registers to watch a single region. This allocation of several registers per a watched region is also done automatically without target code intervention. The generic x86 watchpoint support provides the following API for the @value{GDBN}'s application code: @table @code @findex i386_region_ok_for_watchpoint @item i386_region_ok_for_watchpoint (@var{addr}, @var{len}) The macro @code{TARGET_REGION_OK_FOR_HW_WATCHPOINT} is set to call this function. It counts the number of debug registers required to watch a given region, and returns a non-zero value if that number is less than 4, the number of debug registers available to x86 processors. @findex i386_stopped_data_address @item i386_stopped_data_address (@var{addr_p}) The target function @code{target_stopped_data_address} is set to call this function. This function examines the breakpoint condition bits in the DR6 Debug Status register, as returned by the @code{I386_DR_LOW_GET_STATUS} macro, and returns the address associated with the first bit that is set in DR6. 
@findex i386_stopped_by_watchpoint @item i386_stopped_by_watchpoint (void) The macro @code{STOPPED_BY_WATCHPOINT} is set to call this function. The argument passed to @code{STOPPED_BY_WATCHPOINT} is ignored. This function examines the breakpoint condition bits in the DR6 Debug Status register, as returned by the @code{I386_DR_LOW_GET_STATUS} macro, and returns true if any bit is set. Otherwise, false is returned. @findex i386_insert_watchpoint @findex i386_remove_watchpoint @item i386_insert_watchpoint (@var{addr}, @var{len}, @var{type}) @itemx i386_remove_watchpoint (@var{addr}, @var{len}, @var{type}) Insert or remove a watchpoint. The macros @code{target_insert_watchpoint} and @code{target_remove_watchpoint} are set to call these functions. @code{i386_insert_watchpoint} first looks for a debug register which is already set to watch the same region for the same access types; if found, it just increments the reference count of that debug register, thus implementing debug register sharing between watchpoints. If no such register is found, the function looks for a vacant debug register, sets its mirrored value to @var{addr}, sets the mirrored value of DR7 Debug Control register as appropriate for the @var{len} and @var{type} parameters, and then passes the new values of the debug register and DR7 to the inferior by calling @code{I386_DR_LOW_SET_ADDR} and @code{I386_DR_LOW_SET_CONTROL}. If more than one debug register is required to cover the given region, the above process is repeated for each debug register. @code{i386_remove_watchpoint} does the opposite: it resets the address in the mirrored value of the debug register and its read/write and length bits in the mirrored value of DR7, then passes these new values to the inferior via @code{I386_DR_LOW_RESET_ADDR} and @code{I386_DR_LOW_SET_CONTROL}. If a register is shared by several watchpoints, each time a @code{i386_remove_watchpoint} is called, it decrements the reference count, and only calls @code{I386_DR_LOW_RESET_ADDR} and @code{I386_DR_LOW_SET_CONTROL} when the count goes to zero. @findex i386_insert_hw_breakpoint @findex i386_remove_hw_breakpoint @item i386_insert_hw_breakpoint (@var{bp_tgt}) @itemx i386_remove_hw_breakpoint (@var{bp_tgt}) These functions insert and remove hardware-assisted breakpoints. The macros @code{target_insert_hw_breakpoint} and @code{target_remove_hw_breakpoint} are set to call these functions. The argument is a @code{struct bp_target_info *}, as described in the documentation for @code{target_insert_breakpoint}. These functions work like @code{i386_insert_watchpoint} and @code{i386_remove_watchpoint}, respectively, except that they set up the debug registers to watch instruction execution, and each hardware-assisted breakpoint always requires exactly one debug register. @findex i386_cleanup_dregs @item i386_cleanup_dregs (void) This function clears all the reference counts, addresses, and control bits in the mirror images of the debug registers. It doesn't affect the actual debug registers in the inferior process. @end table @noindent @strong{Notes:} @enumerate 1 @item x86 processors support setting watchpoints on I/O reads or writes. However, since no target supports this (as of March 2001), and since @code{enum target_hw_bp_type} doesn't even have an enumeration for I/O watchpoints, this feature is not yet available to @value{GDBN} running on x86. @item x86 processors can enable watchpoints locally, for the current task only, or globally, for all the tasks. 
For each debug register, there's a bit in the DR7 Debug Control register that determines whether the associated address is watched locally or globally. The current implementation of x86 watchpoint support in @value{GDBN} always sets watchpoints to be locally enabled, since global watchpoints might interfere with the underlying OS and are probably unavailable on many platforms. @end enumerate @section Checkpoints @cindex checkpoints @cindex restart In the abstract, a checkpoint is a point in the execution history of the program, which the user may wish to return to at some later time. Internally, a checkpoint is a saved copy of the program state, including whatever information is required in order to restore the program to that state at a later time. This can be expected to include the state of registers and memory, and may include external state such as the state of open files and devices. There are a number of ways in which checkpoints may be implemented in @value{GDBN}, e.g.@: as corefiles, as forked processes, and as some opaque method implemented on the target side. A corefile can be used to save an image of target memory and register state, which can in principle be restored later --- but corefiles do not typically include information about external entities such as open files. Currently this method is not implemented in @value{GDBN}. A forked process can save the state of user memory and registers, as well as some subset of external (kernel) state. This method is used to implement checkpoints on Linux, and in principle might be used on other systems. Some targets, e.g.@: simulators, might have their own built-in method for saving checkpoints, and @value{GDBN} might be able to take advantage of that capability without necessarily knowing any details of how it is done. @section Observing changes in @value{GDBN} internals @cindex observer pattern interface @cindex notifications about changes in internals In order to function properly, several modules need to be notified when some changes occur in the @value{GDBN} internals. Traditionally, these modules have relied on several paradigms, the most common ones being hooks and gdb-events. Unfortunately, none of these paradigms was versatile enough to become the standard notification mechanism in @value{GDBN}. The fact that they only supported one ``client'' was also a strong limitation. A new paradigm, based on the Observer pattern of the @cite{Design Patterns} book, has therefore been implemented. The goal was to provide a new interface overcoming the issues with the notification mechanisms previously available. This new interface needed to be strongly typed, easy to extend, and versatile enough to be used as the standard interface when adding new notifications. See @ref{GDB Observers} for a brief description of the observers currently implemented in @value{GDBN}. The rationale for the current implementation is also briefly discussed. @node User Interface @chapter User Interface @value{GDBN} has several user interfaces, of which the traditional command-line interface is perhaps the most familiar. @section Command Interpreter @cindex command interpreter @cindex CLI The command interpreter in @value{GDBN} is fairly simple. It is designed to allow for the set of commands to be augmented dynamically, and also has a recursive subcommand capability, where the first argument to a command may itself direct a lookup on a different command list.
For instance, the @samp{set} command just starts a lookup on the @code{setlist} command list, while @samp{set thread} recurses to the @code{set_thread_cmd_list}. @findex add_cmd @findex add_com To add commands in general, use @code{add_cmd}. @code{add_com} adds to the main command list, and should be used for those commands. The usual place to add commands is in the @code{_initialize_@var{xyz}} routines at the ends of most source files. @findex add_setshow_cmd @findex add_setshow_cmd_full To add paired @samp{set} and @samp{show} commands, use @code{add_setshow_cmd} or @code{add_setshow_cmd_full}. The former is a slightly simpler interface which is useful when you don't need to further modify the new command structures, while the latter returns the new command structures for manipulation. @cindex deprecating commands @findex deprecate_cmd Before removing commands from the command set it is a good idea to deprecate them for some time. Use @code{deprecate_cmd} on commands or aliases to set the deprecated flag. @code{deprecate_cmd} takes a @code{struct cmd_list_element} as its first argument. You can use the return value from @code{add_com} or @code{add_cmd} to deprecate the command immediately after it is created. The first time a command is used the user will be warned and offered a replacement (if one exists). Note that the replacement string passed to @code{deprecate_cmd} should be the full name of the command, i.e., the entire string the user should type at the command line. @anchor{UI-Independent Output} @section UI-Independent Output---the @code{ui_out} Functions
@c This section is based on the documentation written by Fernando Nasser.
@cindex @code{ui_out} functions The @code{ui_out} functions present an abstraction level for the @value{GDBN} output code. They hide the specifics of different user interfaces supported by @value{GDBN}, and thus free the programmer from the need to write several versions of the same code, one each for every UI, to produce output. @subsection Overview and Terminology In general, execution of each @value{GDBN} command produces some sort of output, and can even generate an input request. Output can be generated for the following purposes: @itemize @bullet @item to display a @emph{result} of an operation; @item to convey @emph{info} or produce side-effects of a requested operation; @item to provide a @emph{notification} of an asynchronous event (including progress indication of a prolonged asynchronous operation); @item to display @emph{error messages} (including warnings); @item to show @emph{debug data}; @item to @emph{query} or prompt a user for input (a special case). @end itemize @noindent This section mainly concentrates on how to build result output, although some of it also applies to other kinds of output. Generation of output that displays the results of an operation involves one or more of the following: @itemize @bullet @item output of the actual data @item formatting the output as appropriate for console output, to make it easily readable by humans @item machine oriented formatting--a more terse formatting to allow for easy parsing by programs which read @value{GDBN}'s output @item annotation, whose purpose is to help legacy GUIs to identify interesting parts in the output @end itemize The @code{ui_out} routines take care of the first three aspects. Annotations are provided by separate annotation routines. Note that use of annotations for an interface between a GUI and @value{GDBN} is deprecated.
Output can be in the form of a single item, which we call a @dfn{field}; a @dfn{list} consisting of identical fields; a @dfn{tuple} consisting of non-identical fields; or a @dfn{table}, which is a tuple consisting of a header and a body. In a BNF-like form:

@table @code
@item <table> @expansion{}
@code{<header> <body>}
@item <header> @expansion{}
@code{@{ <column> @}}
@item <column> @expansion{}
@code{<width> <alignment> <title>}
@item <body> @expansion{}
@code{@{<row>@}}
@end table
@subsection General Conventions Most @code{ui_out} routines are of type @code{void}, the exceptions are @code{ui_out_stream_new} (which returns a pointer to the newly created object) and the @code{make_cleanup} routines. The first parameter is always the @code{ui_out} vector object, a pointer to a @code{struct ui_out}. The @var{format} parameter is like in the @code{printf} family of functions. When it is present, there must also be a variable list of arguments sufficient to satisfy the @code{%} specifiers in the supplied format. When a character string argument is not used in a @code{ui_out} function call, a @code{NULL} pointer has to be supplied instead. @subsection Table, Tuple and List Functions @cindex list output functions @cindex table output functions @cindex tuple output functions This section introduces @code{ui_out} routines for building lists, tuples and tables. The routines to output the actual data items (fields) are presented in the next section. To recap: A @dfn{tuple} is a sequence of @dfn{fields}, each field containing information about an object; a @dfn{list} is a sequence of fields where each field describes an identical object. Use the @dfn{table} functions when your output consists of a list of rows (tuples) and the console output should include a heading. Use this even when you are listing just one object but you still want the header. @cindex nesting level in @code{ui_out} functions Tables cannot be nested. Tuples and lists can be nested up to a maximum of five levels. The overall structure of the table output code is something like this:

@smallexample
ui_out_table_begin
  ui_out_table_header
  @dots{}
  ui_out_table_body
  ui_out_tuple_begin
    ui_out_field_*
    @dots{}
  ui_out_tuple_end
  @dots{}
ui_out_table_end
@end smallexample

Here is the description of table-, tuple- and list-related @code{ui_out} functions: @deftypefun void ui_out_table_begin (struct ui_out *@var{uiout}, int @var{nbrofcols}, int @var{nr_rows}, const char *@var{tblid}) The function @code{ui_out_table_begin} marks the beginning of the output of a table. It should always be called before any other @code{ui_out} function for a given table. @var{nbrofcols} is the number of columns in the table. @var{nr_rows} is the number of rows in the table. @var{tblid} is an optional string identifying the table. The string pointed to by @var{tblid} is copied by the implementation of @code{ui_out_table_begin}, so the application can free the string if it was @code{malloc}ed. The companion function @code{ui_out_table_end}, described below, marks the end of the table's output. @end deftypefun @deftypefun void ui_out_table_header (struct ui_out *@var{uiout}, int @var{width}, enum ui_align @var{alignment}, const char *@var{colhdr}) @code{ui_out_table_header} provides the header information for a single table column. You call this function several times, once for each column of the table, after @code{ui_out_table_begin}, but before @code{ui_out_table_body}. The value of @var{width} gives the column width in characters. The value of @var{alignment} is one of @code{left}, @code{center}, and @code{right}, and it specifies how to align the header: left-justify, center, or right-justify it. @var{colhdr} points to a string that specifies the column header; the implementation copies that string, so column header strings in @code{malloc}ed storage can be freed after the call.
@end deftypefun @deftypefun void ui_out_table_body (struct ui_out *@var{uiout}) This function delimits the table header from the table body. @end deftypefun @deftypefun void ui_out_table_end (struct ui_out *@var{uiout}) This function signals the end of a table's output. It should be called after the table body has been produced by the list and field output functions. There should be exactly one call to @code{ui_out_table_end} for each call to @code{ui_out_table_begin}, otherwise the @code{ui_out} functions will signal an internal error. @end deftypefun The output of the tuples that represent the table rows must follow the call to @code{ui_out_table_body} and precede the call to @code{ui_out_table_end}. You build a tuple by calling @code{ui_out_tuple_begin} and @code{ui_out_tuple_end}, with suitable calls to functions which actually output fields between them. @deftypefun void ui_out_tuple_begin (struct ui_out *@var{uiout}, const char *@var{id}) This function marks the beginning of a tuple output. @var{id} points to an optional string that identifies the tuple; it is copied by the implementation, and so strings in @code{malloc}ed storage can be freed after the call. @end deftypefun @deftypefun void ui_out_tuple_end (struct ui_out *@var{uiout}) This function signals an end of a tuple output. There should be exactly one call to @code{ui_out_tuple_end} for each call to @code{ui_out_tuple_begin}, otherwise an internal @value{GDBN} error will be signaled. @end deftypefun @deftypefun {struct cleanup *} make_cleanup_ui_out_tuple_begin_end (struct ui_out *@var{uiout}, const char *@var{id}) This function first opens the tuple and then establishes a cleanup (@pxref{Misc Guidelines, Cleanups}) to close the tuple. It provides a convenient and correct implementation of the non-portable@footnote{The function cast is not portable ISO C.} code sequence:

@smallexample
struct cleanup *old_cleanup;
ui_out_tuple_begin (uiout, "...");
old_cleanup = make_cleanup ((void(*)(void *)) ui_out_tuple_end,
                            uiout);
@end smallexample

@end deftypefun @deftypefun void ui_out_list_begin (struct ui_out *@var{uiout}, const char *@var{id}) This function marks the beginning of a list output. @var{id} points to an optional string that identifies the list; it is copied by the implementation, and so strings in @code{malloc}ed storage can be freed after the call. @end deftypefun @deftypefun void ui_out_list_end (struct ui_out *@var{uiout}) This function signals an end of a list output. There should be exactly one call to @code{ui_out_list_end} for each call to @code{ui_out_list_begin}, otherwise an internal @value{GDBN} error will be signaled. @end deftypefun @deftypefun {struct cleanup *} make_cleanup_ui_out_list_begin_end (struct ui_out *@var{uiout}, const char *@var{id}) Similar to @code{make_cleanup_ui_out_tuple_begin_end}, this function opens a list and then establishes a cleanup (@pxref{Misc Guidelines, Cleanups}) that will close the list. @end deftypefun @subsection Item Output Functions @cindex item output functions @cindex field output functions @cindex data output The functions described below produce output for the actual data items, or fields, which contain information about the object. Choose the appropriate function according to your particular needs. @deftypefun void ui_out_field_fmt (struct ui_out *@var{uiout}, char *@var{fldname}, char *@var{format}, ...) This is the most general output function.
It produces the representation of the data in the variable-length argument list according to formatting specifications in @var{format}, a @code{printf}-like format string. The optional argument @var{fldname} supplies the name of the field. The data items themselves are supplied as additional arguments after @var{format}. This generic function should be used only when it is not possible to use one of the specialized versions (see below). @end deftypefun @deftypefun void ui_out_field_int (struct ui_out *@var{uiout}, const char *@var{fldname}, int @var{value}) This function outputs a value of an @code{int} variable. It uses the @code{"%d"} output conversion specification. @var{fldname} specifies the name of the field. @end deftypefun @deftypefun void ui_out_field_fmt_int (struct ui_out *@var{uiout}, int @var{width}, enum ui_align @var{alignment}, const char *@var{fldname}, int @var{value}) This function outputs a value of an @code{int} variable. It differs from @code{ui_out_field_int} in that the caller specifies the desired @var{width} and @var{alignment} of the output. @var{fldname} specifies the name of the field. @end deftypefun @deftypefun void ui_out_field_core_addr (struct ui_out *@var{uiout}, const char *@var{fldname}, struct gdbarch *@var{gdbarch}, CORE_ADDR @var{address}) This function outputs an address as appropriate for @var{gdbarch}. @end deftypefun @deftypefun void ui_out_field_string (struct ui_out *@var{uiout}, const char *@var{fldname}, const char *@var{string}) This function outputs a string using the @code{"%s"} conversion specification. @end deftypefun Sometimes, there's a need to compose your output piece by piece using functions that operate on a stream, such as @code{value_print} or @code{fprintf_symbol_filtered}. These functions accept an argument of the type @code{struct ui_file *}, a pointer to a @code{ui_file} object used to store the data stream used for the output. When you use one of these functions, you need a way to pass their results stored in a @code{ui_file} object to the @code{ui_out} functions. To this end, you first create a @code{ui_stream} object by calling @code{ui_out_stream_new}, pass the @code{stream} member of that @code{ui_stream} object to @code{value_print} and similar functions, and finally call @code{ui_out_field_stream} to output the field you constructed. When the @code{ui_stream} object is no longer needed, you should destroy it and free its memory by calling @code{ui_out_stream_delete}. @deftypefun {struct ui_stream *} ui_out_stream_new (struct ui_out *@var{uiout}) This function creates a new @code{ui_stream} object which uses the same output methods as the @code{ui_out} object whose pointer is passed in @var{uiout}. It returns a pointer to the newly created @code{ui_stream} object. @end deftypefun @deftypefun void ui_out_stream_delete (struct ui_stream *@var{streambuf}) This function destroys a @code{ui_stream} object specified by @var{streambuf}. @end deftypefun @deftypefun void ui_out_field_stream (struct ui_out *@var{uiout}, const char *@var{fieldname}, struct ui_stream *@var{streambuf}) This function consumes all the data accumulated in @code{streambuf->stream} and outputs it like @code{ui_out_field_string} does. After a call to @code{ui_out_field_stream}, the accumulated data no longer exists, but the stream is still valid and may be used for producing more fields.
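As an illustration, a typical use of this mechanism looks roughly like the following sketch; the variable names, and the exact arguments passed to @code{value_print}, are illustrative rather than taken from real @value{GDBN} code:

@smallexample
struct ui_stream *stb = ui_out_stream_new (uiout);

/* Print the value into the stream, then emit it as a "value" field.  */
value_print (val, stb->stream, &opts);
ui_out_field_stream (uiout, "value", stb);

ui_out_stream_delete (stb);
@end smallexample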
@end deftypefun @strong{Important:} If there is any chance that your code could bail out before completing output generation and reaching the point where @code{ui_out_stream_delete} is called, it is necessary to set up a cleanup, to avoid leaking memory and other resources. Here is skeleton code to do that: @smallexample struct ui_stream *mybuf = ui_out_stream_new (uiout); struct cleanup *old = make_cleanup (ui_out_stream_delete, mybuf); ... do_cleanups (old); @end smallexample If the function already has the old cleanup chain set (for other kinds of cleanups), you just have to add your cleanup to it: @smallexample mybuf = ui_out_stream_new (uiout); make_cleanup (ui_out_stream_delete, mybuf); @end smallexample Note that with cleanups in place, you should not call @code{ui_out_stream_delete} directly, or you would attempt to free the same buffer twice. @subsection Utility Output Functions @deftypefun void ui_out_field_skip (struct ui_out *@var{uiout}, const char *@var{fldname}) This function skips a field in a table. Use it if you have to leave an empty field without disrupting the table alignment. The argument @var{fldname} specifies a name for the (missing) field. @end deftypefun @deftypefun void ui_out_text (struct ui_out *@var{uiout}, const char *@var{string}) This function outputs the text in @var{string} in a way that makes it easy for humans to read. For example, the console implementation of this method filters the text through a built-in pager, to prevent it from scrolling off the visible portion of the screen. Use this function for printing relatively long chunks of text around the actual field data: the text it produces is not aligned according to the table's format. Use @code{ui_out_field_string} to output a string field, and use @code{ui_out_message}, described below, to output short messages. @end deftypefun @deftypefun void ui_out_spaces (struct ui_out *@var{uiout}, int @var{nspaces}) This function outputs @var{nspaces} spaces. It is handy for aligning the text produced by @code{ui_out_text} with the rest of the table or list. @end deftypefun @deftypefun void ui_out_message (struct ui_out *@var{uiout}, int @var{verbosity}, const char *@var{format}, ...) This function produces a formatted message, provided that the current verbosity level is at least as large as given by @var{verbosity}. The current verbosity level is specified by the user with the @samp{set verbositylevel} command.@footnote{As of this writing (April 2001), setting the verbosity level is not yet implemented, and the current level is always returned as zero. So calling @code{ui_out_message} with a @var{verbosity} argument greater than zero will cause the message never to be printed.} @end deftypefun @deftypefun void ui_out_wrap_hint (struct ui_out *@var{uiout}, char *@var{indent}) This function gives the console output filter (a paging filter) a hint of where to break lines which are too long. Ignored for all other output consumers. @var{indent}, if non-@code{NULL}, is the string to be printed to indent the wrapped text on the next line; it must remain accessible until the next call to @code{ui_out_wrap_hint}, or until an explicit newline is produced by one of the other functions. If @var{indent} is @code{NULL}, the wrapped text will not be indented. @end deftypefun @deftypefun void ui_out_flush (struct ui_out *@var{uiout}) This function flushes whatever output has been accumulated so far, if the UI buffers output.
@end deftypefun @subsection Examples of Use of @code{ui_out} functions @cindex using @code{ui_out} functions @cindex @code{ui_out} functions, usage examples This section gives some practical examples of using the @code{ui_out} functions to generalize the old console-oriented code in @value{GDBN}. The examples all come from functions defined on the @file{breakpoints.c} file. This example, from the @code{breakpoint_1} function, shows how to produce a table. The original code was: @smallexample if (!found_a_breakpoint++) @{ annotate_breakpoints_headers (); annotate_field (0); printf_filtered ("Num "); annotate_field (1); printf_filtered ("Type "); annotate_field (2); printf_filtered ("Disp "); annotate_field (3); printf_filtered ("Enb "); if (addressprint) @{ annotate_field (4); printf_filtered ("Address "); @} annotate_field (5); printf_filtered ("What\n"); annotate_breakpoints_table (); @} @end smallexample Here's the new version: @smallexample nr_printable_breakpoints = @dots{}; if (addressprint) ui_out_table_begin (ui, 6, nr_printable_breakpoints, "BreakpointTable"); else ui_out_table_begin (ui, 5, nr_printable_breakpoints, "BreakpointTable"); if (nr_printable_breakpoints > 0) annotate_breakpoints_headers (); if (nr_printable_breakpoints > 0) annotate_field (0); ui_out_table_header (uiout, 3, ui_left, "number", "Num"); /* 1 */ if (nr_printable_breakpoints > 0) annotate_field (1); ui_out_table_header (uiout, 14, ui_left, "type", "Type"); /* 2 */ if (nr_printable_breakpoints > 0) annotate_field (2); ui_out_table_header (uiout, 4, ui_left, "disp", "Disp"); /* 3 */ if (nr_printable_breakpoints > 0) annotate_field (3); ui_out_table_header (uiout, 3, ui_left, "enabled", "Enb"); /* 4 */ if (addressprint) @{ if (nr_printable_breakpoints > 0) annotate_field (4); if (print_address_bits <= 32) ui_out_table_header (uiout, 10, ui_left, "addr", "Address");/* 5 */ else ui_out_table_header (uiout, 18, ui_left, "addr", "Address");/* 5 */ @} if (nr_printable_breakpoints > 0) annotate_field (5); ui_out_table_header (uiout, 40, ui_noalign, "what", "What"); /* 6 */ ui_out_table_body (uiout); if (nr_printable_breakpoints > 0) annotate_breakpoints_table (); @end smallexample This example, from the @code{print_one_breakpoint} function, shows how to produce the actual data for the table whose structure was defined in the above example. 
The original code was: @smallexample annotate_record (); annotate_field (0); printf_filtered ("%-3d ", b->number); annotate_field (1); if ((int)b->type > (sizeof(bptypes)/sizeof(bptypes[0])) || ((int) b->type != bptypes[(int) b->type].type)) internal_error ("bptypes table does not describe type #%d.", (int)b->type); printf_filtered ("%-14s ", bptypes[(int)b->type].description); annotate_field (2); printf_filtered ("%-4s ", bpdisps[(int)b->disposition]); annotate_field (3); printf_filtered ("%-3c ", bpenables[(int)b->enable]); @dots{} @end smallexample This is the new version: @smallexample annotate_record (); ui_out_tuple_begin (uiout, "bkpt"); annotate_field (0); ui_out_field_int (uiout, "number", b->number); annotate_field (1); if (((int) b->type > (sizeof (bptypes) / sizeof (bptypes[0]))) || ((int) b->type != bptypes[(int) b->type].type)) internal_error ("bptypes table does not describe type #%d.", (int) b->type); ui_out_field_string (uiout, "type", bptypes[(int)b->type].description); annotate_field (2); ui_out_field_string (uiout, "disp", bpdisps[(int)b->disposition]); annotate_field (3); ui_out_field_fmt (uiout, "enabled", "%c", bpenables[(int)b->enable]); @dots{} @end smallexample This example, also from @code{print_one_breakpoint}, shows how to produce a complicated output field using the @code{print_expression} function, which requires a stream to be passed. It also shows how to automate stream destruction with cleanups. The original code was: @smallexample annotate_field (5); print_expression (b->exp, gdb_stdout); @end smallexample The new version is: @smallexample struct ui_stream *stb = ui_out_stream_new (uiout); struct cleanup *old_chain = make_cleanup_ui_out_stream_delete (stb); ... annotate_field (5); print_expression (b->exp, stb->stream); ui_out_field_stream (uiout, "what", stb); @end smallexample This example, also from @code{print_one_breakpoint}, shows how to use @code{ui_out_text} and @code{ui_out_field_string}. The original code was: @smallexample annotate_field (5); if (b->dll_pathname == NULL) printf_filtered ("<any library> "); else printf_filtered ("library \"%s\" ", b->dll_pathname); @end smallexample It became: @smallexample annotate_field (5); if (b->dll_pathname == NULL) @{ ui_out_field_string (uiout, "what", "<any library>"); ui_out_spaces (uiout, 1); @} else @{ ui_out_text (uiout, "library \""); ui_out_field_string (uiout, "what", b->dll_pathname); ui_out_text (uiout, "\" "); @} @end smallexample The following example from @code{print_one_breakpoint} shows how to use @code{ui_out_field_int} and @code{ui_out_spaces}. The original code was: @smallexample annotate_field (5); if (b->forked_inferior_pid != 0) printf_filtered ("process %d ", b->forked_inferior_pid); @end smallexample It became: @smallexample annotate_field (5); if (b->forked_inferior_pid != 0) @{ ui_out_text (uiout, "process "); ui_out_field_int (uiout, "what", b->forked_inferior_pid); ui_out_spaces (uiout, 1); @} @end smallexample Here's an example of using @code{ui_out_field_string}. The original code was: @smallexample annotate_field (5); if (b->exec_pathname != NULL) printf_filtered ("program \"%s\" ", b->exec_pathname); @end smallexample It became: @smallexample annotate_field (5); if (b->exec_pathname != NULL) @{ ui_out_text (uiout, "program \""); ui_out_field_string (uiout, "what", b->exec_pathname); ui_out_text (uiout, "\" "); @} @end smallexample Finally, here's an example of printing an address.
The original code: @smallexample annotate_field (4); printf_filtered ("%s ", hex_string_custom ((unsigned long) b->address, 8)); @end smallexample It became: @smallexample annotate_field (4); ui_out_field_core_addr (uiout, "Address", b->address); @end smallexample @section Console Printing @section TUI @node libgdb @chapter libgdb @section libgdb 1.0 @cindex @code{libgdb} @code{libgdb} 1.0 was an abortive project from years ago. The theory was to provide an API to @value{GDBN}'s functionality. @section libgdb 2.0 @cindex @code{libgdb} @code{libgdb} 2.0 is an ongoing effort to update @value{GDBN} so that it is better able to support graphical and other environments. Since @code{libgdb} development is ongoing, its architecture is still evolving. The following components have so far been identified: @itemize @bullet @item Observer - @file{gdb-events.h} @item Builder - @file{ui-out.h} @item Event Loop - @file{event-loop.h} @item Library - @file{gdb.h} @end itemize The model that ties these components together is described below. @section The @code{libgdb} Model A client of @code{libgdb} interacts with the library in two ways. @itemize @bullet @item As an observer (using @file{gdb-events}) receiving notifications from @code{libgdb} of any internal state changes (breakpoint changes, run state, etc.). @item As a client querying @code{libgdb} (using the @file{ui-out} builder) to obtain various status values from @value{GDBN}. @end itemize Since @code{libgdb} could have multiple clients (e.g., a GUI supporting the existing @value{GDBN} CLI), those clients must co-operate when controlling @code{libgdb}. In particular, a client must ensure that @code{libgdb} is idle (i.e.@: no other client is using @code{libgdb}) before responding to a @file{gdb-event} by making a query. @section CLI support At present @value{GDBN}'s CLI is very much entangled with the core of @code{libgdb}. Consequently, a client wishing to include the CLI in its interface needs to carefully co-ordinate its own and the CLI's requirements. It is suggested that the client set @code{libgdb} up to be bi-modal (alternate between CLI and client query modes). The notes below sketch out the theory: @itemize @bullet @item The client registers itself as an observer of @code{libgdb}. @item The client creates and installs a @code{cli-out} builder using its own versions of the @code{ui-file} @code{gdb_stderr}, @code{gdb_stdtarg} and @code{gdb_stdout} streams. @item The client creates a separate custom @code{ui-out} builder that is only used while making direct queries to @code{libgdb}. @end itemize When the client receives input intended for the CLI, it simply passes it along. Since the @code{cli-out} builder is installed by default, all the CLI output in response to that command is routed (pronounced rooted) through to the client-controlled @code{gdb_stdout} et al.@: streams. At the same time, the client is kept abreast of internal changes by virtue of being a @code{libgdb} observer. The only restriction on the client is that it must wait until @code{libgdb} becomes idle before initiating any queries (using the client's custom builder). @section @code{libgdb} components @subheading Observer - @file{gdb-events.h} @file{gdb-events} provides the client with a very raw mechanism that can be used to implement an observer. At present it only allows for one observer, and that observer must, internally, handle the need to delay the processing of any event notifications until after @code{libgdb} has finished the current command.
@subheading Builder - @file{ui-out.h} @file{ui-out} provides the infrastructure necessary for a client to create a builder. That builder is then passed down to @code{libgdb} when doing any queries. @subheading Event Loop - @file{event-loop.h} @c There could be an entire section on the event-loop @file{event-loop}, currently non-re-entrant, provides a simple event loop. A client would need either to plug itself into this loop or to implement a new event loop that @value{GDBN} would use. The event-loop will eventually be made re-entrant. This is so that @value{GDBN} can better handle the problem of some commands blocking instead of returning. @subheading Library - @file{gdb.h} @file{libgdb} is the most obvious component of this system. It provides the query interface. Each function is parameterized by a @code{ui-out} builder. The result of the query is constructed using that builder before the query function returns. @node Values @chapter Values @section Values @cindex values @cindex @code{value} structure @value{GDBN} uses @code{struct value}, or @dfn{values}, as an internal abstraction for the representation of a variety of inferior objects and @value{GDBN} convenience objects. Values have an associated @code{struct type} that describes a virtual view of the raw data or object stored in or accessed through the value. A value is in addition discriminated by its lvalue-ness, as given by its @code{enum lval_type} enumeration type: @cindex @code{lval_type} enumeration, for values. @table @code @item @code{not_lval} This value is not an lval. It can't be assigned to. @item @code{lval_memory} This value represents an object in memory. @item @code{lval_register} This value represents an object that lives in a register. @item @code{lval_internalvar} Represents the value of an internal variable. @item @code{lval_internalvar_component} Represents part of a @value{GDBN} internal variable. E.g., a structure field. @cindex computed values @item @code{lval_computed} These are ``computed'' values. They allow creating specialized value objects for specific purposes, all abstracted away from the core value support code. The creator of such a value writes specialized functions to handle the reading and writing to/from the value's backend data, and optionally, a ``copy operator'' and a ``destructor''. Pointers to these functions are stored in a @code{struct lval_funcs} instance (declared in @file{value.h}), and passed to the @code{allocate_computed_value} function, as in the example below. @smallexample static void nil_value_read (struct value *v) @{ /* This callback reads data from some backend, and stores it in V. In this case, we always read null data. You'll want to fill in something more interesting. */ memset (value_contents_all_raw (v), 0, TYPE_LENGTH (value_type (v))); @} static void nil_value_write (struct value *v, struct value *fromval) @{ /* Takes the data from FROMVAL and stores it in the backend of V. */ to_oblivion (value_contents_all_raw (fromval), value_offset (v), TYPE_LENGTH (value_type (fromval))); @} static struct lval_funcs nil_value_funcs = @{ nil_value_read, nil_value_write @}; struct value * make_nil_value (void) @{ struct type *type; struct value *v; type = make_nils_type (); v = allocate_computed_value (type, &nil_value_funcs, NULL); return v; @} @end smallexample See the implementation of the @code{$_siginfo} convenience variable in @file{infrun.c} as a real example of the use of @code{lval_computed}.
@end table @node Stack Frames @chapter Stack Frames @cindex frame @cindex call stack frame A frame is a construct that @value{GDBN} uses to keep track of calling and called functions. @cindex unwind frame @value{GDBN}'s frame model, a fresh design, was implemented with the need to support @sc{dwarf}'s Call Frame Information in mind. In fact, the term ``unwind'' is taken directly from that specification. Developers wishing to learn more about unwinders, are encouraged to read the @sc{dwarf} specification, available from @url{http://www.dwarfstd.org}. @findex frame_register_unwind @findex get_frame_register @value{GDBN}'s model is that you find a frame's registers by ``unwinding'' them from the next younger frame. That is, @samp{get_frame_register} which returns the value of a register in frame #1 (the next-to-youngest frame), is implemented by calling frame #0's @code{frame_register_unwind} (the youngest frame). But then the obvious question is: how do you access the registers of the youngest frame itself? @cindex sentinel frame @findex get_frame_type @vindex SENTINEL_FRAME To answer this question, @value{GDBN} has the @dfn{sentinel} frame, the ``-1st'' frame. Unwinding registers from the sentinel frame gives you the current values of the youngest real frame's registers. If @var{f} is a sentinel frame, then @code{get_frame_type (@var{f}) @equiv{} SENTINEL_FRAME}. @section Selecting an Unwinder @findex frame_unwind_prepend_unwinder @findex frame_unwind_append_unwinder The architecture registers a list of frame unwinders (@code{struct frame_unwind}), using the functions @code{frame_unwind_prepend_unwinder} and @code{frame_unwind_append_unwinder}. Each unwinder includes a sniffer. Whenever @value{GDBN} needs to unwind a frame (to fetch the previous frame's registers or the current frame's ID), it calls registered sniffers in order to find one which recognizes the frame. The first time a sniffer returns non-zero, the corresponding unwinder is assigned to the frame. @section Unwinding the Frame ID @cindex frame ID Every frame has an associated ID, of type @code{struct frame_id}. The ID includes the stack base and function start address for the frame. The ID persists through the entire life of the frame, including while other called frames are running; it is used to locate an appropriate @code{struct frame_info} from the cache. Every time the inferior stops, and at various other times, the frame cache is flushed. Because of this, parts of @value{GDBN} which need to keep track of individual frames cannot use pointers to @code{struct frame_info}. A frame ID provides a stable reference to a frame, even when the unwinder must be run again to generate a new @code{struct frame_info} for the same frame. The frame's unwinder's @code{this_id} method is called to find the ID. Note that this is different from register unwinding, where the next frame's @code{prev_register} is called to unwind this frame's registers. Both stack base and function address are required to identify the frame, because a recursive function has the same function address for two consecutive frames and a leaf function may have the same stack address as its caller. On some platforms, a third address is part of the ID to further disambiguate frames---for instance, on IA-64 the separate register stack address is included in the ID. An invalid frame ID (@code{outer_frame_id}) returned from the @code{this_id} method means to stop unwinding after this frame. 
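As an illustration only, here is a rough sketch of what a @code{this_id} method might look like. The @code{xyz_} names and the layout of @code{struct xyz_frame_cache} are hypothetical; they stand in for whatever per-frame cache a real unwinder builds while analyzing the frame, and the sketch assumes that cache has already been filled in:

@smallexample
/* Hypothetical cache built by the unwinder for one frame.  */
struct xyz_frame_cache
@{
  CORE_ADDR base;   /* This frame's stack base.  */
  CORE_ADDR func;   /* Start address of its function.  */
@};

static void
xyz_frame_this_id (struct frame_info *this_frame, void **this_cache,
                   struct frame_id *this_id)
@{
  struct xyz_frame_cache *cache = *this_cache;

  /* Both the stack base and the function start address are needed
     to identify the frame.  */
  *this_id = frame_id_build (cache->base, cache->func);
@}
@end smallexample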
@code{null_frame_id} is another invalid frame ID which should be used when there is no frame. For instance, certain breakpoints are attached to a specific frame, and that frame is identified through its frame ID (we use this to implement the "finish" command). Using @code{null_frame_id} as the frame ID for a given breakpoint means that the breakpoint is not specific to any frame. The @code{this_id} method should never return @code{null_frame_id}. @section Unwinding Registers Each unwinder includes a @code{prev_register} method. This method takes a frame, an associated cache pointer, and a register number. It returns a @code{struct value *} describing the requested register, as saved by this frame. This is the value of the register that is current in this frame's caller. The returned value must have the same type as the register. It may have any lvalue type. In most circumstances one of these routines will generate the appropriate value: @table @code @item frame_unwind_got_optimized @findex frame_unwind_got_optimized This register was not saved. @item frame_unwind_got_register @findex frame_unwind_got_register This register was copied into another register in this frame. This is also used for unchanged registers; they are ``copied'' into the same register. @item frame_unwind_got_memory @findex frame_unwind_got_memory This register was saved in memory. @item frame_unwind_got_constant @findex frame_unwind_got_constant This register was not saved, but the unwinder can compute the previous value some other way. @item frame_unwind_got_address @findex frame_unwind_got_address Same as @code{frame_unwind_got_constant}, except that the value is a target address. This is frequently used for the stack pointer, which is not explicitly saved but has a known offset from this frame's stack pointer. For architectures with a flat unified address space, this is generally the same as @code{frame_unwind_got_constant}. @end table @node Symbol Handling @chapter Symbol Handling Symbols are a key part of @value{GDBN}'s operation. Symbols include variables, functions, and types. Symbol information for a large program can be truly massive, and reading of symbol information is one of the major performance bottlenecks in @value{GDBN}; it can take many minutes to process it all. Studies have shown that nearly all the time spent is computational, rather than file reading. One of the ways for @value{GDBN} to provide a good user experience is to start up quickly, taking no more than a few seconds. It is simply not possible to process all of a program's debugging info in that time, and so we attempt to handle symbols incrementally. For instance, we create @dfn{partial symbol tables} consisting of only selected symbols, and only expand them to full symbol tables when necessary. @section Symbol Reading @cindex symbol reading @cindex reading of symbols @cindex symbol files @value{GDBN} reads symbols from @dfn{symbol files}. The usual symbol file is the file containing the program which @value{GDBN} is debugging. @value{GDBN} can be directed to use a different file for symbols (with the @samp{symbol-file} command), and it can also read more symbols via the @samp{add-file} and @samp{load} commands. In addition, it may bring in more symbols while loading shared libraries. @findex find_sym_fns Symbol files are initially opened by code in @file{symfile.c} using the BFD library (@pxref{Support Libraries}). BFD identifies the type of the file by examining its header. 
@code{find_sym_fns} then uses this identification to locate a set of symbol-reading functions. @findex add_symtab_fns @cindex @code{sym_fns} structure @cindex adding a symbol-reading module Symbol-reading modules identify themselves to @value{GDBN} by calling @code{add_symtab_fns} during their module initialization. The argument to @code{add_symtab_fns} is a @code{struct sym_fns} which contains the name (or name prefix) of the symbol format, the length of the prefix, and pointers to four functions. These functions are called at various times to process symbol files whose identification matches the specified prefix. The functions supplied by each module are: @table @code @item @var{xyz}_symfile_init(struct sym_fns *sf) @cindex secondary symbol file Called from @code{symbol_file_add} when we are about to read a new symbol file. This function should clean up any internal state (possibly resulting from half-read previous files, for example) and prepare to read a new symbol file. Note that the symbol file which we are reading might be a new ``main'' symbol file, or might be a secondary symbol file whose symbols are being added to the existing symbol table. The argument to @code{@var{xyz}_symfile_init} is a newly allocated @code{struct sym_fns} whose @code{bfd} field contains the BFD for the new symbol file being read. Its @code{private} field has been zeroed, and can be modified as desired. Typically, a struct of private information will be @code{malloc}'d, and a pointer to it will be placed in the @code{private} field. There is no result from @code{@var{xyz}_symfile_init}, but it can call @code{error} if it detects an unavoidable problem. @item @var{xyz}_new_init() Called from @code{symbol_file_add} when discarding existing symbols. This function needs only handle the symbol-reading module's internal state; the symbol table data structures visible to the rest of @value{GDBN} will be discarded by @code{symbol_file_add}. It has no arguments and no result. It may be called after @code{@var{xyz}_symfile_init}, if a new symbol table is being read, or may be called alone if all symbols are simply being discarded. @item @var{xyz}_symfile_read(struct sym_fns *sf, CORE_ADDR addr, int mainline) Called from @code{symbol_file_add} to actually read the symbols from a symbol-file into a set of psymtabs or symtabs. @code{sf} points to the @code{struct sym_fns} originally passed to @code{@var{xyz}_sym_init} for possible initialization. @code{addr} is the offset between the file's specified start address and its true address in memory. @code{mainline} is 1 if this is the main symbol table being read, and 0 if a secondary symbol file (e.g., shared library or dynamically loaded file) is being read.@refill @end table In addition, if a symbol-reading module creates psymtabs when @var{xyz}_symfile_read is called, these psymtabs will contain a pointer to a function @code{@var{xyz}_psymtab_to_symtab}, which can be called from any point in the @value{GDBN} symbol-handling code. @table @code @item @var{xyz}_psymtab_to_symtab (struct partial_symtab *pst) Called from @code{psymtab_to_symtab} (or the @code{PSYMTAB_TO_SYMTAB} macro) if the psymtab has not already been read in and had its @code{pst->symtab} pointer set. The argument is the psymtab to be fleshed-out into a symtab. Upon return, @code{pst->readin} should have been set to 1, and @code{pst->symtab} should contain a pointer to the new corresponding symtab, or zero if there were no symbols in that part of the symbol file. 
@end table @section Partial Symbol Tables @value{GDBN} has three types of symbol tables: @itemize @bullet @cindex full symbol table @cindex symtabs @item Full symbol tables (@dfn{symtabs}). These contain the main information about symbols and addresses. @cindex psymtabs @item Partial symbol tables (@dfn{psymtabs}). These contain enough information to know when to read the corresponding part of the full symbol table. @cindex minimal symbol table @cindex minsymtabs @item Minimal symbol tables (@dfn{msymtabs}). These contain information gleaned from non-debugging symbols. @end itemize @cindex partial symbol table This section describes partial symbol tables. A psymtab is constructed by doing a very quick pass over an executable file's debugging information. Small amounts of information are extracted---enough to identify which parts of the symbol table will need to be re-read and fully digested later, when the user needs the information. The speed of this pass causes @value{GDBN} to start up very quickly. Later, as the detailed rereading occurs, it occurs in small pieces, at various times, and the delay therefrom is mostly invisible to the user. @c (@xref{Symbol Reading}.) The symbols that show up in a file's psymtab should be, roughly, those visible to the debugger's user when the program is not running code from that file. These include external symbols and types, static symbols and types, and @code{enum} values declared at file scope. The psymtab also contains the range of instruction addresses that the full symbol table would represent. @cindex finding a symbol @cindex symbol lookup The idea is that there are only two ways for the user (or much of the code in the debugger) to reference a symbol: @itemize @bullet @findex find_pc_function @findex find_pc_line @item By its address (e.g., execution stops at some address which is inside a function in this file). The address will be noticed to be in the range of this psymtab, and the full symtab will be read in. @code{find_pc_function}, @code{find_pc_line}, and other @code{find_pc_@dots{}} functions handle this. @cindex lookup_symbol @item By its name (e.g., the user asks to print a variable, or set a breakpoint on a function). Global names and file-scope names will be found in the psymtab, which will cause the symtab to be pulled in. Local names will have to be qualified by a global name, or a file-scope name, in which case we will have already read in the symtab as we evaluated the qualifier. Or, a local symbol can be referenced when we are ``in'' a local scope, in which case the first case applies. @code{lookup_symbol} does most of the work here. @end itemize The only reason that psymtabs exist is to cause a symtab to be read in at the right moment. Any symbol that can be elided from a psymtab, while still causing that to happen, should not appear in it. Since psymtabs don't have the idea of scope, you can't put local symbols in them anyway. Psymtabs don't have the idea of the type of a symbol, either, so types need not appear, unless they will be referenced by name. It is a bug for @value{GDBN} to behave one way when only a psymtab has been read, and another way if the corresponding symtab has been read in. Such bugs are typically caused by a psymtab that does not contain all the visible symbols, or which has the wrong instruction address ranges. The psymtab for a particular section of a symbol file (objfile) could be thrown away after the symtab has been read in. 
The symtab should always be searched before the psymtab, so the psymtab will never be used (in a bug-free environment). Currently, psymtabs are allocated on an obstack, and all the psymbols themselves are allocated in a pair of large arrays on an obstack, so there is little to be gained by trying to free them unless you want to do a lot more work. Whether or not psymtabs are created depends on the objfile's symbol reader. The core of @value{GDBN} hides the details of partial symbols and partial symbol tables behind a set of function pointers known as the @dfn{quick symbol functions}. These are documented in @file{symfile.h}. @section Types @unnumberedsubsec Fundamental Types (e.g., @code{FT_VOID}, @code{FT_BOOLEAN}). @cindex fundamental types These are the fundamental types that @value{GDBN} uses internally. Fundamental types from the various debugging formats (stabs, ELF, etc) are mapped into one of these. They are basically a union of all fundamental types that @value{GDBN} knows about for all the languages that @value{GDBN} knows about. @unnumberedsubsec Type Codes (e.g., @code{TYPE_CODE_PTR}, @code{TYPE_CODE_ARRAY}). @cindex type codes Each time @value{GDBN} builds an internal type, it marks it with one of these types. The type may be a fundamental type, such as @code{TYPE_CODE_INT}, or a derived type, such as @code{TYPE_CODE_PTR} which is a pointer to another type. Typically, several @code{FT_*} types map to one @code{TYPE_CODE_*} type, and are distinguished by other members of the type struct, such as whether the type is signed or unsigned, and how many bits it uses. @unnumberedsubsec Builtin Types (e.g., @code{builtin_type_void}, @code{builtin_type_char}). These are instances of type structs that roughly correspond to fundamental types and are created as global types for @value{GDBN} to use for various ugly historical reasons. We eventually want to eliminate these. Note for example that @code{builtin_type_int} initialized in @file{gdbtypes.c} is basically the same as a @code{TYPE_CODE_INT} type that is initialized in @file{c-lang.c} for an @code{FT_INTEGER} fundamental type. The difference is that the @code{builtin_type} is not associated with any particular objfile, and only one instance exists, while @file{c-lang.c} builds as many @code{TYPE_CODE_INT} types as needed, with each one associated with some particular objfile. @section Object File Formats @cindex object file formats @subsection a.out @cindex @code{a.out} format The @code{a.out} format is the original file format for Unix. It consists of three sections: @code{text}, @code{data}, and @code{bss}, which are for program code, initialized data, and uninitialized data, respectively. The @code{a.out} format is so simple that it doesn't have any reserved place for debugging information. (Hey, the original Unix hackers used @samp{adb}, which is a machine-language debugger!) The only debugging format for @code{a.out} is stabs, which is encoded as a set of normal symbols with distinctive attributes. The basic @code{a.out} reader is in @file{dbxread.c}. @subsection COFF @cindex COFF format The COFF format was introduced with System V Release 3 (SVR3) Unix. COFF files may have multiple sections, each prefixed by a header. The number of sections is limited. The COFF specification includes support for debugging. Although this was a step forward, the debugging information was woefully limited. For instance, it was not possible to represent code that came from an included file. 
GNU's COFF-using configs often use stabs-type info, encapsulated in special sections. The COFF reader is in @file{coffread.c}. @subsection ECOFF @cindex ECOFF format ECOFF is an extended COFF originally introduced for Mips and Alpha workstations. The basic ECOFF reader is in @file{mipsread.c}. @subsection XCOFF @cindex XCOFF format The IBM RS/6000 running AIX uses an object file format called XCOFF. The COFF sections, symbols, and line numbers are used, but debugging symbols are @code{dbx}-style stabs whose strings are located in the @code{.debug} section (rather than the string table). For more information, see @ref{Top,,,stabs,The Stabs Debugging Format}. The shared library scheme has a clean interface for figuring out what shared libraries are in use, but the catch is that everything which refers to addresses (symbol tables and breakpoints at least) needs to be relocated for both shared libraries and the main executable. At least using the standard mechanism this can only be done once the program has been run (or the core file has been read). @subsection PE @cindex PE-COFF format Windows 95 and NT use the PE (@dfn{Portable Executable}) format for their executables. PE is basically COFF with additional headers. While BFD includes special PE support, @value{GDBN} needs only the basic COFF reader. @subsection ELF @cindex ELF format The ELF format came with System V Release 4 (SVR4) Unix. ELF is similar to COFF in being organized into a number of sections, but it removes many of COFF's limitations. Debugging info may be either stabs encapsulated in ELF sections, or more commonly these days, DWARF. The basic ELF reader is in @file{elfread.c}. @subsection SOM @cindex SOM format SOM is HP's object file and debug format (not to be confused with IBM's SOM, which is a cross-language ABI). The SOM reader is in @file{somread.c}. @section Debugging File Formats This section describes characteristics of debugging information that are independent of the object file format. @subsection stabs @cindex stabs debugging info @code{stabs} started out as special symbols within the @code{a.out} format. Since then, it has been encapsulated into other file formats, such as COFF and ELF. While @file{dbxread.c} does some of the basic stab processing, including for encapsulated versions, @file{stabsread.c} does the real work. @subsection COFF @cindex COFF debugging info The basic COFF definition includes debugging information. The level of support is minimal and non-extensible, and is not often used. @subsection Mips debug (Third Eye) @cindex ECOFF debugging info ECOFF includes a definition of a special debug format. The file @file{mdebugread.c} implements reading for this format. @c mention DWARF 1 as a formerly-supported format @subsection DWARF 2 @cindex DWARF 2 debugging info DWARF 2 is an improved but incompatible version of DWARF 1. The DWARF 2 reader is in @file{dwarf2read.c}. @subsection Compressed DWARF 2 @cindex Compressed DWARF 2 debugging info Compressed DWARF 2 is not technically a separate debugging format, but merely DWARF 2 debug information that has been compressed. In this format, every object-file section holding DWARF 2 debugging information is compressed and prepended with a header. (The section is also typically renamed, so a section called @code{.debug_info} in a DWARF 2 binary would be called @code{.zdebug_info} in a compressed DWARF 2 binary.) 
The header is 12 bytes long: @itemize @bullet @item 4 bytes: the literal string ``ZLIB'' @item 8 bytes: the uncompressed size of the section, in big-endian byte order. @end itemize The same reader is used for both compressed and normal DWARF 2 info. Section decompression is done in @code{zlib_decompress_section} in @file{dwarf2read.c}. @subsection DWARF 3 @cindex DWARF 3 debugging info DWARF 3 is an improved version of DWARF 2. @subsection SOM @cindex SOM debugging info Like COFF, the SOM definition includes debugging information. @section Adding a New Symbol Reader to @value{GDBN} @cindex adding debugging info reader If you are using an existing object file format (@code{a.out}, COFF, ELF, etc.), there is probably little to be done. If you need to add a new object file format, you must first add it to BFD. This is beyond the scope of this document. You must then arrange for the BFD code to provide access to the debugging symbols. Generally @value{GDBN} will have to call swapping routines from BFD and a few other BFD internal routines to locate the debugging information. As much as possible, @value{GDBN} should not depend on the BFD internal data structures. For some targets (e.g., COFF), there is a special transfer vector used to call swapping routines, since the external data structures on various platforms have different sizes and layouts. Specialized routines that will only ever be implemented by one object file format may be called directly. This interface should be described in a file @file{bfd/lib@var{xyz}.h}, which is included by @value{GDBN}. @section Memory Management for Symbol Files Most memory associated with a loaded symbol file is stored on its @code{objfile_obstack}. This includes symbols, types, namespace data, and other information produced by the symbol readers. Because this data lives on the objfile's obstack, it is automatically released when the objfile is unloaded or reloaded. Therefore one objfile must not reference symbol or type data from another objfile; they could be unloaded at different times. User convenience variables, et cetera, have associated types. Normally these types live in the associated objfile. However, when the objfile is unloaded, those types are deep copied to global memory, so that the values of the user variables and history items are not lost. @node Language Support @chapter Language Support @cindex language support @value{GDBN}'s language support is mainly driven by the symbol reader, although it is possible for the user to set the source language manually. @value{GDBN} chooses the source language by looking at the extension of the file recorded in the debug info; @file{.c} means C, @file{.f} means Fortran, etc. It may also use a special-purpose language identifier if the debug format supports it, like with DWARF. @section Adding a Source Language to @value{GDBN} @cindex adding source language To add other languages to @value{GDBN}'s expression parser, follow these steps: @table @emph @item Create the expression parser. @cindex expression parser This should reside in a file @file{@var{lang}-exp.y}. Routines for building parsed expressions into a @code{union exp_element} list are in @file{parse.c}.
@cindex language parser Since we can't depend upon everyone having Bison, and YACC produces parsers that define a bunch of global names, the following lines @strong{must} be included at the top of the YACC parser, to prevent the various parsers from defining the same global names: @smallexample #define yyparse @var{lang}_parse #define yylex @var{lang}_lex #define yyerror @var{lang}_error #define yylval @var{lang}_lval #define yychar @var{lang}_char #define yydebug @var{lang}_debug #define yypact @var{lang}_pact #define yyr1 @var{lang}_r1 #define yyr2 @var{lang}_r2 #define yydef @var{lang}_def #define yychk @var{lang}_chk #define yypgo @var{lang}_pgo #define yyact @var{lang}_act #define yyexca @var{lang}_exca #define yyerrflag @var{lang}_errflag #define yynerrs @var{lang}_nerrs @end smallexample At the bottom of your parser, define a @code{struct language_defn} and initialize it with the right values for your language. Define an @code{initialize_@var{lang}} routine and have it call @samp{add_language(@var{lang}_language_defn)} to tell the rest of @value{GDBN} that your language exists. You'll need some other supporting variables and functions, which will be used via pointers from your @code{@var{lang}_language_defn}. See the declaration of @code{struct language_defn} in @file{language.h}, and the other @file{*-exp.y} files, for more information. @item Add any evaluation routines, if necessary @cindex expression evaluation routines @findex evaluate_subexp @findex prefixify_subexp @findex length_of_subexp If you need new opcodes (that represent the operations of the language), add them to the enumerated type in @file{expression.h}. Add support code for these operations in the @code{evaluate_subexp} function defined in the file @file{eval.c}. Add cases for new opcodes in two functions from @file{parse.c}: @code{prefixify_subexp} and @code{length_of_subexp}. These compute the number of @code{exp_element}s that a given operation takes up. @item Update some existing code Add an enumerated identifier for your language to the enumerated type @code{enum language} in @file{defs.h}. Update the routines in @file{language.c} so your language is included. These routines include type predicates and such, which (in some cases) are language dependent. If your language does not appear in the switch statement, an error is reported. @vindex current_language Also included in @file{language.c} is the code that updates the variable @code{current_language}, and the routines that translate the @code{language_@var{lang}} enumerated identifier into a printable string. @findex _initialize_language Update the function @code{_initialize_language} to include your language. This function picks the default language upon startup, so is dependent upon which languages that @value{GDBN} is built for. @findex allocate_symtab Update @code{allocate_symtab} in @file{symfile.c} and/or symbol-reading code so that the language of each symtab (source file) is set properly. This is used to determine the language to use at each stack frame level. Currently, the language is set based upon the extension of the source file. If the language can be better inferred from the symbol information, please set the language of the symtab in the symbol-reading code. @findex print_subexp @findex op_print_tab Add helper code to @code{print_subexp} (in @file{expprint.c}) to handle any new expression opcodes you have added to @file{expression.h}. Also, add the printed representations of your operators to @code{op_print_tab}. 
@item Add a place of call @findex parse_exp_1 Add a call to @code{@var{lang}_parse()} and @code{@var{lang}_error} in @code{parse_exp_1} (defined in @file{parse.c}). @item Edit @file{Makefile.in} Add dependencies in @file{Makefile.in}. Make sure you update the macro variables such as @code{HFILES} and @code{OBJS}, otherwise your code may not get linked in, or, worse yet, it may not get @code{tar}red into the distribution! @end table @node Host Definition @chapter Host Definition With the advent of Autoconf, it's rarely necessary to have host definition machinery anymore. The following information is provided, mainly, as an historical reference. @section Adding a New Host @cindex adding a new host @cindex host, adding @value{GDBN}'s host configuration support normally happens via Autoconf. New host-specific definitions should not be needed. Older hosts @value{GDBN} still use the host-specific definitions and files listed below, but these mostly exist for historical reasons, and will eventually disappear. @table @file @item gdb/config/@var{arch}/@var{xyz}.mh This file is a Makefile fragment that once contained both host and native configuration information (@pxref{Native Debugging}) for the machine @var{xyz}. The host configuration information is now handled by Autoconf. Host configuration information included definitions for @code{CC}, @code{SYSV_DEFINE}, @code{XM_CFLAGS}, @code{XM_ADD_FILES}, @code{XM_CLIBS}, @code{XM_CDEPS}, etc.; see @file{Makefile.in}. New host-only configurations do not need this file. @end table (Files named @file{gdb/config/@var{arch}/xm-@var{xyz}.h} were once used to define host-specific macros, but were no longer needed and have all been removed.) @subheading Generic Host Support Files @cindex generic host support There are some ``generic'' versions of routines that can be used by various systems. @table @file @cindex remote debugging support @cindex serial line support @item ser-unix.c This contains serial line support for Unix systems. It is included by default on all Unix-like hosts. @item ser-pipe.c This contains serial pipe support for Unix systems. It is included by default on all Unix-like hosts. @item ser-mingw.c This contains serial line support for 32-bit programs running under Windows using MinGW. @item ser-go32.c This contains serial line support for 32-bit programs running under DOS, using the DJGPP (a.k.a.@: GO32) execution environment. @cindex TCP remote support @item ser-tcp.c This contains generic TCP support using sockets. It is included by default on all Unix-like hosts and with MinGW. @end table @section Host Conditionals When @value{GDBN} is configured and compiled, various macros are defined or left undefined, to control compilation based on the attributes of the host system. While formerly they could be set in host-specific header files, at present they can be changed only by setting @code{CFLAGS} when building, or by editing the source code. These macros and their meanings (or if the meaning is not documented here, then one of the source files where they are used is indicated) are: @ftable @code @item @value{GDBN}INIT_FILENAME The default name of @value{GDBN}'s initialization file (normally @file{.gdbinit}). @item CRLF_SOURCE_FILES @cindex DOS text files Define this if host files use @code{\r\n} rather than @code{\n} as a line terminator. This will cause source file listings to omit @code{\r} characters when printing and it will allow @code{\r\n} line endings of files which are ``sourced'' by gdb. 
It must be possible to open files in binary mode using @code{O_BINARY} or, for fopen, @code{"rb"}. @item DEFAULT_PROMPT @cindex prompt The default value of the prompt string (normally @code{"(gdb) "}). @item DEV_TTY @cindex terminal device The name of the generic TTY device, defaults to @code{"/dev/tty"}. @item ISATTY Substitute for isatty, if not available. @item FOPEN_RB Define this if binary files are opened the same way as text files. @item PRINTF_HAS_LONG_LONG Define this if the host can handle printing of long long integers via the printf format conversion specifier @code{ll}. This is set by the @code{configure} script. @item LSEEK_NOT_LINEAR Define this if @code{lseek (n)} does not necessarily move to byte number @code{n} in the file. This is only used when reading source files. It is normally faster to define @code{CRLF_SOURCE_FILES} when possible. @item lint Define this to help placate @code{lint} in some situations. @item volatile Define this to override the defaults of @code{__volatile__} or @code{/**/}. @end ftable @node Target Architecture Definition @chapter Target Architecture Definition @cindex target architecture definition @value{GDBN}'s target architecture defines what sort of machine-language programs @value{GDBN} can work with, and how it works with them. The target architecture object is implemented as the C structure @code{struct gdbarch *}. The structure, and its methods, are generated using the Bourne shell script @file{gdbarch.sh}. @menu * OS ABI Variant Handling:: * Initialize New Architecture:: * Registers and Memory:: * Pointers and Addresses:: * Address Classes:: * Register Representation:: * Frame Interpretation:: * Inferior Call Setup:: * Adding support for debugging core files:: * Defining Other Architecture Features:: * Adding a New Target:: @end menu @node OS ABI Variant Handling @section Operating System ABI Variant Handling @cindex OS ABI variants @value{GDBN} provides a mechanism for handling variations in OS ABIs. An OS ABI variant may have influence over any number of variables in the target architecture definition. There are two major components in the OS ABI mechanism: sniffers and handlers. A @dfn{sniffer} examines a file matching a BFD architecture/flavour pair (the architecture may be wildcarded) in an attempt to determine the OS ABI of that file. Sniffers with a wildcarded architecture are considered to be @dfn{generic}, while sniffers for a specific architecture are considered to be @dfn{specific}. A match from a specific sniffer overrides a match from a generic sniffer. Multiple sniffers for an architecture/flavour may exist, in order to differentiate between two different operating systems which use the same basic file format. The OS ABI framework provides a generic sniffer for ELF-format files which examines the @code{EI_OSABI} field of the ELF header, as well as note sections known to be used by several operating systems. @cindex fine-tuning @code{gdbarch} structure A @dfn{handler} is used to fine-tune the @code{gdbarch} structure for the selected OS ABI. There may be only one handler for a given OS ABI for each BFD architecture. The following OS ABI variants are defined in @file{defs.h}: @table @code @findex GDB_OSABI_UNINITIALIZED @item GDB_OSABI_UNINITIALIZED Used for struct gdbarch_info if ABI is still uninitialized. @findex GDB_OSABI_UNKNOWN @item GDB_OSABI_UNKNOWN The ABI of the inferior is unknown. The default @code{gdbarch} settings for the architecture will be used. 
@findex GDB_OSABI_SVR4 @item GDB_OSABI_SVR4 UNIX System V Release 4. @findex GDB_OSABI_HURD @item GDB_OSABI_HURD GNU using the Hurd kernel. @findex GDB_OSABI_SOLARIS @item GDB_OSABI_SOLARIS Sun Solaris. @findex GDB_OSABI_OSF1 @item GDB_OSABI_OSF1 OSF/1, including Digital UNIX and Compaq Tru64 UNIX. @findex GDB_OSABI_LINUX @item GDB_OSABI_LINUX GNU using the Linux kernel. @findex GDB_OSABI_FREEBSD_AOUT @item GDB_OSABI_FREEBSD_AOUT FreeBSD using the @code{a.out} executable format. @findex GDB_OSABI_FREEBSD_ELF @item GDB_OSABI_FREEBSD_ELF FreeBSD using the ELF executable format. @findex GDB_OSABI_NETBSD_AOUT @item GDB_OSABI_NETBSD_AOUT NetBSD using the @code{a.out} executable format. @findex GDB_OSABI_NETBSD_ELF @item GDB_OSABI_NETBSD_ELF NetBSD using the ELF executable format. @findex GDB_OSABI_OPENBSD_ELF @item GDB_OSABI_OPENBSD_ELF OpenBSD using the ELF executable format. @findex GDB_OSABI_WINCE @item GDB_OSABI_WINCE Windows CE. @findex GDB_OSABI_GO32 @item GDB_OSABI_GO32 DJGPP. @findex GDB_OSABI_IRIX @item GDB_OSABI_IRIX Irix. @findex GDB_OSABI_INTERIX @item GDB_OSABI_INTERIX Interix (Posix layer for MS-Windows systems). @findex GDB_OSABI_HPUX_ELF @item GDB_OSABI_HPUX_ELF HP/UX using the ELF executable format. @findex GDB_OSABI_HPUX_SOM @item GDB_OSABI_HPUX_SOM HP/UX using the SOM executable format. @findex GDB_OSABI_QNXNTO @item GDB_OSABI_QNXNTO QNX Neutrino. @findex GDB_OSABI_CYGWIN @item GDB_OSABI_CYGWIN Cygwin. @findex GDB_OSABI_AIX @item GDB_OSABI_AIX AIX. @end table Here are the functions that make up the OS ABI framework: @deftypefun {const char *} gdbarch_osabi_name (enum gdb_osabi @var{osabi}) Return the name of the OS ABI corresponding to @var{osabi}. @end deftypefun @deftypefun void gdbarch_register_osabi (enum bfd_architecture @var{arch}, unsigned long @var{machine}, enum gdb_osabi @var{osabi}, void (*@var{init_osabi})(struct gdbarch_info @var{info}, struct gdbarch *@var{gdbarch})) Register the OS ABI handler specified by @var{init_osabi} for the architecture, machine type and OS ABI specified by @var{arch}, @var{machine} and @var{osabi}. In most cases, a value of zero for the machine type, which implies the architecture's default machine type, will suffice. @end deftypefun @deftypefun void gdbarch_register_osabi_sniffer (enum bfd_architecture @var{arch}, enum bfd_flavour @var{flavour}, enum gdb_osabi (*@var{sniffer})(bfd *@var{abfd})) Register the OS ABI file sniffer specified by @var{sniffer} for the BFD architecture/flavour pair specified by @var{arch} and @var{flavour}. If @var{arch} is @code{bfd_arch_unknown}, the sniffer is considered to be generic, and is allowed to examine @var{flavour}-flavoured files for any architecture. @end deftypefun @deftypefun {enum gdb_osabi} gdbarch_lookup_osabi (bfd *@var{abfd}) Examine the file described by @var{abfd} to determine its OS ABI. The value @code{GDB_OSABI_UNKNOWN} is returned if the OS ABI cannot be determined. @end deftypefun @deftypefun void gdbarch_init_osabi (struct gdbarch_info @var{info}, struct gdbarch *@var{gdbarch}, enum gdb_osabi @var{osabi}) Invoke the OS ABI handler corresponding to @var{osabi} to fine-tune the @code{gdbarch} structure specified by @var{gdbarch}. If a handler corresponding to @var{osabi} has not been registered for @var{gdbarch}'s architecture, a warning will be issued and the debugging session will continue with the defaults already established for @var{gdbarch}.
@end deftypefun @deftypefun void generic_elf_osabi_sniff_abi_tag_sections (bfd *@var{abfd}, asection *@var{sect}, void *@var{obj}) Helper routine for ELF file sniffers. Examine the file described by @var{abfd} and look at ABI tag note sections to determine the OS ABI from the note. This function should be called via @code{bfd_map_over_sections}. @end deftypefun @node Initialize New Architecture @section Initializing a New Architecture @menu * How an Architecture is Represented:: * Looking Up an Existing Architecture:: * Creating a New Architecture:: @end menu @node How an Architecture is Represented @subsection How an Architecture is Represented @cindex architecture representation @cindex representation of architecture Each @code{gdbarch} is associated with a single @sc{bfd} architecture, via a @code{bfd_arch_@var{arch}} in the @code{bfd_architecture} enumeration. The @code{gdbarch} is registered by a call to @code{register_gdbarch_init}, usually from the file's @code{_initialize_@var{filename}} routine, which will be automatically called during @value{GDBN} startup. The arguments are a @sc{bfd} architecture constant and an initialization function. @findex _initialize_@var{arch}_tdep @cindex @file{@var{arch}-tdep.c} A @value{GDBN} description for a new architecture, @var{arch}, is created by defining a global function @code{_initialize_@var{arch}_tdep}, by convention in the source file @file{@var{arch}-tdep.c}. For example, in the case of the OpenRISC 1000, this function is called @code{_initialize_or1k_tdep} and is found in the file @file{or1k-tdep.c}. @cindex @file{configure.tgt} @cindex @code{gdbarch} @findex gdbarch_register The resulting object files containing the implementation of the @code{_initialize_@var{arch}_tdep} function are specified in the @value{GDBN} @file{configure.tgt} file, which includes a large case statement pattern matching against the @code{--target} option of the @code{configure} script. The new @code{struct gdbarch} is created within the @code{_initialize_@var{arch}_tdep} function by calling @code{gdbarch_register}: @smallexample void gdbarch_register (enum bfd_architecture @var{architecture}, gdbarch_init_ftype *@var{init_func}, gdbarch_dump_tdep_ftype *@var{tdep_dump_func}); @end smallexample The @var{architecture} will identify the unique @sc{bfd} architecture to be associated with this @code{gdbarch}. The @var{init_func} function is called to create and return the new @code{struct gdbarch}. The @var{tdep_dump_func} function will dump the target specific details associated with this architecture. For example, the function @code{_initialize_or1k_tdep} creates its architecture for 32-bit OpenRISC 1000 architectures by calling: @smallexample gdbarch_register (bfd_arch_or32, or1k_gdbarch_init, or1k_dump_tdep); @end smallexample @node Looking Up an Existing Architecture @subsection Looking Up an Existing Architecture @cindex @code{gdbarch} lookup The initialization function has this prototype: @smallexample static struct gdbarch * @var{arch}_gdbarch_init (struct gdbarch_info @var{info}, struct gdbarch_list *@var{arches}) @end smallexample The @var{info} argument contains parameters used to select the correct architecture, and @var{arches} is a list of architectures which have already been created with the same @code{bfd_arch_@var{arch}} value. The initialization function should first make sure that @var{info} is acceptable, and return @code{NULL} if it is not. Then, it should search through @var{arches} for an exact match to @var{info}, and return one if found.
Lastly, if no exact match was found, it should create a new architecture based on @var{info} and return it. @findex gdbarch_list_lookup_by_info @cindex @code{gdbarch_info} The lookup is done using @code{gdbarch_list_lookup_by_info}. It is passed the list of existing architectures, @var{arches}, and the @code{struct gdbarch_info}, @var{info}, and returns the first matching architecture it finds, or @code{NULL} if none are found. If an architecture is found it can be returned as the result from the initialization function, otherwise a new @code{struct gdbarch} will need to be created. The @code{struct gdbarch_info} has the following components: @smallexample struct gdbarch_info @{ const struct bfd_arch_info *bfd_arch_info; int byte_order; bfd *abfd; struct gdbarch_tdep_info *tdep_info; enum gdb_osabi osabi; const struct target_desc *target_desc; @}; @end smallexample @vindex bfd_arch_info The @code{bfd_arch_info} member holds the key details about the architecture. The @code{byte_order} member is a value in an enumeration indicating the endianism. The @code{abfd} member is a pointer to the full @sc{bfd}, the @code{tdep_info} member is additional custom target specific information, @code{osabi} identifies which (if any) of a number of operating system-specific ABIs are used by this architecture and the @code{target_desc} member is a set of name-value pairs with information about register usage in this target. When the @code{struct gdbarch} initialization function is called, not all the fields are provided---only those which can be deduced from the @sc{bfd}. The @code{struct gdbarch_info}, @var{info}, is used as a look-up key with the list of existing architectures, @var{arches}, to see if a suitable architecture already exists. The @var{tdep_info}, @var{osabi} and @var{target_desc} fields may be added before this lookup to refine the search. Only information in @var{info} should be used to choose the new architecture. Historically, @var{info} could be sparse, and defaults would be collected from the first element on @var{arches}. However, @value{GDBN} now fills in @var{info} more thoroughly, so new @code{gdbarch} initialization functions should not take defaults from @var{arches}. @node Creating a New Architecture @subsection Creating a New Architecture @cindex @code{struct gdbarch} creation @findex gdbarch_alloc @cindex @code{gdbarch_tdep} when allocating new @code{gdbarch} If no architecture is found, then a new architecture must be created, by calling @code{gdbarch_alloc} using the supplied @code{@w{struct gdbarch_info}} and any additional custom target specific information in a @code{struct gdbarch_tdep}. The prototype for @code{gdbarch_alloc} is: @smallexample struct gdbarch *gdbarch_alloc (const struct gdbarch_info *@var{info}, struct gdbarch_tdep *@var{tdep}); @end smallexample @cindex @code{set_gdbarch} functions @cindex @code{gdbarch} accessor functions The newly created @code{struct gdbarch} must then be populated. Although there are default values, in most cases they are not what is required. For each element, @var{X}, there is a pair of corresponding accessor functions, one to set the value of that element, @code{set_gdbarch_@var{X}}, the second to either get the value of an element (if it is a variable) or to apply the element (if it is a function), @code{gdbarch_@var{X}}. Note that both accessor functions take a pointer to the @code{@w{struct gdbarch}} as their first argument. Populating the new @code{gdbarch} should use the @code{set_gdbarch} functions.
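To make the overall shape concrete, here is a sketch of how such an initialization function might fit together. The architecture name, register numbers and the helper @code{archname_register_name} are hypothetical placeholders, and only a few representative @code{set_gdbarch} calls are shown.

@smallexample
static struct gdbarch *
archname_gdbarch_init (struct gdbarch_info info, struct gdbarch_list *arches)
@{
  struct gdbarch *gdbarch;
  struct gdbarch_tdep *tdep;

  /* Reuse an existing architecture if one matches INFO.  */
  arches = gdbarch_list_lookup_by_info (arches, &info);
  if (arches != NULL)
    return arches->gdbarch;

  /* Otherwise allocate a new one, with any custom target data held
     in TDEP (this may be NULL if nothing extra is needed).  */
  tdep = xcalloc (1, sizeof (struct gdbarch_tdep));
  gdbarch = gdbarch_alloc (&info, tdep);

  /* Populate the new architecture using the set_gdbarch functions.  */
  set_gdbarch_num_regs (gdbarch, 35);
  set_gdbarch_sp_regnum (gdbarch, 1);
  set_gdbarch_pc_regnum (gdbarch, 32);
  set_gdbarch_register_name (gdbarch, archname_register_name);
  /* ... many more set_gdbarch calls ...  */

  return gdbarch;
@}
@end smallexample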
The following sections identify the main elements that should be set in this way. This is not the complete list, but represents the functions and elements that must commonly be specified for a new architecture. Many of the functions and variables are described in the header file @file{gdbarch.h}. This is the main work in defining a new architecture. Implementing the set of functions to populate the @code{struct gdbarch}. @cindex @code{gdbarch_tdep} definition @code{struct gdbarch_tdep} is not defined within @value{GDBN}---it is up to the user to define this struct if it is needed to hold custom target information that is not covered by the standard @code{@w{struct gdbarch}}. For example with the OpenRISC 1000 architecture it is used to hold the number of matchpoints available in the target (along with other information). If there is no additional target specific information, it can be set to @code{NULL}. @node Registers and Memory @section Registers and Memory @value{GDBN}'s model of the target machine is rather simple. @value{GDBN} assumes the machine includes a bank of registers and a block of memory. Each register may have a different size. @value{GDBN} does not have a magical way to match up with the compiler's idea of which registers are which; however, it is critical that they do match up accurately. The only way to make this work is to get accurate information about the order that the compiler uses, and to reflect that in the @code{gdbarch_register_name} and related functions. @value{GDBN} can handle big-endian, little-endian, and bi-endian architectures. @node Pointers and Addresses @section Pointers Are Not Always Addresses @cindex pointer representation @cindex address representation @cindex word-addressed machines @cindex separate data and code address spaces @cindex spaces, separate data and code address @cindex address spaces, separate data and code @cindex code pointers, word-addressed @cindex converting between pointers and addresses @cindex D10V addresses On almost all 32-bit architectures, the representation of a pointer is indistinguishable from the representation of some fixed-length number whose value is the byte address of the object pointed to. On such machines, the words ``pointer'' and ``address'' can be used interchangeably. However, architectures with smaller word sizes are often cramped for address space, so they may choose a pointer representation that breaks this identity, and allows a larger code address space. @c D10V is gone from sources - more current example? For example, the Renesas D10V is a 16-bit VLIW processor whose instructions are 32 bits long@footnote{Some D10V instructions are actually pairs of 16-bit sub-instructions. However, since you can't jump into the middle of such a pair, code addresses can only refer to full 32 bit instructions, which is what matters in this explanation.}. If the D10V used ordinary byte addresses to refer to code locations, then the processor would only be able to address 64kb of instructions. However, since instructions must be aligned on four-byte boundaries, the low two bits of any valid instruction's byte address are always zero---byte addresses waste two bits. So instead of byte addresses, the D10V uses word addresses---byte addresses shifted right two bits---to refer to code. Thus, the D10V can use 16-bit words to address 256kb of code space. However, this means that code pointers and data pointers have different forms on the D10V. 
The 16-bit word @code{0xC020} refers to byte address @code{0xC020} when used as a data address, but refers to byte address @code{0x30080} when used as a code address. (The D10V also uses separate code and data address spaces, which also affects the correspondence between pointers and addresses, but we're going to ignore that here; this example is already too long.) To cope with architectures like this---the D10V is not the only one!---@value{GDBN} tries to distinguish between @dfn{addresses}, which are byte numbers, and @dfn{pointers}, which are the target's representation of an address of a particular type of data. In the example above, @code{0xC020} is the pointer, which refers to one of the addresses @code{0xC020} or @code{0x30080}, depending on the type imposed upon it. @value{GDBN} provides functions for turning a pointer into an address and vice versa, in the appropriate way for the current architecture. Unfortunately, since addresses and pointers are identical on almost all processors, this distinction tends to bit-rot pretty quickly. Thus, each time you port @value{GDBN} to an architecture which does distinguish between pointers and addresses, you'll probably need to clean up some architecture-independent code. Here are functions which convert between pointers and addresses: @deftypefun CORE_ADDR extract_typed_address (void *@var{buf}, struct type *@var{type}) Treat the bytes at @var{buf} as a pointer or reference of type @var{type}, and return the address it represents, in a manner appropriate for the current architecture. This yields an address @value{GDBN} can use to read target memory, disassemble, etc. Note that @var{buf} refers to a buffer in @value{GDBN}'s memory, not the inferior's. For example, if the current architecture is the Intel x86, this function extracts a little-endian integer of the appropriate length from @var{buf} and returns it. However, if the current architecture is the D10V, this function will return a 16-bit integer extracted from @var{buf}, multiplied by four if @var{type} is a pointer to a function. If @var{type} is not a pointer or reference type, then this function will signal an internal error. @end deftypefun @deftypefun CORE_ADDR store_typed_address (void *@var{buf}, struct type *@var{type}, CORE_ADDR @var{addr}) Store the address @var{addr} in @var{buf}, in the proper format for a pointer of type @var{type} in the current architecture. Note that @var{buf} refers to a buffer in @value{GDBN}'s memory, not the inferior's. For example, if the current architecture is the Intel x86, this function stores @var{addr} unmodified as a little-endian integer of the appropriate length in @var{buf}. However, if the current architecture is the D10V, this function divides @var{addr} by four if @var{type} is a pointer to a function, and then stores it in @var{buf}. If @var{type} is not a pointer or reference type, then this function will signal an internal error. @end deftypefun @deftypefun CORE_ADDR value_as_address (struct value *@var{val}) Assuming that @var{val} is a pointer, return the address it represents, as appropriate for the current architecture. This function actually works on integral values, as well as pointers. For pointers, it performs architecture-specific conversions as described above for @code{extract_typed_address}. 
@end deftypefun @deftypefun CORE_ADDR value_from_pointer (struct type *@var{type}, CORE_ADDR @var{addr}) Create and return a value representing a pointer of type @var{type} to the address @var{addr}, as appropriate for the current architecture. This function performs architecture-specific conversions as described above for @code{store_typed_address}. @end deftypefun Here are two functions which architectures can define to indicate the relationship between pointers and addresses. These have default definitions, appropriate for architectures on which all pointers are simple unsigned byte addresses. @deftypefun CORE_ADDR gdbarch_pointer_to_address (struct gdbarch *@var{gdbarch}, struct type *@var{type}, char *@var{buf}) Assume that @var{buf} holds a pointer of type @var{type}, in the appropriate format for the current architecture. Return the byte address the pointer refers to. This function may safely assume that @var{type} is either a pointer or a C@t{++} reference type. @end deftypefun @deftypefun void gdbarch_address_to_pointer (struct gdbarch *@var{gdbarch}, struct type *@var{type}, char *@var{buf}, CORE_ADDR @var{addr}) Store in @var{buf} a pointer of type @var{type} representing the address @var{addr}, in the appropriate format for the current architecture. This function may safely assume that @var{type} is either a pointer or a C@t{++} reference type. @end deftypefun @node Address Classes @section Address Classes @cindex address classes @cindex DW_AT_byte_size @cindex DW_AT_address_class Sometimes information about different kinds of addresses is available via the debug information. For example, some programming environments define addresses of several different sizes. If the debug information distinguishes these kinds of address classes through either the size info (e.g.@: @code{DW_AT_byte_size} in @w{DWARF 2}) or through an explicit address class attribute (e.g.@: @code{DW_AT_address_class} in @w{DWARF 2}), the following functions should be defined in order to disambiguate these types within @value{GDBN} as well as provide the added information to a @value{GDBN} user when printing type expressions. @deftypefun int gdbarch_address_class_type_flags (struct gdbarch *@var{gdbarch}, int @var{byte_size}, int @var{dwarf2_addr_class}) Returns the type flags needed to construct a pointer type whose size is @var{byte_size} and whose address class is @var{dwarf2_addr_class}. This function is normally called from within a symbol reader. See @file{dwarf2read.c}. @end deftypefun @deftypefun {char *} gdbarch_address_class_type_flags_to_name (struct gdbarch *@var{gdbarch}, int @var{type_flags}) Given the type flags representing an address class qualifier, return its name. @end deftypefun @deftypefun int gdbarch_address_class_name_to_type_flags (struct gdbarch *@var{gdbarch}, char *@var{name}, int *@var{type_flags_ptr}) Given an address qualifier name, set the @code{int} referenced by @var{type_flags_ptr} to the type flags for that address class qualifier. @end deftypefun Since the need for address classes is rather rare, none of the address class functions are defined by default. Predicate functions are provided to detect when they are defined. Consider a hypothetical architecture in which addresses are normally 32 bits wide, but 16-bit addresses are also supported. Furthermore, suppose that the @w{DWARF 2} information for this architecture simply uses a @code{DW_AT_byte_size} value of 2 to indicate the use of one of these ``short'' pointers.
The following functions could be defined to implement the address class functions: @smallexample somearch_address_class_type_flags (int byte_size, int dwarf2_addr_class) @{ if (byte_size == 2) return TYPE_FLAG_ADDRESS_CLASS_1; else return 0; @} static char * somearch_address_class_type_flags_to_name (int type_flags) @{ if (type_flags & TYPE_FLAG_ADDRESS_CLASS_1) return "short"; else return NULL; @} int somearch_address_class_name_to_type_flags (char *name, int *type_flags_ptr) @{ if (strcmp (name, "short") == 0) @{ *type_flags_ptr = TYPE_FLAG_ADDRESS_CLASS_1; return 1; @} else return 0; @} @end smallexample The qualifier @code{@@short} is used in @value{GDBN}'s type expressions to indicate the presence of one of these ``short'' pointers. For example if the debug information indicates that @code{short_ptr_var} is one of these short pointers, @value{GDBN} might show the following behavior: @smallexample (gdb) ptype short_ptr_var type = int * @@short @end smallexample @node Register Representation @section Register Representation @menu * Raw and Cooked Registers:: * Register Architecture Functions & Variables:: * Register Information Functions:: * Register and Memory Data:: * Register Caching:: @end menu @node Raw and Cooked Registers @subsection Raw and Cooked Registers @cindex raw register representation @cindex cooked register representation @cindex representations, raw and cooked registers @value{GDBN} considers registers to be a set with members numbered linearly from 0 upwards. The first part of that set corresponds to real physical registers, the second part to any @dfn{pseudo-registers}. Pseudo-registers have no independent physical existence, but are useful representations of information within the architecture. For example the OpenRISC 1000 architecture has up to 32 general purpose registers, which are typically represented as 32-bit (or 64-bit) integers. However the GPRs are also used as operands to the floating point operations, and it could be convenient to define a set of pseudo-registers, to show the GPRs represented as floating point values. For any architecture, the implementer will decide on a mapping from hardware to @value{GDBN} register numbers. The registers corresponding to real hardware are referred to as @dfn{raw} registers, the remaining registers are @dfn{pseudo-registers}. The total register set (raw and pseudo) is called the @dfn{cooked} register set. @node Register Architecture Functions & Variables @subsection Functions and Variables Specifying the Register Architecture @cindex @code{gdbarch} register architecture functions These @code{struct gdbarch} functions and variables specify the number and type of registers in the architecture. @deftypefn {Architecture Function} CORE_ADDR read_pc (struct regcache *@var{regcache}) @end deftypefn @deftypefn {Architecture Function} void write_pc (struct regcache *@var{regcache}, CORE_ADDR @var{val}) Read or write the program counter. The default value of both functions is @code{NULL} (no function available). If the program counter is just an ordinary register, it can be specified in @code{struct gdbarch} instead (see @code{pc_regnum} below) and it will be read or written using the standard routines to access registers. This function need only be specified if the program counter is not an ordinary register. Any register information can be obtained using the supplied register cache, @var{regcache}. @xref{Register Caching, , Register Caching}. 
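For instance, an architecture that keeps a mode flag in the low bit of its program counter register might supply something like the following sketch; the @code{archname_} prefix and @code{ARCHNAME_PC_REGNUM} are hypothetical. The functions would be installed with @code{set_gdbarch_read_pc} and @code{set_gdbarch_write_pc}.

@smallexample
static CORE_ADDR
archname_read_pc (struct regcache *regcache)
@{
  ULONGEST pc;

  /* Fetch the raw value and strip the hypothetical low-order mode
     bit before handing the address back to common code.  */
  regcache_cooked_read_unsigned (regcache, ARCHNAME_PC_REGNUM, &pc);
  return pc & ~(CORE_ADDR) 1;
@}

static void
archname_write_pc (struct regcache *regcache, CORE_ADDR pc)
@{
  regcache_cooked_write_unsigned (regcache, ARCHNAME_PC_REGNUM, pc);
@}
@end smallexample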
@end deftypefn @deftypefn {Architecture Function} void pseudo_register_read (struct gdbarch *@var{gdbarch}, struct regcache *@var{regcache}, int @var{regnum}, gdb_byte *@var{buf}) @end deftypefn @deftypefn {Architecture Function} void pseudo_register_write (struct gdbarch *@var{gdbarch}, struct regcache *@var{regcache}, int @var{regnum}, const gdb_byte *@var{buf}) These functions should be defined if there are any pseudo-registers. The default value is @code{NULL}. @var{regnum} is the number of the register to read or write (which will be a @dfn{cooked} register number) and @var{buf} is the buffer where the value read will be placed, or from which the value to be written will be taken. The value in the buffer may be converted to or from a signed or unsigned integral value using one of the utility functions (@pxref{Register and Memory Data, , Using Different Register and Memory Data Representations}). The access should be for the specified architecture, @var{gdbarch}. Any register information can be obtained using the supplied register cache, @var{regcache}. @xref{Register Caching, , Register Caching}. @end deftypefn @deftypevr {Architecture Variable} int sp_regnum @vindex sp_regnum @cindex stack pointer @cindex @kbd{$sp} This specifies the register holding the stack pointer, which may be a raw or pseudo-register. It defaults to -1 (not defined), but it is an error for it not to be defined. The value of the stack pointer register can be accessed within @value{GDBN} as the variable @kbd{$sp}. @end deftypevr @deftypevr {Architecture Variable} int pc_regnum @vindex pc_regnum @cindex program counter @cindex @kbd{$pc} This specifies the register holding the program counter, which may be a raw or pseudo-register. It defaults to -1 (not defined). If @code{pc_regnum} is not defined, then the functions @code{read_pc} and @code{write_pc} (see above) must be defined. The value of the program counter (whether defined as a register, or through @code{read_pc} and @code{write_pc}) can be accessed within @value{GDBN} as the variable @kbd{$pc}. @end deftypevr @deftypevr {Architecture Variable} int ps_regnum @vindex ps_regnum @cindex processor status register @cindex status register @cindex @kbd{$ps} This specifies the register holding the processor status (often called the status register), which may be a raw or pseudo-register. It defaults to -1 (not defined). If defined, the value of this register can be accessed within @value{GDBN} as the variable @kbd{$ps}. @end deftypevr @deftypevr {Architecture Variable} int fp0_regnum @vindex fp0_regnum @cindex first floating point register This specifies the first floating point register. It defaults to 0. @code{fp0_regnum} is not needed unless the target offers support for floating point. @end deftypevr @node Register Information Functions @subsection Functions Giving Register Information @cindex @code{gdbarch} register information functions These functions return information about registers. @deftypefn {Architecture Function} {const char *} register_name (struct gdbarch *@var{gdbarch}, int @var{regnum}) This function should convert a register number (raw or pseudo) to a register name (as a C @code{const char *}). This is used both to determine the name of a register for output and to work out the meaning of any register names used as input. The function may also return @code{NULL}, to indicate that @var{regnum} is not a valid register.
For example with the OpenRISC 1000, @value{GDBN} registers 0-31 are the General Purpose Registers, register 32 is the program counter and register 33 is the supervision register (i.e.@: the processor status register), which map to the strings @code{"gpr00"} through @code{"gpr31"}, @code{"pc"} and @code{"sr"} respectively. This means that the @value{GDBN} command @kbd{print $gpr5} should print the value of the OR1K general purpose register 5@footnote{ @cindex frame pointer @cindex @kbd{$fp} Historically, @value{GDBN} always had a concept of a frame pointer register, which could be accessed via the @value{GDBN} variable, @kbd{$fp}. That concept is now deprecated, recognizing that not all architectures have a frame pointer. However if an architecture does have a frame pointer register, and defines a register or pseudo-register with the name @code{"fp"}, then that register will be used as the value of the @kbd{$fp} variable.}. The default value for this function is @code{NULL}, meaning undefined. It should always be defined. The access should be for the specified architecture, @var{gdbarch}. @end deftypefn @deftypefn {Architecture Function} {struct type *} register_type (struct gdbarch *@var{gdbarch}, int @var{regnum}) Given a register number, this function identifies the type of data it may be holding, specified as a @code{struct type}. @value{GDBN} allows creation of arbitrary types, but a number of built in types are provided (@code{builtin_type_void}, @code{builtin_type_int32} etc), together with functions to derive types from these. Typically the program counter will have a type of ``pointer to function'' (it points to code), the frame pointer and stack pointer will have types of ``pointer to void'' (they point to data on the stack) and all other integer registers will have a type of 32-bit integer or 64-bit integer. This information guides the formatting when displaying register information. The default value is @code{NULL} meaning no information is available to guide formatting when displaying registers. @end deftypefn @deftypefn {Architecture Function} void print_registers_info (struct gdbarch *@var{gdbarch}, struct ui_file *@var{file}, struct frame_info *@var{frame}, int @var{regnum}, int @var{all}) Define this function to print out one or all of the registers for the @value{GDBN} @kbd{info registers} command. The default value is the function @code{default_print_registers_info}, which uses the register type information (see @code{register_type} above) to determine how each register should be printed. Define a custom version of this function for fuller control over how the registers are displayed. The access should be for the specified architecture, @var{gdbarch}, with output to the file specified by the User Interface Independent Output file handle, @var{file} (@pxref{UI-Independent Output, , UI-Independent Output---the @code{ui_out} Functions}). The registers should show their values in the frame specified by @var{frame}. If @var{regnum} is -1 and @var{all} is zero, then all the ``significant'' registers should be shown (the implementer should decide which registers are ``significant''). Otherwise only the value of the register specified by @var{regnum} should be output. If @var{regnum} is -1 and @var{all} is non-zero (true), then the value of all registers should be shown. By default @code{default_print_registers_info} prints one register per line, and if @var{all} is zero omits floating-point registers. 
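A custom version will often print some architecture-specific annotation and then delegate the register values themselves to the default routine. A minimal sketch, with a hypothetical @code{archname_} prefix:

@smallexample
static void
archname_print_registers_info (struct gdbarch *gdbarch, struct ui_file *file,
                               struct frame_info *frame, int regnum, int all)
@{
  /* Any architecture-specific summary (for example a decoded status
     register) could be printed here first.  */

  /* Then let the generic code handle the registers themselves.  */
  default_print_registers_info (gdbarch, file, frame, regnum, all);
@}
@end smallexample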
@end deftypefn @deftypefn {Architecture Function} void print_float_info (struct gdbarch *@var{gdbarch}, struct ui_file *@var{file}, struct frame_info *@var{frame}, const char *@var{args}) Define this function to provide output about the floating point unit and registers for the @value{GDBN} @kbd{info float} command respectively. The default value is @code{NULL} (not defined), meaning no information will be provided. The @var{gdbarch} and @var{file} and @var{frame} arguments have the same meaning as in the @code{print_registers_info} function above. The string @var{args} contains any supplementary arguments to the @kbd{info float} command. Define this function if the target supports floating point operations. @end deftypefn @deftypefn {Architecture Function} void print_vector_info (struct gdbarch *@var{gdbarch}, struct ui_file *@var{file}, struct frame_info *@var{frame}, const char *@var{args}) Define this function to provide output about the vector unit and registers for the @value{GDBN} @kbd{info vector} command respectively. The default value is @code{NULL} (not defined), meaning no information will be provided. The @var{gdbarch}, @var{file} and @var{frame} arguments have the same meaning as in the @code{print_registers_info} function above. The string @var{args} contains any supplementary arguments to the @kbd{info vector} command. Define this function if the target supports vector operations. @end deftypefn @deftypefn {Architecture Function} int register_reggroup_p (struct gdbarch *@var{gdbarch}, int @var{regnum}, struct reggroup *@var{group}) @value{GDBN} groups registers into different categories (general, vector, floating point etc). This function, given a register, @var{regnum}, and group, @var{group}, returns 1 (true) if the register is in the group and 0 (false) otherwise. The information should be for the specified architecture, @var{gdbarch} The default value is the function @code{default_register_reggroup_p} which will do a reasonable job based on the type of the register (see the function @code{register_type} above), with groups for general purpose registers, floating point registers, vector registers and raw (i.e not pseudo) registers. @end deftypefn @node Register and Memory Data @subsection Using Different Register and Memory Data Representations @cindex register representation @cindex memory representation @cindex representations, register and memory @cindex register data formats, converting @cindex @code{struct value}, converting register contents to Some architectures have different representations of data objects, depending whether the object is held in a register or memory. For example: @itemize @bullet @item The Alpha architecture can represent 32 bit integer values in floating-point registers. @item The x86 architecture supports 80-bit floating-point registers. The @code{long double} data type occupies 96 bits in memory but only 80 bits when stored in a register. @end itemize In general, the register representation of a data type is determined by the architecture, or @value{GDBN}'s interface to the architecture, while the memory representation is determined by the Application Binary Interface. For almost all data types on almost all architectures, the two representations are identical, and no special handling is needed. However, they do occasionally differ. 
An architecture may define the following @code{struct gdbarch} functions to request conversions between the register and memory representations of a data type: @deftypefn {Architecture Function} int gdbarch_convert_register_p (struct gdbarch *@var{gdbarch}, int @var{reg}) Return non-zero (true) if the representation of a data value stored in this register may be different from the representation of that same data value when stored in memory. The default value is @code{NULL} (undefined). If this function is defined and returns non-zero, the @code{struct gdbarch} functions @code{gdbarch_register_to_value} and @code{gdbarch_value_to_register} (see below) should be used to perform any necessary conversion. If defined, this function should return zero for the register's native type, when no conversion is necessary. @end deftypefn @deftypefn {Architecture Function} void gdbarch_register_to_value (struct gdbarch *@var{gdbarch}, int @var{reg}, struct type *@var{type}, char *@var{from}, char *@var{to}) Convert the value of register number @var{reg} to a data object of type @var{type}. The buffer at @var{from} holds the register's value in raw format; the converted value should be placed in the buffer at @var{to}. @quotation @emph{Note:} @code{gdbarch_register_to_value} and @code{gdbarch_value_to_register} take their @var{reg} and @var{type} arguments in different orders. @end quotation @code{gdbarch_register_to_value} should only be used with registers for which the @code{gdbarch_convert_register_p} function returns a non-zero value. @end deftypefn @deftypefn {Architecture Function} void gdbarch_value_to_register (struct gdbarch *@var{gdbarch}, struct type *@var{type}, int @var{reg}, char *@var{from}, char *@var{to}) Convert a data value of type @var{type} to register number @var{reg}'s raw format. @quotation @emph{Note:} @code{gdbarch_register_to_value} and @code{gdbarch_value_to_register} take their @var{reg} and @var{type} arguments in different orders. @end quotation @code{gdbarch_value_to_register} should only be used with registers for which the @code{gdbarch_convert_register_p} function returns a non-zero value. @end deftypefn @node Register Caching @subsection Register Caching @cindex register caching Caching of registers is used, so that the target does not need to be accessed and reanalyzed multiple times for each register in circumstances where the register value cannot have changed. @cindex @code{struct regcache} @value{GDBN} provides @code{struct regcache}, associated with a particular @code{struct gdbarch}, to hold the cached values of the raw registers. A set of functions is provided to access both the raw registers (with @code{raw} in their name) and the full set of cooked registers (with @code{cooked} in their name). Functions are provided to ensure the register cache is kept synchronized with the values of the actual registers in the target. Accessing registers through the @code{struct regcache} routines will ensure that the appropriate @code{struct gdbarch} functions are called when necessary to access the underlying target architecture. In general users should use the @dfn{cooked} functions, since these will map to the @dfn{raw} functions automatically as appropriate.
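For instance, target code that holds a register cache and the current @code{gdbarch} might fetch the cooked value of the stack pointer as follows, using one of the convenience routines described below (a fragment only; @var{regcache} and @var{gdbarch} are assumed to be in scope):

@smallexample
ULONGEST sp;

/* Read the cooked stack pointer; if sp_regnum names a pseudo-register,
   the access is mapped onto the underlying raw registers automatically.  */
regcache_cooked_read_unsigned (regcache, gdbarch_sp_regnum (gdbarch), &sp);
@end smallexample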
@findex regcache_cooked_read @findex regcache_cooked_write @cindex @code{gdb_byte} @findex regcache_cooked_read_signed @findex regcache_cooked_read_unsigned @findex regcache_cooked_write_signed @findex regcache_cooked_write_unsigned The two key functions are @code{regcache_cooked_read} and @code{regcache_cooked_write} which read or write a register from or to a byte buffer (type @code{gdb_byte *}). For convenience the wrapper functions @code{regcache_cooked_read_signed}, @code{regcache_cooked_read_unsigned}, @code{regcache_cooked_write_signed} and @code{regcache_cooked_write_unsigned} are provided, which read or write the value using the buffer and convert to or from an integral value as appropriate. @node Frame Interpretation @section Frame Interpretation @menu * All About Stack Frames:: * Frame Handling Terminology:: * Prologue Caches:: * Functions and Variable to Analyze Frames:: * Functions to Access Frame Data:: * Analyzing Stacks---Frame Sniffers:: @end menu @node All About Stack Frames @subsection All About Stack Frames @value{GDBN} needs to understand the stack on which local (automatic) variables are stored. The area of the stack containing all the local variables for a function invocation is known as the @dfn{stack frame} for that function (or colloquially just as the @dfn{frame}). In turn the function that called the function will have its stack frame, and so on back through the chain of functions that have been called. Almost all architectures have one register dedicated to point to the end of the stack (the @dfn{stack pointer}). Many have a second register which points to the start of the currently active stack frame (the @dfn{frame pointer}). The specific arrangements for an architecture are a key part of the ABI. A diagram helps to explain this. Here is a simple program to compute factorials: @smallexample #include <stdio.h> int fact (int n) @{ if (0 == n) @{ return 1; @} else @{ return n * fact (n - 1); @} @} main () @{ int i; for (i = 0; i < 10; i++) @{ int f = fact (i); printf ("%d! = %d\n", i, f); @} @} @end smallexample Consider the state of the stack when the code reaches line 6 after the main program has called @code{fact@w{ }(3)}. The chain of function calls will be @code{main ()}, @code{fact@w{ }(3)}, @code{fact@w{ }(2)}, @code{@w{fact (1)}} and @code{fact@w{ }(0)}. In this illustration the stack is falling (as used for example by the OpenRISC 1000 ABI). The stack pointer (SP) is at the end of the stack (lowest address) and the frame pointer (FP) is at the highest address in the current stack frame. The following diagram shows how the stack looks. @center @image{stack_frame,14cm} In each stack frame, offset 0 from the stack pointer is the frame pointer of the previous frame and offset 4 (this is illustrating a 32-bit architecture) from the stack pointer is the return address. Local variables are indexed from the frame pointer, with negative indexes. In the function @code{fact}, offset -4 from the frame pointer is the argument @var{n}. In the @code{main} function, offset -4 from the frame pointer is the local variable @var{i} and offset -8 from the frame pointer is the local variable @var{f}@footnote{This is a simplified example for illustrative purposes only. Good optimizing compilers would not put anything on the stack for such simple functions. Indeed they might eliminate the recursion and use of the stack entirely!}. It is very easy to get confused when examining stacks. @value{GDBN} has terminology it uses rigorously throughout. 
The stack frame of the function currently executing, or where execution stopped, is numbered zero. In this example, frame #0 is the stack frame of the call to @code{fact@w{ }(0)}. The stack frame of its calling function (@code{fact@w{ }(1)} in this case) is numbered #1 and so on back through the chain of calls. The main @value{GDBN} data structure describing frames is @code{@w{struct frame_info}}. It is not used directly, but only via its accessor functions. @code{frame_info} includes information about the registers in the frame and a pointer to the code of the function with which the frame is associated. The entire stack is represented as a linked list of @code{frame_info} structs. @node Frame Handling Terminology @subsection Frame Handling Terminology It is easy to get confused when referencing stack frames. @value{GDBN} uses some precise terminology. @itemize @bullet @item @cindex THIS frame @cindex stack frame, definition of THIS frame @cindex frame, definition of THIS frame @dfn{THIS} frame is the frame currently under consideration. @item @cindex NEXT frame @cindex stack frame, definition of NEXT frame @cindex frame, definition of NEXT frame The @dfn{NEXT} frame, also sometimes called the inner or newer frame, is the frame of the function called by the function of THIS frame. @item @cindex PREVIOUS frame @cindex stack frame, definition of PREVIOUS frame @cindex frame, definition of PREVIOUS frame The @dfn{PREVIOUS} frame, also sometimes called the outer or older frame, is the frame of the function which called the function of THIS frame. @end itemize So in the example in the previous section (@pxref{All About Stack Frames, , All About Stack Frames}), if THIS frame is #3 (the call to @code{fact@w{ }(3)}), the NEXT frame is frame #2 (the call to @code{fact@w{ }(2)}) and the PREVIOUS frame is frame #4 (the call to @code{main@w{ }()}). @cindex innermost frame @cindex stack frame, definition of innermost frame @cindex frame, definition of innermost frame The @dfn{innermost} frame is the frame of the currently executing function, or where the program stopped, in this example, in the middle of the call to @code{@w{fact (0)}}. It is always numbered frame #0. @cindex base of a frame @cindex stack frame, definition of base of a frame @cindex frame, definition of base of a frame The @dfn{base} of a frame is the address immediately before the start of the NEXT frame. For a stack which grows down in memory (a @dfn{falling} stack) this will be the lowest address and for a stack which grows up in memory (a @dfn{rising} stack) this will be the highest address in the frame. @value{GDBN} functions to analyze the stack are typically given a pointer to the NEXT frame to determine information about THIS frame. Information about THIS frame includes data on where the registers of the PREVIOUS frame are stored in this stack frame. In this example the frame pointer of the PREVIOUS frame is stored at offset 0 from the stack pointer of THIS frame. @cindex unwinding @cindex stack frame, definition of unwinding @cindex frame, definition of unwinding The process whereby a function is given a pointer to the NEXT frame to work out information about THIS frame is referred to as @dfn{unwinding}. The @value{GDBN} functions involved in this typically include unwind in their name. @cindex sniffing @cindex stack frame, definition of sniffing @cindex frame, definition of sniffing The process of analyzing a target to determine the information that should go in @code{@w{struct frame_info}} is called @dfn{sniffing}.
The functions that carry this out are called sniffers and typically include sniffer in their name. More than one sniffer may be required to extract all the information for a particular frame. @cindex sentinel frame @cindex stack frame, definition of sentinel frame @cindex frame, definition of sentinel frame Because so many functions work using the NEXT frame, there is an issue about addressing the innermost frame---it has no NEXT frame. To solve this, @value{GDBN} creates a dummy frame #-1, known as the @dfn{sentinel} frame. @node Prologue Caches @subsection Prologue Caches @cindex function prologue @cindex prologue of a function All the frame sniffing functions typically examine the code at the start of the corresponding function, to determine the state of registers. The ABI will save old values and set new values of key registers at the start of each function in what is known as the function @dfn{prologue}. @cindex prologue cache For any particular stack frame this data does not change, so all the standard unwinding functions, in addition to receiving a pointer to the NEXT frame as their first argument, receive a pointer to a @dfn{prologue cache} as their second argument. This can be used to store values associated with a particular frame, for reuse on subsequent calls involving the same frame. It is up to the user to define the structure used (it is a @code{void@w{ }*} pointer) and arrange allocation and deallocation of storage. However, for general use, @value{GDBN} provides @code{@w{struct trad_frame_cache}}, with a set of accessor routines. This structure holds the stack and code address of THIS frame, the base address of the frame, a pointer to the struct @code{frame_info} for the NEXT frame and details of where the registers of the PREVIOUS frame may be found in THIS frame. Typically, the first time any sniffer function is called with NEXT frame, the prologue cache for THIS frame will be @code{NULL}. The sniffer will analyze the frame, allocate a prologue cache structure and populate it. Subsequent calls using the same NEXT frame will pass in this prologue cache, so the data can be returned with no additional analysis. @node Functions and Variable to Analyze Frames @subsection Functions and Variable to Analyze Frames These @code{struct gdbarch} functions and variable should be defined to provide analysis of the stack frame and allow it to be adjusted as required. @deftypefn {Architecture Function} CORE_ADDR skip_prologue (struct gdbarch *@var{gdbarch}, CORE_ADDR @var{pc}) The prologue of a function is the code at the beginning of the function which sets up the stack frame, saves the return address etc. The code representing the behavior of the function starts after the prologue. This function skips past the prologue of a function if the program counter, @var{pc}, is within the prologue of a function. The result is the program counter immediately after the prologue. With modern optimizing compilers, this may be a far from trivial exercise. However, the required information may be within the binary as DWARF2 debugging information, making the job much easier. The default value is @code{NULL} (not defined). This function should always be provided, but can take advantage of DWARF2 debugging information, if that is available.
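A common approach is to use the line-table (DWARF) information when it is present and fall back to hand analysis of the prologue instructions otherwise. A sketch, with a hypothetical @code{archname_} prefix and the instruction scanner omitted:

@smallexample
static CORE_ADDR
archname_skip_prologue (struct gdbarch *gdbarch, CORE_ADDR pc)
@{
  CORE_ADDR func_addr;

  /* If debug information is available, use the prologue end that the
     compiler recorded in the line table.  */
  if (find_pc_partial_function (pc, NULL, &func_addr, NULL))
    @{
      CORE_ADDR post_prologue_pc
        = skip_prologue_using_sal (gdbarch, func_addr);

      if (post_prologue_pc != 0)
        return max (pc, post_prologue_pc);
    @}

  /* Otherwise, fall back to analyzing the prologue instructions
     (not shown here).  */
  return pc;
@}
@end smallexample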
@end deftypefn @deftypefn {Architecture Function} int inner_than (CORE_ADDR @var{lhs}, CORE_ADDR @var{rhs}) @findex core_addr_lessthan @findex core_addr_greaterthan Given two frame or stack pointers, return non-zero (true) if the first represents the @dfn{inner} stack frame and 0 (false) otherwise. This is used to determine whether the target has a stack which grows up in memory (rising stack) or grows down in memory (falling stack). @xref{All About Stack Frames, , All About Stack Frames}, for an explanation of @dfn{inner} frames. The default value of this function is @code{NULL} and it should always be defined. However for almost all architectures one of the built-in functions can be used: @code{core_addr_lessthan} (for stacks growing down in memory) or @code{core_addr_greaterthan} (for stacks growing up in memory). @end deftypefn @anchor{frame_align} @deftypefn {Architecture Function} CORE_ADDR frame_align (struct gdbarch *@var{gdbarch}, CORE_ADDR @var{address}) @findex align_down @findex align_up The architecture may have constraints on how its frames are aligned. For example the OpenRISC 1000 ABI requires stack frames to be double-word aligned, but 32-bit versions of the architecture allocate single-word values to the stack. Thus extra padding may be needed at the end of a stack frame. Given a proposed address for the stack pointer, this function returns a suitably aligned address (by expanding the stack frame). The default value is @code{NULL} (undefined). This function should be defined for any architecture where it is possible the stack could become misaligned. The utility functions @code{align_down} (for falling stacks) and @code{align_up} (for rising stacks) will facilitate the implementation of this function. @end deftypefn @deftypevr {Architecture Variable} int frame_red_zone_size Some ABIs reserve space beyond the end of the stack for use by leaf functions without prologue or epilogue or by exception handlers (for example the OpenRISC 1000). This is known as a @dfn{red zone} (AMD terminology). The @sc{amd64} (nee x86-64) ABI documentation refers to the @dfn{red zone} when describing this scratch area. The default value is 0. Set this field if the architecture has such a red zone. The value must be aligned as required by the ABI (see @code{frame_align} above for an explanation of stack frame alignment). @end deftypevr @node Functions to Access Frame Data @subsection Functions to Access Frame Data These functions provide access to key registers and arguments in the stack frame. @deftypefn {Architecture Function} CORE_ADDR unwind_pc (struct gdbarch *@var{gdbarch}, struct frame_info *@var{next_frame}) This function is given a pointer to the NEXT stack frame (@pxref{All About Stack Frames, , All About Stack Frames}, for how frames are represented) and returns the value of the program counter in the PREVIOUS frame (i.e.@: the frame of the function that called THIS one). This is commonly referred to as the @dfn{return address}. 
The implementation, which must be frame agnostic (work with any frame), is typically no more than: @smallexample ULONGEST pc; pc = frame_unwind_register_unsigned (next_frame, @var{ARCH}_PC_REGNUM); return gdbarch_addr_bits_remove (gdbarch, pc); @end smallexample @end deftypefn @deftypefn {Architecture Function} CORE_ADDR unwind_sp (struct gdbarch *@var{gdbarch}, struct frame_info *@var{next_frame}) This function is given a pointer to the NEXT stack frame (@pxref{All About Stack Frames, , All About Stack Frames} for how frames are represented) and returns the value of the stack pointer in the PREVIOUS frame (i.e.@: the frame of the function that called THIS one). The implementation, which must be frame agnostic (work with any frame), is typically no more than: @smallexample ULONGEST sp; sp = frame_unwind_register_unsigned (next_frame, @var{ARCH}_SP_REGNUM); return gdbarch_addr_bits_remove (gdbarch, sp); @end smallexample @end deftypefn @deftypefn {Architecture Function} int frame_num_args (struct gdbarch *@var{gdbarch}, struct frame_info *@var{this_frame}) This function is given a pointer to THIS stack frame (@pxref{All About Stack Frames, , All About Stack Frames} for how frames are represented), and returns the number of arguments that are being passed, or -1 if not known. The default value is @code{NULL} (undefined), in which case the number of arguments passed on any stack frame is always unknown. For many architectures this will be a suitable default. @end deftypefn @node Analyzing Stacks---Frame Sniffers @subsection Analyzing Stacks---Frame Sniffers When a program stops, @value{GDBN} needs to construct the chain of struct @code{frame_info} representing the state of the stack using appropriate @dfn{sniffers}. Each architecture requires appropriate sniffers, but they do not form entries in @code{@w{struct gdbarch}}, since more than one sniffer may be required and a sniffer may be suitable for more than one @code{@w{struct gdbarch}}. Instead sniffers are associated with architectures using the following functions. @itemize @bullet @item @findex frame_unwind_append_sniffer @code{frame_unwind_append_sniffer} is used to add a new sniffer to analyze THIS frame when given a pointer to the NEXT frame. @item @findex frame_base_append_sniffer @code{frame_base_append_sniffer} is used to add a new sniffer which can determine information about the base of a stack frame. @item @findex frame_base_set_default @code{frame_base_set_default} is used to specify the default base sniffer. @end itemize These functions all take a reference to @code{@w{struct gdbarch}}, so they are associated with a specific architecture. They are usually called in the @code{gdbarch} initialization function, after the @code{gdbarch} struct has been set up. Unless a default has been set, the most recently appended sniffer will be tried first. The main frame unwinding sniffer (as set by @code{frame_unwind_append_sniffer)} returns a structure specifying a set of sniffing functions: @cindex @code{frame_unwind} @smallexample struct frame_unwind @{ enum frame_type type; frame_this_id_ftype *this_id; frame_prev_register_ftype *prev_register; const struct frame_data *unwind_data; frame_sniffer_ftype *sniffer; frame_prev_pc_ftype *prev_pc; frame_dealloc_cache_ftype *dealloc_cache; @}; @end smallexample The @code{type} field indicates the type of frame this sniffer can handle: normal, dummy (@pxref{Functions Creating Dummy Frames, , Functions Creating Dummy Frames}), signal handler or sentinel. 
Signal handlers sometimes have their own simplified stack structure for efficiency, so may need their own handlers. The @code{unwind_data} field holds additional information which may be relevant to particular types of frame. For example it may hold additional information for signal handler frames. The remaining fields define functions that yield different types of information when given a pointer to the NEXT stack frame. Not all functions need be provided. If an entry is @code{NULL}, the next sniffer will be tried instead. @itemize @bullet @item @code{this_id} determines the stack pointer and function (code entry point) for THIS stack frame. @item @code{prev_register} determines where the values of registers for the PREVIOUS stack frame are stored in THIS stack frame. @item @code{sniffer} takes a look at THIS frame's registers to determine if this is the appropriate unwinder. @item @code{prev_pc} determines the program counter for THIS frame. Only needed if the program counter is not an ordinary register (@pxref{Register Architecture Functions & Variables, , Functions and Variables Specifying the Register Architecture}). @item @code{dealloc_cache} frees any additional memory associated with the prologue cache for this frame (@pxref{Prologue Caches, , Prologue Caches}). @end itemize In general it is only the @code{this_id} and @code{prev_register} fields that need be defined for custom sniffers. The frame base sniffer is much simpler. It is a @code{@w{struct frame_base}}, which refers to the corresponding @code{frame_unwind} struct and whose fields refer to functions yielding various addresses within the frame. @cindex @code{frame_base} @smallexample struct frame_base @{ const struct frame_unwind *unwind; frame_this_base_ftype *this_base; frame_this_locals_ftype *this_locals; frame_this_args_ftype *this_args; @}; @end smallexample All the functions referred to take a pointer to the NEXT frame as argument. The function referred to by @code{this_base} returns the base address of THIS frame, the function referred to by @code{this_locals} returns the base address of local variables in THIS frame and the function referred to by @code{this_args} returns the base address of the function arguments in this frame. As described above, the base address of a frame is the address immediately before the start of the NEXT frame. For a falling stack, this is the lowest address in the frame and for a rising stack it is the highest address in the frame. For most architectures the same address is also the base address for local variables and arguments, in which case the same function can be used for all three entries@footnote{It is worth noting that if it cannot be determined in any other way (for example by there being a register with the name @code{"fp"}), then the result of the @code{this_base} function will be used as the value of the frame pointer variable @kbd{$fp} in @value{GDBN}. This is very often not correct (for example with the OpenRISC 1000, this value is the stack pointer, @kbd{$sp}). In this case a register (raw or pseudo) with the name @code{"fp"} should be defined. It will be used in preference as the value of @kbd{$fp}.}. @node Inferior Call Setup @section Inferior Call Setup @cindex calls to the inferior @menu * About Dummy Frames:: * Functions Creating Dummy Frames:: @end menu @node About Dummy Frames @subsection About Dummy Frames @cindex dummy frames @value{GDBN} can call functions in the target code (for example by using the @kbd{call} or @kbd{print} commands). 
These functions may be breakpointed, and it is essential that if a function does hit a breakpoint, commands like @kbd{backtrace} work correctly. This is achieved by making the stack look as though the function had been called from the point where @value{GDBN} had previously stopped. This requires that @value{GDBN} can set up stack frames appropriate for such function calls. @node Functions Creating Dummy Frames @subsection Functions Creating Dummy Frames The following functions provide the functionality to set up such @dfn{dummy} stack frames. @deftypefn {Architecture Function} CORE_ADDR push_dummy_call (struct gdbarch *@var{gdbarch}, struct value *@var{function}, struct regcache *@var{regcache}, CORE_ADDR @var{bp_addr}, int @var{nargs}, struct value **@var{args}, CORE_ADDR @var{sp}, int @var{struct_return}, CORE_ADDR @var{struct_addr}) This function sets up a dummy stack frame for the function about to be called. @code{push_dummy_call} is given the arguments to be passed and must copy them into registers or push them on to the stack as appropriate for the ABI. @var{function} is a pointer to the function that will be called and @var{regcache} the register cache from which values should be obtained. @var{bp_addr} is the address to which the function should return (which is breakpointed, so @value{GDBN} can regain control, hence the name). @var{nargs} is the number of arguments to pass and @var{args} an array containing the argument values. @var{struct_return} is non-zero (true) if the function returns a structure, and if so @var{struct_addr} is the address in which the structure should be returned. After calling this function, @value{GDBN} will pass control to the target at the address of the function, which will find the stack and registers set up just as expected. The default value of this function is @code{NULL} (undefined). If the function is not defined, then @value{GDBN} will not allow the user to call functions within the target being debugged. @end deftypefn @deftypefn {Architecture Function} {struct frame_id} unwind_dummy_id (struct gdbarch *@var{gdbarch}, struct frame_info *@var{next_frame}) This is the inverse of @code{push_dummy_call} which restores the stack pointer and program counter after a call to evaluate a function using a dummy stack frame. The result is a @code{@w{struct frame_id}}, which contains the value of the stack pointer and program counter to be used. The NEXT frame pointer is provided as argument, @var{next_frame}. THIS frame is the frame of the dummy function, which can be unwound, to yield the required stack pointer and program counter from the PREVIOUS frame. The default value is @code{NULL} (undefined). If @code{push_dummy_call} is defined, then this function should also be defined. @end deftypefn @deftypefn {Architecture Function} CORE_ADDR push_dummy_code (struct gdbarch *@var{gdbarch}, CORE_ADDR @var{sp}, CORE_ADDR @var{funaddr}, struct value **@var{args}, int @var{nargs}, struct type *@var{value_type}, CORE_ADDR *@var{real_pc}, CORE_ADDR *@var{bp_addr}, struct regcache *@var{regcache}) If this function is not defined (its default value is @code{NULL}), a dummy call will use the entry point of the currently loaded code on the target as its return address. A temporary breakpoint will be set there, so the location must be writable and have room for a breakpoint. It is possible that this default is not suitable. 
It might not be writable (in ROM possibly), or the ABI might require code to be executed on return from a call to unwind the stack before the breakpoint is encountered. If either of these is the case, then @code{push_dummy_code} should be defined to push an instruction sequence onto the end of the stack to which the dummy call should return. The arguments are essentially the same as those to @code{push_dummy_call}. However, the function is also provided with the type of the function result, @var{value_type}; @var{bp_addr} is used to return a value (the address at which the breakpoint instruction should be inserted); and @var{real_pc} is used to specify the resume address when starting the call sequence. The function should return the updated innermost stack address. @quotation @emph{Note:} This does require that code in the stack can be executed. Some Harvard architectures may not allow this. @end quotation @end deftypefn @node Adding support for debugging core files @section Adding support for debugging core files @cindex core files The prerequisite for adding core file support in @value{GDBN} is to have core file support in BFD. Once BFD support is available, writing the appropriate @code{regset_from_core_section} architecture function should be all that is needed in order to add support for core files in @value{GDBN}. @node Defining Other Architecture Features @section Defining Other Architecture Features This section describes other functions and values in @code{gdbarch}, together with some useful macros, that you can use to define the target architecture. @table @code @item CORE_ADDR gdbarch_addr_bits_remove (@var{gdbarch}, @var{addr}) @findex gdbarch_addr_bits_remove If a raw machine instruction address includes any bits that are not really part of the address, then this function is used to zero those bits in @var{addr}. This is only used for addresses of instructions, and even then not in all contexts. For example, the two low-order bits of the PC on the Hewlett-Packard PA 2.0 architecture contain the privilege level of the corresponding instruction. Since instructions must always be aligned on four-byte boundaries, the processor masks out these bits to generate the actual address of the instruction. @code{gdbarch_addr_bits_remove} would then, for example, look like this: @smallexample CORE_ADDR arch_addr_bits_remove (CORE_ADDR addr) @{ return (addr & ~0x3); @} @end smallexample @item int address_class_name_to_type_flags (@var{gdbarch}, @var{name}, @var{type_flags_ptr}) @findex address_class_name_to_type_flags If @var{name} is a valid address class qualifier name, set the @code{int} referenced by @var{type_flags_ptr} to the mask representing the qualifier and return 1. If @var{name} is not a valid address class qualifier name, return 0. The value for @var{type_flags_ptr} should be one of @code{TYPE_FLAG_ADDRESS_CLASS_1}, @code{TYPE_FLAG_ADDRESS_CLASS_2}, or possibly some combination of these values or'd together. @xref{Target Architecture Definition, , Address Classes}. @item int address_class_name_to_type_flags_p (@var{gdbarch}) @findex address_class_name_to_type_flags_p Predicate which indicates whether @code{address_class_name_to_type_flags} has been defined. @item int gdbarch_address_class_type_flags (@var{gdbarch}, @var{byte_size}, @var{dwarf2_addr_class}) @findex gdbarch_address_class_type_flags Given a pointer's byte size (as described by the debug information) and the possible @code{DW_AT_address_class} value, return the type flags used by @value{GDBN} to represent this address class.
The value returned should be one of @code{TYPE_FLAG_ADDRESS_CLASS_1},
@code{TYPE_FLAG_ADDRESS_CLASS_2}, or possibly some combination of these
values or'd together.
@xref{Target Architecture Definition, , Address Classes}.

@item int gdbarch_address_class_type_flags_p (@var{gdbarch})
@findex gdbarch_address_class_type_flags_p
Predicate which indicates whether @code{gdbarch_address_class_type_flags}
has been defined.

@item const char *gdbarch_address_class_type_flags_to_name (@var{gdbarch}, @var{type_flags})
@findex gdbarch_address_class_type_flags_to_name
Return the name of the address class qualifier associated with the type
flags given by @var{type_flags}.

@item int gdbarch_address_class_type_flags_to_name_p (@var{gdbarch})
@findex gdbarch_address_class_type_flags_to_name_p
Predicate which indicates whether
@code{gdbarch_address_class_type_flags_to_name} has been defined.
@xref{Target Architecture Definition, , Address Classes}.

@item void gdbarch_address_to_pointer (@var{gdbarch}, @var{type}, @var{buf}, @var{addr})
@findex gdbarch_address_to_pointer
Store in @var{buf} a pointer of type @var{type} representing the address
@var{addr}, in the appropriate format for the current architecture.  This
function may safely assume that @var{type} is either a pointer or a
C@t{++} reference type.
@xref{Target Architecture Definition, , Pointers Are Not Always Addresses}.

@item int gdbarch_believe_pcc_promotion (@var{gdbarch})
@findex gdbarch_believe_pcc_promotion
Used to indicate whether the compiler promotes a @code{short} or @code{char}
parameter to an @code{int}, but still reports the parameter as its original
type, rather than the promoted type.

@item gdbarch_bits_big_endian (@var{gdbarch})
@findex gdbarch_bits_big_endian
This is used if the numbering of bits in the target does @strong{not} match
the endianness of the target byte order.  A value of 1 means that the bits
are numbered in a big-endian bit order, 0 means little-endian.

@item set_gdbarch_bits_big_endian (@var{gdbarch}, @var{bits_big_endian})
@findex set_gdbarch_bits_big_endian
Calling @code{set_gdbarch_bits_big_endian} with a value of 1 indicates that
the bits in the target are numbered in a big-endian bit order, 0 indicates
little-endian.

@item BREAKPOINT
@findex BREAKPOINT
This is the character array initializer for the bit pattern to put into
memory where a breakpoint is set.  Although it's common to use a trap
instruction for a breakpoint, it's not required; for instance, the bit
pattern could be an invalid instruction.  The breakpoint must be no longer
than the shortest instruction of the architecture.

@code{BREAKPOINT} has been deprecated in favor of
@code{gdbarch_breakpoint_from_pc}.

@item BIG_BREAKPOINT
@itemx LITTLE_BREAKPOINT
@findex LITTLE_BREAKPOINT
@findex BIG_BREAKPOINT
Similar to @code{BREAKPOINT}, but used for bi-endian targets.

@code{BIG_BREAKPOINT} and @code{LITTLE_BREAKPOINT} have been deprecated in
favor of @code{gdbarch_breakpoint_from_pc}.

@item const gdb_byte *gdbarch_breakpoint_from_pc (@var{gdbarch}, @var{pcptr}, @var{lenptr})
@findex gdbarch_breakpoint_from_pc
@anchor{gdbarch_breakpoint_from_pc} Use the program counter to determine the
contents and size of a breakpoint instruction.  It returns a pointer to a
static string of bytes that encode a breakpoint instruction, stores the
length of the string to @code{*@var{lenptr}}, and adjusts the program
counter (if necessary) to point to the actual memory location where the
breakpoint should be inserted.
On input, the program counter (@code{*@var{pcptr}}) is the value of the
inferior's PC register, in its encoded form.  If software breakpoints are
supported, the function sets this argument to the PC's plain address.  If
software breakpoints are not supported, the function returns @code{NULL}
instead of the encoded breakpoint instruction.

Although it is common to use a trap instruction for a breakpoint, it's not
required; for instance, the bit pattern could be an invalid instruction.  The
breakpoint must be no longer than the shortest instruction of the
architecture.

The breakpoint bytes provided can also be used by @code{bp_loc_is_permanent}
to detect permanent breakpoints.  @code{gdbarch_breakpoint_from_pc} should
return an unchanged memory copy if it was called for a location with a
permanent breakpoint, as some architectures use breakpoint instructions
containing arbitrary parameter values.

This function replaces all the other @code{BREAKPOINT} macros.

@item int gdbarch_memory_insert_breakpoint (@var{gdbarch}, @var{bp_tgt})
@itemx gdbarch_memory_remove_breakpoint (@var{gdbarch}, @var{bp_tgt})
@findex gdbarch_memory_remove_breakpoint
@findex gdbarch_memory_insert_breakpoint
Insert or remove memory-based breakpoints.  Reasonable defaults
(@code{default_memory_insert_breakpoint} and
@code{default_memory_remove_breakpoint} respectively) have been provided so
that it is not necessary to set these for most architectures.  Architectures
which may want to set @code{gdbarch_memory_insert_breakpoint} and
@code{gdbarch_memory_remove_breakpoint} will likely have instructions that
are oddly sized or are not stored in a conventional manner.

It may also be desirable (from an efficiency standpoint) to define custom
breakpoint insertion and removal routines if
@code{gdbarch_breakpoint_from_pc} needs to read the target's memory for some
reason.

@item CORE_ADDR gdbarch_adjust_breakpoint_address (@var{gdbarch}, @var{bpaddr})
@findex gdbarch_adjust_breakpoint_address
@cindex breakpoint address adjusted
Given an address at which a breakpoint is desired, return a breakpoint
address adjusted to account for architectural constraints on breakpoint
placement.  This method is not needed by most targets.

The FR-V target (see @file{frv-tdep.c}) requires this method.  The FR-V is a
VLIW architecture in which a number of RISC-like instructions are grouped
(packed) together into an aggregate instruction or instruction bundle.  When
the processor executes one of these bundles, the component instructions are
executed in parallel.

In the course of optimization, the compiler may group instructions from
distinct source statements into the same bundle.  The line number information
associated with one of the latter statements will likely refer to some
instruction other than the first one in the bundle.  So, if the user attempts
to place a breakpoint on one of these latter statements, @value{GDBN} must be
careful to @emph{not} place the break instruction on any instruction other
than the first one in the bundle.  (Remember though that the instructions
within a bundle execute in parallel, so the @emph{first} instruction is the
instruction at the lowest address and has nothing to do with execution
order.)

The FR-V's @code{gdbarch_adjust_breakpoint_address} method will adjust a
breakpoint's address by scanning backwards for the beginning of the bundle,
returning the address of the bundle.

Since the adjustment of a breakpoint may significantly alter a user's
expectation, @value{GDBN} prints a warning when an adjusted breakpoint is
initially set and each time that breakpoint is hit.
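As an illustration, a much-simplified implementation for a hypothetical
VLIW-like architecture, whose bundles are assumed to be 16 bytes long and
16-byte aligned, might just round the requested address down to the start of
the containing bundle.  This is a sketch only; the function name and the
bundle size are illustrative placeholders, not taken from any real target:

@smallexample
static CORE_ADDR
arch_adjust_breakpoint_address (struct gdbarch *gdbarch, CORE_ADDR bpaddr)
@{
  /* Hypothetical: instruction bundles are 16 bytes long and 16-byte
     aligned, so the breakpoint must be placed on the first
     instruction of the containing bundle.  */
  return bpaddr & ~(CORE_ADDR) 0xf;
@}
@end smallexample

A real implementation, like the FR-V one, typically has to scan backwards
through memory to find the bundle boundary instead of relying on a fixed
alignment.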
@item int gdbarch_call_dummy_location (@var{gdbarch})
@findex gdbarch_call_dummy_location
See the file @file{inferior.h}.

This method has been replaced by @code{gdbarch_push_dummy_code}
(@pxref{gdbarch_push_dummy_code}).

@item int gdbarch_cannot_fetch_register (@var{gdbarch}, @var{regnum})
@findex gdbarch_cannot_fetch_register
This function should return nonzero if @var{regnum} cannot be fetched from an
inferior process.

@item int gdbarch_cannot_store_register (@var{gdbarch}, @var{regnum})
@findex gdbarch_cannot_store_register
This function should return nonzero if @var{regnum} should not be written to
the target.  This is often the case for program counters, status words, and
other special registers.  This function returns 0 by default so that
@value{GDBN} will assume that all registers may be written.

@item int gdbarch_convert_register_p (@var{gdbarch}, @var{regnum}, struct type *@var{type})
@findex gdbarch_convert_register_p
Return non-zero if register @var{regnum} represents data values of type
@var{type} in a non-standard form.
@xref{Target Architecture Definition, , Using Different Register and Memory Data Representations}.

@item int gdbarch_fp0_regnum (@var{gdbarch})
@findex gdbarch_fp0_regnum
This function returns the number of the first floating point register, if
the machine has such registers.  Otherwise, it returns -1.

@item CORE_ADDR gdbarch_decr_pc_after_break (@var{gdbarch})
@findex gdbarch_decr_pc_after_break
This function should return the amount by which to decrement the PC after the
program encounters a breakpoint.  This is often the number of bytes in
@code{BREAKPOINT}, though not always.  For most targets this value will be 0.

@item DISABLE_UNSETTABLE_BREAK (@var{addr})
@findex DISABLE_UNSETTABLE_BREAK
If defined, this should evaluate to 1 if @var{addr} is in a shared library in
which breakpoints cannot be set and so should be disabled.

@item int gdbarch_dwarf2_reg_to_regnum (@var{gdbarch}, @var{dwarf2_regnr})
@findex gdbarch_dwarf2_reg_to_regnum
Convert DWARF2 register number @var{dwarf2_regnr} into @value{GDBN} regnum.
If not defined, no conversion will be performed.

@item int gdbarch_ecoff_reg_to_regnum (@var{gdbarch}, @var{ecoff_regnr})
@findex gdbarch_ecoff_reg_to_regnum
Convert ECOFF register number @var{ecoff_regnr} into @value{GDBN} regnum.  If
not defined, no conversion will be performed.

@item GCC_COMPILED_FLAG_SYMBOL
@itemx GCC2_COMPILED_FLAG_SYMBOL
@findex GCC2_COMPILED_FLAG_SYMBOL
@findex GCC_COMPILED_FLAG_SYMBOL
If defined, these are the names of the symbols that @value{GDBN} will look
for to detect that GCC compiled the file.  The default symbols are
@code{gcc_compiled.} and @code{gcc2_compiled.}, respectively.  (Currently
only defined for the Delta 68.)

@item gdbarch_get_longjmp_target
@findex gdbarch_get_longjmp_target
This function determines the target PC address that @code{longjmp} will jump
to, assuming that we have just stopped at a @code{longjmp} breakpoint.  It
takes a @code{CORE_ADDR *} as argument, and stores the target PC value
through this pointer.  It examines the current state of the machine as
needed, typically by using a manually-determined offset into the
@code{jmp_buf}.  (While we might like to get the offset from the target's
@file{jmpbuf.h}, that header file cannot be assumed to be available when
building a cross-debugger.)

@item DEPRECATED_IBM6000_TARGET
@findex DEPRECATED_IBM6000_TARGET
Shows that we are configured for an IBM RS/6000 system.  This conditional
should be eliminated (FIXME) and replaced by feature-specific macros.
It was introduced in haste and we are repenting at leisure.

@item I386_USE_GENERIC_WATCHPOINTS
An x86-based target can define this to use the generic x86 watchpoint
support; see @ref{Algorithms, I386_USE_GENERIC_WATCHPOINTS}.

@item gdbarch_in_function_epilogue_p (@var{gdbarch}, @var{addr})
@findex gdbarch_in_function_epilogue_p
Returns non-zero if the given @var{addr} is in the epilogue of a function.
The epilogue of a function is defined as the part of a function where the
stack frame of the function has already been destroyed up to the final
`return from function call' instruction.

@item int gdbarch_in_solib_return_trampoline (@var{gdbarch}, @var{pc}, @var{name})
@findex gdbarch_in_solib_return_trampoline
Define this function to return nonzero if the program is stopped in the
trampoline that returns from a shared library.

@item target_so_ops.in_dynsym_resolve_code (@var{pc})
@findex in_dynsym_resolve_code
Define this to return nonzero if the program is stopped in the dynamic
linker.

@item SKIP_SOLIB_RESOLVER (@var{pc})
@findex SKIP_SOLIB_RESOLVER
Define this to evaluate to the (nonzero) address at which execution should
continue to get past the dynamic linker's symbol resolution function.  A zero
value indicates that it is not important or necessary to set a breakpoint to
get through the dynamic linker and that single stepping will suffice.

@item CORE_ADDR gdbarch_integer_to_address (@var{gdbarch}, @var{type}, @var{buf})
@findex gdbarch_integer_to_address
@cindex converting integers to addresses
Define this when the architecture needs to handle non-pointer-to-address
conversions specially.  It converts the given value to an address according
to the current architecture's conventions.

@emph{Pragmatics: When the user copies a well-defined expression from their
source code and passes it, as a parameter, to @value{GDBN}'s @code{print}
command, they should get the same value as would have been computed by the
target program.  Any deviation from this rule can cause major confusion and
annoyance, and needs to be justified carefully.  In other words, @value{GDBN}
doesn't really have the freedom to do these conversions in clever and useful
ways.  It has, however, been pointed out that users aren't complaining about
how @value{GDBN} casts integers to pointers; they are complaining that they
can't take an address from a disassembly listing and give it to @code{x/i}.
Adding an architecture method like @code{gdbarch_integer_to_address}
certainly makes it possible for @value{GDBN} to ``get it right'' in all
circumstances.}

@xref{Target Architecture Definition, , Pointers Are Not Always Addresses}.

@item CORE_ADDR gdbarch_pointer_to_address (@var{gdbarch}, @var{type}, @var{buf})
@findex gdbarch_pointer_to_address
Assume that @var{buf} holds a pointer of type @var{type}, in the appropriate
format for the current architecture.  Return the byte address the pointer
refers to.
@xref{Target Architecture Definition, , Pointers Are Not Always Addresses}.

@item void gdbarch_register_to_value (@var{gdbarch}, @var{frame}, @var{regnum}, @var{type}, @var{fur})
@findex gdbarch_register_to_value
Convert the raw contents of register @var{regnum} into a value of type
@var{type}.
@xref{Target Architecture Definition, , Using Different Register and Memory Data Representations}.

@item REGISTER_CONVERT_TO_VIRTUAL (@var{reg}, @var{type}, @var{from}, @var{to})
@findex REGISTER_CONVERT_TO_VIRTUAL
Convert the value of register @var{reg} from its raw form to its virtual
form.
@xref{Target Architecture Definition, , Raw and Virtual Register Representations}.

@item REGISTER_CONVERT_TO_RAW (@var{type}, @var{reg}, @var{from}, @var{to})
@findex REGISTER_CONVERT_TO_RAW
Convert the value of register @var{reg} from its virtual form to its raw
form.
@xref{Target Architecture Definition, , Raw and Virtual Register Representations}.

@item const struct regset *regset_from_core_section (struct gdbarch * @var{gdbarch}, const char * @var{sect_name}, size_t @var{sect_size})
@findex regset_from_core_section
Return the appropriate register set for a core file section with name
@var{sect_name} and size @var{sect_size}.

@item SOFTWARE_SINGLE_STEP_P ()
@findex SOFTWARE_SINGLE_STEP_P
Define this as 1 if the target does not have a hardware single-step
mechanism.  The macro @code{SOFTWARE_SINGLE_STEP} must also be defined.

@item SOFTWARE_SINGLE_STEP (@var{signal}, @var{insert_breakpoints_p})
@findex SOFTWARE_SINGLE_STEP
A function that inserts or removes (depending on @var{insert_breakpoints_p})
breakpoints at each possible destination of the next instruction.  See
@file{sparc-tdep.c} and @file{rs6000-tdep.c} for examples.

@item set_gdbarch_sofun_address_maybe_missing (@var{gdbarch}, @var{set})
@findex set_gdbarch_sofun_address_maybe_missing
Somebody clever observed that, the more actual addresses you have in the
debug information, the more time the linker has to spend relocating them.  So
whenever there's some other way the debugger could find the address it needs,
you should omit it from the debug info, to make linking faster.

Calling @code{set_gdbarch_sofun_address_maybe_missing} with a non-zero
argument @var{set} indicates that a particular set of hacks of this sort are
in use, affecting @code{N_SO} and @code{N_FUN} entries in stabs-format
debugging information.  @code{N_SO} stabs mark the beginning and ending
addresses of compilation units in the text segment.  @code{N_FUN} stabs mark
the starts and ends of functions.

In this case, @value{GDBN} assumes two things:

@itemize @bullet
@item
@code{N_FUN} stabs have an address of zero.  Instead of using those
addresses, you should find the address where the function starts by taking
the function name from the stab, and then looking that up in the minsyms
(the linker/assembler symbol table).  In other words, the stab has the name,
and the linker/assembler symbol table is the only place that carries the
address.

@item
@code{N_SO} stabs have an address of zero, too.  You just look at the
@code{N_FUN} stabs that appear before and after the @code{N_SO} stab, and
guess the starting and ending addresses of the compilation unit from them.
@end itemize

@item int gdbarch_stabs_argument_has_addr (@var{gdbarch}, @var{type})
@findex gdbarch_stabs_argument_has_addr
@anchor{gdbarch_stabs_argument_has_addr} Define this function to return
nonzero if a function argument of type @var{type} is passed by reference
instead of value.

@item CORE_ADDR gdbarch_push_dummy_call (@var{gdbarch}, @var{function}, @var{regcache}, @var{bp_addr}, @var{nargs}, @var{args}, @var{sp}, @var{struct_return}, @var{struct_addr})
@findex gdbarch_push_dummy_call
@anchor{gdbarch_push_dummy_call} Define this to push the dummy frame's call
to the inferior function onto the stack.  In addition to pushing the
@var{nargs} arguments, the code should push @var{struct_addr} (when
@var{struct_return} is non-zero), and the return address (@var{bp_addr}, in
the inferior's PC register encoding).
@var{function} is a pointer to a @code{struct value}; on architectures that
use function descriptors, this contains the function descriptor value.

Returns the updated top-of-stack pointer.

@item CORE_ADDR gdbarch_push_dummy_code (@var{gdbarch}, @var{sp}, @var{funaddr}, @var{using_gcc}, @var{args}, @var{nargs}, @var{value_type}, @var{real_pc}, @var{bp_addr}, @var{regcache})
@findex gdbarch_push_dummy_code
@anchor{gdbarch_push_dummy_code} Given a stack-based call dummy, push the
instruction sequence (including space for a breakpoint) to which the called
function should return.

Set @var{bp_addr} to the address at which the breakpoint instruction should
be inserted (in the inferior's PC register encoding), @var{real_pc} to the
resume address when starting the call sequence, and return the updated
innermost stack address.

By default, the stack is grown sufficiently to hold a frame-aligned
(@pxref{frame_align}) breakpoint, @var{bp_addr} is set to the address
reserved for that breakpoint (in the inferior's PC register encoding), and
@var{real_pc} is set to @var{funaddr}.

This method replaces @w{@code{gdbarch_call_dummy_location (@var{gdbarch})}}.

@item int gdbarch_sdb_reg_to_regnum (@var{gdbarch}, @var{sdb_regnr})
@findex gdbarch_sdb_reg_to_regnum
Use this function to convert sdb register @var{sdb_regnr} into @value{GDBN}
regnum.  If not defined, no conversion will be done.

@item enum return_value_convention gdbarch_return_value (struct gdbarch *@var{gdbarch}, struct type *@var{valtype}, struct regcache *@var{regcache}, void *@var{readbuf}, const void *@var{writebuf})
@findex gdbarch_return_value
@anchor{gdbarch_return_value} Given a function with a return value of type
@var{valtype}, return which return-value convention that function would use.

@value{GDBN} currently recognizes two function return-value conventions:
@code{RETURN_VALUE_REGISTER_CONVENTION} where the return value is found in
registers; and @code{RETURN_VALUE_STRUCT_CONVENTION} where the return value
is found in memory and the address of that memory location is passed in as
the function's first parameter.

If the register convention is being used, and @var{writebuf} is
non-@code{NULL}, also copy the return-value in @var{writebuf} into
@var{regcache}.

If the register convention is being used, and @var{readbuf} is
non-@code{NULL}, also copy the return value from @var{regcache} into
@var{readbuf} (@var{regcache} contains a copy of the registers from the
just-returned function).

@emph{Maintainer note: This method replaces separate predicate, extract, and
store methods.  By having only one method, the logic needed to determine the
return-value convention need only be implemented in one place.  If
@value{GDBN} were written in an @sc{oo} language, this method would instead
return an object that knew how to perform the register return-value extract
and store.}

@emph{Maintainer note: This method does not take a @var{gcc_p} parameter,
and such a parameter should not be added.  If an architecture that requires
per-compiler or per-function information is identified, then the replacement
of @var{valtype} with @code{struct value} @var{function} should be pursued.}

@emph{Maintainer note: The @var{regcache} parameter limits this method to the
innermost frame.
While replacing @var{regcache} with a @code{struct frame_info} @var{frame} parameter would remove that limitation there has yet to be a demonstrated need for such a change.} @item void gdbarch_skip_permanent_breakpoint (@var{gdbarch}, @var{regcache}) @findex gdbarch_skip_permanent_breakpoint Advance the inferior's PC past a permanent breakpoint. @value{GDBN} normally steps over a breakpoint by removing it, stepping one instruction, and re-inserting the breakpoint. However, permanent breakpoints are hardwired into the inferior, and can't be removed, so this strategy doesn't work. Calling @code{gdbarch_skip_permanent_breakpoint} adjusts the processor's state so that execution will resume just after the breakpoint. This function does the right thing even when the breakpoint is in the delay slot of a branch or jump. @item CORE_ADDR gdbarch_skip_trampoline_code (@var{gdbarch}, @var{frame}, @var{pc}) @findex gdbarch_skip_trampoline_code If the target machine has trampoline code that sits between callers and the functions being called, then define this function to return a new PC that is at the start of the real function. @item int gdbarch_deprecated_fp_regnum (@var{gdbarch}) @findex gdbarch_deprecated_fp_regnum If the frame pointer is in a register, use this function to return the number of that register. @item int gdbarch_stab_reg_to_regnum (@var{gdbarch}, @var{stab_regnr}) @findex gdbarch_stab_reg_to_regnum Use this function to convert stab register @var{stab_regnr} into @value{GDBN} regnum. If not defined, no conversion will be done. @item TARGET_CHAR_BIT @findex TARGET_CHAR_BIT Number of bits in a char; defaults to 8. @item int gdbarch_char_signed (@var{gdbarch}) @findex gdbarch_char_signed Non-zero if @code{char} is normally signed on this architecture; zero if it should be unsigned. The ISO C standard requires the compiler to treat @code{char} as equivalent to either @code{signed char} or @code{unsigned char}; any character in the standard execution set is supposed to be positive. Most compilers treat @code{char} as signed, but @code{char} is unsigned on the IBM S/390, RS6000, and PowerPC targets. @item int gdbarch_double_bit (@var{gdbarch}) @findex gdbarch_double_bit Number of bits in a double float; defaults to @w{@code{8 * TARGET_CHAR_BIT}}. @item int gdbarch_float_bit (@var{gdbarch}) @findex gdbarch_float_bit Number of bits in a float; defaults to @w{@code{4 * TARGET_CHAR_BIT}}. @item int gdbarch_int_bit (@var{gdbarch}) @findex gdbarch_int_bit Number of bits in an integer; defaults to @w{@code{4 * TARGET_CHAR_BIT}}. @item int gdbarch_long_bit (@var{gdbarch}) @findex gdbarch_long_bit Number of bits in a long integer; defaults to @w{@code{4 * TARGET_CHAR_BIT}}. @item int gdbarch_long_double_bit (@var{gdbarch}) @findex gdbarch_long_double_bit Number of bits in a long double float; defaults to @w{@code{2 * gdbarch_double_bit (@var{gdbarch})}}. @item int gdbarch_long_long_bit (@var{gdbarch}) @findex gdbarch_long_long_bit Number of bits in a long long integer; defaults to @w{@code{2 * gdbarch_long_bit (@var{gdbarch})}}. @item int gdbarch_ptr_bit (@var{gdbarch}) @findex gdbarch_ptr_bit Number of bits in a pointer; defaults to @w{@code{gdbarch_int_bit (@var{gdbarch})}}. @item int gdbarch_short_bit (@var{gdbarch}) @findex gdbarch_short_bit Number of bits in a short integer; defaults to @w{@code{2 * TARGET_CHAR_BIT}}. 
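An architecture's initialization routine normally overrides several of these
type-size defaults at once.  The following is a minimal sketch only, assuming
a hypothetical 32-bit (ILP32) architecture; the function name
@code{arch_gdbarch_init} is a placeholder, not taken from any real target,
and a real routine would also check @var{arches} for a reusable
@code{gdbarch} and fill in many more methods:

@smallexample
static struct gdbarch *
arch_gdbarch_init (struct gdbarch_info info, struct gdbarch_list *arches)
@{
  struct gdbarch *gdbarch = gdbarch_alloc (&info, NULL);

  /* Fundamental type sizes for a hypothetical ILP32 target.  */
  set_gdbarch_short_bit (gdbarch, 2 * TARGET_CHAR_BIT);
  set_gdbarch_int_bit (gdbarch, 4 * TARGET_CHAR_BIT);
  set_gdbarch_long_bit (gdbarch, 4 * TARGET_CHAR_BIT);
  set_gdbarch_long_long_bit (gdbarch, 8 * TARGET_CHAR_BIT);
  set_gdbarch_float_bit (gdbarch, 4 * TARGET_CHAR_BIT);
  set_gdbarch_double_bit (gdbarch, 8 * TARGET_CHAR_BIT);
  set_gdbarch_ptr_bit (gdbarch, 4 * TARGET_CHAR_BIT);

  return gdbarch;
@}
@end smallexample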
@item void gdbarch_virtual_frame_pointer (@var{gdbarch}, @var{pc}, @var{frame_regnum}, @var{frame_offset}) @findex gdbarch_virtual_frame_pointer Returns a @code{(@var{register}, @var{offset})} pair representing the virtual frame pointer in use at the code address @var{pc}. If virtual frame pointers are not used, a default definition simply returns @code{gdbarch_deprecated_fp_regnum} (or @code{gdbarch_sp_regnum}, if no frame pointer is defined), with an offset of zero. @c need to explain virtual frame pointers, they are recorded in agent @c expressions for tracepoints @item TARGET_HAS_HARDWARE_WATCHPOINTS If non-zero, the target has support for hardware-assisted watchpoints. @xref{Algorithms, watchpoints}, for more details and other related macros. @item int gdbarch_print_insn (@var{gdbarch}, @var{vma}, @var{info}) @findex gdbarch_print_insn This is the function used by @value{GDBN} to print an assembly instruction. It prints the instruction at address @var{vma} in debugged memory and returns the length of the instruction, in bytes. This usually points to a function in the @code{opcodes} library (@pxref{Support Libraries, ,Opcodes}). @var{info} is a structure (of type @code{disassemble_info}) defined in the header file @file{include/dis-asm.h}, and used to pass information to the instruction decoding routine. @item frame_id gdbarch_dummy_id (@var{gdbarch}, @var{frame}) @findex gdbarch_dummy_id @anchor{gdbarch_dummy_id} Given @var{frame} return a @w{@code{struct frame_id}} that uniquely identifies an inferior function call's dummy frame. The value returned must match the dummy frame stack value previously saved by @code{call_function_by_hand}. @item void gdbarch_value_to_register (@var{gdbarch}, @var{frame}, @var{type}, @var{buf}) @findex gdbarch_value_to_register Convert a value of type @var{type} into the raw contents of a register. @xref{Target Architecture Definition, , Using Different Register and Memory Data Representations}. @end table Motorola M68K target conditionals. @ftable @code @item BPT_VECTOR Define this to be the 4-bit location of the breakpoint trap vector. If not defined, it will default to @code{0xf}. @item REMOTE_BPT_VECTOR Defaults to @code{1}. @end ftable @node Adding a New Target @section Adding a New Target @cindex adding a target The following files add a target to @value{GDBN}: @table @file @cindex target dependent files @item gdb/@var{ttt}-tdep.c Contains any miscellaneous code required for this target machine. On some machines it doesn't exist at all. @item gdb/@var{arch}-tdep.c @itemx gdb/@var{arch}-tdep.h This is required to describe the basic layout of the target machine's processor chip (registers, stack, etc.). It can be shared among many targets that use the same processor architecture. @end table (Target header files such as @file{gdb/config/@var{arch}/tm-@var{ttt}.h}, @file{gdb/config/@var{arch}/tm-@var{arch}.h}, and @file{config/tm-@var{os}.h} are no longer used.) @findex _initialize_@var{arch}_tdep A @value{GDBN} description for a new architecture, arch is created by defining a global function @code{_initialize_@var{arch}_tdep}, by convention in the source file @file{@var{arch}-tdep.c}. For example, in the case of the OpenRISC 1000, this function is called @code{_initialize_or1k_tdep} and is found in the file @file{or1k-tdep.c}. 
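In outline, such a function usually does little more than register the
architecture with the core of @value{GDBN} by calling @code{gdbarch_register}
(described just below).  A minimal sketch follows; @code{arch_gdbarch_init}
and @code{bfd_arch_arch} are placeholders for the architecture's own
initialization routine and its BFD architecture constant, not names from any
real target:

@smallexample
void
_initialize_arch_tdep (void)
@{
  /* arch_gdbarch_init is the architecture's gdbarch initialization
     routine, defined earlier in arch-tdep.c; bfd_arch_arch stands for
     the architecture's constant from BFD's bfd_architecture
     enumeration.  No dump routine is registered in this sketch.  */
  gdbarch_register (bfd_arch_arch, arch_gdbarch_init, NULL);
@}
@end smallexample

A real initialization function may also register architecture-specific
commands or OS ABI handlers, but the call to @code{gdbarch_register} is the
essential step.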
The object file resulting from compiling this source file, which will contain the implementation of the @code{_initialize_@var{arch}_tdep} function is specified in the @value{GDBN} @file{configure.tgt} file, which includes a large case statement pattern matching against the @code{--target} option of the @kbd{configure} script. @quotation @emph{Note:} If the architecture requires multiple source files, the corresponding binaries should be included in @file{configure.tgt}. However if there are header files, the dependencies on these will not be picked up from the entries in @file{configure.tgt}. The @file{Makefile.in} file will need extending to show these dependencies. @end quotation @findex gdbarch_register A new struct gdbarch, defining the new architecture, is created within the @code{_initialize_@var{arch}_tdep} function by calling @code{gdbarch_register}: @smallexample void gdbarch_register (enum bfd_architecture architecture, gdbarch_init_ftype *init_func, gdbarch_dump_tdep_ftype *tdep_dump_func); @end smallexample This function has been described fully in an earlier section. @xref{How an Architecture is Represented, , How an Architecture is Represented}. The new @code{@w{struct gdbarch}} should contain implementations of the necessary functions (described in the previous sections) to describe the basic layout of the target machine's processor chip (registers, stack, etc.). It can be shared among many targets that use the same processor architecture. @node Target Descriptions @chapter Target Descriptions @cindex target descriptions The target architecture definition (@pxref{Target Architecture Definition}) contains @value{GDBN}'s hard-coded knowledge about an architecture. For some platforms, it is handy to have more flexible knowledge about a specific instance of the architecture---for instance, a processor or development board. @dfn{Target descriptions} provide a mechanism for the user to tell @value{GDBN} more about what their target supports, or for the target to tell @value{GDBN} directly. For details on writing, automatically supplying, and manually selecting target descriptions, see @ref{Target Descriptions, , , gdb, Debugging with @value{GDBN}}. This section will cover some related topics about the @value{GDBN} internals. @menu * Target Descriptions Implementation:: * Adding Target Described Register Support:: @end menu @node Target Descriptions Implementation @section Target Descriptions Implementation @cindex target descriptions, implementation Before @value{GDBN} connects to a new target, or runs a new program on an existing target, it discards any existing target description and reverts to a default gdbarch. Then, after connecting, it looks for a new target description by calling @code{target_find_description}. A description may come from a user specified file (XML), the remote @samp{qXfer:features:read} packet (also XML), or from any custom @code{to_read_description} routine in the target vector. For instance, the remote target supports guessing whether a MIPS target is 32-bit or 64-bit based on the size of the @samp{g} packet. If any target description is found, @value{GDBN} creates a new gdbarch incorporating the description by calling @code{gdbarch_update_p}. Any @samp{<architecture>} element is handled first, to determine which architecture's gdbarch initialization routine is called to create the new architecture. Then the initialization routine is called, and has a chance to adjust the constructed architecture based on the contents of the target description. 
For instance, it can recognize any properties set by a @code{to_read_description} routine. Also see @ref{Adding Target Described Register Support}. @node Adding Target Described Register Support @section Adding Target Described Register Support @cindex target descriptions, adding register support Target descriptions can report additional registers specific to an instance of the target. But it takes a little work in the architecture specific routines to support this. A target description must either have no registers or a complete set---this avoids complexity in trying to merge standard registers with the target defined registers. It is the architecture's responsibility to validate that a description with registers has everything it needs. To keep architecture code simple, the same mechanism is used to assign fixed internal register numbers to standard registers. If @code{tdesc_has_registers} returns 1, the description contains registers. The architecture's @code{gdbarch_init} routine should: @itemize @bullet @item Call @code{tdesc_data_alloc} to allocate storage, early, before searching for a matching gdbarch or allocating a new one. @item Use @code{tdesc_find_feature} to locate standard features by name. @item Use @code{tdesc_numbered_register} and @code{tdesc_numbered_register_choices} to locate the expected registers in the standard features. @item Return @code{NULL} if a required feature is missing, or if any standard feature is missing expected registers. This will produce a warning that the description was incomplete. @item Free the allocated data before returning, unless @code{tdesc_use_registers} is called. @item Call @code{set_gdbarch_num_regs} as usual, with a number higher than any fixed number passed to @code{tdesc_numbered_register}. @item Call @code{tdesc_use_registers} after creating a new gdbarch, before returning it. @end itemize After @code{tdesc_use_registers} has been called, the architecture's @code{register_name}, @code{register_type}, and @code{register_reggroup_p} routines will not be called; that information will be taken from the target description. @code{num_regs} may be increased to account for any additional registers in the description. Pseudo-registers require some extra care: @itemize @bullet @item Using @code{tdesc_numbered_register} allows the architecture to give constant register numbers to standard architectural registers, e.g.@: as an @code{enum} in @file{@var{arch}-tdep.h}. But because pseudo-registers are always numbered above @code{num_regs}, which may be increased by the description, constant numbers can not be used for pseudos. They must be numbered relative to @code{num_regs} instead. @item The description will not describe pseudo-registers, so the architecture must call @code{set_tdesc_pseudo_register_name}, @code{set_tdesc_pseudo_register_type}, and @code{set_tdesc_pseudo_register_reggroup_p} to supply routines describing pseudo registers. These routines will be passed internal register numbers, so the same routines used for the gdbarch equivalents are usually suitable. @end itemize @node Target Vector Definition @chapter Target Vector Definition @cindex target vector The target vector defines the interface between @value{GDBN}'s abstract handling of target systems, and the nitty-gritty code that actually exercises control over a process or a serial port. @value{GDBN} includes some 30-40 different target vectors; however, each configuration of @value{GDBN} includes only a few of them. 
@menu * Managing Execution State:: * Existing Targets:: @end menu @node Managing Execution State @section Managing Execution State @cindex execution state A target vector can be completely inactive (not pushed on the target stack), active but not running (pushed, but not connected to a fully manifested inferior), or completely active (pushed, with an accessible inferior). Most targets are only completely inactive or completely active, but some support persistent connections to a target even when the target has exited or not yet started. For example, connecting to the simulator using @code{target sim} does not create a running program. Neither registers nor memory are accessible until @code{run}. Similarly, after @code{kill}, the program can not continue executing. But in both cases @value{GDBN} remains connected to the simulator, and target-specific commands are directed to the simulator. A target which only supports complete activation should push itself onto the stack in its @code{to_open} routine (by calling @code{push_target}), and unpush itself from the stack in its @code{to_mourn_inferior} routine (by calling @code{unpush_target}). A target which supports both partial and complete activation should still call @code{push_target} in @code{to_open}, but not call @code{unpush_target} in @code{to_mourn_inferior}. Instead, it should call either @code{target_mark_running} or @code{target_mark_exited} in its @code{to_open}, depending on whether the target is fully active after connection. It should also call @code{target_mark_running} any time the inferior becomes fully active (e.g.@: in @code{to_create_inferior} and @code{to_attach}), and @code{target_mark_exited} when the inferior becomes inactive (in @code{to_mourn_inferior}). The target should also make sure to call @code{target_mourn_inferior} from its @code{to_kill}, to return the target to inactive state. @node Existing Targets @section Existing Targets @cindex targets @subsection File Targets Both executables and core files have target vectors. @subsection Standard Protocol and Remote Stubs @value{GDBN}'s file @file{remote.c} talks a serial protocol to code that runs in the target system. @value{GDBN} provides several sample @dfn{stubs} that can be integrated into target programs or operating systems for this purpose; they are named @file{@var{cpu}-stub.c}. Many operating systems, embedded targets, emulators, and simulators already have a @value{GDBN} stub built into them, and maintenance of the remote protocol must be careful to preserve compatibility. The @value{GDBN} user's manual describes how to put such a stub into your target code. What follows is a discussion of integrating the SPARC stub into a complicated operating system (rather than a simple program), by Stu Grossman, the author of this stub. The trap handling code in the stub assumes the following upon entry to @code{trap_low}: @enumerate @item %l1 and %l2 contain pc and npc respectively at the time of the trap; @item traps are disabled; @item you are in the correct trap window. @end enumerate As long as your trap handler can guarantee those conditions, then there is no reason why you shouldn't be able to ``share'' traps with the stub. The stub has no requirement that it be jumped to directly from the hardware trap vector. That is why it calls @code{exceptionHandler()}, which is provided by the external environment. For instance, this could set up the hardware traps to actually execute code which calls the stub first, and then transfers to its own trap handler. 
For the most part, there probably won't be much of an issue with ``sharing''
traps, as the traps we use are usually not used by the kernel, and often
indicate unrecoverable error conditions.  Anyway, this is all controlled by a
table, and is trivial to modify.  The most important trap for us is for
@code{ta 1}.  Without that, we can't single step or do breakpoints.
Everything else is unnecessary for the proper operation of the
debugger/stub.

From reading the stub, it's probably not obvious how breakpoints work.  They
are simply done by deposit/examine operations from @value{GDBN}.

@subsection ROM Monitor Interface

@subsection Custom Protocols

@subsection Transport Layer

@subsection Builtin Simulator

@node Native Debugging
@chapter Native Debugging
@cindex native debugging

Several files control @value{GDBN}'s configuration for native support:

@table @file
@vindex NATDEPFILES
@item gdb/config/@var{arch}/@var{xyz}.mh
Specifies Makefile fragments needed by a @emph{native} configuration on
machine @var{xyz}.  In particular, this lists the required native-dependent
object files, by defining @samp{NATDEPFILES=@dots{}}.  Also specifies the
header file which describes native support on @var{xyz}, by defining
@samp{NAT_FILE= nm-@var{xyz}.h}.  You can also define @samp{NAT_CFLAGS},
@samp{NAT_ADD_FILES}, @samp{NAT_CLIBS}, @samp{NAT_CDEPS},
@samp{NAT_GENERATED_FILES}, etc.; see @file{Makefile.in}.

@emph{Maintainer's note: The @file{.mh} suffix is because this file
originally contained @file{Makefile} fragments for hosting @value{GDBN} on
machine @var{xyz}.  While the file is no longer used for this purpose, the
@file{.mh} suffix remains.  Perhaps someone will eventually rename these
fragments so that they have a @file{.mn} suffix.}

@item gdb/config/@var{arch}/nm-@var{xyz}.h
(@file{nm.h} is a link to this file, created by @code{configure}).  Contains
C macro definitions describing the native system environment, such as child
process control and core file support.

@item gdb/@var{xyz}-nat.c
Contains any miscellaneous C code required for native support of this
machine.  On some machines it doesn't exist at all.
@end table

There are some ``generic'' versions of routines that can be used by various
systems.  These can be customized in various ways by macros defined in your
@file{nm-@var{xyz}.h} file.  If these routines work for the @var{xyz} host,
you can just include the generic file's name (with @samp{.o}, not @samp{.c})
in @code{NATDEPFILES}.

Otherwise, if your machine needs custom support routines, you will need to
write routines that perform the same functions as the generic file.  Put them
into @file{@var{xyz}-nat.c}, and put @file{@var{xyz}-nat.o} into
@code{NATDEPFILES}.

@table @file
@item inftarg.c
This contains the @emph{target_ops vector} that supports Unix child processes
on systems which use @code{ptrace} and @code{wait} to control the child.

@item procfs.c
This contains the @emph{target_ops vector} that supports Unix child processes
on systems which use @file{/proc} to control the child.

@item fork-child.c
This does the low-level grunge that uses Unix system calls to do a ``fork and
exec'' to start up a child process.

@item infptrace.c
This is the low level interface to inferior processes for systems using the
Unix @code{ptrace} call in a vanilla way.
@end table @section ptrace @section /proc @section win32 @section shared libraries @section Native Conditionals @cindex native conditionals When @value{GDBN} is configured and compiled, various macros are defined or left undefined, to control compilation when the host and target systems are the same. These macros should be defined (or left undefined) in @file{nm-@var{system}.h}. @table @code @item I386_USE_GENERIC_WATCHPOINTS An x86-based machine can define this to use the generic x86 watchpoint support; see @ref{Algorithms, I386_USE_GENERIC_WATCHPOINTS}. @item SOLIB_ADD (@var{filename}, @var{from_tty}, @var{targ}, @var{readsyms}) @findex SOLIB_ADD Define this to expand into an expression that will cause the symbols in @var{filename} to be added to @value{GDBN}'s symbol table. If @var{readsyms} is zero symbols are not read but any necessary low level processing for @var{filename} is still done. @item SOLIB_CREATE_INFERIOR_HOOK @findex SOLIB_CREATE_INFERIOR_HOOK Define this to expand into any shared-library-relocation code that you want to be run just after the child process has been forked. @item START_INFERIOR_TRAPS_EXPECTED @findex START_INFERIOR_TRAPS_EXPECTED When starting an inferior, @value{GDBN} normally expects to trap twice; once when the shell execs, and once when the program itself execs. If the actual number of traps is something other than 2, then define this macro to expand into the number expected. @end table @node Support Libraries @chapter Support Libraries @section BFD @cindex BFD library BFD provides support for @value{GDBN} in several ways: @table @emph @item identifying executable and core files BFD will identify a variety of file types, including a.out, coff, and several variants thereof, as well as several kinds of core files. @item access to sections of files BFD parses the file headers to determine the names, virtual addresses, sizes, and file locations of all the various named sections in files (such as the text section or the data section). @value{GDBN} simply calls BFD to read or write section @var{x} at byte offset @var{y} for length @var{z}. @item specialized core file support BFD provides routines to determine the failing command name stored in a core file, the signal with which the program failed, and whether a core file matches (i.e.@: could be a core dump of) a particular executable file. @item locating the symbol information @value{GDBN} uses an internal interface of BFD to determine where to find the symbol information in an executable file or symbol-file. @value{GDBN} itself handles the reading of symbols, since BFD does not ``understand'' debug symbols, but @value{GDBN} uses BFD's cached information to find the symbols, string table, etc. @end table @section opcodes @cindex opcodes library The opcodes library provides @value{GDBN}'s disassembler. (It's a separate library because it's also used in binutils, for @file{objdump}). @section readline @cindex readline library The @code{readline} library provides a set of functions for use by applications that allow users to edit command lines as they are typed in. @section libiberty @cindex @code{libiberty} library The @code{libiberty} library provides a set of functions and features that integrate and improve on functionality found in modern operating systems. 
Broadly speaking, such features can be divided into three groups:
supplemental functions (functions that may be missing in some environments
and operating systems), replacement functions (providing a uniform and
easier-to-use interface for commonly used standard functions), and extensions
(which provide additional functionality beyond standard functions).

@value{GDBN} uses various features provided by the @code{libiberty} library,
for instance the C@t{++} demangler, the @acronym{IEEE} floating format
support functions, the input options parser @samp{getopt}, the
@samp{obstack} extension, and other functions.

@subsection @code{obstacks} in @value{GDBN}
@cindex @code{obstacks}

The obstack mechanism provides a convenient way to allocate and free chunks
of memory.  Each obstack is a pool of memory that is managed like a stack.
Objects (of any nature, size and alignment) are allocated and freed in a
@acronym{LIFO} fashion on an obstack (see @code{libiberty}'s documentation
for a more detailed explanation of @code{obstacks}).

The most noticeable use of the @code{obstacks} in @value{GDBN} is in object
files.  There is an obstack associated with each internal representation of
an object file.  Lots of things get allocated on these @code{obstacks}:
dictionary entries, blocks, blockvectors, symbols, minimal symbols, types,
vectors of fundamental types, class fields of types, object file section
lists, object file section offset lists, line tables, symbol tables, partial
symbol tables, string tables, symbol table private data, macro tables, debug
information sections and entries, import and export lists (som), unwind
information (hppa), dwarf2 location expression data.  Plus various strings,
such as directory name strings, debug format strings, and names of types.

An essential and convenient property of all data on @code{obstacks} is that
memory for it gets allocated (with @code{obstack_alloc}) at various times
during a debugging session, but it is released all at once using the
@code{obstack_free} function.  The @code{obstack_free} function takes a
pointer to where in the stack it must start the deletion from (much like the
cleanup chains have a pointer to where to start the cleanups).  Because of
the stack-like structure of the @code{obstacks}, this makes it possible to
free only the top portion of the obstack.

There are a few instances in @value{GDBN} where this happens.  Calls to
@code{obstack_free} are done after some local data is allocated to the
obstack.  Only the local data is deleted from the obstack.  Of course, this
assumes that nothing between the @code{obstack_alloc} and the
@code{obstack_free} allocates anything else on the same obstack.  For this
reason it is best and safest to use temporary @code{obstacks}.

Releasing the whole obstack is also not safe per se.  It is safe only under
the condition that we know the obstack's memory is no longer needed.  In
@value{GDBN} we get rid of the @code{obstacks} only when we get rid of the
whole objfile(s), for instance upon reading a new symbol file.

@section gnu-regex
@cindex regular expressions library

Regex conditionals.

@table @code
@item C_ALLOCA
@item NFAILURES
@item RE_NREGS
@item SIGN_EXTEND_CHAR
@item SWITCH_ENUM_BUG
@item SYNTAX_TABLE
@item Sword
@item sparc
@end table

@section Array Containers
@cindex Array Containers
@cindex VEC

Often it is necessary to manipulate a dynamic array of a set of objects.  C
forces some bookkeeping on this, which can get cumbersome and repetitive.
The @file{vec.h} file contains macros for defining and using a typesafe
vector type.  The functions defined will be inlined when compiling, and so
the abstraction cost should be zero.  Domain checks are added to detect
programming errors.

An example use would be an array of symbols or section information.  The
array can be grown as symbols are read in (or preallocated), and the accessor
macros provided take care of all the necessary bookkeeping.  Because the
arrays are type-safe, there is no danger of accidentally mixing up the
contents.  Think of these as C@t{++} templates, but implemented in C.

Because of the different behavior of structure objects, scalar objects and of
pointers, there are three flavors of vector, one for each of these variants.
Both the structure object and pointer variants pass pointers to objects
around --- in the former case the pointers are dereferenced and the objects
copied into the vector, and in the latter case the pointers themselves are
stored into the vector.  The scalar object variant is suitable for
@code{int}-like objects, and the vector elements are returned by value.

There are both @code{index} and @code{iterate} accessors.  The iterator
returns a boolean iteration condition and updates the iteration variable
passed by reference.  Because the iterator will be inlined, the address-of
can be optimized away.

The vectors are implemented using the trailing array idiom, thus they are not
resizeable without changing the address of the vector object itself.  This
means you cannot have variables or fields of vector type --- always use a
pointer to a vector.  The one exception is the final field of a structure,
which could be a vector type.  You will have to use the @code{embedded_size}
and @code{embedded_init} calls to create such objects, and they will probably
not be resizeable (so don't use the @dfn{safe} allocation variants).  The
trailing array idiom is used (rather than a pointer to an array of data),
because, if we allow @code{NULL} to also represent an empty vector, empty
vectors occupy minimal space in the structure containing them.

Each operation that increases the number of active elements is available in
@dfn{quick} and @dfn{safe} variants.  The former presumes that there is
sufficient allocated space for the operation to succeed (it dies if there is
not).  The latter will reallocate the vector, if needed.  Reallocation causes
an exponential increase in vector size.  If you know you will be adding N
elements, it would be more efficient to use the reserve operation before
adding the elements with the @dfn{quick} operation.  This will ensure there
is space for at least as many elements as you ask for; the allocation will be
increased exponentially if there are too few spare slots.  If you want to
reserve a specific number of slots, but do not want the exponential increase
(for instance, you know this is the last allocation), use a negative number
for reservation.  You can also create a vector of a specific size from the
start.

You should prefer the push and pop operations, as they append and remove from
the end of the vector.  If you need to remove several items in one go, use
the truncate operation.  The insert and remove operations allow you to change
elements in the middle of the vector.  There are two remove operations, one
which preserves the element ordering, @code{ordered_remove}, and one which
does not, @code{unordered_remove}.  The latter function copies the end
element into the removed slot, rather than invoking a @code{memmove}
operation.
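Putting a few of these operations together, the following is a sketch only of
the reserve-then-quick-push pattern described above.  It assumes
@file{vec.h} has been included, uses the integral (@code{I}) flavor declared
with the @code{DEF_VEC_I} macro described below, and the function
@code{collect_squares} is purely illustrative:

@smallexample
DEF_VEC_I(int);  /* An integer vector type, using the I (integral) flavor.  */

static VEC(int) *
collect_squares (int n)
@{
  VEC(int) *v = NULL;
  int i;

  /* Reserve room for all n pushes up front, so the quick variant is
     safe to use inside the loop.  */
  VEC_reserve (int, v, n);
  for (i = 0; i < n; i++)
    VEC_quick_push (int, v, i * i);
  return v;
@}
@end smallexample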
The @code{lower_bound} function will determine where to place an item in the array using insert that will maintain sorted order. If you need to directly manipulate a vector, then the @code{address} accessor will return the address of the start of the vector. Also the @code{space} predicate will tell you whether there is spare capacity in the vector. You will not normally need to use these two functions. Vector types are defined using a @code{DEF_VEC_@{O,P,I@}(@var{typename})} macro. Variables of vector type are declared using a @code{VEC(@var{typename})} macro. The characters @code{O}, @code{P} and @code{I} indicate whether @var{typename} is an object (@code{O}), pointer (@code{P}) or integral (@code{I}) type. Be careful to pick the correct one, as you'll get an awkward and inefficient API if you use the wrong one. There is a check, which results in a compile-time warning, for the @code{P} and @code{I} versions, but there is no check for the @code{O} versions, as that is not possible in plain C. An example of their use would be, @smallexample DEF_VEC_P(tree); // non-managed tree vector. struct my_struct @{ VEC(tree) *v; // A (pointer to) a vector of tree pointers. @}; struct my_struct *s; if (VEC_length(tree, s->v)) @{ we have some contents @} VEC_safe_push(tree, s->v, decl); // append some decl onto the end for (ix = 0; VEC_iterate(tree, s->v, ix, elt); ix++) @{ do something with elt @} @end smallexample The @file{vec.h} file provides details on how to invoke the various accessors provided. They are enumerated here: @table @code @item VEC_length Return the number of items in the array, @item VEC_empty Return true if the array has no elements. @item VEC_last @itemx VEC_index Return the last or arbitrary item in the array. @item VEC_iterate Access an array element and indicate whether the array has been traversed. @item VEC_alloc @itemx VEC_free Create and destroy an array. @item VEC_embedded_size @itemx VEC_embedded_init Helpers for embedding an array as the final element of another struct. @item VEC_copy Duplicate an array. @item VEC_space Return the amount of free space in an array. @item VEC_reserve Ensure a certain amount of free space. @item VEC_quick_push @itemx VEC_safe_push Append to an array, either assuming the space is available, or making sure that it is. @item VEC_pop Remove the last item from an array. @item VEC_truncate Remove several items from the end of an array. @item VEC_safe_grow Add several items to the end of an array. @item VEC_replace Overwrite an item in the array. @item VEC_quick_insert @itemx VEC_safe_insert Insert an item into the middle of the array. Either the space must already exist, or the space is created. @item VEC_ordered_remove @itemx VEC_unordered_remove Remove an item from the array, preserving order or not. @item VEC_block_remove Remove a set of items from the array. @item VEC_address Provide the address of the first element. @item VEC_lower_bound Binary search the array. @end table @section include @node Coding Standards @chapter Coding Standards @cindex coding standards @section @value{GDBN} C Coding Standards @value{GDBN} follows the GNU coding standards, as described in @file{etc/standards.texi}. This file is also available for anonymous FTP from GNU archive sites. @value{GDBN} takes a strict interpretation of the standard; in general, when the GNU standard recommends a practice but does not require it, @value{GDBN} requires it. 
@value{GDBN} follows an additional set of coding standards specific to
@value{GDBN}, as described in the following sections.

@subsection ISO C

@value{GDBN} assumes an ISO/IEC 9899:1990 (a.k.a.@: ISO C90) compliant
compiler.

@value{GDBN} does not assume an ISO C or POSIX compliant C library.

@subsection Formatting
@cindex source code formatting

The standard GNU recommendations for formatting must be followed strictly.
Any @value{GDBN}-specific deviation from GNU recommendations is described
below.

A function declaration should not have its name in column zero.  A function
definition should have its name in column zero.

@smallexample
/* Declaration */
static void foo (void);
/* Definition */
void
foo (void)
@{
@}
@end smallexample

@emph{Pragmatics: This simplifies scripting.  Function definitions can be
found using @samp{^function-name}.}

There must be a space between a function or macro name and the opening
parenthesis of its argument list (except for macro definitions, as required
by C).  There must not be a space after an open paren/bracket or before a
close paren/bracket.

While additional whitespace is generally helpful for reading, do not use more
than one blank line to separate blocks, and avoid adding whitespace after the
end of a program line (as of 1/99, some 600 lines had whitespace after the
semicolon).  Excess whitespace causes difficulties for @code{diff} and
@code{patch} utilities.

Pointers are declared using the traditional K&R C style:

@smallexample
void *foo;
@end smallexample

@noindent
and not:

@smallexample
void * foo;
void* foo;
@end smallexample

In addition, whitespace around casts and unary operators should follow the
following guidelines:

@multitable @columnfractions .2 .2 .8
@item Use...
@tab ...instead of
@tab
@item @code{!x}
@tab @code{! x}
@item @code{~x}
@tab @code{~ x}
@item @code{-x}
@tab @code{- x}
@tab (unary minus)
@item @code{(foo) x}
@tab @code{(foo)x}
@tab (cast)
@item @code{*x}
@tab @code{* x}
@tab (pointer dereference)
@end multitable

Any two or more lines in code should be wrapped in braces, even if they are
comments, as they look like separate statements:

@smallexample
if (i)
  @{
    /* Return success. */
    return 0;
  @}
@end smallexample

@noindent
and not:

@smallexample
if (i)
  /* Return success. */
  return 0;
@end smallexample

@subsection Comments
@cindex comment formatting

The standard GNU requirements on comments must be followed strictly.

Block comments must appear in the following form, with no @code{/*}- or
@code{*/}-only lines, and no leading @code{*}:

@smallexample
/* Wait for control to return from inferior to debugger.  If inferior
   gets a signal, we may decide to start it up again instead of
   returning.  That is why there is a loop in this function.  When
   this function actually returns it means the inferior should be left
   stopped and @value{GDBN} should read more commands.  */
@end smallexample

(Note that this format is encouraged by Emacs; tabbing for a multi-line
comment works correctly, and @kbd{M-q} fills the block consistently.)

Put a blank line between the block comments preceding function or variable
definitions, and the definition itself.

In general, put function-body comments on lines by themselves, rather than
trying to fit them into the 20 characters left at the end of a line, since
either the comment or the code will inevitably get longer than will fit, and
then somebody will have to move it anyhow.
@subsection C Usage

@cindex C data types
Code must not depend on the sizes of C data types, the format of the host's floating point numbers, the alignment of anything, or the order of evaluation of expressions.

@cindex function usage
Use functions freely.  There are only a handful of compute-bound areas in @value{GDBN} that might be affected by the overhead of a function call, mainly in symbol reading.  Most of @value{GDBN}'s performance is limited by the target interface (whether serial line or system call).

However, use functions with moderation.  A thousand one-line functions are just as hard to understand as a single thousand-line function.

@emph{Macros are bad, M'kay.} (But if you have to use a macro, make sure that the macro arguments are protected with parentheses.)

@cindex types
Declarations like @samp{struct foo *} should be used in preference to declarations like @samp{typedef struct foo @{ @dots{} @} *foo_ptr}.

A zero constant (@code{0}) is not interchangeable with a null pointer constant (@code{NULL}) anywhere; note that @sc{gcc} does not give a warning for such interchange.  Specifically:

@multitable @columnfractions .2 .5
@item incorrect
@tab @code{if (pointervar) @{@}}
@item incorrect
@tab @code{if (!pointervar) @{@}}
@item incorrect
@tab @code{if (pointervar != 0) @{@}}
@item incorrect
@tab @code{if (pointervar == 0) @{@}}
@item correct
@tab @code{if (pointervar != NULL) @{@}}
@item correct
@tab @code{if (pointervar == NULL) @{@}}
@end multitable

@subsection Function Prototypes

@cindex function prototypes
Prototypes must be used when both @emph{declaring} and @emph{defining} a function.  Prototypes for @value{GDBN} functions must include both the argument type and name, with the name matching that used in the actual function definition.

All external functions should have a declaration in a header file that callers include; that declaration should use the @code{extern} modifier.  The only exception concerns @code{_initialize_*} functions, which must be external so that @file{init.c} construction works, but shouldn't be visible to random source files.

Where a source file needs a forward declaration of a static function, that declaration must appear in a block near the top of the source file.

@subsection File Names

Any file used when building the core of @value{GDBN} must be in lower case.  Any file used when building the core of @value{GDBN} must be 8.3 unique.  These requirements apply to both source and generated files.

@emph{Pragmatics: The core of @value{GDBN} must be buildable on many platforms including DJGPP and MacOS/HFS.  Every time an unfriendly file is introduced to the build process both @file{Makefile.in} and @file{configure.in} need to be modified accordingly.  Compare the convoluted conversion process needed to transform @file{COPYING} into @file{copying.c} with the conversion needed to transform @file{version.in} into @file{version.c}.}

Any non 8.3 compliant file (that is not used when building the core of @value{GDBN}) must be added to @file{gdb/config/djgpp/fnchange.lst}.

@emph{Pragmatics: This is clearly a compromise.}

When @value{GDBN} has a local version of a system header file (e.g.@: @file{string.h}) the file name is based on the POSIX header, prefixed with @file{gdb_} (@file{gdb_string.h}).  These headers should be relatively independent: they should use only macros defined by @file{configure}, the compiler, or the host; they should include only system headers; they should refer only to system types.
They may be shared between multiple programs, e.g.@: @value{GDBN} and @sc{gdbserver}. For other files @samp{-} is used as the separator. @subsection Include Files A @file{.c} file should include @file{defs.h} first. A @file{.c} file should directly include the @code{.h} file of every declaration and/or definition it directly refers to. It cannot rely on indirect inclusion. A @file{.h} file should directly include the @code{.h} file of every declaration and/or definition it directly refers to. It cannot rely on indirect inclusion. Exception: The file @file{defs.h} does not need to be directly included. An external declaration should only appear in one include file. An external declaration should never appear in a @code{.c} file. Exception: a declaration for the @code{_initialize} function that pacifies @option{-Wmissing-declaration}. A @code{typedef} definition should only appear in one include file. An opaque @code{struct} declaration can appear in multiple @file{.h} files. Where possible, a @file{.h} file should use an opaque @code{struct} declaration instead of an include. All @file{.h} files should be wrapped in: @smallexample #ifndef INCLUDE_FILE_NAME_H #define INCLUDE_FILE_NAME_H header body #endif @end smallexample @section @value{GDBN} Python Coding Standards @value{GDBN} follows the published @code{Python} coding standards in @uref{http://www.python.org/dev/peps/pep-0008/, @code{PEP008}}. In addition, the guidelines in the @uref{http://google-styleguide.googlecode.com/svn/trunk/pyguide.html, Google Python Style Guide} are also followed where they do not conflict with @code{PEP008}. @subsection @value{GDBN}-specific exceptions There are a few exceptions to the published standards. They exist mainly for consistency with the @code{C} standards. @c It is expected that there are a few more exceptions, @c so we use itemize here. @itemize @bullet @item Use @code{FIXME} instead of @code{TODO}. @end itemize @node Misc Guidelines @chapter Misc Guidelines This chapter covers topics that are lower-level than the major algorithms of @value{GDBN}. @section Cleanups @cindex cleanups Cleanups are a structured way to deal with things that need to be done later. When your code does something (e.g., @code{xmalloc} some memory, or @code{open} a file) that needs to be undone later (e.g., @code{xfree} the memory or @code{close} the file), it can make a cleanup. The cleanup will be done at some future point: when the command is finished and control returns to the top level; when an error occurs and the stack is unwound; or when your code decides it's time to explicitly perform cleanups. Alternatively you can elect to discard the cleanups you created. Syntax: @table @code @item struct cleanup *@var{old_chain}; Declare a variable which will hold a cleanup chain handle. @findex make_cleanup @item @var{old_chain} = make_cleanup (@var{function}, @var{arg}); Make a cleanup which will cause @var{function} to be called with @var{arg} (a @code{char *}) later. The result, @var{old_chain}, is a handle that can later be passed to @code{do_cleanups} or @code{discard_cleanups}. Unless you are going to call @code{do_cleanups} or @code{discard_cleanups}, you can ignore the result from @code{make_cleanup}. @findex do_cleanups @item do_cleanups (@var{old_chain}); Do all cleanups added to the chain since the corresponding @code{make_cleanup} call was made. 
@findex discard_cleanups
@item discard_cleanups (@var{old_chain});
Same as @code{do_cleanups} except that it just removes the cleanups from the chain and does not call the specified functions.
@end table

Cleanups are implemented as a chain.  The handle returned by @code{make_cleanup} includes the cleanup passed to the call and any later cleanups appended to the chain (but not yet discarded or performed).  E.g.:

@smallexample
make_cleanup (a, 0);
@{
  struct cleanup *old = make_cleanup (b, 0);
  make_cleanup (c, 0);
  ...
  do_cleanups (old);
@}
@end smallexample

@noindent
will call @code{c()} and @code{b()} but will not call @code{a()}.  The cleanup that calls @code{a()} will remain in the cleanup chain, and will be done later unless otherwise discarded.@refill

Your function should explicitly do or discard the cleanups it creates.  Failing to do this leads to non-deterministic behavior since the caller will arbitrarily do or discard your function's cleanups.  This need leads to two common cleanup styles.

The first style is try/finally.  Before it exits, your code-block calls @code{do_cleanups} with the old cleanup chain and thus ensures that your code-block's cleanups are always performed.  For instance, the following code-segment avoids a memory leak problem (even when @code{error} is called and a forced stack unwind occurs) by ensuring that the @code{xfree} will always be called:

@smallexample
struct cleanup *old = make_cleanup (null_cleanup, 0);
data = xmalloc (sizeof blah);
make_cleanup (xfree, data);
... blah blah ...
do_cleanups (old);
@end smallexample

The second style is try/except.  Before it exits, your code-block calls @code{discard_cleanups} with the old cleanup chain and thus ensures that any created cleanups are not performed.  For instance, the following code segment ensures that the file will be closed, but only if there is an error:

@smallexample
FILE *file = fopen ("afile", "r");
struct cleanup *old = make_cleanup (close_file, file);
... blah blah ...
discard_cleanups (old);
return file;
@end smallexample

Some functions, e.g., @code{fputs_filtered()} or @code{error()}, specify that they ``should not be called when cleanups are not in place''.  This means that any actions you need to reverse in the case of an error or interruption must be on the cleanup chain before you call these functions, since they might never return to your code (they @samp{longjmp} instead).

@section Per-architecture module data
@cindex per-architecture module data
@cindex multi-arch data
@cindex data-pointer, per-architecture/per-module

The multi-arch framework includes a mechanism for adding module specific per-architecture data-pointers to the @code{struct gdbarch} architecture object.

A module registers one or more per-architecture data-pointers using:

@deftypefn {Architecture Function} {struct gdbarch_data *} gdbarch_data_register_pre_init (gdbarch_data_pre_init_ftype *@var{pre_init})
@var{pre_init} is used to, on-demand, allocate an initial value for a per-architecture data-pointer using the architecture's obstack (passed in as a parameter).  Since @var{pre_init} can be called during architecture creation, it is not parameterized with the architecture, and must not call modules that use per-architecture data.
@end deftypefn

@deftypefn {Architecture Function} {struct gdbarch_data *} gdbarch_data_register_post_init (gdbarch_data_post_init_ftype *@var{post_init})
@var{post_init} is used to obtain an initial value for a per-architecture data-pointer @emph{after} architecture creation.
Since @var{post_init} is always called after architecture creation, it both receives the fully initialized architecture and is free to call modules that use per-architecture data (care needs to be taken to ensure that those other modules do not try to call back to this module as that will create cycles in the initialization call graph).
@end deftypefn

These functions return a @code{struct gdbarch_data} that is used to identify the per-architecture data-pointer added for that module.

The per-architecture data-pointer is accessed using the function:

@deftypefn {Architecture Function} {void *} gdbarch_data (struct gdbarch *@var{gdbarch}, struct gdbarch_data *@var{data_handle})
Given the architecture @var{gdbarch} and module data handle @var{data_handle} (returned by @code{gdbarch_data_register_pre_init} or @code{gdbarch_data_register_post_init}), this function returns the current value of the per-architecture data-pointer.  If the data pointer is @code{NULL}, it is first initialized by calling the corresponding @var{pre_init} or @var{post_init} method.
@end deftypefn

The examples below assume the following definitions:

@smallexample
struct nozel
@{
  int total;
@};

static struct gdbarch_data *nozel_handle;
@end smallexample

A module can extend the architecture vector, adding additional per-architecture data, using the @var{pre_init} method.  The module's per-architecture data is then initialized during architecture creation.

In the example below, the module's per-architecture @emph{nozel} is added.  An architecture can specify its nozel by calling @code{set_gdbarch_nozel} from @code{gdbarch_init}.

@smallexample
static void *
nozel_pre_init (struct obstack *obstack)
@{
  struct nozel *data = OBSTACK_ZALLOC (obstack, struct nozel);
  return data;
@}
@end smallexample

@smallexample
extern void
set_gdbarch_nozel (struct gdbarch *gdbarch, int total)
@{
  struct nozel *data = gdbarch_data (gdbarch, nozel_handle);
  data->total = total;
@}
@end smallexample

A module can on-demand create architecture dependent data structures using @code{post_init}.

In the example below, the nozel's total is computed on-demand by @code{nozel_post_init} using information obtained from the architecture.

@smallexample
static void *
nozel_post_init (struct gdbarch *gdbarch)
@{
  struct nozel *data = GDBARCH_OBSTACK_ZALLOC (gdbarch, struct nozel);
  data->total = gdbarch@dots{} (gdbarch);
  return data;
@}
@end smallexample

@smallexample
extern int
nozel_total (struct gdbarch *gdbarch)
@{
  struct nozel *data = gdbarch_data (gdbarch, nozel_handle);
  return data->total;
@}
@end smallexample

@section Wrapping Output Lines
@cindex line wrap in output

@findex wrap_here
Output that goes through @code{printf_filtered} or @code{fputs_filtered} or @code{fputs_demangled} needs only to have calls to @code{wrap_here} added in places that would be good breaking points.  The utility routines will take care of actually wrapping if the line width is exceeded.

The argument to @code{wrap_here} is an indentation string which is printed @emph{only} if the line breaks there.  This argument is saved away and used later.  It must remain valid until the next call to @code{wrap_here} or until a newline has been printed through the @code{*_filtered} functions.  Don't pass in a local variable and then return!

It is usually best to call @code{wrap_here} after printing a comma or space.  If you call it before printing a space, make sure that your indentation properly accounts for the leading space that will print if the line wraps there.
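As an illustration, the following sketch (the function @code{print_name_list} and its argument are invented for this manual; @code{printf_filtered} and @code{wrap_here} are the routines described above) prints a comma-separated list and offers a break point after each comma.  The indentation string is a literal, so it remains valid as required:

@smallexample
/* Print NAMES, a NULL-terminated array of strings, as a
   comma-separated list, letting the output routines wrap the line
   when it becomes too long.  */

static void
print_name_list (char **names)
@{
  int i;

  for (i = 0; names[i] != NULL; i++)
    @{
      if (i > 0)
        @{
          printf_filtered (", ");
          /* The indentation is printed only if the line breaks here.  */
          wrap_here ("    ");
        @}
      printf_filtered ("%s", names[i]);
    @}
  printf_filtered ("\n");
@}
@end smallexample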
Any function or set of functions that produce filtered output must finish by printing a newline, to flush the wrap buffer, before switching to unfiltered (@code{printf}) output. Symbol reading routines that print warnings are a good example. @section Memory Management @value{GDBN} does not use the functions @code{malloc}, @code{realloc}, @code{calloc}, @code{free} and @code{asprintf}. @value{GDBN} uses the functions @code{xmalloc}, @code{xrealloc} and @code{xcalloc} when allocating memory. Unlike @code{malloc} et.al.@: these functions do not return when the memory pool is empty. Instead, they unwind the stack using cleanups. These functions return @code{NULL} when requested to allocate a chunk of memory of size zero. @emph{Pragmatics: By using these functions, the need to check every memory allocation is removed. These functions provide portable behavior.} @value{GDBN} does not use the function @code{free}. @value{GDBN} uses the function @code{xfree} to return memory to the memory pool. Consistent with ISO-C, this function ignores a request to free a @code{NULL} pointer. @emph{Pragmatics: On some systems @code{free} fails when passed a @code{NULL} pointer.} @value{GDBN} can use the non-portable function @code{alloca} for the allocation of small temporary values (such as strings). @emph{Pragmatics: This function is very non-portable. Some systems restrict the memory being allocated to no more than a few kilobytes.} @value{GDBN} uses the string function @code{xstrdup} and the print function @code{xstrprintf}. @emph{Pragmatics: @code{asprintf} and @code{strdup} can fail. Print functions such as @code{sprintf} are very prone to buffer overflow errors.} @section Compiler Warnings @cindex compiler warnings With few exceptions, developers should avoid the configuration option @samp{--disable-werror} when building @value{GDBN}. The exceptions are listed in the file @file{gdb/MAINTAINERS}. The default, when building with @sc{gcc}, is @samp{--enable-werror}. This option causes @value{GDBN} (when built using GCC) to be compiled with a carefully selected list of compiler warning flags. Any warnings from those flags are treated as errors. The current list of warning flags includes: @table @samp @item -Wall Recommended @sc{gcc} warnings. @item -Wdeclaration-after-statement @sc{gcc} 3.x (and later) and @sc{c99} allow declarations mixed with code, but @sc{gcc} 2.x and @sc{c89} do not. @item -Wpointer-arith @item -Wformat-nonliteral Non-literal format strings, with a few exceptions, are bugs - they might contain unintended user-supplied format specifiers. Since @value{GDBN} uses the @code{format printf} attribute on all @code{printf} like functions this checks not just @code{printf} calls but also calls to functions such as @code{fprintf_unfiltered}. @item -Wno-pointer-sign In version 4.0, GCC began warning about pointer argument passing or assignment even when the source and destination differed only in signedness. However, most @value{GDBN} code doesn't distinguish carefully between @code{char} and @code{unsigned char}. In early 2006 the @value{GDBN} developers decided correcting these warnings wasn't worth the time it would take. @item -Wno-unused-parameter Due to the way that @value{GDBN} is implemented many functions have unused parameters. Consequently this warning is avoided. The macro @code{ATTRIBUTE_UNUSED} is not used as it leads to false negatives --- it is not an error to have @code{ATTRIBUTE_UNUSED} on a parameter that is being used. 
@item -Wno-unused
@itemx -Wno-switch
@itemx -Wno-char-subscripts
These are warnings which might be useful for @value{GDBN}, but are currently too noisy to enable with @samp{-Werror}.
@end table

@section Internal Error Recovery

During its execution, @value{GDBN} can encounter two types of errors: user errors and internal errors.  User errors include not only a user entering an incorrect command but also problems arising from corrupt object files and system errors when interacting with the target.  Internal errors include situations where @value{GDBN} has detected, at run time, a corrupt or erroneous situation.

When reporting an internal error, @value{GDBN} uses @code{internal_error} and @code{gdb_assert}.

@value{GDBN} must not call @code{abort} or @code{assert}.

@emph{Pragmatics: There is no @code{internal_warning} function.  Either the code detected a user error, recovered from it and issued a @code{warning}, or the code failed to correctly recover from the user error and issued an @code{internal_error}.}

@section Command Names

GDB U/I commands are written @samp{foo-bar}, not @samp{foo_bar}.

@section Clean Design and Portable Implementation

@cindex design
In addition to getting the syntax right, there's the little question of semantics.  Some things are done in certain ways in @value{GDBN} because long experience has shown that the more obvious ways caused various kinds of trouble.

@cindex assumptions about targets
You can't assume the byte order of anything that comes from a target (including @var{value}s, object files, and instructions).  Such things must be byte-swapped using @code{SWAP_TARGET_AND_HOST} in @value{GDBN}, or one of the swap routines defined in @file{bfd.h}, such as @code{bfd_get_32}.

You can't assume that you know what interface is being used to talk to the target system.  All references to the target must go through the current @code{target_ops} vector.

You can't assume that the host and target machines are the same machine (except in the ``native'' support modules).  In particular, you can't assume that the target machine's header files will be available on the host machine.  Target code must bring along its own header files -- written from scratch or explicitly donated by their owner, to avoid copyright problems.

@cindex portability
Insertion of new @code{#ifdef}'s will be frowned upon.  It's much better to write the code portably than to conditionalize it for various systems.

@cindex system dependencies
New @code{#ifdef}'s which test for specific compilers or manufacturers or operating systems are unacceptable.  All @code{#ifdef}'s should test for features.  The information about which configurations contain which features should be segregated into the configuration files.

Experience has proven far too often that a feature unique to one particular system often creeps into other systems; and that a conditional based on some predefined macro for your current system will become worthless over time, as new versions of your system come out that behave differently with regard to this feature.

Adding code that handles specific architectures, operating systems, target interfaces, or hosts, is not acceptable in generic code.

@cindex portable file name handling
@cindex file names, portability
One particularly notorious area where system dependencies tend to creep in is handling of file names.
The mainline @value{GDBN} code assumes Posix semantics of file names: absolute file names begin with a forward slash @file{/}, slashes are used to separate leading directories, case-sensitive file names. These assumptions are not necessarily true on non-Posix systems such as MS-Windows. To avoid system-dependent code where you need to take apart or construct a file name, use the following portable macros: @table @code @findex HAVE_DOS_BASED_FILE_SYSTEM @item HAVE_DOS_BASED_FILE_SYSTEM This preprocessing symbol is defined to a non-zero value on hosts whose filesystems belong to the MS-DOS/MS-Windows family. Use this symbol to write conditional code which should only be compiled for such hosts. @findex IS_DIR_SEPARATOR @item IS_DIR_SEPARATOR (@var{c}) Evaluates to a non-zero value if @var{c} is a directory separator character. On Unix and GNU/Linux systems, only a slash @file{/} is such a character, but on Windows, both @file{/} and @file{\} will pass. @findex IS_ABSOLUTE_PATH @item IS_ABSOLUTE_PATH (@var{file}) Evaluates to a non-zero value if @var{file} is an absolute file name. For Unix and GNU/Linux hosts, a name which begins with a slash @file{/} is absolute. On DOS and Windows, @file{d:/foo} and @file{x:\bar} are also absolute file names. @findex FILENAME_CMP @item FILENAME_CMP (@var{f1}, @var{f2}) Calls a function which compares file names @var{f1} and @var{f2} as appropriate for the underlying host filesystem. For Posix systems, this simply calls @code{strcmp}; on case-insensitive filesystems it will call @code{strcasecmp} instead. @findex DIRNAME_SEPARATOR @item DIRNAME_SEPARATOR Evaluates to a character which separates directories in @code{PATH}-style lists, typically held in environment variables. This character is @samp{:} on Unix, @samp{;} on DOS and Windows. @findex SLASH_STRING @item SLASH_STRING This evaluates to a constant string you should use to produce an absolute filename from leading directories and the file's basename. @code{SLASH_STRING} is @code{"/"} on most systems, but might be @code{"\\"} for some Windows-based ports. @end table In addition to using these macros, be sure to use portable library functions whenever possible. For example, to extract a directory or a basename part from a file name, use the @code{dirname} and @code{basename} library functions (available in @code{libiberty} for platforms which don't provide them), instead of searching for a slash with @code{strrchr}. Another way to generalize @value{GDBN} along a particular interface is with an attribute struct. For example, @value{GDBN} has been generalized to handle multiple kinds of remote interfaces---not by @code{#ifdef}s everywhere, but by defining the @code{target_ops} structure and having a current target (as well as a stack of targets below it, for memory references). Whenever something needs to be done that depends on which remote interface we are using, a flag in the current target_ops structure is tested (e.g., @code{target_has_stack}), or a function is called through a pointer in the current target_ops structure. In this way, when a new remote interface is added, only one module needs to be touched---the one that actually implements the new remote interface. Other examples of attribute-structs are BFD access to multiple kinds of object file formats, or @value{GDBN}'s access to multiple source languages. Please avoid duplicating code. 
For example, in @value{GDBN} 3.x all the code interfacing between @code{ptrace} and the rest of @value{GDBN} was duplicated in @file{*-dep.c}, and so changing something was very painful.  In @value{GDBN} 4.x, these have all been consolidated into @file{infptrace.c}.  @file{infptrace.c} can deal with variations between systems the same way any system-independent file would (hooks, @code{#if defined}, etc.), and machines which are radically different don't need to use @file{infptrace.c} at all.

All debugging code must be controllable using the @samp{set debug @var{module}} command.  Do not use @code{printf} to print trace messages.  Use @code{fprintf_unfiltered (gdb_stdlog, ...)}.  Do not use @code{#ifdef DEBUG}.

@node Porting GDB
@chapter Porting @value{GDBN}
@cindex porting to new machines

Most of the work in making @value{GDBN} compile on a new machine is in specifying the configuration of the machine.  Porting a new architecture to @value{GDBN} can be broken into a number of steps.

@itemize @bullet

@item
Ensure a @sc{bfd} exists for executables of the target architecture in the @file{bfd} directory.  If one does not exist, create one by modifying an existing similar one.

@item
Implement a disassembler for the target architecture in the @file{opcodes} directory.

@item
Define the target architecture in the @file{gdb} directory (@pxref{Adding a New Target, , Adding a New Target}).  Add the pattern for the new target to @file{configure.tgt} with the names of the files that contain the code.  By convention the target architecture definition for an architecture @var{arch} is placed in @file{@var{arch}-tdep.c}.  Within @file{@var{arch}-tdep.c} define the function @code{_initialize_@var{arch}_tdep} which calls @code{gdbarch_register} to create the new @code{@w{struct gdbarch}} for the architecture.

@item
If a new remote target is needed, consider adding a new remote target by defining a function @code{_initialize_remote_@var{arch}}.  However if at all possible use the @value{GDBN} @emph{Remote Serial Protocol} for this and implement the server side of the protocol independently on the target.

@item
If desired implement a simulator in the @file{sim} directory.  This should create the library @file{libsim.a} implementing the interface in @file{remote-sim.h} (found in the @file{include} directory).

@item
Build and test.  If desired, lobby the @sc{gdb} steering group to have the new port included in the main distribution!

@item
Add a description of the new architecture to the main @value{GDBN} user guide (@pxref{Configuration Specific Information, , Configuration Specific Information, gdb, Debugging with @value{GDBN}}).
@end itemize @node Versions and Branches @chapter Versions and Branches @section Versions @value{GDBN}'s version is determined by the file @file{gdb/version.in} and takes one of the following forms: @table @asis @item @var{major}.@var{minor} @itemx @var{major}.@var{minor}.@var{patchlevel} an official release (e.g., 6.2 or 6.2.1) @item @var{major}.@var{minor}.@var{patchlevel}.@var{YYYY}@var{MM}@var{DD} a snapshot taken at @var{YYYY}-@var{MM}-@var{DD}-gmt (e.g., 6.1.50.20020302, 6.1.90.20020304, or 6.1.0.20020308) @item @var{major}.@var{minor}.@var{patchlevel}.@var{YYYY}@var{MM}@var{DD}-cvs a @sc{cvs} check out drawn on @var{YYYY}-@var{MM}-@var{DD} (e.g., 6.1.50.20020302-cvs, 6.1.90.20020304-cvs, or 6.1.0.20020308-cvs) @item @var{major}.@var{minor}.@var{patchlevel}.@var{YYYY}@var{MM}@var{DD} (@var{vendor}) a vendor specific release of @value{GDBN}, that while based on@* @var{major}.@var{minor}.@var{patchlevel}.@var{YYYY}@var{MM}@var{DD}, may include additional changes @end table @value{GDBN}'s mainline uses the @var{major} and @var{minor} version numbers from the most recent release branch, with a @var{patchlevel} of 50. At the time each new release branch is created, the mainline's @var{major} and @var{minor} version numbers are updated. @value{GDBN}'s release branch is similar. When the branch is cut, the @var{patchlevel} is changed from 50 to 90. As draft releases are drawn from the branch, the @var{patchlevel} is incremented. Once the first release (@var{major}.@var{minor}) has been made, the @var{patchlevel} is set to 0 and updates have an incremented @var{patchlevel}. For snapshots, and @sc{cvs} check outs, it is also possible to identify the @sc{cvs} origin: @table @asis @item @var{major}.@var{minor}.50.@var{YYYY}@var{MM}@var{DD} drawn from the @sc{head} of mainline @sc{cvs} (e.g., 6.1.50.20020302) @item @var{major}.@var{minor}.90.@var{YYYY}@var{MM}@var{DD} @itemx @var{major}.@var{minor}.91.@var{YYYY}@var{MM}@var{DD} @dots{} drawn from a release branch prior to the release (e.g., 6.1.90.20020304) @item @var{major}.@var{minor}.0.@var{YYYY}@var{MM}@var{DD} @itemx @var{major}.@var{minor}.1.@var{YYYY}@var{MM}@var{DD} @dots{} drawn from a release branch after the release (e.g., 6.2.0.20020308) @end table If the previous @value{GDBN} version is 6.1 and the current version is 6.2, then, substituting 6 for @var{major} and 1 or 2 for @var{minor}, here's an illustration of a typical sequence: @smallexample <HEAD> | 6.1.50.20020302-cvs | +--------------------------. | <gdb_6_2-branch> | | 6.2.50.20020303-cvs 6.1.90 (draft #1) | | 6.2.50.20020304-cvs 6.1.90.20020304-cvs | | 6.2.50.20020305-cvs 6.1.91 (draft #2) | | 6.2.50.20020306-cvs 6.1.91.20020306-cvs | | 6.2.50.20020307-cvs 6.2 (release) | | 6.2.50.20020308-cvs 6.2.0.20020308-cvs | | 6.2.50.20020309-cvs 6.2.1 (update) | | 6.2.50.20020310-cvs <branch closed> | 6.2.50.20020311-cvs | +--------------------------. 
| <gdb_6_3-branch> | | 6.3.50.20020312-cvs 6.2.90 (draft #1) | |
@end smallexample

@section Release Branches
@cindex Release Branches

@value{GDBN} draws a release series (6.2, 6.2.1, @dots{}) from a single release branch, and identifies that branch using the @sc{cvs} branch tags:

@smallexample
gdb_@var{major}_@var{minor}-@var{YYYY}@var{MM}@var{DD}-branchpoint
gdb_@var{major}_@var{minor}-branch
gdb_@var{major}_@var{minor}-@var{YYYY}@var{MM}@var{DD}-release
@end smallexample

@emph{Pragmatics: To help identify the date at which a branch or release is made, both the branchpoint and release tags include the date that they are cut (@var{YYYY}@var{MM}@var{DD}) in the tag.  The branch tag, denoting the head of the branch, does not need this.}

@section Vendor Branches
@cindex vendor branches

To avoid version conflicts, vendors are expected to modify the file @file{gdb/version.in} to include a vendor unique alphabetic identifier (an official @value{GDBN} release never uses alphabetic characters in its version identifier).  E.g., @samp{6.2widgit2}, or @samp{6.2 (Widgit Inc Patch 2)}.

@section Experimental Branches
@cindex experimental branches

@subsection Guidelines

@value{GDBN} permits the creation of branches, cut from the @sc{cvs} repository, for experimental development.  Branches make it possible for developers to share preliminary work, and maintainers to examine significant new developments.

The following are a set of guidelines for creating such branches:

@table @emph

@item a branch has an owner
The owner can set further policy for a branch, but may not change the ground rules.  In particular, they can set a policy for commits (be it adding more reviewers or deciding who can commit).

@item all commits are posted
All changes committed to a branch shall also be posted to @email{gdb-patches@@sourceware.org, the @value{GDBN} patches mailing list}.  While commentary on such changes is encouraged, people should remember that the changes only apply to a branch.

@item all commits are covered by an assignment
This ensures that all changes belong to the Free Software Foundation, and avoids the possibility that the branch may become contaminated.

@item a branch is focused
A focused branch has a single objective or goal, and does not contain unnecessary or irrelevant changes.  Cleanups, where identified, should be pushed into the mainline as soon as possible.

@item a branch tracks mainline
This keeps the level of divergence under control.  It also keeps the pressure on developers to push cleanups and other stuff into the mainline.

@item a branch shall contain the entire @value{GDBN} module
The @value{GDBN} module @code{gdb} should be specified when creating a branch (branches of individual files should be avoided).  @xref{Tags}.

@item a branch shall be branded using @file{version.in}
The file @file{gdb/version.in} shall be modified so that it identifies the branch @var{owner} and branch @var{name}, e.g., @samp{6.2.50.20030303_owner_name} or @samp{6.2 (Owner Name)}.

@end table

@subsection Tags
@anchor{Tags}

To simplify the identification of @value{GDBN} branches, the following branch tagging convention is strongly recommended:

@table @code

@item @var{owner}_@var{name}-@var{YYYYMMDD}-branchpoint
@itemx @var{owner}_@var{name}-@var{YYYYMMDD}-branch
The branch point and corresponding branch tag.  @var{YYYYMMDD} is the date that the branch was created.
A branch is created using the sequence: @anchor{experimental branch tags} @smallexample cvs rtag @var{owner}_@var{name}-@var{YYYYMMDD}-branchpoint gdb cvs rtag -b -r @var{owner}_@var{name}-@var{YYYYMMDD}-branchpoint \ @var{owner}_@var{name}-@var{YYYYMMDD}-branch gdb @end smallexample @item @var{owner}_@var{name}-@var{yyyymmdd}-mergepoint The tagged point, on the mainline, that was used when merging the branch on @var{yyyymmdd}. To merge in all changes since the branch was cut, use a command sequence like: @smallexample cvs rtag @var{owner}_@var{name}-@var{yyyymmdd}-mergepoint gdb cvs update \ -j@var{owner}_@var{name}-@var{YYYYMMDD}-branchpoint -j@var{owner}_@var{name}-@var{yyyymmdd}-mergepoint @end smallexample @noindent Similar sequences can be used to just merge in changes since the last merge. @end table @noindent For further information on @sc{cvs}, see @uref{http://www.gnu.org/software/cvs/, Concurrent Versions System}. @node Start of New Year Procedure @chapter Start of New Year Procedure @cindex new year procedure At the start of each new year, the following actions should be performed: @itemize @bullet @item Rotate the ChangeLog file The current @file{ChangeLog} file should be renamed into @file{ChangeLog-YYYY} where YYYY is the year that has just passed. A new @file{ChangeLog} file should be created, and its contents should contain a reference to the previous ChangeLog. The following should also be preserved at the end of the new ChangeLog, in order to provide the appropriate settings when editing this file with Emacs: @smallexample Local Variables: mode: change-log left-margin: 8 fill-column: 74 version-control: never coding: utf-8 End: @end smallexample @item Add an entry for the newly created ChangeLog file (@file{ChangeLog-YYYY}) in @file{gdb/config/djgpp/fnchange.lst}. @item Update the copyright year in the startup message Update the copyright year in: @itemize @bullet @item file @file{top.c}, function @code{print_gdb_version} @item file @file{gdbserver/server.c}, function @code{gdbserver_version} @item file @file{gdbserver/gdbreplay.c}, function @code{gdbreplay_version} @end itemize @item Run the @file{copyright.py} Python script to add the new year in the copyright notices of most source files. This script has been tested with Python 2.6 and 2.7. @end itemize @node Releasing GDB @chapter Releasing @value{GDBN} @cindex making a new release of gdb @section Branch Commit Policy The branch commit policy is pretty slack. @value{GDBN} releases 5.0, 5.1 and 5.2 all used the below: @itemize @bullet @item The @file{gdb/MAINTAINERS} file still holds. @item Don't fix something on the branch unless/until it is also fixed in the trunk. If this isn't possible, mentioning it in the @file{gdb/PROBLEMS} file is better than committing a hack. @item When considering a patch for the branch, suggested criteria include: Does it fix a build? Does it fix the sequence @kbd{break main; run} when debugging a static binary? @item The further a change is from the core of @value{GDBN}, the less likely the change will worry anyone (e.g., target specific code). @item Only post a proposal to change the core of @value{GDBN} after you've sent individual bribes to all the people listed in the @file{MAINTAINERS} file @t{;-)} @end itemize @emph{Pragmatics: Provided updates are restricted to non-core functionality there is little chance that a broken change will be fatal. 
This means that changes such as adding new architectures or (within reason) support for a new host are considered acceptable.}

@section Obsoleting code

Before anything else, poke the other developers (and around the source code) to see if there is anything that can be removed from @value{GDBN} (an old target, an unused file).

Obsolete code is identified by adding an @code{OBSOLETE} prefix to every line.  Doing this means that it is easy to identify something that has been obsoleted when grepping through the sources.

The process is done in stages --- this is mainly to ensure that the wider @value{GDBN} community has a reasonable opportunity to respond.  Remember, everything on the Internet takes a week.

@enumerate
@item
Post the proposal on @email{gdb@@sourceware.org, the GDB mailing list}.  Creating a bug report to track the task's state is also highly recommended.
@item
Wait a week or so.
@item
Post the proposal on @email{gdb-announce@@sourceware.org, the GDB Announcement mailing list}.
@item
Wait a week or so.
@item
Go through and edit all relevant files and lines so that they are prefixed with the word @code{OBSOLETE}.
@item
Wait until the next GDB version, containing this obsolete code, has been released.
@item
Remove the obsolete code.
@end enumerate

@noindent
@emph{Maintainer note: While removing old code is regrettable it is hopefully better for @value{GDBN}'s long term development.  Firstly it helps the developers by removing code that is either no longer relevant or simply wrong.  Secondly since it removes any history associated with the file (effectively clearing the slate) the developer has a much freer hand when it comes to fixing broken files.}

@section Before the Branch

The most important objective at this stage is to find and fix simple changes that become a pain to track once the branch is created.  For instance, configuration problems that stop @value{GDBN} from even building.  If you can't get the problem fixed, document it in the @file{gdb/PROBLEMS} file.

@subheading Prompt for @file{gdb/NEWS}

People always forget.  Send a post reminding them but also if you know something interesting happened add it yourself.  The @code{schedule} script will mention this in its e-mail.

@subheading Review @file{gdb/README}

Grab one of the nightly snapshots and then walk through the @file{gdb/README} looking for anything that can be improved.  The @code{schedule} script will mention this in its e-mail.

@subheading Refresh any imported files.

A number of files are taken from external repositories.  They include:

@itemize @bullet
@item
@file{texinfo/texinfo.tex}
@item
@file{config.guess} et.@: al.@: (see the top-level @file{MAINTAINERS} file)
@item
@file{etc/standards.texi}, @file{etc/make-stds.texi}
@end itemize

@subheading Check the ARI

@uref{http://sourceware.org/gdb/ari,,A.R.I.} is an @code{awk} script (Awk Regression Index ;-) that checks for a number of errors and coding conventions.  The checks include things like using @code{malloc} instead of @code{xmalloc} and file naming problems.  There shouldn't be any regressions.

@subsection Review the bug data base

Close anything obviously fixed.

@subsection Check all cross targets build

The targets are listed in @file{gdb/MAINTAINERS}.
@section Cut the Branch

@subheading Create the branch

@smallexample
$ u=5.1
$ v=5.2
$ V=`echo $v | sed 's/\./_/g'`
$ D=`date -u +%Y-%m-%d`
$ echo $u $V $D
5.1 5_2 2002-03-03
$ echo cvs -f -d :ext:sourceware.org:/cvs/src rtag \
   -D $D-gmt gdb_$V-$D-branchpoint insight
cvs -f -d :ext:sourceware.org:/cvs/src rtag -D 2002-03-03-gmt gdb_5_2-2002-03-03-branchpoint insight
$ ^echo ^^
...
$ echo cvs -f -d :ext:sourceware.org:/cvs/src rtag \
   -b -r gdb_$V-$D-branchpoint gdb_$V-branch insight
cvs -f -d :ext:sourceware.org:/cvs/src rtag \
   -b -r gdb_5_2-2002-03-03-branchpoint gdb_5_2-branch insight
$ ^echo ^^
...
$
@end smallexample

@itemize @bullet
@item
By using @kbd{-D YYYY-MM-DD-gmt}, the branch is forced to an exact date/time.
@item
The trunk is first tagged so that the branch point can easily be found.
@item
Insight, which includes @value{GDBN}, is tagged at the same time.
@item
@file{version.in} gets bumped to avoid version number conflicts.
@item
The reading of @file{.cvsrc} is disabled using @file{-f}.
@end itemize

@subheading Update @file{version.in}

@smallexample
$ u=5.1
$ v=5.2
$ V=`echo $v | sed 's/\./_/g'`
$ echo $u $V
5.1 5_2
$ cd /tmp
$ echo cvs -f -d :ext:sourceware.org:/cvs/src co \
   -r gdb_$V-branch src/gdb/version.in
cvs -f -d :ext:sourceware.org:/cvs/src co -r gdb_5_2-branch src/gdb/version.in
$ ^echo ^^
U src/gdb/version.in
$ cd src/gdb
$ echo $u.90-0000-00-00-cvs > version.in
$ cat version.in
5.1.90-0000-00-00-cvs
$ cvs -f commit version.in
@end smallexample

@itemize @bullet
@item
@file{0000-00-00} is used as a date to pump prime the version.in update mechanism.
@item
@file{.90} and the previous branch version are used as a fairly arbitrary initial branch version number.
@end itemize

@subheading Update the web and news pages

Something?

@subheading Tweak cron to track the new branch

The file @file{gdbadmin/cron/crontab} contains gdbadmin's cron table.  This file needs to be updated so that:

@itemize @bullet
@item
A daily timestamp is added to the file @file{version.in}.
@item
The new branch is included in the snapshot process.
@end itemize

@noindent
See the file @file{gdbadmin/cron/README} for how to install the updated cron table.

The file @file{gdbadmin/ss/README} should also be reviewed to reflect any changes.  That file is copied to both the branch/ and current/ snapshot directories.

@subheading Update the NEWS and README files

The @file{NEWS} file needs to be updated so that on the branch it refers to @emph{changes in the current release} while on the trunk it also refers to @emph{changes since the current release}.

The @file{README} file needs to be updated so that it refers to the current release.

@subheading Post the branch info

Send an announcement to the mailing lists:

@itemize @bullet
@item
@email{gdb-announce@@sourceware.org, GDB Announcement mailing list}
@item
@email{gdb@@sourceware.org, GDB Discussion mailing list} and @email{gdb-testers@@sourceware.org, GDB Testers mailing list}
@end itemize

@emph{Pragmatics: The branch creation is sent to the announce list to ensure that people not subscribed to the higher volume discussion list are alerted.}

The announcement should include:

@itemize @bullet
@item
The branch tag.
@item
How to check out the branch using CVS.
@item
The date/number of weeks until the release.
@item
The branch commit policy still holds.
@end itemize

@section Stabilize the branch

Something goes here.

@section Create a Release

The process of creating and then making available a release is broken down into a number of stages.
The first part addresses the technical process of creating a releasable tar ball. The later stages address the process of releasing that tar ball. When making a release candidate just the first section is needed. @subsection Create a release candidate The objective at this stage is to create a set of tar balls that can be made available as a formal release (or as a less formal release candidate). @subsubheading Freeze the branch Send out an e-mail notifying everyone that the branch is frozen to @email{gdb-patches@@sourceware.org}. @subsubheading Establish a few defaults. @smallexample $ b=gdb_5_2-branch $ v=5.2 $ t=/sourceware/snapshot-tmp/gdbadmin-tmp $ echo $t/$b/$v /sourceware/snapshot-tmp/gdbadmin-tmp/gdb_5_2-branch/5.2 $ mkdir -p $t/$b/$v $ cd $t/$b/$v $ pwd /sourceware/snapshot-tmp/gdbadmin-tmp/gdb_5_2-branch/5.2 $ which autoconf /home/gdbadmin/bin/autoconf $ @end smallexample @noindent Notes: @itemize @bullet @item Check the @code{autoconf} version carefully. You want to be using the version documented in the toplevel @file{README-maintainer-mode} file. It is very unlikely that the version of @code{autoconf} installed in system directories (e.g., @file{/usr/bin/autoconf}) is correct. @end itemize @subsubheading Check out the relevant modules: @smallexample $ for m in gdb insight do ( mkdir -p $m && cd $m && cvs -q -f -d /cvs/src co -P -r $b $m ) done $ @end smallexample @noindent Note: @itemize @bullet @item The reading of @file{.cvsrc} is disabled (@file{-f}) so that there isn't any confusion between what is written here and what your local @code{cvs} really does. @end itemize @subsubheading Update relevant files. @table @file @item gdb/NEWS Major releases get their comments added as part of the mainline. Minor releases should probably mention any significant bugs that were fixed. Don't forget to include the @file{ChangeLog} entry. @smallexample $ emacs gdb/src/gdb/NEWS ... c-x 4 a ... c-x c-s c-x c-c $ cp gdb/src/gdb/NEWS insight/src/gdb/NEWS $ cp gdb/src/gdb/ChangeLog insight/src/gdb/ChangeLog @end smallexample @item gdb/README You'll need to update: @itemize @bullet @item The version. @item The update date. @item Who did it. @end itemize @smallexample $ emacs gdb/src/gdb/README ... c-x 4 a ... c-x c-s c-x c-c $ cp gdb/src/gdb/README insight/src/gdb/README $ cp gdb/src/gdb/ChangeLog insight/src/gdb/ChangeLog @end smallexample @emph{Maintainer note: Hopefully the @file{README} file was reviewed before the initial branch was cut so just a simple substitute is needed to get it updated.} @emph{Maintainer note: Other projects generate @file{README} and @file{INSTALL} from the core documentation. This might be worth pursuing.} @item gdb/version.in @smallexample $ echo $v > gdb/src/gdb/version.in $ cat gdb/src/gdb/version.in 5.2 $ emacs gdb/src/gdb/version.in ... c-x 4 a ... Bump to version ... c-x c-s c-x c-c $ cp gdb/src/gdb/version.in insight/src/gdb/version.in $ cp gdb/src/gdb/ChangeLog insight/src/gdb/ChangeLog @end smallexample @end table @subsubheading Do the dirty work This is identical to the process used to create the daily snapshot. @smallexample $ for m in gdb insight do ( cd $m/src && gmake -f src-release $m.tar ) done @end smallexample If the top level source directory does not have @file{src-release} (@value{GDBN} version 5.3.1 or earlier), try these commands instead: @smallexample $ for m in gdb insight do ( cd $m/src && gmake -f Makefile.in $m.tar ) done @end smallexample @subsubheading Check the source files You're looking for files that have mysteriously disappeared. 
@kbd{distclean} has the habit of deleting files it shouldn't.  Watch out for the @file{version.in} update @kbd{cronjob}.

@smallexample
$ ( cd gdb/src && cvs -f -q -n update )
M djunpack.bat
? gdb-5.1.91.tar
? proto-toplev
@dots{} lots of generated files @dots{}
M gdb/ChangeLog
M gdb/NEWS
M gdb/README
M gdb/version.in
@dots{} lots of generated files @dots{}
$
@end smallexample

@noindent
@emph{Don't worry about the @file{gdb.info-??} or @file{gdb/p-exp.tab.c}.  They were generated (and yes @file{gdb.info-1} was also generated, only something strange with CVS means that they didn't get suppressed).  Fixing it would be nice though.}

@subsubheading Create compressed versions of the release

@smallexample
$ cp */src/*.tar .
$ cp */src/*.bz2 .
$ ls -F
gdb/ gdb-5.2.tar insight/ insight-5.2.tar
$ for m in gdb insight
do
  bzip2 -v -9 -c $m-$v.tar > $m-$v.tar.bz2
  gzip -v -9 -c $m-$v.tar > $m-$v.tar.gz
done
$
@end smallexample

@noindent
Note:

@itemize @bullet
@item
A pipe such as @kbd{bunzip2 < xxx.bz2 | gzip -9 > xxx.gz} is not used since, in that mode, @code{gzip} does not know the name of the file and, hence, can not include it in the compressed file.  This is also why the release process runs @code{tar} and @code{bzip2} as separate passes.
@end itemize

@subsection Sanity check the tar ball

Pick a popular machine (Solaris/PPC?) and try the build on that.

@smallexample
$ bunzip2 < gdb-5.2.tar.bz2 | tar xpf -
$ cd gdb-5.2
$ ./configure
$ make
@dots{}
$ ./gdb/gdb ./gdb/gdb
GNU gdb 5.2
@dots{}
(gdb) b main
Breakpoint 1 at 0x80732bc: file main.c, line 734.
(gdb) run
Starting program: /tmp/gdb-5.2/gdb/gdb
Breakpoint 1, main (argc=1, argv=0xbffff8b4) at main.c:734
734       catch_errors (captured_main, &args, "", RETURN_MASK_ALL);
(gdb) print args
$1 = @{argc = 136426532, argv = 0x821b7f0@}
(gdb)
@end smallexample

@subsection Make a release candidate available

If this is a release candidate then the only remaining steps are:

@enumerate
@item
Commit @file{version.in} and @file{ChangeLog}
@item
Tweak @file{version.in} (and @file{ChangeLog}) to read @var{L}.@var{M}.@var{N}-0000-00-00-cvs so that the version update process can restart.
@item
Make the release candidate available in @uref{ftp://sourceware.org/pub/gdb/snapshots/branch}
@item
Notify the relevant mailing lists (@email{gdb@@sourceware.org} and @email{gdb-testers@@sourceware.org}) that the candidate is available.
@end enumerate

@subsection Make a formal release available

(And you thought all that was required was to post an e-mail.)

@subsubheading Install on sware

Copy the new files to both the release and the old release directory:

@smallexample
$ cp *.bz2 *.gz ~ftp/pub/gdb/old-releases/
$ cp *.bz2 *.gz ~ftp/pub/gdb/releases
@end smallexample

@noindent
Clean up the releases directory so that only the most recent releases are available (e.g.@: keep 5.2 and 5.2.1 but remove 5.1):

@smallexample
$ cd ~ftp/pub/gdb/releases
$ rm @dots{}
@end smallexample

@noindent
Update the files @file{README} and @file{.message} in the releases directory:

@smallexample
$ vi README
@dots{}
$ rm -f .message
$ ln README .message
@end smallexample

@subsubheading Update the web pages.

@table @file

@item htdocs/download/ANNOUNCEMENT
This file, which is posted as the official announcement, includes:
@itemize @bullet
@item
General announcement.
@item
News.  If making an @var{M}.@var{N}.1 release, retain the news from the earlier @var{M}.@var{N} release.
@item
Errata.
@end itemize

@item htdocs/index.html
@itemx htdocs/news/index.html
@itemx htdocs/download/index.html
These files include:
@itemize @bullet
@item
Announcement of the most recent release.
@item
News entry (remember to update both the top level and the news directory).
@end itemize
These pages also need to be regenerated using @code{index.sh}.

@item download/onlinedocs/
You need to find the magic command that is used to generate the online docs from the @file{.tar.bz2}.  The best way is to look in the output from one of the nightly @code{cron} jobs and then just edit accordingly.  Something like:

@smallexample
$ ~/ss/update-web-docs \
   ~ftp/pub/gdb/releases/gdb-5.2.tar.bz2 \
   $PWD/www \
   /www/sourceware/htdocs/gdb/download/onlinedocs \
   gdb
@end smallexample

@item download/ari/
Just like the online documentation.  Something like:

@smallexample
$ /bin/sh ~/ss/update-web-ari \
   ~ftp/pub/gdb/releases/gdb-5.2.tar.bz2 \
   $PWD/www \
   /www/sourceware/htdocs/gdb/download/ari \
   gdb
@end smallexample

@end table

@subsubheading Shadow the pages onto gnu

Something goes here.

@subsubheading Install the @value{GDBN} tar ball on GNU

At the time of writing, the GNU machine was @kbd{gnudist.gnu.org} in @file{~ftp/gnu/gdb}.

@subsubheading Make the @file{ANNOUNCEMENT}

Post the @file{ANNOUNCEMENT} file you created above to:

@itemize @bullet
@item
@email{gdb-announce@@sourceware.org, GDB Announcement mailing list}
@item
@email{info-gnu@@gnu.org, General GNU Announcement list} (but delay it a day or so to let things get out)
@item
@email{bug-gdb@@gnu.org, GDB Bug Report mailing list}
@end itemize

@subsection Cleanup

The release is out but you're still not finished.

@subsubheading Commit outstanding changes

In particular you'll need to commit any changes to:

@itemize @bullet
@item
@file{gdb/ChangeLog}
@item
@file{gdb/version.in}
@item
@file{gdb/NEWS}
@item
@file{gdb/README}
@end itemize

@subsubheading Tag the release

Something like:

@smallexample
$ d=`date -u +%Y-%m-%d`
$ echo $d
2002-01-24
$ ( cd insight/src/gdb && cvs -f -q update )
$ ( cd insight/src && cvs -f -q tag gdb_5_2-$d-release )
@end smallexample

Insight is used since that contains more of the release than @value{GDBN}.

@subsubheading Mention the release on the trunk

Just put something in the @file{ChangeLog} so that the trunk also indicates when the release was made.

@subsubheading Restart @file{gdb/version.in}

If @file{gdb/version.in} does not contain an ISO date such as @kbd{2002-01-24} then the daily @code{cronjob} won't update it.  Having committed all the release changes it can be set to @file{5.2.0_0000-00-00-cvs} which will restart things (yes the @kbd{_} is important - it affects the snapshot process).  Don't forget the @file{ChangeLog}.

@subsubheading Merge into trunk

The files committed to the branch may also need changes merged into the trunk.

@subsubheading Revise the release schedule

Post a revised release schedule to @email{gdb@@sourceware.org, GDB Discussion List} with an updated announcement.  The schedule can be generated by running:

@smallexample
$ ~/ss/schedule `date +%s` schedule
@end smallexample

@noindent
The first parameter is the approximate date/time in seconds (from the epoch) of the most recent release.

Also update the schedule @code{cronjob}.

@section Post release

Remove any @code{OBSOLETE} code.

@node Testsuite
@chapter Testsuite
@cindex test suite

The testsuite is an important component of the @value{GDBN} package.
While it is always worthwhile to encourage user testing, in practice this is rarely sufficient; users typically use only a small subset of the available commands, and it has proven all too common for a change to cause a significant regression that went unnoticed for some time. The @value{GDBN} testsuite uses the DejaGNU testing framework. The tests themselves are calls to various @code{Tcl} procs; the framework runs all the procs and summarizes the passes and fails. @section Using the Testsuite @cindex running the test suite To run the testsuite, simply go to the @value{GDBN} object directory (or to the testsuite's objdir) and type @code{make check}. This just sets up some environment variables and invokes DejaGNU's @code{runtest} script. While the testsuite is running, you'll get mentions of which test file is in use, and a mention of any unexpected passes or fails. When the testsuite is finished, you'll get a summary that looks like this: @smallexample === gdb Summary === # of expected passes 6016 # of unexpected failures 58 # of unexpected successes 5 # of expected failures 183 # of unresolved testcases 3 # of untested testcases 5 @end smallexample To run a specific test script, type: @example make check RUNTESTFLAGS='@var{tests}' @end example where @var{tests} is a list of test script file names, separated by spaces. If you use GNU make, you can use its @option{-j} option to run the testsuite in parallel. This can greatly reduce the amount of time it takes for the testsuite to run. In this case, if you set @code{RUNTESTFLAGS} then, by default, the tests will be run serially even under @option{-j}. You can override this and force a parallel run by setting the @code{make} variable @code{FORCE_PARALLEL} to any non-empty value. Note that the parallel @kbd{make check} assumes that you want to run the entire testsuite, so it is not compatible with some dejagnu options, like @option{--directory}. The ideal test run consists of expected passes only; however, reality conspires to keep us from this ideal. Unexpected failures indicate real problems, whether in @value{GDBN} or in the testsuite. Expected failures are still failures, but ones which have been decided are too hard to deal with at the time; for instance, a test case might work everywhere except on AIX, and there is no prospect of the AIX case being fixed in the near future. Expected failures should not be added lightly, since you may be masking serious bugs in @value{GDBN}. Unexpected successes are expected fails that are passing for some reason, while unresolved and untested cases often indicate some minor catastrophe, such as the compiler being unable to deal with a test program. When making any significant change to @value{GDBN}, you should run the testsuite before and after the change, to confirm that there are no regressions. Note that truly complete testing would require that you run the testsuite with all supported configurations and a variety of compilers; however this is more than really necessary. In many cases testing with a single configuration is sufficient. Other useful options are to test one big-endian (Sparc) and one little-endian (x86) host, a cross config with a builtin simulator (powerpc-eabi, mips-elf), or a 64-bit host (Alpha). If you add new functionality to @value{GDBN}, please consider adding tests for it as well; this way future @value{GDBN} hackers can detect and fix their changes that break the functionality you added. 
Similarly, if you fix a bug that was not previously reported as a test
failure, please add a test case for it.  Some cases are extremely
difficult to test, such as code that handles host OS failures or bugs
in particular versions of compilers, and it's OK not to try to write
tests for all of those.

DejaGNU supports separate build, host, and target machines.  However,
some @value{GDBN} test scripts do not work if the build machine and
the host machine are not the same.  In such an environment, these
scripts will give a result of ``UNRESOLVED'', like this:

@smallexample
UNRESOLVED: gdb.base/example.exp: This test script does not work on a remote host.
@end smallexample

@section Testsuite Parameters

Several variables exist to modify the behavior of the testsuite.

@itemize @bullet

@item @code{TRANSCRIPT}

Sometimes it is convenient to get a transcript of the commands which
the testsuite sends to @value{GDBN}.  For example, if @value{GDBN}
crashes during testing, a transcript can be used to more easily
reconstruct the failure when running @value{GDBN} under @value{GDBN}.

You can instruct the @value{GDBN} testsuite to write transcripts by
setting the DejaGNU variable @code{TRANSCRIPT} (to any value) before
invoking @code{runtest} or @kbd{make check}.  The transcripts will be
written into DejaGNU's output directory.  One transcript will be made
for each invocation of @value{GDBN}; they will be named
@file{transcript.@var{n}}, where @var{n} is an integer.  The first
line of the transcript file will show how @value{GDBN} was invoked;
each subsequent line is a command sent as input to @value{GDBN}.

@smallexample
make check RUNTESTFLAGS=TRANSCRIPT=y
@end smallexample

Note that the transcript is not always complete.  In particular, tests
of completion can yield partial command lines.

@item @code{GDB}

Sometimes one wishes to test a different @value{GDBN} than the one in
the build directory.  For example, one may wish to run the testsuite
on @file{/usr/bin/gdb}.

@smallexample
make check RUNTESTFLAGS=GDB=/usr/bin/gdb
@end smallexample

@item @code{GDBSERVER}

When testing a different @value{GDBN}, it is often useful to also test
a different gdbserver.

@smallexample
make check RUNTESTFLAGS="GDB=/usr/bin/gdb GDBSERVER=/usr/bin/gdbserver"
@end smallexample

@item @code{INTERNAL_GDBFLAGS}

When running the testsuite normally one doesn't want whatever is in
@file{~/.gdbinit} to interfere with the tests; therefore the test
harness passes @option{-nx} to @value{GDBN}.  One also doesn't want
any windowed version of @value{GDBN}, e.g., @samp{gdb -tui}, to run.
This is achieved via @code{INTERNAL_GDBFLAGS}.

@smallexample
set INTERNAL_GDBFLAGS "-nw -nx"
@end smallexample

This is all well and good, except when testing an installed
@value{GDBN} that has been configured with
@option{--with-system-gdbinit}.  Here one does not want
@file{~/.gdbinit} loaded but one may want the system @file{.gdbinit}
file loaded.  This can be achieved by pointing @code{$HOME} at a
directory without a @file{.gdbinit} and by overriding
@code{INTERNAL_GDBFLAGS} and removing @option{-nx}.

@smallexample
cd testsuite
HOME=`pwd` runtest \
  GDB=/usr/bin/gdb \
  GDBSERVER=/usr/bin/gdbserver \
  INTERNAL_GDBFLAGS=-nw
@end smallexample

@end itemize

There are two ways to run the testsuite and pass additional parameters
to DejaGnu.  The first is with @kbd{make check} and specifying the
makefile variable @samp{RUNTESTFLAGS}.

@smallexample
make check RUNTESTFLAGS=TRANSCRIPT=y
@end smallexample

The second is to cd to the @file{testsuite} directory and invoke the
DejaGnu @command{runtest} command directly.

@smallexample
cd testsuite
make site.exp
runtest TRANSCRIPT=y
@end smallexample

@section Testsuite Configuration

@cindex Testsuite Configuration
It is possible to adjust the behavior of the testsuite by defining
the global variables listed below, either in a @file{site.exp} file,
or in a board file.

@itemize @bullet

@item @code{gdb_test_timeout}

Defining this variable changes the default timeout duration used
during communication with @value{GDBN}.  More specifically, the global
variable used during testing is @code{timeout}, but this variable gets
reset to @code{gdb_test_timeout} at the beginning of each testcase,
making sure that any local change to @code{timeout} in a testcase does
not affect subsequent testcases.

This global variable comes in handy when the debugger is slower than
normal due to the testing environment, triggering unexpected
@code{TIMEOUT} test failures.  Examples include when testing on a
remote machine, or against a system where communications are slow.

If not specifically defined, this variable gets automatically defined
to the same value as @code{timeout} during the testsuite
initialization.  The default value of the timeout is defined in the
file @file{gdb/testsuite/config/unix.exp} that is part of the
@value{GDBN} test suite@footnote{If you are using a board file, it
could override the test-suite default; search the board file for
``timeout''.}.

@end itemize
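
If the testing environment is slow enough that the stock timeout keeps
producing spurious @code{TIMEOUT} failures, the variable can simply be
raised.  The following is only a sketch; the value of 120 seconds is
made up for illustration:

@smallexample
# In site.exp or in a board file: give GDB more time to respond
# before a test times out.  Adjust the value to suit the setup.
set gdb_test_timeout 120
@end smallexample
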
@section Testsuite Organization

@cindex test suite organization
The testsuite is entirely contained in @file{gdb/testsuite}.  While
the testsuite includes some makefiles and configury, these are very
minimal, and used for little besides cleaning up, since the tests
themselves handle the compilation of the programs that @value{GDBN}
will run.

The file @file{testsuite/lib/gdb.exp} contains common utility procs
useful for all @value{GDBN} tests, while the directory
@file{testsuite/config} contains configuration-specific files,
typically used for special-purpose definitions of procs like
@code{gdb_load} and @code{gdb_start}.

The tests themselves are to be found in @file{testsuite/gdb.*} and
subdirectories of those.  The names of the test files must always end
with @file{.exp}.  DejaGNU collects the test files by wildcarding in
the test directories, so both subdirectories and individual files get
chosen and run in alphabetical order.

The following table lists the main types of subdirectories and what
they are for.  Since DejaGNU finds test files no matter where they are
located, and since each test file sets up its own compilation and
execution environment, this organization is simply for convenience and
intelligibility.

@table @file

@item gdb.base
This is the base testsuite.  The tests in it should apply to all
configurations of @value{GDBN} (but generic native-only tests may live
here).  The test programs should be in the subset of C that is valid
K&R, ANSI/ISO, and C@t{++} (@code{#ifdef}s are allowed if necessary,
for instance for prototypes).

@item gdb.@var{lang}
Language-specific tests for any language @var{lang} besides C.
Examples are @file{gdb.cp} and @file{gdb.java}.

@item gdb.@var{platform}
Non-portable tests.  The tests are specific to a particular
configuration (host or target), such as HP-UX or eCos.  Example is
@file{gdb.hp}, for HP-UX.

@item gdb.@var{compiler}
Tests specific to a particular compiler.  As of this writing (June
1999), there aren't any groups of tests in this category that couldn't
just as sensibly be made platform-specific, but one could imagine a
@file{gdb.gcc}, for tests of @value{GDBN}'s handling of GCC
extensions.

@item gdb.@var{subsystem}
Tests that exercise a specific @value{GDBN} subsystem in more depth.
For instance, @file{gdb.disasm} exercises various disassemblers, while
@file{gdb.stabs} tests pathways through the stabs symbol reader.

@end table

@section Writing Tests

@cindex writing tests
In many areas, the @value{GDBN} tests are already quite comprehensive;
you should be able to copy existing tests to handle new cases.

You should try to use @code{gdb_test} whenever possible, since it
includes cases to handle all the unexpected errors that might happen.
However, it doesn't cost anything to add new test procedures; for
instance, @file{gdb.base/exprs.exp} defines a @code{test_expr} that
calls @code{gdb_test} multiple times.

Only use @code{send_gdb} and @code{gdb_expect} when absolutely
necessary.  Even if @value{GDBN} has several valid responses to a
command, you can use @code{gdb_test_multiple}.  Like @code{gdb_test},
@code{gdb_test_multiple} recognizes internal errors and unexpected
prompts.
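
As an illustration, here is a sketch of both procs in use.  The
command, the expected values and the test names are invented for this
example; a real script would also contain the usual setup performed
through the procs in @file{testsuite/lib/gdb.exp}:

@smallexample
# Hypothetical test fragment; the variable and values are made up.

# The common case: one command, one acceptable answer.
gdb_test "print my_counter" " = 42" "my_counter has its initial value"

# Several answers are acceptable: prefer gdb_test_multiple over raw
# send_gdb/gdb_expect; it still catches internal errors and
# unexpected prompts.
set test "print sizeof (buffer)"
gdb_test_multiple "print sizeof (buffer)" $test @{
    -re " = 1024\r\n$gdb_prompt $" @{
        pass $test
    @}
    -re " = 4096\r\n$gdb_prompt $" @{
        pass $test
    @}
@}
@end smallexample
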
Do not write tests which expect a literal tab character from
@value{GDBN}.  On some operating systems (e.g.@: OpenBSD) the TTY
layer expands tabs to spaces, so by the time @value{GDBN}'s output
reaches expect the tab is gone.

The source language programs do @emph{not} need to be in a consistent
style.  Since @value{GDBN} is used to debug programs written in many
different styles, it's worth having a mix of styles in the testsuite;
for instance, some @value{GDBN} bugs involving the display of source
lines would never manifest themselves if the programs used GNU coding
style uniformly.

Some testcase results need more detailed explanation:

@table @code

@item KFAIL
Known problem of @value{GDBN} itself.  You must specify the
@value{GDBN} bug report number, as in these sample tests:

@smallexample
kfail "gdb/13392" "continue to marker 2"
@end smallexample

or

@smallexample
setup_kfail gdb/13392 "*-*-*"
kfail "continue to marker 2"
@end smallexample

@item XFAIL
Known problem of the environment.  This typically includes
@value{NGCC} but it also includes many other system components which
cannot be fixed in the @value{GDBN} project.

A sample test with a sanity check, used when the specific cause of the
problem is not known:

@smallexample
# On x86_64 it is commonly about 4MB.
if @{$stub_size > 25000000@} @{
    xfail "stub size $stub_size is too large"
    return
@}
@end smallexample

You should provide the bug report number for the failing component of
the environment, if such a bug report is available:

@smallexample
if @{[test_compiler_info @{gcc-[0-3]-*@}]
     || [test_compiler_info @{gcc-4-[0-5]-*@}]@} @{
    setup_xfail "gcc/46955" *-*-*
@}
gdb_test "python print ttype.template_argument(2)" "&C::c"
@end smallexample

@end table

@section Board settings

In the @value{GDBN} testsuite, the tests can be configured or
customized in the board file by means of @dfn{Board Settings}.  Each
setting should be consulted by test cases that depend on the
corresponding feature.  Here are the supported board settings; a
sketch of a board file that sets some of them follows the table.

@table @code

@item gdb,cannot_call_functions
The board does not support inferior calls, that is, invoking inferior
functions in @value{GDBN}.

@item gdb,can_reverse
The board supports reverse execution.

@item gdb,no_hardware_watchpoints
The board does not support hardware watchpoints.

@item gdb,nofileio
@value{GDBN} is unable to intercept target file operations in remote
targets and perform them on the host.

@item gdb,noinferiorio
The board is unable to provide I/O capability to the inferior.

@c @item gdb,noresults
@c NEED DOCUMENT.

@item gdb,nosignals
The board does not support signals.

@item gdb,skip_huge_test
Skip time-consuming tests if the board has a slow connection.

@item gdb,skip_float_tests
Skip floating-point related tests on the target board.

@item gdb,use_precord
The board supports process record.

@item gdb_server_prog
The location of GDBserver.  If a GDBserver other than the one in the
default location is used in the test, specify its location in this
variable.  The location is a file name of GDBserver, either absolute
or relative to the testsuite subdirectory in the build directory.

@item in_proc_agent
The location of the in-process agent.  If an in-process agent other
than the one in the default location is used in the test, specify its
location in this variable.  The location is a file name of the
in-process agent, either absolute or relative to the testsuite
subdirectory in the build directory.

@item noargs
@value{GDBN} does not support argument passing for the inferior.

@item no_long_long
The board does not support the type @code{long long}.

@c @item use_cygmon
@c NEED DOCUMENT.

@item use_gdb_stub
The tests are run with a gdb stub.

@end table
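
DejaGnu board files normally record these settings with
@code{set_board_info}, and test scripts consult them through
@code{target_info}.  The entries below are chosen only to illustrate
the syntax; they are not a recommendation for any real board:

@smallexample
# Fragment of a hypothetical board file (illustrative settings only).
set_board_info gdb,no_hardware_watchpoints 1
set_board_info gdb,nosignals 1
set_board_info gdb,skip_huge_test 1
@end smallexample

A test that depends on one of these features can then guard itself
with a check such as
@code{[target_info exists gdb,no_hardware_watchpoints]} and skip or
adjust the affected parts.
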
@node Hints
@chapter Hints

Check the @file{README} file; it often has useful information that
does not appear anywhere else in the directory.

@menu
* Getting Started::    Getting started working on @value{GDBN}
* Debugging GDB::      Debugging @value{GDBN} with itself
@end menu

@node Getting Started
@section Getting Started

@value{GDBN} is a large and complicated program, and if you are just
starting to work on it, it can be hard to know where to start.
Fortunately, if you know how to go about it, there are ways to figure
out what is going on.

This manual, the @value{GDBN} Internals manual, has information which
applies generally to many parts of @value{GDBN}.

Information about particular functions or data structures is located
in comments with those functions or data structures.  If you run
across a function or a global variable which does not have a comment
correctly explaining what it does, this can be thought of as a bug in
@value{GDBN}; feel free to submit a bug report, with a suggested
comment if you can figure out what the comment should say.  If you
find a comment which is actually wrong, be especially sure to report
that.

Comments explaining the function of macros defined in host, target,
or native dependent files can be in several places.  Sometimes they
are repeated every place the macro is defined.  Sometimes they are
where the macro is used.  Sometimes there is a header file which
supplies a default definition of the macro, and the comment is there.
This manual also documents all the available macros.
@c (@pxref{Host Conditionals}, @pxref{Target
@c Conditionals}, @pxref{Native Conditionals}, and @pxref{Obsolete
@c Conditionals})

Start with the header files.  Once you have some idea of how
@value{GDBN}'s internal symbol tables are stored (see
@file{symtab.h}, @file{gdbtypes.h}), you will find it much easier to
understand the code which uses and creates those symbol tables.

You may wish to process the information you are getting somehow, to
enhance your understanding of it.  Summarize it, translate it to
another language, add some (perhaps trivial or non-useful) feature to
@value{GDBN}, use the code to predict what a test case would do and
write the test case and verify your prediction, etc.  If you are
reading code and your eyes are starting to glaze over, this is a sign
you need to use a more active approach.

Once you have a part of @value{GDBN} to start with, you can find more
specifically the part you are looking for by stepping through each
function with the @code{next} command.  Do not use @code{step} or you
will quickly get distracted; when the function you are stepping
through calls another function, try only to get a big-picture
understanding (perhaps using the comment at the beginning of the
function being called) of what it does.  This way you can identify
which of the functions being called by the function you are stepping
through is the one which you are interested in.  You may need to
examine the data structures generated at each stage, with reference
to the comments in the header files explaining what the data
structures are supposed to look like.

Of course, this same technique can be used if you are just reading
the code, rather than actually stepping through it.  The same general
principle applies---when the code you are looking at calls something
else, just try to understand generally what the code being called
does, rather than worrying about all its details.

@cindex command implementation
A good place to start when tracking down some particular area is with
a command which invokes that feature.  Suppose you want to know how
single-stepping works.  As a @value{GDBN} user, you know that the
@code{step} command invokes single-stepping.  The command is invoked
via command tables (see @file{command.h}); by convention the function
which actually performs the command is formed by taking the name of
the command and adding @samp{_command}, or in the case of an
@code{info} subcommand, @samp{_info}.  For example, the @code{step}
command invokes the @code{step_command} function and the
@code{info display} command invokes @code{display_info}.  When this
convention is not followed, you might have to use @code{grep} or
@kbd{M-x tags-search} in emacs, or run @value{GDBN} on itself and set
a breakpoint in @code{execute_command}.

@cindex @code{bug-gdb} mailing list
If all of the above fail, it may be appropriate to ask for information
on @code{bug-gdb}.  But @emph{never} post a generic question like ``I
was wondering if anyone could give me some tips about understanding
@value{GDBN}''---if we had some magic secret we would put it in this
manual.  Suggestions for improving the manual are always welcome, of
course.

@node Debugging GDB
@section Debugging @value{GDBN} with itself
@cindex debugging @value{GDBN}

If @value{GDBN} is limping on your machine, this is the preferred way
to get it fully functional.  Be warned that in some ancient Unix
systems, like Ultrix 4.2, a program can't be running in one process
while it is being debugged in another.  Rather than typing the command
@kbd{@w{./gdb ./gdb}}, which works on Suns and such, you can copy
@file{gdb} to @file{gdb2} and then type @kbd{@w{./gdb ./gdb2}}.

When you run @value{GDBN} in the @value{GDBN} source directory, it
will read the @file{gdb-gdb.gdb} file (and possibly the
@file{gdb-gdb.py} file) that sets up some simple things to make
debugging gdb easier.  The @code{info} command, when executed without
a subcommand in a @value{GDBN} being debugged by gdb, will pop you
back up to the top level gdb.
See @file{gdb-gdb.gdb} for details.

If you use emacs, you will probably want to do a @code{make TAGS}
after you configure your distribution; this will put the machine
dependent routines for your local machine where they will be accessed
first by @kbd{M-.}

Also, make sure that you've either compiled @value{GDBN} with your
local cc, or have run @code{fixincludes} if you are compiling with
gcc.

@section Submitting Patches

@cindex submitting patches
Thanks for thinking of offering your changes back to the community of
@value{GDBN} users.  In general we like to get well designed
enhancements.  Thanks also for checking in advance about the best way
to transfer the changes.

The @value{GDBN} maintainers will only install ``cleanly designed''
patches.  This manual summarizes what we believe to be clean design
for @value{GDBN}.

If the maintainers don't have time to put the patch in when it
arrives, or if there is any question about a patch, it goes into a
large queue with everyone else's patches and bug reports.

@cindex legal papers for code contributions
The legal issue is that to incorporate substantial changes requires a
copyright assignment from you and/or your employer, granting ownership
of the changes to the Free Software Foundation.  You can get the
standard documents for doing this by sending mail to
@code{gnu@@gnu.org} and asking for them.  We recommend that people
write in ``All programs owned by the Free Software Foundation'' as
``NAME OF PROGRAM'', so that changes in many programs (not just
@value{GDBN}, but GAS, Emacs, GCC, etc) can be contributed with only
one piece of legalese pushed through the bureaucracy and filed with
the FSF.  We can't start merging changes until this paperwork is
received by the FSF (their rules, which we follow since we maintain it
for them).

Technically, the easiest way to receive changes is to receive each
feature as a small context diff or unidiff, suitable for @code{patch}.
Each message should include the changes to C code and header files for
a single feature, plus @file{ChangeLog} entries for each directory
where files were modified, and diffs for any changes needed to the
manuals (@file{gdb/doc/gdb.texinfo} or @file{gdb/doc/gdbint.texinfo}).
If there are a lot of changes for a single feature, they can be split
into multiple messages.

In this way, if we read and like the feature, we can add it to the
sources with a single patch command, do some testing, and check it in.
If you leave out the @file{ChangeLog}, we have to write one.  If you
leave out the doc, we have to puzzle out what needs documenting.
Etc., etc.

The reason to send each change in a separate message is that we will
not install some of the changes.  They'll be returned to you with
questions or comments.  If we're doing our job correctly, the message
back to you will say what you have to fix in order to make the change
acceptable.  The reason to have separate messages for separate
features is so that the acceptable changes can be installed while one
or more changes are being reworked.  If multiple features are sent in
a single message, we tend not to put in the effort to sort out the
acceptable changes from the unacceptable, so none of the features get
installed until all are acceptable.

If this sounds painful or authoritarian, well, it is.  But we get a
lot of bug reports and a lot of patches, and many of them don't get
installed because we don't have the time to finish the job that the
bug reporter or the contributor could have done.
Patches that arrive complete, working, and well designed tend to get
installed on the day they arrive.  The others go into a queue and get
installed as time permits, which, since the maintainers have many
demands to meet, may not be for quite some time.

Please send patches directly to
@email{gdb-patches@@sourceware.org, the @value{GDBN} maintainers}.

@section Build Script

@cindex build script
The script @file{gdb_buildall.sh} builds @value{GDBN} with the
@option{--enable-targets=all} flag set.  This builds @value{GDBN} with
all supported targets activated.  This helps with testing
@value{GDBN} when making changes that affect more than one
architecture, and is much faster than using @file{gdb_mbuild.sh}.

After building @value{GDBN} the script checks which architectures are
supported and then switches the current architecture to each of those
to get information about the architecture.  The test results are
stored in log files in the directory the script was called from.

@include observer.texi

@node GNU Free Documentation License
@appendix GNU Free Documentation License
@include fdl.texi

@node Concept Index
@unnumbered Concept Index
@printindex cp

@node Function and Variable Index
@unnumbered Function and Variable Index
@printindex fn

@bye