path: root/gdb
Age | Commit message | Author | Files | Lines
2015-09-21 | Add two missing consts | Simon Marchi | 3 | -2/+7
Two missing consts, found while doing cxx-conversion work. We end up with a char*, even though we pass a const char* to strstr. I am pushing this as obvious. gdb/ChangeLog: * cli/cli-setshow.c (cmd_show_list): Constify a variable. * linespec.c (linespec_lexer_lex_string): Same.
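The fix above is about const-correctness around strstr. A minimal, self-contained illustration of the pattern (not the gdb code; the string and variable names here are made up):

~~~
#include <stdio.h>
#include <string.h>

int
main (void)
{
  const char *text = "maintenance info sections";  /* hypothetical input */

  /* strstr takes a const char * but returns a plain char *, so storing
     the result in a char * silently drops constness.  Declaring the
     receiving variable const, as the patch does, keeps it read-only.  */
  const char *hit = strstr (text, "info");

  if (hit != NULL)
    printf ("found at offset %ld\n", (long) (hit - text));
  return 0;
}
~~~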
2015-09-21 | Add NEWS entry for fast tracepoint support on aarch64-linux | Pierre Langlois | 2 | -0/+8
Here is a NEWS entry for this series: gdb/ChangeLog: * NEWS: Mention support for fast tracepoints on aarch64-linux.
2015-09-21 | Add a test case for fast tracepoints' locking mechanism | Pierre Langlois | 3 | -0/+199
When installing a fast tracepoint, we create a jump pad with a spin-lock. This way, only one thread can collect a given tracepoint at any time. This test case checks that this lock actually works as expected. This test works by creating a function which overrides the in-process agent library's gdb_collect function. On start up, GDBserver will ask GDB with the 'qSymbol' packet about symbols present in the inferior. GDB will reply with the gdb_agent_gdb_collect function from the test case instead of the one from the agent. gdb/testsuite/ChangeLog: * gdb.trace/ftrace-lock.c: New file. * gdb.trace/ftrace-lock.exp: New file.
2015-09-21 | Add a gdb.trace test for instruction relocation | Pierre Langlois | 3 | -0/+627
This test case makes sure that relocating PC-relative instructions does not change their behavior. All PC-relative AArch64 instructions are covered, while call and jump (32-bit relative) instructions are covered on x86. The test case creates a static array of function pointers for each supported architecture. Each function in this array tests a specific instruction using inline assembly. Each one needs to contain a symbol of the form 'set_point\[0-9\]+' and must finish by calling either pass or fail. The 'set_pointN' numbers run from 0 to (ARRAY_SIZE - 1). The test will:
- look up the number of function pointers in the static array.
- set fast tracepoints on each 'set_point\[0-9\]+' symbol, one in each function from 0 to (ARRAY_SIZE - 1).
- run the trace experiment and make sure the pass function is called for every function.
gdb/testsuite/ChangeLog: * gdb.arch/insn-reloc.c: New file. * gdb.arch/ftrace-insn-reloc.exp: New file.
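A hypothetical skeleton of the scheme described in the entry above (this is not the actual gdb.arch/insn-reloc.c; the function names and the asm body are placeholders):

~~~
#include <stddef.h>

typedef void (*testcase_ftype) (void);

static void pass (void) { /* the .exp file traces/collects here */ }
static void fail (void) { }

static void
can_relocate_branch (void)
{
  /* The real test emits inline assembly here containing a global
     "set_point0" label followed by a PC-relative instruction; the asm
     must end up calling either pass or fail.  */
  pass ();
}

/* One entry per 'set_pointN' symbol; the .exp file derives the number
   of fast tracepoints to set from the size of this array.  */
static testcase_ftype testcases[] =
{
  can_relocate_branch,
};

int
main (void)
{
  size_t i;

  for (i = 0; i < sizeof (testcases) / sizeof (testcases[0]); i++)
    testcases[i] ();

  return 0;
}
~~~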
2015-09-21 | Enable fast tracepoint tests | Pierre Langlois | 11 | -1/+36
gdb/testsuite/ChangeLog: * gdb.trace/change-loc.h (func4) [__aarch64__]: Add a nop instruction. * gdb.trace/pendshr1.c (pendfunc): Likewise. * gdb.trace/pendshr2.c (pendfunc2): Likewise. * gdb.trace/range-stepping.c: Likewise. * gdb.trace/trace-break.c: Likewise. * gdb.trace/trace-mt.c (thread_function): Likewise. * gdb.trace/ftrace.c (marker): Likewise. * gdb.trace/trace-condition.c (marker): Likewise. * gdb.trace/ftrace.exp: Enable ftrace test if is_aarch64_target. * gdb.trace/trace-condition.exp: Set pcreg to "\$pc" if is_aarch64_target.
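The ChangeLog above only says a nop is added; presumably this gives each marker function a plain 4-byte instruction that an AArch64 fast tracepoint can replace with a branch to the jump pad. A hedged sketch of what such a change looks like (illustrative only, not the exact test-suite diff):

~~~
void
marker (void)
{
#if (defined __aarch64__)
  /* Assumption: provide a single 4-byte instruction at the tracepoint
     location so an AArch64 fast tracepoint can be installed here.  */
  asm ("    nop");
#endif
}
~~~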
2015-09-21 | Implement target_emit_ops | Pierre Langlois | 2 | -14/+1300
This patch implements compiling agent expressions to native code for AArch64. This allows us to compile conditions set on fast tracepoints. The compiled function has the following prologue:

High *------------------------------------------------------*
     | LR                                                    |
     | FP                                                    | <- FP
     | x1  (ULONGEST *value)                                 |
     | x0  (unsigned char *regs)                             |
Low  *------------------------------------------------------*

We save the function's arguments on the stack as well as the return address and the frame pointer. We then set the current frame pointer to point to the previous one. The generated code for the expression will freely update the stack pointer so we use the frame pointer to refer to `*value' and `*regs'. `*value' needs to be accessed in the epilogue of the function, in order to set it to whatever is on top of the stack. `*regs' needs to be passed down to the `gdb_agent_get_raw_reg' function with the `reg' operation.

gdb/gdbserver/ChangeLog: * linux-aarch64-low.c: Include ax.h and tracepoint.h. (enum aarch64_opcodes) <RET>, <SUBS>, <AND>, <ORR>, <ORN>, <EOR>, <LSLV>, <LSRV>, <ASRV>, <SBFM>, <UBFM>, <CSINC>, <MUL>, <NOP>: New. (enum aarch64_condition_codes): New enum. (w0): New static global. (fp): Likewise. (lr): Likewise. (struct aarch64_memory_operand) <type>: New MEMORY_OPERAND_POSTINDEX type. (postindex_memory_operand): New helper function. (emit_ret): New function. (emit_load_store_pair): New function, factored out of emit_stp with support for MEMORY_OPERAND_POSTINDEX. (emit_stp): Rewrite using emit_load_store_pair. (emit_ldp): New function. (emit_load_store): Likewise. (emit_ldr): Mention post-index instruction in comment. (emit_ldrh): New function. (emit_ldrb): New function. (emit_ldrsw): Mention post-index instruction in comment. (emit_str): Likewise. (emit_subs): New function. (emit_cmp): Likewise. (emit_and): Likewise. (emit_orr): Likewise. (emit_orn): Likewise. (emit_eor): Likewise. (emit_mvn): Likewise. (emit_lslv): Likewise. (emit_lsrv): Likewise. (emit_asrv): Likewise. (emit_mul): Likewise. (emit_sbfm): Likewise. (emit_sbfx): Likewise. (emit_ubfm): Likewise. (emit_ubfx): Likewise. (emit_csinc): Likewise. (emit_cset): Likewise. (emit_nop): Likewise. (emit_ops_insns): New helper function. (emit_pop): Likewise. (emit_push): Likewise. (aarch64_emit_prologue): New function. (aarch64_emit_epilogue): Likewise. (aarch64_emit_add): Likewise. (aarch64_emit_sub): Likewise. (aarch64_emit_mul): Likewise. (aarch64_emit_lsh): Likewise. (aarch64_emit_rsh_signed): Likewise. (aarch64_emit_rsh_unsigned): Likewise. (aarch64_emit_ext): Likewise. (aarch64_emit_log_not): Likewise. (aarch64_emit_bit_and): Likewise. (aarch64_emit_bit_or): Likewise. (aarch64_emit_bit_xor): Likewise. (aarch64_emit_bit_not): Likewise. (aarch64_emit_equal): Likewise. (aarch64_emit_less_signed): Likewise. (aarch64_emit_less_unsigned): Likewise. (aarch64_emit_ref): Likewise. (aarch64_emit_if_goto): Likewise. (aarch64_emit_goto): Likewise. (aarch64_write_goto_address): Likewise. (aarch64_emit_const): Likewise. (aarch64_emit_call): Likewise. (aarch64_emit_reg): Likewise. (aarch64_emit_pop): Likewise. (aarch64_emit_stack_flush): Likewise. (aarch64_emit_zero_ext): Likewise. (aarch64_emit_swap): Likewise. (aarch64_emit_stack_adjust): Likewise. (aarch64_emit_int_call_1): Likewise. (aarch64_emit_void_call_2): Likewise. (aarch64_emit_eq_goto): Likewise. (aarch64_emit_ne_goto): Likewise. (aarch64_emit_lt_goto): Likewise. (aarch64_emit_le_goto): Likewise. (aarch64_emit_gt_goto): Likewise. (aarch64_emit_ge_got): Likewise.
(aarch64_emit_ops_impl): New static global variable. (aarch64_emit_ops): New target function, return &aarch64_emit_ops_impl. (struct linux_target_ops): Install it.
2015-09-21 | Add support for fast tracepoints | Pierre Langlois | 5 | -3/+1689
This patch adds support for fast tracepoints for aarch64-linux. With this implementation, a tracepoint can only be placed within a +/- 128MB range of the jump pad. This is due to the unconditional branch instruction being limited to a (26 bit << 2) offset from the current PC.

Three target operations are implemented:

- target_install_fast_tracepoint_jump_pad
Building the jump pad is the biggest change of this patch. We need to add functions to emit all instructions needed to save and restore the current state when the tracepoint is hit, as well as implementing a lock and creating a collecting_t object identifying the current thread. Steps performed by the jump pad:
* Save the current state on the stack.
* Push a collecting_t object on the stack. We read the special tpidr_el0 system register to get the thread ID.
* Spin-lock on the shared memory location of all tracing threads. We write the address of our collecting_t object there once we have the lock.
* Call gdb_collect.
* Release the lock.
* Restore the state.
* Execute the replaced instruction, which will have been relocated.
* Jump back to the program.

- target_get_thread_area
As implemented in ps_get_thread_area, target_get_thread_area uses ptrace to fetch the NT_ARM_TLS register set. At the architecture level, NT_ARM_TLS represents the tpidr_el0 system register. So this ptrace call (if lwpid is the current thread):
~~~
ptrace (PTRACE_GETREGSET, lwpid, NT_ARM_TLS, &iovec);
~~~
is equivalent to the following instruction:
~~~
mrs x0, tpidr_el0
~~~
This instruction is used when creating the collecting_t object that GDBserver can read to know if a given thread is currently tracing. So target_get_thread_area must get the same thread IDs as what the jump pad writes into its collecting_t object.

- target_get_min_fast_tracepoint_insn_len
This just returns 4.

gdb/gdbserver/ChangeLog: * Makefile.in (linux-aarch64-ipa.o, aarch64-ipa.o): New rules. * configure.srv (aarch64*-*-linux*): Add linux-aarch64-ipa.o and aarch64-ipa.o. * linux-aarch64-ipa.c: New file. * linux-aarch64-low.c: Include arch/aarch64-insn.h, inttypes.h and endian.h. (aarch64_get_thread_area): New target method. (extract_signed_bitfield): New helper function. (aarch64_decode_ldr_literal): New function. (enum aarch64_opcodes): New enum. (struct aarch64_register): New struct. (struct aarch64_operand): New struct. (x0): New static global. (x1): Likewise. (x2): Likewise. (x3): Likewise. (x4): Likewise. (w2): Likewise. (ip0): Likewise. (sp): Likewise. (xzr): Likewise. (aarch64_register): New helper function. (register_operand): Likewise. (immediate_operand): Likewise. (struct aarch64_memory_operand): New struct. (offset_memory_operand): New helper function. (preindex_memory_operand): Likewise. (enum aarch64_system_control_registers): New enum. (ENCODE): New macro. (emit_insn): New helper function. (emit_b): New function. (emit_bcond): Likewise. (emit_cb): Likewise. (emit_tb): Likewise. (emit_blr): Likewise. (emit_stp): Likewise. (emit_ldp_q_offset): Likewise. (emit_stp_q_offset): Likewise. (emit_load_store): Likewise. (emit_ldr): Likewise. (emit_ldrsw): Likewise. (emit_str): Likewise. (emit_ldaxr): Likewise. (emit_stxr): Likewise. (emit_stlr): Likewise. (emit_data_processing_reg): Likewise. (emit_data_processing): Likewise. (emit_add): Likewise. (emit_sub): Likewise. (emit_mov): Likewise. (emit_movk): Likewise. (emit_mov_addr): Likewise. (emit_mrs): Likewise. (emit_msr): Likewise. (emit_sevl): Likewise. (emit_wfe): Likewise. (append_insns): Likewise. (can_encode_int32_in): New helper function.
(aarch64_relocate_instruction): New function. (aarch64_install_fast_tracepoint_jump_pad): Likewise. (aarch64_get_min_fast_tracepoint_insn_len): Likewise. (struct linux_target_ops): Install aarch64_get_thread_area, aarch64_install_fast_tracepoint_jump_pad and aarch64_get_min_fast_tracepoint_insn_len.
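The entry above shows the key ptrace call behind target_get_thread_area. A standalone, hedged sketch of reading the tpidr_el0-based thread area of an already-traced LWP this way (error handling trimmed; this is not the gdb/gdbserver source):

~~~
#include <elf.h>          /* NT_ARM_TLS */
#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

static int
read_thread_area (pid_t lwpid, uint64_t *base)
{
  uint64_t reg;
  struct iovec iovec;

  iovec.iov_base = &reg;
  iovec.iov_len = sizeof (reg);

  /* Equivalent to what "mrs xN, tpidr_el0" would read in the thread
     itself, but fetched from outside via the NT_ARM_TLS register set.  */
  if (ptrace (PTRACE_GETREGSET, lwpid, NT_ARM_TLS, &iovec) != 0)
    return -1;

  *base = reg;
  return 0;
}
~~~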
2015-09-21 | Make aarch64_decode_adrp handle both ADR and ADRP instructions | Pierre Langlois | 4 | -7/+40
We will need to decode both ADR and ADRP instructions in GDBserver. This patch makes common code handle both cases, even if GDB only needs to decode the ADRP instruction. gdb/ChangeLog: * aarch64-tdep.c (aarch64_analyze_prologue): New is_adrp variable. Call aarch64_decode_adr instead of aarch64_decode_adrp. * arch/aarch64-insn.h (aarch64_decode_adrp): Delete. (aarch64_decode_adr): New function declaration. * arch/aarch64-insn.c (aarch64_decode_adrp): Delete. (aarch64_decode_adr): New function, factored out from aarch64_decode_adrp to decode both adr and adrp instructions.
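For context on why one decoder can serve both: ADR and ADRP share a single encoding, op | immlo(2) | 1 0 0 0 0 | immhi(19) | Rd(5), where bit 31 (op) selects ADRP and the immediate is immhi:immlo, counted in bytes for ADR and in 4 KiB pages for ADRP. An illustrative decoder along those lines (a sketch derived from the architecture encoding, not the gdb implementation):

~~~
#include <stdint.h>

/* Returns 1 and fills the out-parameters if INSN is ADR or ADRP.  */
static int
decode_adr_family (uint32_t insn, int *is_adrp, unsigned *rd, int32_t *imm)
{
  int32_t imm21;

  if ((insn & 0x1f000000) != 0x10000000)
    return 0;                        /* not ADR/ADRP */

  *is_adrp = (insn >> 31) & 1;
  *rd = insn & 0x1f;

  /* immhi:immlo is a signed 21-bit immediate.  */
  imm21 = ((insn >> 29) & 0x3) | (((insn >> 5) & 0x7ffff) << 2);
  if (imm21 & 0x100000)
    imm21 -= 0x200000;               /* sign-extend */

  *imm = imm21;                      /* bytes for ADR, 4 KiB pages for ADRP */
  return 1;
}
~~~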
2015-09-21 | Move instruction decoding into new arch/ directory | Pierre Langlois | 9 | -201/+366
This patch moves the following functions into the arch/ common directory, in new files arch/aarch64-insn.{h,c}. They are prefixed with 'aarch64_': - aarch64_decode_adrp - aarch64_decode_b - aarch64_decode_cb - aarch64_decode_tb We will need them to implement fast tracepoints in GDBserver. For consistency, this patch also adds the 'aarch64_' prefix to static decoding functions that do not need to be shared right now. V2: make sure the formatting issues propagated fix `gdbserver/configure.srv'. gdb/ChangeLog: * Makefile.in (ALL_64_TARGET_OBS): Add aarch64-insn.o. (HFILES_NO_SRCDIR): Add arch/aarch64-insn.h. (aarch64-insn.o): New rule. * configure.tgt (aarch64*-*-elf): Add aarch64-insn.o. (aarch64*-*-linux*): Likewise. * arch/aarch64-insn.c: New file. * arch/aarch64-insn.h: New file. * aarch64-tdep.c: Include arch/aarch64-insn.h. (aarch64_debug): Move to arch/aarch64-insn.c. Declare in arch/aarch64-insn.h. (decode_add_sub_imm): Rename to ... (aarch64_decode_add_sub_imm): ... this. (decode_adrp): Rename to ... (aarch64_decode_adrp): ... this. Move to arch/aarch64-insn.c. Declare in arch/aarch64-insn.h. (decode_b): Rename to ... (aarch64_decode_b): ... this. Move to arch/aarch64-insn.c. Declare in arch/aarch64-insn.h. (decode_bcond): Rename to ... (aarch64_decode_bcond): ... this. Move to arch/aarch64-insn.c. Declare in arch/aarch64-insn.h. (decode_br): Rename to ... (aarch64_decode_br): ... this. (decode_cb): Rename to ... (aarch64_decode_cb): ... this. Move to arch/aarch64-insn.c. Declare in arch/aarch64-insn.h. (decode_eret): Rename to ... (aarch64_decode_eret): ... this. (decode_movz): Rename to ... (aarch64_decode_movz): ... this. (decode_orr_shifted_register_x): Rename to ... (aarch64_decode_orr_shifted_register_x): ... this. (decode_ret): Rename to ... (aarch64_decode_ret): ... this. (decode_stp_offset): Rename to ... (aarch64_decode_stp_offset): ... this. (decode_stp_offset_wb): Rename to ... (aarch64_decode_stp_offset_wb): ... this. (decode_stur): Rename to ... (aarch64_decode_stur): ... this. (decode_tb): Rename to ... (aarch64_decode_tb): ... this. Move to arch/aarch64-insn.c. Declare in arch/aarch64-insn.h. (aarch64_analyze_prologue): Adjust calls to renamed functions. gdb/gdbserver/ChangeLog: * Makefile.in (aarch64-insn.o): New rule. * configure.srv (aarch64*-*-linux*): Add aarch64-insn.o.
2015-09-21 | Wrap gdb_agent_op_sizes by #ifndef IN_PROCESS_AGENT | Yao Qi | 2 | -0/+6
Hi, I see the following build warning with recent GCC built from mainline, aarch64-none-linux-gnu-gcc -g -O2 -I. -I/home/yao/SourceCode/gnu/gdb/git/gdb/gdbserver -I/home/yao/SourceCode/gnu/gdb/git/gdb/gdbserver/../common -I/home/yao/SourceCode/gnu/gdb/git/gdb/gdbserver/../regformats -I/home/yao/SourceCode/gnu/gdb/git/gdb/gdbserver/.. -I/home/yao/SourceCode/gnu/gdb/git/gdb/gdbserver/../../include -I/home/yao/SourceCode/gnu/gdb/git/gdb/gdbserver/../gnulib/import -Ibuild-gnulib-gdbserver/import -Wall -Wpointer-arith -Wformat-nonliteral -Wno-char-subscripts -Wempty-body -Wdeclaration-after-statement -Werror -DGDBSERVER -DCONFIG_UST_GDB_INTEGRATION -fPIC -DIN_PROCESS_AGENT -fvisibility=hidden -c -o ax-ipa.o -MT ax-ipa.o -MMD -MP -MF .deps/ax-ipa.Tpo `echo " -Wall -Wpointer-arith -Wformat-nonliteral -Wno-char-subscripts -Wempty-body -Wdeclaration-after-statement " | sed "s/ -Wformat-nonliteral / -Wno-format-nonliteral /g"` /home/yao/SourceCode/gnu/gdb/git/gdb/gdbserver/ax.c /home/yao/SourceCode/gnu/gdb/git/gdb/gdbserver/ax.c:73:28: error: 'gdb_agent_op_sizes' defined but not used [-Werror=unused-const-variable] static const unsigned char gdb_agent_op_sizes [gdb_agent_op_last] = ^ cc1: all warnings being treated as errors gdb_agent_op_sizes is only used in function is_goto_target, which is defined inside #ifndef IN_PROCESS_AGENT. This warning is not arch specific, so GCC mainline for other targets should produce this warning too, although this warning is triggered by enabling aarch64 fast tracepoint. The fix is to move gdb_agent_op_sizes to gdb/gdbserver: 2015-09-21 Yao Qi <yao.qi@linaro.org> * ax.c [!IN_PROCESS_AGENT] (gdb_agent_op_sizes): Define it.
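A self-contained illustration of the pattern used by the fix (the enum, table, and function here are made-up stand-ins, not the gdbserver sources): a lookup table whose only reader is compiled out of the in-process agent gets wrapped in the same guard, which silences -Wunused-const-variable for the IPA build.

~~~
enum op { op_add, op_goto, op_last };

#ifndef IN_PROCESS_AGENT

#include <stdio.h>

/* Only referenced below, so only defined when its reader exists.  */
static const unsigned char op_sizes[op_last] = { 1, 3 };

static int
operand_size (enum op o)
{
  return op_sizes[o];
}

int
main (void)
{
  printf ("%d\n", operand_size (op_goto));
  return 0;
}

#endif /* !IN_PROCESS_AGENT */
~~~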
2015-09-21 | [gdbserver] Remove unused max_jump_pad_size | Yao Qi | 2 | -3/+4
This patch is to remove max_jump_pad_size which isn't used else where, and it causes a recent gcc warning like this, gdb/gdbserver/tracepoint.c:2920:18: error: 'max_jump_pad_size' defined but not used [-Werror=unused-const-variable] static const int max_jump_pad_size = 0x100; ^ cc1: all warnings being treated as errors This variable max_jump_pad_size wasn't used since it was added in 2010 by https://sourceware.org/ml/gdb-patches/2010-06/msg00002.html gdb/gdbserver: 2015-09-21 Yao Qi <yao.qi@linaro.org> * tracepoint.c (max_jump_pad_size): Remove.
2015-09-20 | dwarf2read.c (add_partial_symbol): Remove outdated comments. | Doug Evans | 2 | -6/+4
gdb/ChangeLog: * dwarf2read.c (add_partial_symbol): Remove outdated comments.
2015-09-20 | dwarf2_compute_name: add fixme, don't use same name as parameter for local | Doug Evans | 2 | -8/+18
gdb/ChangeLog: * dwarf2read.c (dwarf2_compute_name): Add FIXME. Don't use a local variable name that collides with a parameter.
2015-09-20 | crash printing non-local variable from nested subprogram | Joel Brobecker | 2 | -4/+51
We have noticed that GDB would sometimes crash trying to print from a nested function the value of a variable declared in an enclosing scope. This appears to be target dependent, although that correlation might only be fortuitous. We noticed the issue on x86_64-darwin, x86-vxworks6 and x86-solaris. The investigation was done on Darwin. This is a new feature that was introduced by:

commit 63e43d3aedb8b1112899c2d0ad74cbbee687e5d6
Date: Thu Feb 5 17:00:06 2015 +0100
DWARF: handle non-local references in nested functions

We can reproduce the problem with one of the testcases that was added with the patch (gdb.base/nested-subp1.exp), where we have...

    18  int
    19  foo (int i1)
    20  {
    21    int
    22    nested (int i2)
    23    {
    [...]
    27      return i1 * i2; /* STOP */
    28    }

... After building the example program, and running until line 27, try printing the value of "i1":

    % gdb gdb.base/nested-subp1
    (gdb) break foo.c:27
    (gdb) run
    Breakpoint 1, nested (i2=2) at /[...]/nested-subp1.c:27
    27          return i1 * i2; /* STOP */
    (gdb) p i1
    [1]    73090 segmentation fault  ../gdb -q gdb.base/nested-subp1

Ooops! What happens is that, because the reference is non-local, we are trying to follow the function's static link, which does...

    /* If we don't know how to compute FRAME's base address, don't give up:
       maybe the frame we are looking for is upper in the stace frame.  */
    if (framefunc != NULL
        && SYMBOL_BLOCK_OPS (framefunc)->get_frame_base != NULL
        && (SYMBOL_BLOCK_OPS (framefunc)->get_frame_base (framefunc, frame)
            == upper_frame_base))

... or, in other words, calls the get_frame_base "method" of framefunc's struct symbol_block_ops data. This resolves to the block_op_get_frame_base function. Looking at the function's implementation, we see:

    struct dwarf2_locexpr_baton *dlbaton;
    [...]
    dlbaton = SYMBOL_LOCATION_BATON (framefunc);
    [...]
    result = dwarf2_evaluate_loc_desc (type, frame, start, length,
                                       dlbaton->per_cu);
                                       ^^^^^^^^^^^^^^^

Printing dlbaton->per_cu gives a value that seems fairly bogus for a memory address (0x60). Because of it, dwarf2_evaluate_loc_desc then crashes trying to dereference it. What's different on Darwin compared to Linux is that the function's frame base is encoded using the following form:

    .byte 0x40 # uleb128 0x40; (DW_AT_frame_base)
    .byte 0x6  # uleb128 0x6; (DW_FORM_data4)

... and so dwarf2_symbol_mark_computed ends up creating a SYMBOL_LOCATION_BATON as a struct dwarf2_loclist_baton:

    if (attr_form_is_section_offset (attr)
        /* .debug_loc{,.dwo} may not exist at all, or the offset may be outside
           the section.  If so, fall through to the complaint in the other
           branch.  */
        && DW_UNSND (attr) < dwarf2_section_size (objfile, section))
      {
        struct dwarf2_loclist_baton *baton;
        [...]
        SYMBOL_LOCATION_BATON (sym) = baton;

However, if you look more closely at block_op_get_frame_base's implementation, you'll notice that the function extracts the symbol's SYMBOL_LOCATION_BATON as a dwarf2_locexpr_baton (a DWARF _expression_ rather than a _location list_). That's why we end up decoding the DLBATON improperly, and thus pass a random dlbaton->per_cu when calling dwarf2_evaluate_loc_desc. This works on x86_64-linux, because we indeed have the frame base described using a different form:

    .uleb128 0x40 # (DW_AT_frame_base)
    .uleb128 0x18 # (DW_FORM_exprloc)

This patch fixes the issue by doing what we do for most (if not all) other such methods: providing one implementation each for loc-list, and loc-expr. Both implementations are nearly identical, so perhaps we might later want to improve this.
But this patch tries to fix the crash first, leaving the design issue for later. gdb/ChangeLog: * dwarf2loc.c (locexpr_get_frame_base): Renames block_op_get_frame_base. (dwarf2_block_frame_base_locexpr_funcs): Replace reference to block_op_get_frame_base by reference to locexpr_get_frame_base. (loclist_get_frame_base): New function, near identical copy of locexpr_get_frame_base. (dwarf2_block_frame_base_loclist_funcs): Replace reference to block_op_get_frame_base by reference to loclist_get_frame_base. Tested on x86_64-darwin (AdaCore testsuite), and x86_64-linux (official testsuite).
2015-09-19 | Replace current_inferior ()->gdbarch with its wrapper target_gdbarch. | Doug Evans | 2 | -1/+6
gdb/ChangeLog: * ravenscar-thread.c (ravenscar_inferior_created): Replace current_inferior ()->gdbarch with its wrapper target_gdbarch.
2015-09-18 | linux-thread-db.c (record_thread): Return the created thread. | Doug Evans | 2 | -14/+18
gdb/ChangeLog: * linux-thread-db.c (record_thread): Return the created thread. (thread_from_lwp): Likewise. (thread_db_get_thread_local_address): Update.
2015-09-18 | symtab.h (general_symbol_info) <mangled_lang>: delete and move up only member. | Doug Evans | 4 | -9/+10
gdb/ChangeLog: * symtab.h (general_symbol_info) <mangled_lang>: Delete struct, move only member demangled_name up. All uses updated.
2015-09-18 | default_read_var_value <LOC_UNRESOLVED>: Include minsym kind in error message. | Doug Evans | 7 | -1/+123
bfd/ChangeLog: * targets.c (enum bfd_flavour): Add comment. (bfd_flavour_name): New function. * bfd-in2.h: Regenerate. gdb/ChangeLog: * findvar.c (default_read_var_value) <LOC_UNRESOLVED>: Include the kind of minimal symbol in the error message. * objfiles.c (objfile_flavour_name): New function. * objfiles.h (objfile_flavour_name): Declare. gdb/testsuite/ChangeLog: * gdb.dwarf2/dw2-bad-unresolved.c: New file. * gdb.dwarf2/dw2-bad-unresolved.exp: New file.
2015-09-18 | Fix directory prefix in gdb.base/dso2dso.exp. | Sandra Loosemore | 2 | -1/+6
2015-09-18 Sandra Loosemore <sandra@codesourcery.com> gdb/testsuite/ * gdb.base/dso2dso.exp: Don't use directory prefix when setting the breakpoint.
2015-09-18 | Fix pathname prefix and timeout issues in gdb.mi/mi-pending.exp. | Sandra Loosemore | 2 | -3/+9
2015-09-18 Sandra Loosemore <sandra@codesourcery.com> gdb/testsuite/ * gdb.mi/mi-pending.exp: Don't use directory prefix when setting the pending breakpoint. Remove timeout override for "Run till MI pending breakpoint on pendfunc3 on thread 2" test.
2015-09-18 | Generalize breakpoint pattern in gdb.mi/mi-cli.exp. | Sandra Loosemore | 2 | -1/+6
2015-09-18 Sandra Loosemore <sandra@codesourcery.com> gdb/testsuite/ * gdb.mi/mi-cli.exp: Don't require directory prefix in breakpoint filename pattern.
2015-09-18 | Generalize filename pattern in gdb.mi/mi-dprintf-pending.exp. | Sandra Loosemore | 2 | -1/+6
2015-09-18 Sandra Loosemore <sandra@codesourcery.com> gdb/testsuite/ * gdb.mi/mi-dprintf-pending.exp: Don't require directory prefix in breakpoint filename pattern.
2015-09-18 | Fix shared library load in gdb.base/global-var-nested-by-dso.exp. | Sandra Loosemore | 2 | -1/+5
2015-09-18 Sandra Loosemore <sandra@codesourcery.com> gdb/testsuite/ * gdb.base/global-var-nested-by-dso.exp: Call gdb_load_shlibs.
2015-09-18 | Require readline for gdb.linespec/explicit.exp tab-completion tests. | Sandra Loosemore | 2 | -135/+148
2015-09-18 Sandra Loosemore <sandra@codesourcery.com> gdb/testsuite/ * gdb.linespec/explicit.exp: Check for readline support for tab-completion tests. Fix obvious typo.
2015-09-18 | aarch64 multi-arch (part 3): get thread area | Yao Qi | 6 | -30/+69
With the kernel fix <http://lists.infradead.org/pipermail/linux-arm-kernel/2015-July/356511.html>, aarch64 GDB is able to read the base of the thread area of a 32-bit arm program through NT_ARM_TLS. This patch teaches both GDB and GDBserver to read the base of the thread area correctly in the multi-arch case. A new function, aarch64_ps_get_thread_area, is added and shared between GDB and GDBserver. With this patch applied, the following failures in multi-arch testing (GDB is aarch64 but the test cases are arm) are fixed:
-FAIL: gdb.threads/tls-nodebug.exp: thread local storage
-FAIL: gdb.threads/tls-shared.exp: print thread local storage variable
-FAIL: gdb.threads/tls-so_extern.exp: print thread local storage variable
-FAIL: gdb.threads/tls-var.exp: print tls_var
-FAIL: gdb.threads/tls.exp: first thread local storage
-FAIL: gdb.threads/tls.exp: first another thread local storage
-FAIL: gdb.threads/tls.exp: p a_thread_local
-FAIL: gdb.threads/tls.exp: p file2_thread_local
-FAIL: gdb.threads/tls.exp: p a_thread_local second time
gdb: 2015-09-18 Yao Qi <yao.qi@linaro.org> * nat/aarch64-linux.c: Include elf/common.h, nat/gdb_ptrace.h, asm/ptrace.h and sys/uio.h. (aarch64_ps_get_thread_area): New function. * nat/aarch64-linux.h: Include gdb_proc_service.h. (aarch64_ps_get_thread_area): Declare. * aarch64-linux-nat.c (ps_get_thread_area): Call aarch64_ps_get_thread_area. gdb/gdbserver: 2015-09-18 Yao Qi <yao.qi@linaro.org> * linux-aarch64-low.c: Don't include sys/uio.h. (ps_get_thread_area): Call aarch64_ps_get_thread_area.
2015-09-18 | btrace: honour scheduler-locking for all-stop targets | Markus Metzger | 4 | -98/+195
In all-stop mode, record btrace maintains the old behaviour of an implicit scheduler-locking on. Now that we added a scheduler-locking mode to model this old behaviour, we don't need the respective code in record btrace anymore. Remove it. For all-stop targets, step inferior_ptid and continue other threads matching the argument ptid. Assert that inferior_ptid matches the argument ptid. This should make record btrace honour scheduler-locking. gdb/ * record-btrace.c (record_btrace_resume): Honour scheduler-locking. testsuite/ * gdb.btrace/multi-thread-step.exp: Test scheduler-locking on, step, and replay.
2015-09-18 | infrun: scheduler-locking replay | Markus Metzger | 6 | -25/+70
Record targets behave as if scheduler-locking were on in replay mode. Add a new scheduler-locking option "replay" to make this implicit behaviour explicit. It behaves like "on" in replay mode and like "off" in record mode. By making the current behaviour a scheduler-locking option, we allow the user to change it. Since it is the current behaviour, this new option is also the new default. One caveat is that when resuming a thread that is at the end of its execution history, record btrace implicitly stops replaying other threads and resumes the entire process. This is a convenience feature to not require the user to explicitly move all other threads to the end of their execution histories before being able to resume the process. We mimic this behaviour with scheduler-locking replay and move it from record-btrace into infrun. With all-stop on top of non-stop, we can't do this in record-btrace anymore. Record full does not really support multi-threading and is therefore not impacted. If it were extended to support multi-threading, it would 'benefit' from this change. The good thing is that all record targets will behave the same with respect to scheduler-locking. I put the code for this into clear_proceed_status. It also sends the about_to_proceed notification. gdb/ * NEWS: Announce new scheduler-locking mode. * infrun.c (schedlock_replay): New. (scheduler_enums): Add schedlock_replay. (scheduler_mode): Change default to schedlock_replay. (user_visible_resume_ptid): Handle schedlock_replay. (clear_proceed_status_thread): Stop replaying if resumed thread is not replaying. (schedlock_applies): Handle schedlock_replay. (_initialize_infrun): Document new scheduler-locking mode. * record-btrace.c (record_btrace_resume): Remove code to stop other threads when not replaying the resumed thread. doc/ * gdb.texinfo (All-Stop Mode): Describe new scheduler-locking mode.
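A self-contained toy model of the new mode's semantics (this is not the infrun.c code; it only mirrors the behaviour the entry above describes, with made-up names):

~~~
#include <stdbool.h>
#include <stdio.h>

enum schedlock_mode { schedlock_off, schedlock_on, schedlock_step, schedlock_replay };

/* "replay" acts like "on" while the record target would replay the
   resumed thread, and like "off" once execution is back to recording.  */
static bool
schedlock_applies (enum schedlock_mode mode, bool stepping_command,
                   bool target_will_replay)
{
  return (mode == schedlock_on
          || (mode == schedlock_step && stepping_command)
          || (mode == schedlock_replay && target_will_replay));
}

int
main (void)
{
  printf ("replay mode while replaying: %d\n",
          schedlock_applies (schedlock_replay, false, true));
  printf ("replay mode while recording: %d\n",
          schedlock_applies (schedlock_replay, false, false));
  return 0;
}
~~~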
2015-09-18 | target: add to_record_will_replay target method | Markus Metzger | 6 | -0/+85
Add a new target method to_record_will_replay to query if there is a record target that will replay at least one thread matching the argument PTID if it were executed in the argument execution direction. gdb/ * record-btrace.c ((record_btrace_will_replay): New. (init_record_btrace_ops): Initialize to_record_will_replay. * record-full.c ((record_full_will_replay): New. (init_record_full_ops): Initialize to_record_will_replay. * target-delegates.c: Regenerated. * target.c (target_record_will_replay): New. * target.h (struct target_ops) <to_record_will_replay>: New. (target_record_will_replay): New. Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
2015-09-18 | target: add to_record_stop_replaying target method | Markus Metzger | 6 | -2/+76
Add a new target method to_record_stop_replaying to stop replaying. gdb/ * record-btrace.c (record_btrace_resume): Call target_record_stop_replaying. (record_btrace_stop_replaying_all): New. (init_record_btrace_ops): Initialize to_record_stop_replaying. * record-full.c (record_full_stop_replaying): New. (init_record_full_ops ): Initialize to_record_stop_replaying. * target-delegates.c: Regenerated. * target.c (target_record_stop_replaying): New. * target.h (struct target_ops) <to_record_stop_replaying>: New. (target_record_stop_replaying): New.
2015-09-18 | btrace: allow full memory and register access for non-replaying threads | Markus Metzger | 2 | -4/+12
The record btrace target does not allow accessing memory and storing registers while replaying. For multi-threaded applications, this prevents those accesses also for threads that are at the end of their execution history as long as at least one thread is replaying. Change this to only check if the selected thread is replaying. This allows threads that are at the end of their execution history to read and write memory and to store registers. Also change the error message to reflect this change. gdb/ * record-btrace.c (record_btrace_xfer_partial) (record_btrace_store_registers, record_btrace_prepare_to_store): Call record_btrace_is_replaying with inferior_ptid instead of minus_one_ptid. (record_btrace_store_registers): Change error message.
2015-09-18 | target, record: add PTID argument to to_record_is_replaying | Markus Metzger | 7 | -26/+46
The to_record_is_replaying target method is used to query record targets if they are replaying. This is currently interpreted as "is any thread being replayed". Add a PTID argument and change the interpretation to "is any thread matching PTID being replayed". Change all users to pass minus_one_ptid to preserve the old meaning. The record full target does not really support multi-threading and ignores the PTID argument. gdb/ * record-btrace.c (record_btrace_is_replaying): Add ptid argument. Update users to pass minus_one_ptid. * record-full.c (record_full_is_replaying): Add ptid argument (ignored). * record.c (cmd_record_delete): Pass inferior_ptid to target_record_is_replaying. * target-delegates.c: Regenerated. * target.c (target_record_is_replaying): Add ptid argument. * target.h (struct target_ops) <to_record_is_replaying>: Add ptid argument. (target_record_is_replaying): Add ptid argument.
2015-09-18 | btrace: non-stop | Markus Metzger | 6 | -3/+302
Support non-stop mode in record btrace. gdb/ * record-btrace.c (record_btrace_open): Remove non_stop check. * NEWS: Announce that record btrace supports non-stop mode. testsuite/ * gdb.btrace/non-stop.c: New. * gdb.btrace/non-stop.exp: New.
2015-09-18 | infrun: switch to NO_HISTORY thread | Markus Metzger | 2 | -1/+12
A thread that runs out of its execution history is stopped. We already set stop_pc and call stop_waiting. But we do not switch to the stopped thread. In normal_stop, we call finish_thread_state_cleanup to set a thread's running state. In all-stop mode, we call it with minus_one_ptid; in non-stop mode, we only call it for inferior_ptid. If in non-stop mode normal_stop is called on behalf of a thread that is not inferior_ptid, that other thread will still be reported as running. If it is actually stopped, it can't be resumed again. Record targets traditionally don't support non-stop and only resume inferior_ptid, so this has not been a problem so far. Switch to the eventing thread for NO_HISTORY events as preparation to support non-stop for the record btrace target. gdb/ * infrun.c (handle_inferior_event_1): Switch to the eventing thread in the TARGET_WAITKIND_NO_HISTORY case.
2015-09-18 | btrace: async | Markus Metzger | 2 | -0/+32
The record btrace target runs synchronous with GDB. That is, GDB steps resumed threads in record btrace's to_wait method. Without GDB calling to_wait, nothing happens 'on the target'. Check for further expected events in to_wait before reporting the current event and mark record btrace's async event handler in async mode. gdb/ * record-btrace.c (record_btrace_maybe_mark_async_event): New. (record_btrace_wait): Call record_btrace_maybe_mark_async_event.
2015-09-18 | btrace: temporarily set inferior_ptid in record_btrace_start_replaying | Markus Metzger | 2 | -19/+56
Get_current_frame uses inferior_ptid. In record_btrace_start_replaying, we need to get the current frame of the argument thread. So far, this has always been inferior_ptid. With non-stop, this is not guaranteed. Temporarily set inferior_ptid to the ptid of the argument thread. We already temporarily set the argument thread's executing flag to false. Move both into a new function get_thread_current_frame that does the temporary adjustments, calls get_current_frame, and restores the previous values. gdb/ * record-btrace.c (get_thread_current_frame): New. (record_btrace_start_replaying): Call get_thread_current_frame.
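The helper described above follows a common save/temporarily-set/restore pattern: the callee (get_current_frame) implicitly reads a global, so the wrapper swaps the desired value in and puts the old one back afterwards. A toy, self-contained model of that pattern (not the record-btrace code; the names here are invented):

~~~
#include <stdio.h>

static int current_thread = 1;   /* stands in for inferior_ptid */

static int
current_frame_of_current_thread (void)
{
  /* Stand-in for get_current_frame, which only looks at the global.  */
  return current_thread * 100;
}

static int
get_thread_current_frame (int thread)
{
  int saved = current_thread;
  int frame;

  current_thread = thread;                   /* temporarily switch */
  frame = current_frame_of_current_thread ();
  current_thread = saved;                    /* always restore */

  return frame;
}

int
main (void)
{
  printf ("frame of thread 2: %d\n", get_thread_current_frame (2));
  printf ("global untouched:  %d\n", current_thread);
  return 0;
}
~~~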
2015-09-18 | btrace: resume all requested threads | Markus Metzger | 2 | -37/+43
The record targets are implicitly schedlocked. They only step the current thread and keep other threads where they are. Change record btrace to step all requested threads in to_resume. For maintenance and debugging, we keep the old behaviour when the target below is not non-stop. Enable with "maint set target-non-stop on". gdb/ * record-btrace.c (record_btrace_resume_thread): A move request overwrites a previous move request. (record_btrace_find_resume_thread): Removed. (record_btrace_resume): Resume all requested threads.
2015-09-18 | btrace: lock-step | Markus Metzger | 2 | -77/+190
Record btrace's to_wait method picks a single thread to step. When passed minus_one_ptid, it picks the current thread. All other threads remain where they are. Change this to step all resumed threads together, one step at a time, until the first thread reports an event. We do delay reporting NO_HISTORY events until there are no other events to report to prevent threads at the end of their execution history from starving other threads. We keep threads at the end of their execution history moving and replaying until we announce their stop in to_wait. This shouldn't really be user-visible but it's a detail worth mentioning. Since record btrace's to_resume method also picks only a single thread to resume, there shouldn't be a difference with the current all-stop. With non-stop or all-stop on top of non-stop, we will see differences. The behaviour should be more natural as we're moving all threads. gdb/ * record-btrace.c: Include vec.h. (record_btrace_find_thread_to_move): Removed. (btrace_step_no_resumed, btrace_step_again) (record_btrace_stop_replaying_at_end): New. (record_btrace_cancel_resume): Call record_btrace_stop_replaying_at_end. (record_btrace_single_step_forward): Remove calls to record_btrace_stop_replaying. (record_btrace_step_thread): Do only one step for BTHR_CONT and BTHR_RCONT. Keep threads at the end of their history moving. (record_btrace_wait): Call record_btrace_step_thread for all threads until one reports an event. Call record_btrace_stop_replaying_at_end for the eventing thread.
2015-09-18 | btrace: add missing NO_HISTORY | Markus Metzger | 2 | -1/+9
If a single-step ended right at the end of the execution history, we forgot to announce that. Fix it. gdb/ * record-btrace.c (record_btrace_single_step_forward): Return NO_HISTORY if a step brings us to the end of the execution history.
2015-09-18 | btrace: move breakpoint checking into stepping functions | Markus Metzger | 2 | -6/+23
Breakpoints are only checked for BTHR_CONT and BTHR_RCONT stepping requests. A BTHR_STEP and BTHR_RSTEP request will always report stopped without reason. Since breakpoints are reported correctly, I assume infrun is handling this. Move the breakpoint check into the btrace single stepping functions. This will cause us to report breakpoint hits now also for single-step requests. One thing to notice is that - when executing forwards, the breakpoint is checked before 'executing' the instruction, i.e. before moving the PC to the next instruction. - when executing backwards, the breakpoint is checked after 'executing' the instruction, i.e. after moving the PC to the preceding instruction in the recorded execution. There is code in infrun (see, for example proceed and adjust_pc_after_break) that handles this and also depends on this behaviour. gdb/ * record-btrace.c (record_btrace_step_thread): Move breakpoint check to ... (record_btrace_single_step_forward): ... here and (record_btrace_single_step_backward): ... here.
2015-09-18 | btrace: split record_btrace_step_thread | Markus Metzger | 2 | -82/+113
The code for BTHR_STEP and BTHR_CONT is fairly similar. Extract the common parts into a new function record_btrace_single_step_forward. The function returns TARGET_WAITKIND_SPURIOUS to indicate that the single-step completed without triggering a trap. Same for BTHR_RSTEP and BTHR_RCONT. gdb/ * record-btrace.c (btrace_step_spurious) (record_btrace_single_step_forward) (record_btrace_single_step_backward): New. (record_btrace_step_thread): Call record_btrace_single_step_forward and record_btrace_single_step_backward.
2015-09-18 | btrace: extract the breakpoint check from record_btrace_step_thread | Markus Metzger | 2 | -12/+35
There are two places where record_btrace_step_thread checks for a breakpoint at the current replay position. Move this code into its own function. gdb/ * record-btrace.c (record_btrace_replay_at_breakpoint): New. (record_btrace_step_thread): Call record_btrace_replay_at_breakpoint.
2015-09-18 | btrace: improve stepping debugging | Markus Metzger | 2 | -4/+62
gdb/ * record-btrace.c (btrace_thread_flag_to_str) (record_btrace_cancel_resume): New. (record_btrace_step_thread): Call btrace_thread_flag_to_str. (record_btrace_resume): Print execution direction. (record_btrace_resume_thread): Call btrace_thread_flag_to_str. (record_btrace_wait): Call record_btrace_cancel_resume.
2015-09-18 | btrace: support to_stop | Markus Metzger | 3 | -9/+70
Add support for the to_stop target method to the btrace record target. gdb/ * btrace.h (enum btrace_thread_flag) <BTHR_STOP>: New. * record-btrace (record_btrace_resume_thread): Clear BTHR_STOP. (record_btrace_find_thread_to_move): Also accept threads that have BTHR_STOP set. (btrace_step_stopped_on_request, record_btrace_stop): New. (record_btrace_step_thread): Support BTHR_STOP. (record_btrace_wait): Also clear BTHR_STOP when stopping other threads. (init_record_btrace_ops): Initialize to_stop.
2015-09-18 | btrace: fix non-stop check in to_wait | Markus Metzger | 2 | -1/+6
The record btrace target stops other threads in non-stop mode after stepping the to-be-resumed thread. The check is done on the non_stop variable. It should rather be done on target_is_non_stop_p (). With all-stop on top of non-stop, infrun will take care of stopping other threads. gdb/ * record-btrace.c (record_btrace_wait): Replace non_stop check with target_is_non_stop_p ().
2015-09-17 | Add test case for tracepoints with conditions | Pierre Langlois | 3 | -0/+240
This patch adds a test case for tracepoints with a condition expression. Each case will test a condition against the number of frames that should have been traced. Some of these tests fail on x86_64 and others on i386, which have been marked as known failures for now, see PR/18955. gdb/testsuite/ChangeLog: 2015-09-17 Pierre Langlois <pierre.langlois@arm.com> Yao Qi <yao.qi@linaro.org> * gdb.trace/trace-condition.c: New file. * gdb.trace/trace-condition.exp: New file.
2015-09-16 | Fix argument to compiled_cond, and add cases for compiled-condition. | Wei-cheng Wang | 4 | -2/+79
This patch fixes the argument passed to compiled_cond. It should be the regs buffer instead of tracepoint_hit_ctx. A test case is added as well for testing compiled-cond. gdb/gdbserver/ChangeLog 2015-09-16 Wei-cheng Wang <cole945@gmail.com> * tracepoint.c (eval_result_type): Change prototype. (condition_true_at_tracepoint): Fix argument to compiled_cond. gdb/testsuite/ChangeLog 2015-09-16 Wei-cheng Wang <cole945@gmail.com> * gdb.trace/ftrace.exp: (test_ftrace_condition) New function for testing bytecode compilation.
2015-09-16 | non-stop-fair-events.exp slower on software single-step && !displ-step targets | Pedro Alves | 4 | -38/+96
On software single-step targets that don't support displaced stepping, threads keep hitting each other's single-step breakpoints, and then GDB needs to pause all threads to step past those. The end result is that progress in the main thread will be slower and it may take a bit longer for the signal to be queued. This patch bumps the timeout on such targets. gdb/testsuite/ChangeLog: 2015-09-16 Pedro Alves <palves@redhat.com> Sandra Loosemore <sandra@codesourcery.com> * gdb.threads/non-stop-fair-events.c (timeout): New global. (SECONDS): Redefine. (main): Call pthread_kill and alarm early. * gdb.threads/non-stop-fair-events.exp: Probe displaced stepping support. (test): If the target can't hardware step and doesn't support displaced stepping, increase the timeout.
2015-09-16 | Make it easier to debug non-stop-fair-events.exp | Pedro Alves | 2 | -3/+61
If we enable infrun debug running this test, it quickly fails with a full expect buffer. That can be simply handled with a couple exp_continues. As it's annoying to hack this every time we need to debug the test, this patch adds bits to enable debugging support easily, with a one-line change. And then, if any iteration of the test fails, we end up with a long cascade of time outs. Just bail out when we see the first fail. gdb/testsuite/ 2015-09-16 Pedro Alves <palves@redhat.com> * gdb.threads/non-stop-fair-events.exp (gdb_test_no_anchor) (enable_debug): New procedures. (test): Use them. Bail out if waiting for threads fails. (top level): Bail out if a test fails.
2015-09-16 | Don't skip gdb.asm/asm-source.exp on aarch64 | Yao Qi | 3 | -0/+43
This patch adds gdb.asm/aarch64.inc, so asm-source.exp isn't skipped on aarch64 any more. gdb/testsuite: 2015-09-16 Yao Qi <yao.qi@linaro.org> * gdb.asm/asm-source.exp: Set asm-arch for aarch64*-*-* target. * gdb.asm/aarch64.inc: New file.
2015-09-15 | [Ada] Enhance type printing for arrays with variable-sized elements | Pierre-Marie de Rodat | 7 | -3/+140
This change is relevant only for standard DWARF (as opposed to the GNAT encodings extensions): at the time of writing it only makes a difference with GCC patches that are to be integrated: see the patch series submission at <https://gcc.gnu.org/ml/gcc-patches/2015-07/msg01353.html>. Given the following Ada declarations:

    subtype Small_Int is Natural range 0 .. 100;
    type R_Type (L : Small_Int := 0) is record
       S : String (1 .. L);
    end record;
    type A_Type is array (Natural range <>) of R_Type;
    A : A_Type := (1 => (L => 0, S => ""), 2 => (L => 2, S => "ab"));

Before this change, we would get the following GDB session:

    (gdb) ptype a
    type = array (1 .. 2) of foo.r_type <packed: 838-bit elements>

This is wrong: "a" is not a packed array. This output comes from the fact that, because R_Type has a dynamic size (with a maximum), the compiler has to describe in the debugging information the size allocated for each array element (i.e. the stride, in DWARF parlance: see DW_AT_byte_stride). Ada type printing currently assumes that arrays with a stride are packed, hence the above output. In practice, GNAT never performs bit-packing for arrays that contain variable-sized elements. Leveraging this fact, this patch enhances type printing so that ptype does not pretend that arrays are packed when they have a stride and they contain dynamic elements. After this change, we get the following expected output:

    (gdb) ptype a
    type = array (1 .. 2) of foo.r_type

gdb/ChangeLog: * ada-typeprint.c (print_array_type): Do not describe arrays as packed when they embed dynamic elements. gdb/testsuite/ChangeLog: * gdb.ada/array_of_variable_length.exp: New testcase. * gdb.ada/array_of_variable_length/foo.adb: New file. * gdb.ada/array_of_variable_length/pck.adb: New file. * gdb.ada/array_of_variable_length/pck.ads: New file. Tested on x86_64-linux, no regression.