- 20 Dec, 2017 (35 commits)
This patch changes SUBREG_BYTE from an int to a poly_int. Since valid SUBREG_BYTEs must be contained within the mode of the SUBREG_REG, the required range is the same as for GET_MODE_SIZE, i.e. unsigned short. The patch therefore uses poly_uint16(_pod) for the SUBREG_BYTE. Using poly_uint16_pod rtx fields requires a new field code ('p'). Since there are no other uses of 'p' besides SUBREG_BYTE, the patch doesn't add an XPOLY or whatever; all uses should go via SUBREG_BYTE instead. The patch doesn't bother implementing 'p' support for legacy define_peepholes, since none of the remaining ones have subregs in their patterns. As it happened, the rtl documentation used SUBREG as an example of a code with mixed field types, accessed via XEXP (x, 0) and XINT (x, 1). Since there's no direct replacement for XINT, and since people should never use it even if there were, the patch changes the example to use INT_LIST instead. The patch also changes subreg-related helper functions so that they too take and return polynomial offsets. This makes the patch quite big, but it's mostly mechanical. The patch generally sticks to existing choices wrt signedness. 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * doc/rtl.texi: Update documentation of SUBREG_BYTE. Document the 'p' format code. Use INT_LIST rather than SUBREG as the example of a code with an XINT and an XEXP. Remove the implication that accessing an rtx field using XINT is expected to work. * rtl.def (SUBREG): Change format from "ei" to "ep". * rtl.h (rtunion::rt_subreg): New field. (XCSUBREG): New macro. (SUBREG_BYTE): Use it. (subreg_shape): Change offset from an unsigned int to a poly_uint16. Update constructor accordingly. (subreg_shape::operator ==): Update accordingly. (subreg_shape::unique_id): Return an unsigned HOST_WIDE_INT rather than an unsigned int. (subreg_lsb, subreg_lowpart_offset, subreg_highpart_offset): Return a poly_uint64 rather than an unsigned int. (subreg_lsb_1): Likewise. Take the offset as a poly_uint64 rather than an unsigned int. (subreg_size_offset_from_lsb, subreg_size_lowpart_offset) (subreg_size_highpart_offset): Return a poly_uint64 rather than an unsigned int. Take the sizes as poly_uint64s. (subreg_offset_from_lsb): Return a poly_uint64 rather than an unsigned int. Take the shift as a poly_uint64 rather than an unsigned int. (subreg_regno_offset, subreg_offset_representable_p): Take the offset as a poly_uint64 rather than an unsigned int. (simplify_subreg_regno): Likewise. (byte_lowpart_offset): Return the memory offset as a poly_int64 rather than an int. (subreg_memory_offset): Likewise. Take the subreg offset as a poly_uint64 rather than an unsigned int. (simplify_subreg, simplify_gen_subreg, subreg_get_info) (gen_rtx_SUBREG, validate_subreg): Take the subreg offset as a poly_uint64 rather than an unsigned int. * rtl.c (rtx_format): Describe 'p' in comment. (copy_rtx, rtx_equal_p_cb, rtx_equal_p): Handle 'p'. * emit-rtl.c (validate_subreg, gen_rtx_SUBREG): Take the subreg offset as a poly_uint64 rather than an unsigned int. (byte_lowpart_offset): Return the memory offset as a poly_int64 rather than an int. (subreg_memory_offset): Likewise. Take the subreg offset as a poly_uint64 rather than an unsigned int. (subreg_size_lowpart_offset, subreg_size_highpart_offset): Take the mode sizes as poly_uint64s rather than unsigned ints. Return a poly_uint64 rather than an unsigned int. 
(subreg_lowpart_p): Treat subreg offsets as poly_ints. (copy_insn_1): Handle 'p'. * rtlanal.c (set_noop_p): Treat subregs offsets as poly_uint64s. (subreg_lsb_1): Take the subreg offset as a poly_uint64 rather than an unsigned int. Return the shift in the same way. (subreg_lsb): Return the shift as a poly_uint64 rather than an unsigned int. (subreg_size_offset_from_lsb): Take the sizes and shift as poly_uint64s rather than unsigned ints. Return the offset as a poly_uint64. (subreg_get_info, subreg_regno_offset, subreg_offset_representable_p) (simplify_subreg_regno): Take the offset as a poly_uint64 rather than an unsigned int. * rtlhash.c (add_rtx): Handle 'p'. * genemit.c (gen_exp): Likewise. * gengenrtl.c (type_from_format, gendef): Likewise. * gensupport.c (subst_pattern_match, get_alternatives_number) (collect_insn_data, alter_predicate_for_insn, alter_constraints) (subst_dup): Likewise. * gengtype.c (adjust_field_rtx_def): Likewise. * genrecog.c (find_operand, find_matching_operand, validate_pattern) (match_pattern_2): Likewise. (rtx_test::SUBREG_FIELD): New rtx_test::kind_enum. (rtx_test::subreg_field): New function. (operator ==, safe_to_hoist_p, transition_parameter_type) (print_nonbool_test, print_test): Handle SUBREG_FIELD. * genattrtab.c (attr_rtx_1): Say that 'p' is deliberately not handled. * genpeep.c (match_rtx): Likewise. * print-rtl.c (print_poly_int): Include if GENERATOR_FILE too. (rtx_writer::print_rtx_operand): Handle 'p'. (print_value): Handle SUBREG. * read-rtl.c (apply_int_iterator): Likewise. (rtx_reader::read_rtx_operand): Handle 'p'. * alias.c (rtx_equal_for_memref_p): Likewise. * cselib.c (rtx_equal_for_cselib_1, cselib_hash_rtx): Likewise. * caller-save.c (replace_reg_with_saved_mem): Treat subreg offsets as poly_ints. * calls.c (expand_call): Likewise. * combine.c (combine_simplify_rtx, expand_field_assignment): Likewise. (make_extraction, gen_lowpart_for_combine): Likewise. * loop-invariant.c (hash_invariant_expr_1, invariant_expr_equal_p): Likewise. * cse.c (remove_invalid_subreg_refs): Take the offset as a poly_uint64 rather than an unsigned int. Treat subreg offsets as poly_ints. (exp_equiv_p): Handle 'p'. (hash_rtx_cb): Likewise. Treat subreg offsets as poly_ints. (equiv_constant, cse_insn): Treat subreg offsets as poly_ints. * dse.c (find_shift_sequence): Likewise. * dwarf2out.c (rtl_for_decl_location): Likewise. * expmed.c (extract_low_bits): Likewise. * expr.c (emit_group_store, undefined_operand_subword_p): Likewise. (expand_expr_real_2): Likewise. * final.c (alter_subreg): Likewise. (leaf_renumber_regs_insn): Handle 'p'. * function.c (assign_parm_find_stack_rtl, assign_parm_setup_stack): Treat subreg offsets as poly_ints. * fwprop.c (forward_propagate_and_simplify): Likewise. * ifcvt.c (noce_emit_move_insn, noce_emit_cmove): Likewise. * ira.c (get_subreg_tracking_sizes): Likewise. * ira-conflicts.c (go_through_subreg): Likewise. * ira-lives.c (process_single_reg_class_operands): Likewise. * jump.c (rtx_renumbered_equal_p): Likewise. Handle 'p'. * lower-subreg.c (simplify_subreg_concatn): Take the subreg offset as a poly_uint64 rather than an unsigned int. (simplify_gen_subreg_concatn, resolve_simple_move): Treat subreg offsets as poly_ints. * lra-constraints.c (operands_match_p): Handle 'p'. (match_reload, curr_insn_transform): Treat subreg offsets as poly_ints. * lra-spills.c (assign_mem_slot): Likewise. * postreload.c (move2add_valid_value_p): Likewise. * recog.c (general_operand, indirect_operand): Likewise. 
* regcprop.c (copy_value, maybe_mode_change): Likewise. (copyprop_hardreg_forward_1): Likewise. * reginfo.c (simplifiable_subregs_hasher::hash, simplifiable_subregs) (record_subregs_of_mode): Likewise. * rtlhooks.c (gen_lowpart_general, gen_lowpart_if_possible): Likewise. * reload.c (operands_match_p): Handle 'p'. (find_reloads_subreg_address): Treat subreg offsets as poly_ints. * reload1.c (alter_reg, choose_reload_regs): Likewise. (compute_reload_subreg_offset): Likewise, and return a poly_int64. * simplify-rtx.c (simplify_truncation, simplify_binary_operation_1): (test_vector_ops_duplicate): Treat subreg offsets as poly_ints. (simplify_const_poly_int_tests<N>::run): Likewise. (simplify_subreg, simplify_gen_subreg): Take the subreg offset as a poly_uint64 rather than an unsigned int. * valtrack.c (debug_lowpart_subreg): Likewise. * var-tracking.c (var_lowpart): Likewise. (loc_cmp): Handle 'p'. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255882
Richard Sandiford committed -
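A small sketch of what the conversion means for code that inspects subreg offsets (illustrative only; it assumes the known_eq comparison helper from poly-int.h and the poly_uint64 return type of subreg_lowpart_offset noted above):

    /* Before: SUBREG_BYTE was a plain integer.  */
    if (SUBREG_BYTE (x) == subreg_lowpart_offset (outermode, innermode))
      ...

    /* After: the offset is polynomial, so the comparison has to state that
       it holds for all runtime values of the size parameters.  */
    if (known_eq (SUBREG_BYTE (x), subreg_lowpart_offset (outermode, innermode)))
      ...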
Normally the IRA-reload interface tries to track the liveness of individual bytes of an allocno if the allocno is sometimes written to as a SUBREG. This isn't possible for variable-sized allocnos, but it doesn't matter because targets with variable-sized registers should use LRA instead. This patch adds a get_subreg_tracking_sizes function for deciding whether it is possible to model a partial read or write. Later patches make it return false if anything is variable. 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * ira.c (get_subreg_tracking_sizes): New function. (init_live_subregs): Take an integer size rather than a register. (build_insn_chain): Use get_subreg_tracking_sizes. Update calls to init_live_subregs. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255881
Richard Sandiford committed -
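A hedged sketch of the pattern this enables in build_insn_chain (hypothetical code, not the patch itself; it assumes poly_int's is_constant interface and the integer-size form of init_live_subregs described above):

    poly_int64 size = ...;              /* possibly runtime-variable */
    HOST_WIDE_INT const_size;
    if (!size.is_constant (&const_size))
      /* Variable-sized: fall back to tracking the whole allocno.  */
      ...
    else
      init_live_subregs (..., const_size, ...);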
This patch makes store_field and related routines use poly_ints for bit positions and sizes. It keeps the existing choices between signed and unsigned types (there are a mixture of both). 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * expr.c (store_constructor_field): Change bitsize from a unsigned HOST_WIDE_INT to a poly_uint64 and bitpos from a HOST_WIDE_INT to a poly_int64. (store_constructor): Change size from a HOST_WIDE_INT to a poly_int64. (store_field): Likewise bitsize and bitpos. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255880
Richard Sandiford committed -
This patch changes C++ bitregion_start/end values from constants to poly_ints. Although it's unlikely that the size needs to be polynomial in practice, the offset could be with future language extensions. 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * expmed.h (store_bit_field): Change bitregion_start and bitregion_end from unsigned HOST_WIDE_INT to poly_uint64. * expmed.c (adjust_bit_field_mem_for_reg, strict_volatile_bitfield_p) (store_bit_field_1, store_integral_bit_field, store_bit_field) (store_fixed_bit_field, store_split_bit_field): Likewise. * expr.c (store_constructor_field, store_field): Likewise. (optimize_bitfield_assignment_op): Likewise. Make the same change to bitsize and bitpos. * machmode.h (bit_field_mode_iterator): Change m_bitregion_start and m_bitregion_end from HOST_WIDE_INT to poly_int64. Make the same change in the constructor arguments. (get_best_mode): Change bitregion_start and bitregion_end from unsigned HOST_WIDE_INT to poly_uint64. * stor-layout.c (bit_field_mode_iterator::bit_field_mode_iterator): Change bitregion_start and bitregion_end from HOST_WIDE_INT to poly_int64. (bit_field_mode_iterator::next_mode): Update for new types of m_bitregion_start and m_bitregion_end. (get_best_mode): Change bitregion_start and bitregion_end from unsigned HOST_WIDE_INT to poly_uint64. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255879
Richard Sandiford committed -
Similar to the previous store_bit_field patch, but for extractions rather than insertions. The patch splits out the extraction-as-subreg handling into a new function (extract_bit_field_as_subreg), both for ease of writing and because a later patch will add another caller. The simplify_gen_subreg overload is temporary; it goes away in a later patch. 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * rtl.h (simplify_gen_subreg): Add a temporary overload that accepts poly_uint64 offsets. * expmed.h (extract_bit_field): Take bitsize and bitnum as poly_uint64s rather than unsigned HOST_WIDE_INTs. * expmed.c (lowpart_bit_field_p): Likewise. (extract_bit_field_as_subreg): New function, split out from... (extract_bit_field_1): ...here. Take bitsize and bitnum as poly_uint64s rather than unsigned HOST_WIDE_INTs. For vector extractions, check that BITSIZE matches the size of the extracted value and that BITNUM is an exact multiple of that size. If all else fails, try forcing the value into memory if BITNUM is variable, and adjusting the address so that the offset is constant. Split the part that can only handle constant bitsize and bitnum out into... (extract_integral_bit_field): ...this new function. (extract_bit_field): Take bitsize and bitnum as poly_uint64s rather than unsigned HOST_WIDE_INTs. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255878
Richard Sandiford committed -
This patch changes the bitnum and bitsize arguments to store_bit_field from unsigned HOST_WIDE_INTs to poly_uint64s. The later part of store_bit_field_1 still needs to operate on constant bit positions and sizes, so the patch splits it out into a subfunction (store_integral_bit_field). 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * expmed.h (store_bit_field): Take bitsize and bitnum as poly_uint64s rather than unsigned HOST_WIDE_INTs. * expmed.c (simple_mem_bitfield_p): Likewise. Add a parameter that returns the byte size. (store_bit_field_1): Take bitsize and bitnum as poly_uint64s rather than unsigned HOST_WIDE_INTs. Update call to simple_mem_bitfield_p. Split the part that can only handle constant bitsize and bitnum out into... (store_integral_bit_field): ...this new function. (store_bit_field): Take bitsize and bitnum as poly_uint64s rather than unsigned HOST_WIDE_INTs. (extract_bit_field_1): Update call to simple_mem_bitfield_p. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255877
Richard Sandiford committed -
This patch makes LRA use poly_int64s rather than HOST_WIDE_INTs to store a frame offset (including in things like eliminations). 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * lra-int.h (lra_reg): Change offset from int to poly_int64. (lra_insn_recog_data): Change sp_offset from HOST_WIDE_INT to poly_int64. (lra_eliminate_regs_1, eliminate_regs_in_insn): Change update_sp_offset from a HOST_WIDE_INT to a poly_int64. (lra_update_reg_val_offset, lra_reg_val_equal_p): Take the offset as a poly_int64 rather than an int. * lra-assigns.c (find_hard_regno_for_1): Handle poly_int64 offsets. (setup_live_pseudos_and_spill_after_risky_transforms): Likewise. * lra-constraints.c (equiv_address_substitution): Track offsets as poly_int64s. (emit_inc): Check poly_int_rtx_p instead of CONST_INT_P. (curr_insn_transform): Handle the new form of sp_offset. * lra-eliminations.c (lra_elim_table): Change previous_offset and offset from HOST_WIDE_INT to poly_int64. (print_elim_table, update_reg_eliminate): Update accordingly. (self_elim_offsets): Change from HOST_WIDE_INT to poly_int64_pod. (get_elimination): Update accordingly. (form_sum): Check poly_int_rtx_p instead of CONST_INT_P. (lra_eliminate_regs_1, eliminate_regs_in_insn): Change update_sp_offset from a HOST_WIDE_INT to a poly_int64. Handle poly_int64 offsets generally. (curr_sp_change): Change from HOST_WIDE_INT to poly_int64. (mark_not_eliminable, init_elimination): Update accordingly. (remove_reg_equal_offset_note): Return a bool and pass the new offset back by pointer as a poly_int64. * lra-remat.c (change_sp_offset): Take sp_offset as a poly_int64 rather than a HOST_WIDE_INT. (do_remat): Track offsets as poly_int64s. * lra.c (lra_update_insn_recog_data, setup_sp_offset): Likewise. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255876
Richard Sandiford committed -
This patch changes the MEM_OFFSET and MEM_SIZE memory attributes from HOST_WIDE_INT to poly_int64. Most of it is mechanical, but there is one nonobvious change in widen_memory_access. Previously the main while loop broke with: /* Similarly for the decl. */ else if (DECL_P (attrs.expr) && DECL_SIZE_UNIT (attrs.expr) && TREE_CODE (DECL_SIZE_UNIT (attrs.expr)) == INTEGER_CST && compare_tree_int (DECL_SIZE_UNIT (attrs.expr), size) >= 0 && (! attrs.offset_known_p || attrs.offset >= 0)) break; but it seemed wrong to optimistically assume the best case when the offset isn't known (and thus might be negative). As it happens, the "! attrs.offset_known_p" condition was always false, because we'd already nullified attrs.expr in that case: /* If we don't know what offset we were at within the expression, then we can't know if we've overstepped the bounds. */ if (! attrs.offset_known_p) attrs.expr = NULL_TREE; The patch therefore drops "! attrs.offset_known_p ||" when converting the offset check to the may/must interface. 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * rtl.h (mem_attrs): Add a default constructor. Change size and offset from HOST_WIDE_INT to poly_int64. * emit-rtl.h (set_mem_offset, set_mem_size, adjust_address_1) (adjust_automodify_address_1, set_mem_attributes_minus_bitpos) (widen_memory_access): Take the sizes and offsets as poly_int64s rather than HOST_WIDE_INTs. * alias.c (ao_ref_from_mem): Handle the new form of MEM_OFFSET. (offset_overlap_p): Take poly_int64s rather than HOST_WIDE_INTs and ints. (adjust_offset_for_component_ref): Change the offset from a HOST_WIDE_INT to a poly_int64. (nonoverlapping_memrefs_p): Track polynomial offsets and sizes. * cfgcleanup.c (merge_memattrs): Update after mem_attrs changes. * dce.c (find_call_stack_args): Likewise. * dse.c (record_store): Likewise. * dwarf2out.c (tls_mem_loc_descriptor, dw_sra_loc_expr): Likewise. * print-rtl.c (rtx_writer::print_rtx): Likewise. * read-rtl-function.c (test_loading_mem): Likewise. * rtlanal.c (may_trap_p_1): Likewise. * simplify-rtx.c (delegitimize_mem_from_attrs): Likewise. * var-tracking.c (int_mem_offset, track_expr_p): Likewise. * emit-rtl.c (mem_attrs_eq_p, get_mem_align_offset): Likewise. (mem_attrs::mem_attrs): New function. (set_mem_attributes_minus_bitpos): Change bitpos from a HOST_WIDE_INT to poly_int64. (set_mem_alias_set, set_mem_addr_space, set_mem_align, set_mem_expr) (clear_mem_offset, clear_mem_size, change_address) (get_spill_slot_decl, set_mem_attrs_for_spill): Directly initialize mem_attrs. (set_mem_offset, set_mem_size, adjust_address_1) (adjust_automodify_address_1, offset_address, widen_memory_access): Likewise. Take poly_int64s rather than HOST_WIDE_INT. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255875
Richard Sandiford committed -
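The "may/must interface" referred to above is the known_*/maybe_* family of poly_int comparisons. A hedged sketch of the converted check (assuming known_ge; the surrounding condition in widen_memory_access is otherwise unchanged):

    /* Old, with a HOST_WIDE_INT offset:  */
    && (! attrs.offset_known_p || attrs.offset >= 0))

    /* New: the offset is a poly_int64 and, as explained above, is always
       known at this point, so the test only asks whether it is known to be
       non-negative for all runtime values:  */
    && known_ge (attrs.offset, 0))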
This patch changes the offset and size arguments of rtx_addr_can_trap_p_1 from HOST_WIDE_INT to poly_int64. It also uses a size of -1 rather than 0 to represent an unknown size and BLKmode rather than VOIDmode to represent an unknown mode. 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * rtlanal.c (rtx_addr_can_trap_p_1): Take the offset and size as poly_int64s rather than HOST_WIDE_INTs. Use a size of -1 rather than 0 to represent an unknown size. Assert that the size is known when the mode isn't BLKmode. (may_trap_p_1): Use -1 for unknown sizes. (rtx_addr_can_trap_p): Likewise. Pass BLKmode rather than VOIDmode. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255874
Richard Sandiford committed -
This patch makes RTL DSE use poly_int for offsets and sizes. The local phase can optimise them normally but the global phase treats them as wild accesses. 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * dse.c (store_info): Change offset and width from HOST_WIDE_INT to poly_int64. Update commentary for positions_needed.large. (read_info_type): Change offset and width from HOST_WIDE_INT to poly_int64. (set_usage_bits): Likewise. (canon_address): Return the offset as a poly_int64 rather than a HOST_WIDE_INT. Use strip_offset_and_add. (set_all_positions_unneeded, any_positions_needed_p): Use positions_needed.large to track stores with non-constant widths. (all_positions_needed_p): Likewise. Take the offset and width as poly_int64s rather than ints. Assert that rhs is nonnull. (record_store): Cope with non-constant offsets and widths. Nullify the rhs of an earlier store if we can't tell which bytes of it are needed. (find_shift_sequence): Take the access_size and shift as poly_int64s rather than ints. (get_stored_val): Take the read_offset and read_width as poly_int64s rather than HOST_WIDE_INTs. (check_mem_read_rtx, scan_stores, scan_reads, dse_step5): Handle non-constant offsets and widths. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255873
Richard Sandiford committed -
This patch changes the offset, size and max_size fields of ao_ref from HOST_WIDE_INT to poly_int64 and propagates the change through the code that references it. This includes changing the off field of vn_reference_op_struct in the same way. 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * inchash.h (inchash::hash::add_poly_int): New function. * tree-ssa-alias.h (ao_ref::offset, ao_ref::size, ao_ref::max_size): Use poly_int64 rather than HOST_WIDE_INT. (ao_ref::max_size_known_p): New function. * tree-ssa-sccvn.h (vn_reference_op_struct::off): Use poly_int64_pod rather than HOST_WIDE_INT. * tree-ssa-alias.c (ao_ref_base): Apply get_ref_base_and_extent to temporaries until its interface is adjusted to match. (ao_ref_init_from_ptr_and_size): Handle polynomial offsets and sizes. (aliasing_component_refs_p, decl_refs_may_alias_p) (indirect_ref_may_alias_decl_p, indirect_refs_may_alias_p): Take the offsets and max_sizes as poly_int64s instead of HOST_WIDE_INTs. (refs_may_alias_p_1, stmt_kills_ref_p): Adjust for changes to ao_ref fields. * alias.c (ao_ref_from_mem): Likewise. * tree-ssa-dce.c (mark_aliased_reaching_defs_necessary_1): Likewise. * tree-ssa-dse.c (valid_ao_ref_for_dse, normalize_ref) (clear_bytes_written_by, setup_live_bytes_from_ref, compute_trims) (maybe_trim_complex_store, maybe_trim_constructor_store) (live_bytes_read, dse_classify_store): Likewise. * tree-ssa-sccvn.c (vn_reference_compute_hash, vn_reference_eq): (copy_reference_ops_from_ref, ao_ref_init_from_vn_reference) (fully_constant_vn_reference_p, valueize_refs_1): Likewise. (vn_reference_lookup_3): Likewise. * tree-ssa-uninit.c (warn_uninitialized_vars): Likewise. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255872
Richard Sandiford committed -
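A sketch of the new ao_ref::max_size_known_p query (an assumption about its exact body, based on the usual -1-means-unknown convention for sizes and the known_size_p helper from poly-int.h):

    inline bool
    ao_ref::max_size_known_p () const
    {
      /* max_size is now a poly_int64; a value of -1 means "unknown".  */
      return known_size_p (max_size);
    }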
This patch makes indirect_refs_may_alias_p use ranges_may_overlap_p rather than ranges_overlap_p. Unlike the latter, the former can handle negative offsets, so the fix for PR44852 should no longer be necessary. It can also handle offset_int, so it avoids unchecked truncations to HOST_WIDE_INT. 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * tree-ssa-alias.c (indirect_ref_may_alias_decl_p) (indirect_refs_may_alias_p): Use ranges_may_overlap_p instead of ranges_overlap_p. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255871
Richard Sandiford committed -
This patch makes tree-ssa-alias.c:same_addr_size_stores_p handle poly_int sizes and offsets. 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * tree-ssa-alias.c (same_addr_size_stores_p): Take the offsets and sizes as poly_int64s rather than HOST_WIDE_INTs. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255870
Richard Sandiford committed -
This patch changes the offset and size arguments to fold_ctor_reference from unsigned HOST_WIDE_INT to poly_uint64. 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * gimple-fold.h (fold_ctor_reference): Take the offset and size as poly_uint64 rather than unsigned HOST_WIDE_INT. * gimple-fold.c (fold_ctor_reference): Likewise. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255869
Richard Sandiford committed -
This patch adds support for DWARF location expressions that involve polynomial offsets. It adds a target hook that says how the runtime invariants used in the offsets should be represented in DWARF. SVE vectors have to be a multiple of 128 bits in size, so the GCC port uses the number of 128-bit blocks minus one as the runtime invariant. However, in DWARF, the vector length is exposed via a pseudo "VG" register that holds the number of 64-bit elements in a vector. Thus: indeterminate 1 == (VG / 2) - 1 The hook needs to be general enough to express this. Note that in most cases the division and subtraction fold away into surrounding expressions. 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * target.def (dwarf_poly_indeterminate_value): New hook. * targhooks.h (default_dwarf_poly_indeterminate_value): Declare. * targhooks.c (default_dwarf_poly_indeterminate_value): New function. * doc/tm.texi.in (TARGET_DWARF_POLY_INDETERMINATE_VALUE): Document. * doc/tm.texi: Regenerate. * dwarf2out.h (build_cfa_loc, build_cfa_aligned_loc): Take the offset as a poly_int64. * dwarf2out.c (new_reg_loc_descr): Move later in file. Take the offset as a poly_int64. (loc_descr_plus_const, loc_list_plus_const, build_cfa_aligned_loc): Take the offset as a poly_int64. (build_cfa_loc): Likewise. Use loc_descr_plus_const. (frame_pointer_fb_offset): Change to a poly_int64. (int_loc_descriptor): Take the offset as a poly_int64. Use targetm.dwarf_poly_indeterminate_value for polynomial offsets. (based_loc_descr): Take the offset as a poly_int64. Use strip_offset_and_add to handle (plus X (const)). Use new_reg_loc_descr instead of an open-coded version of the previous implementation. (mem_loc_descriptor): Handle CONST_POLY_INT. (compute_frame_pointer_to_fb_displacement): Take the offset as a poly_int64. Use strip_offset_and_add to handle (plus X (const)). Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255868
Richard Sandiford committed -
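As a concrete check of the VG relation (the 256-bit vector length below is just an illustration, not something fixed by the patch): a 256-bit SVE vector has VG = 4 (four 64-bit elements), so it contains VG / 2 = 2 blocks of 128 bits and indeterminate 1 = (VG / 2) - 1 = 1. A polynomial byte size such as 16 + 16 * x1 then evaluates to 16 + 16 * 1 = 32 bytes, which is indeed the size of the 256-bit vector.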
This patch changes the type of the reg_attrs offset field from HOST_WIDE_INT to poly_int64 and updates uses accordingly. This includes changing reg_attr_hasher::hash to use inchash. (Doing this has no effect on code generation since the only use of the hasher is to avoid creating duplicate objects.) 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * rtl.h (reg_attrs::offset): Change from HOST_WIDE_INT to poly_int64. (gen_rtx_REG_offset): Take the offset as a poly_int64. * inchash.h (inchash::hash::add_poly_hwi): New function. * gengtype.c (main): Register poly_int64. * emit-rtl.c (reg_attr_hasher::hash): Use inchash. Treat the offset as a poly_int. (reg_attr_hasher::equal): Use must_eq to compare offsets. (get_reg_attrs, update_reg_offset, gen_rtx_REG_offset): Take the offset as a poly_int64. (set_reg_attrs_from_value): Treat the offset as a poly_int64. * print-rtl.c (print_poly_int): New function. (rtx_writer::print_rtx_operand_code_r): Treat REG_OFFSET as a poly_int. * var-tracking.c (track_offset_p, get_tracked_reg_offset): New functions. (var_reg_set, var_reg_delete_and_set, var_reg_delete): Use them. (same_variable_part_p, track_loc_p): Take the offset as a poly_int64. (vt_get_decl_and_offset): Return the offset as a poly_int64. Enforce track_offset_p for parts of a PARALLEL. (vt_add_function_parameter): Use const_offset for the final offset to track. Use get_tracked_reg_offset for the parts of a PARALLEL. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255867
Richard Sandiford committed -
This patch makes TRULY_NOOP_TRUNCATION take the mode sizes as poly_uint64s instead of unsigned ints. The function bodies don't need to change. 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * target.def (truly_noop_truncation): Take poly_uint64s instead of unsigned ints. Change default to hook_bool_puint64_puint64_true. * doc/tm.texi: Regenerate. * hooks.h (hook_bool_uint_uint_true): Delete. (hook_bool_puint64_puint64_true): Declare. * hooks.c (hook_bool_uint_uint_true): Delete. (hook_bool_puint64_puint64_true): New function. * config/mips/mips.c (mips_truly_noop_truncation): Take poly_uint64s instead of unsigned ints. * config/spu/spu.c (spu_truly_noop_truncation): Likewise. * config/tilegx/tilegx.c (tilegx_truly_noop_truncation): Likewise. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255866
Richard Sandiford committed -
This patch generalises create_integer_operand so that it accepts poly_int64s rather than HOST_WIDE_INTs. 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * optabs.h (expand_operand): Add an int_value field. (create_expand_operand): Add an int_value parameter and use it to initialize the new expand_operand field. (create_integer_operand): Replace with a declaration of a function that accepts poly_int64s. Move the implementation to... * optabs.c (create_integer_operand): ...here. (maybe_legitimize_operand): For EXPAND_INTEGER, check whether the mode preserves the value of int_value, instead of calling const_int_operand on the rtx. Use gen_int_mode to generate the new rtx. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255865
Richard Sandiford committed -
2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * dumpfile.h (dump_dec): Declare. * dumpfile.c (dump_dec): New function. * pretty-print.h (pp_wide_integer): Turn into a function and declare a poly_int version. * pretty-print.c (pp_wide_integer): New function for poly_ints. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255864
Richard Sandiford committed -
This patch adds a tree representation for poly_ints. Unlike the rtx version, the coefficients are INTEGER_CSTs rather than plain integers, so that we can easily access them as poly_widest_ints and poly_offset_ints. The patch also adjusts some places that previously relied on "constant" meaning "INTEGER_CST". It also makes sure that the TYPE_SIZE agrees with the TYPE_SIZE_UNIT for vector booleans, given the existing: /* Several boolean vector elements may fit in a single unit. */ if (VECTOR_BOOLEAN_TYPE_P (type) && type->type_common.mode != BLKmode) TYPE_SIZE_UNIT (type) = size_int (GET_MODE_SIZE (type->type_common.mode)); else TYPE_SIZE_UNIT (type) = int_const_binop (MULT_EXPR, TYPE_SIZE_UNIT (innertype), size_int (nunits)); 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * doc/generic.texi (POLY_INT_CST): Document. * tree.def (POLY_INT_CST): New tree code. * treestruct.def (TS_POLY_INT_CST): New tree layout. * tree-core.h (tree_poly_int_cst): New struct. (tree_node): Add a poly_int_cst field. * tree.h (POLY_INT_CST_P, POLY_INT_CST_COEFF): New macros. (wide_int_to_tree, force_fit_type): Take a poly_wide_int_ref instead of a wide_int_ref. (build_int_cst, build_int_cst_type): Take a poly_int64 instead of a HOST_WIDE_INT. (build_int_cstu, build_array_type_nelts): Take a poly_uint64 instead of an unsigned HOST_WIDE_INT. (build_poly_int_cst, tree_fits_poly_int64_p, tree_fits_poly_uint64_p) (ptrdiff_tree_p): Declare. (tree_to_poly_int64, tree_to_poly_uint64): Likewise. Provide extern inline implementations if the target doesn't use POLY_INT_CST. (poly_int_tree_p): New function. (wi::unextended_tree): New class. (wi::int_traits <unextended_tree>): New override. (wi::extended_tree): Add a default constructor. (wi::extended_tree::get_tree): New function. (wi::widest_extended_tree, wi::offset_extended_tree): New typedefs. (wi::tree_to_widest_ref, wi::tree_to_offset_ref): Use them. (wi::tree_to_poly_widest_ref, wi::tree_to_poly_offset_ref) (wi::tree_to_poly_wide_ref): New typedefs. (wi::ints_for): Provide overloads for extended_tree and unextended_tree. (poly_int_cst_value, wi::to_poly_widest, wi::to_poly_offset) (wi::to_wide): New functions. (wi::fits_to_boolean_p, wi::fits_to_tree_p): Handle poly_ints. * tree.c (poly_int_cst_hasher): New struct. (poly_int_cst_hash_table): New variable. (tree_node_structure_for_code, tree_code_size, simple_cst_equal) (valid_constant_size_p, add_expr, drop_tree_overflow): Handle POLY_INT_CST. (initialize_tree_contains_struct): Handle TS_POLY_INT_CST. (init_ttree): Initialize poly_int_cst_hash_table. (build_int_cst, build_int_cst_type, build_invariant_address): Take a poly_int64 instead of a HOST_WIDE_INT. (build_int_cstu, build_array_type_nelts): Take a poly_uint64 instead of an unsigned HOST_WIDE_INT. (wide_int_to_tree): Rename to... (wide_int_to_tree_1): ...this. (build_new_poly_int_cst, build_poly_int_cst): New functions. (force_fit_type): Take a poly_wide_int_ref instead of a wide_int_ref. (wide_int_to_tree): New function that takes a poly_wide_int_ref. (ptrdiff_tree_p, tree_to_poly_int64, tree_to_poly_uint64) (tree_fits_poly_int64_p, tree_fits_poly_uint64_p): New functions. * lto-streamer-out.c (DFS::DFS_write_tree_body, hash_tree): Handle TS_POLY_INT_CST. * tree-streamer-in.c (lto_input_ts_poly_tree_pointers): Likewise. (streamer_read_tree_body): Likewise. * tree-streamer-out.c (write_ts_poly_tree_pointers): Likewise. (streamer_write_tree_body): Likewise. 
* tree-streamer.c (streamer_check_handled_ts_structures): Likewise. * asan.c (asan_protect_global): Require the size to be an INTEGER_CST. * cfgexpand.c (expand_debug_expr): Handle POLY_INT_CST. * expr.c (expand_expr_real_1, const_vector_from_tree): Likewise. * gimple-expr.h (is_gimple_constant): Likewise. * gimplify.c (maybe_with_size_expr): Likewise. * print-tree.c (print_node): Likewise. * tree-data-ref.c (data_ref_compare_tree): Likewise. * tree-pretty-print.c (dump_generic_node): Likewise. * tree-ssa-address.c (addr_for_mem_ref): Likewise. * tree-vect-data-refs.c (dr_group_sort_cmp): Likewise. * tree-vrp.c (compare_values_warnv): Likewise. * tree-ssa-loop-ivopts.c (determine_base_object, constant_multiple_of) (get_loop_invariant_expr, add_candidate_1, get_computation_aff_1) (force_expr_to_var_cost): Likewise. * tree-ssa-loop.c (for_each_index): Likewise. * fold-const.h (build_invariant_address, size_int_kind): Take a poly_int64 instead of a HOST_WIDE_INT. * fold-const.c (fold_negate_expr_1, const_binop, const_unop) (fold_convert_const, multiple_of_p, fold_negate_const): Handle POLY_INT_CST. (size_binop_loc): Likewise. Allow int_const_binop_1 to fail. (int_const_binop_2): New function, split out from... (int_const_binop_1): ...here. Handle POLY_INT_CST. (size_int_kind): Take a poly_int64 instead of a HOST_WIDE_INT. * expmed.c (make_tree): Handle CONST_POLY_INT_P. * gimple-ssa-strength-reduction.c (slsr_process_add) (slsr_process_mul): Check for INTEGER_CSTs before using them as candidates. * stor-layout.c (bits_from_bytes): New function. (bit_from_pos): Use it. (layout_type): Likewise. For vectors, multiply the TYPE_SIZE_UNIT by BITS_PER_UNIT to get the TYPE_SIZE. * tree-cfg.c (verify_expr, verify_types_in_gimple_reference): Allow MEM_REF and TARGET_MEM_REF offsets to be a POLY_INT_CST. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255863
Richard Sandiford committed -
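A short usage sketch of the new tree helpers declared above (illustrative only; the surrounding variable names are invented):

    tree size_tree = TYPE_SIZE_UNIT (type);
    if (tree_fits_poly_int64_p (size_tree))
      {
        /* Works whether SIZE_TREE is an INTEGER_CST or a POLY_INT_CST.  */
        poly_int64 size = tree_to_poly_int64 (size_tree);
        ...
      }

poly_int_tree_p (size_tree) is the simpler test for whether the tree is one of those two constant codes at all, without the poly_int64 range check.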
This patch adds an rtl representation of poly_int values. There were three possible ways of doing this: (1) Add a new rtl code for the poly_ints themselves and store the coefficients as trailing wide_ints. This would give constants like: (const_poly_int [c0 c1 ... cn]) The runtime value would be: c0 + c1 * x1 + ... + cn * xn (2) Like (1), but use rtxes for the coefficients. This would give constants like: (const_poly_int [(const_int c0) (const_int c1) ... (const_int cn)]) although the coefficients could be const_wide_ints instead of const_ints where appropriate. (3) Add a new rtl code for the polynomial indeterminates, then use them in const wrappers. A constant like c0 + c1 * x1 would then look like: (const:M (plus:M (mult:M (const_param:M x1) (const_int c1)) (const_int c0))) There didn't seem to be that much to choose between them. The main advantage of (1) is that it's a more efficient representation and that we can refer to the coefficients directly as wide_int_storage. 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * doc/rtl.texi (const_poly_int): Document. Also document the rtl sharing behavior. * gengenrtl.c (excluded_rtx): Return true for CONST_POLY_INT. * rtl.h (const_poly_int_def): New struct. (rtx_def::u): Add a cpi field. (CASE_CONST_UNIQUE, CASE_CONST_ANY): Add CONST_POLY_INT. (CONST_POLY_INT_P, CONST_POLY_INT_COEFFS): New macros. (wi::rtx_to_poly_wide_ref): New typedef (const_poly_int_value, wi::to_poly_wide, rtx_to_poly_int64) (poly_int_rtx_p): New functions. (trunc_int_for_mode): Declare a poly_int64 version. (plus_constant): Take a poly_int64 instead of a HOST_WIDE_INT. (immed_wide_int_const): Take a poly_wide_int_ref rather than a wide_int_ref. (strip_offset): Declare. (strip_offset_and_add): New function. * rtl.def (CONST_POLY_INT): New rtx code. * rtl.c (rtx_size): Handle CONST_POLY_INT. (shared_const_p): Use poly_int_rtx_p. * emit-rtl.h (gen_int_mode): Take a poly_int64 instead of a HOST_WIDE_INT. (gen_int_shift_amount): Likewise. * emit-rtl.c (const_poly_int_hasher): New class. (const_poly_int_htab): New variable. (init_emit_once): Initialize it when NUM_POLY_INT_COEFFS > 1. (const_poly_int_hasher::hash): New function. (const_poly_int_hasher::equal): Likewise. (gen_int_mode): Take a poly_int64 instead of a HOST_WIDE_INT. (immed_wide_int_const): Rename to... (immed_wide_int_const_1): ...this and make static. (immed_wide_int_const): New function, taking a poly_wide_int_ref instead of a wide_int_ref. (gen_int_shift_amount): Take a poly_int64 instead of a HOST_WIDE_INT. (gen_lowpart_common): Handle CONST_POLY_INT. * cse.c (hash_rtx_cb, equiv_constant): Likewise. * cselib.c (cselib_hash_rtx): Likewise. * dwarf2out.c (const_ok_for_output_1): Likewise. * expr.c (convert_modes): Likewise. * print-rtl.c (rtx_writer::print_rtx, print_value): Likewise. * rtlhash.c (add_rtx): Likewise. * explow.c (trunc_int_for_mode): Add a poly_int64 version. (plus_constant): Take a poly_int64 instead of a HOST_WIDE_INT. Handle existing CONST_POLY_INT rtxes. * expmed.h (expand_shift): Take a poly_int64 instead of a HOST_WIDE_INT. * expmed.c (expand_shift): Likewise. * rtlanal.c (strip_offset): New function. (commutative_operand_precedence): Give CONST_POLY_INT the same precedence as CONST_DOUBLE and put CONST_WIDE_INT between that and CONST_INT. * rtl-tests.c (const_poly_int_tests): New struct. (rtl_tests_c_tests): Use it. * simplify-rtx.c (simplify_const_unary_operation): Handle CONST_POLY_INT. 
(simplify_const_binary_operation): Likewise. (simplify_binary_operation_1): Fold additions of symbolic constants and CONST_POLY_INTs. (simplify_subreg): Handle extensions and truncations of CONST_POLY_INTs. (simplify_const_poly_int_tests): New struct. (simplify_rtx_c_tests): Use it. * wide-int.h (storage_ref): Add default constructor. (wide_int_ref_storage): Likewise. (trailing_wide_ints): Use GTY((user)). (trailing_wide_ints::operator[]): Add a const version. (trailing_wide_ints::get_precision): New function. (trailing_wide_ints::extra_size): Likewise. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255862
Richard Sandiford committed -
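A short usage sketch of the new rtx helpers declared above (illustrative only):

    poly_int64 value;
    if (poly_int_rtx_p (x, &value))
      {
        /* X is a CONST_INT or a CONST_POLY_INT; VALUE is its runtime value
           c0 + c1 * x1 + ... + cn * xn.  */
        ...
      }

rtx_to_poly_int64 (x) is the direct conversion for when X is already known to be such a constant, and strip_offset_and_add is the convenience for peeling a constant offset off an address and adding it to a running poly_int64 total (as used by the dwarf2out changes in a later patch in this series).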
This patch adds a helper routine that constructs rtxes for constant shift amounts, given the mode of the value being shifted. As well as helping with the SVE patches, this is one step towards allowing CONST_INTs to have a real mode. One long-standing problem has been to decide what the mode of a shift count should be for arbitrary rtxes (as opposed to those directly tied to a target pattern). Realistic choices would be the mode of the shifted elements, word_mode, QImode, a 64-bit mode, or the same mode as the shift optabs (in which case what should the mode be when the target doesn't have a pattern?) For now the patch picks a 64-bit mode, but with a ??? comment. 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * emit-rtl.h (gen_int_shift_amount): Declare. * emit-rtl.c (gen_int_shift_amount): New function. * asan.c (asan_emit_stack_protection): Use gen_int_shift_amount instead of GEN_INT. * calls.c (shift_return_value): Likewise. * cse.c (fold_rtx): Likewise. * dse.c (find_shift_sequence): Likewise. * expmed.c (init_expmed_one_mode, store_bit_field_1, expand_shift_1) (expand_shift, expand_smod_pow2): Likewise. * lower-subreg.c (shift_cost): Likewise. * optabs.c (expand_superword_shift, expand_doubleword_mult) (expand_unop, expand_binop, shift_amt_for_vec_perm_mask) (expand_vec_perm_var): Likewise. * simplify-rtx.c (simplify_unary_operation_1): Likewise. (simplify_binary_operation_1): Likewise. * combine.c (try_combine, find_split_point, force_int_to_mode) (simplify_shift_const_1, simplify_shift_const): Likewise. (change_zero_ext): Likewise. Use simplify_gen_binary. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r255861
Richard Sandiford committed -
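The change at the call sites listed above is mechanical; a sketch of the before/after shape (MODE here is the mode of the value being shifted):

    /* Before: the shift count rtx had no well-defined mode.  */
    rtx count = GEN_INT (shift);

    /* After: let the helper choose a suitable (currently 64-bit) mode.  */
    rtx count = gen_int_shift_amount (mode, shift);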
Fix a stupid inversion. This function is very rarely used and was mostly to help split patches up, which is why it didn't get picked up during initial testing. 2017-12-20 Richard Sandiford <richard.sandiford@linaro.org> gcc/ * poly-int.h (multiple_p): Fix handling of two non-poly_ints. gcc/testsuite/ * gcc.dg/plugin/poly-int-tests.h (test_nonpoly_multiple_p): New function. (test_nonpoly_type): Call it. From-SVN: r255860
Richard Sandiford committed -
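For reference, the intended behaviour of the fixed case, where both arguments are ordinary integers rather than poly_ints (a usage sketch, not the implementation):

    multiple_p (8, 4)    /* true: 8 is a multiple of 4 */
    multiple_p (6, 4)    /* false */

The inversion mentioned above was in this scalar/scalar overload only; the poly_int overloads were not affected.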
I noticed that we helpfully list the extensions that are accepted by the -march options on arm but we were missing the information for 'armv8.3-a'. This patchlet corrects that. Built the documentation and it looked ok. * doc/invoke.texi (ARM Options): Document accepted extension options for -march=armv8.3-a. From-SVN: r255859
Kyrylo Tkachov committed -
When GCC for ARM/linux is configured with --with-float=hard or --with-float=softfp, the compiler will now die when trying to build the support libraries because the baseline architecture is too old to support VFP (older versions of GCC just emitted the VFP instructions anyway, even though they wouldn't run on that version of the architecture; but we're now more prickly about it). This patch fixes the problem by raising the default architecture (actually the default CPU) to ARMv5te (ARM10e) when we need to generate HW floating-point code. PR target/83105 * config.gcc (arm*-*-linux*): When configured with --with-float=hard or --with-float=softfp, set the default CPU to arm10e. From-SVN: r255858
Richard Earnshaw committed -
As has been spotted at https://gcc.gnu.org/ml/gcc-patches/2017-12/msg01289.html we check the __AARCH64EB__ macro for aarch64 big-endian detection in config/cpu/aarch64/opt/ext/opt_random.h. That works just fine with GCC but the standardised ACLE[1] macro for that purpose is __ARM_BIG_ENDIAN so there is a possibility that non-GCC compilers that include this header are not aware of this predefine. So this patch changes the use of __AARCH64EB__ to the more portable __ARM_BIG_ENDIAN. Tested on aarch64-none-elf and aarch64_be-none-elf. Preapproved by Jeff at https://gcc.gnu.org/ml/gcc-patches/2017-12/msg01326.html * config/cpu/aarch64/opt/ext/opt_random.h (__VEXT): Check __ARM_BIG_ENDIAN instead of __AARCH64EB__. From-SVN: r255857
Kyrylo Tkachov committed -
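In user terms the change is simply which predefine the header tests (a minimal sketch; the real check sits inside the __VEXT macro mentioned in the ChangeLog):

    /* Before: GCC-specific predefine.  */
    #ifdef __AARCH64EB__
      /* big-endian lane order */
    #endif

    /* After: the ACLE macro, which other ACLE-conforming compilers also provide.  */
    #ifdef __ARM_BIG_ENDIAN
      /* big-endian lane order */
    #endif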
* config/visium/constraints.md (J, K, L): Use IN_RANGE macro. * config/visium/predicates.md (const_shift_operand): Likewise. * config/visium/visium.c (visium_legitimize_address): Fix oversight. (visium_legitimize_reload_address): Likewise. From-SVN: r255856
Eric Botcazou committed -
* Committing ChangeLog entry. From-SVN: r255855
Paolo Carlini committed -
* gcc-interface/trans.c (Loop_Statement_to_gnu): Use IN_RANGE macro. * gcc-interface/misc.c (gnat_get_array_descr_info): Likewise. (default_pass_by_ref): Likewise. * gcc-interface/decl.c (gnat_to_gnu_entity): Likewise. From-SVN: r255854
Eric Botcazou committed -
Commit missing hunk to arm.h TEST_REGNO comment. PR target/82975 * config/arm/arm.h (TEST_REGNO): Adjust comment as expected in r255830. From-SVN: r255853
Kyrylo Tkachov committed -
PR c++/83490 * calls.c (compute_argument_addresses): Ignore TYPE_EMPTY_P arguments. * g++.dg/abi/empty29.C: New test. From-SVN: r255852
Jakub Jelinek committed -
2017-12-20 Martin Liska <mliska@suse.cz> PR middle-end/82404 * g++.dg/pr82404.C: New test. * gcc.dg/pr82404.c: New test. From-SVN: r255851
Martin Liska committed -
gcc/ * common/config/i386/i386-common.c (OPTION_MASK_ISA_VPCLMULQDQ_SET, OPTION_MASK_ISA_VPCLMULQDQ_UNSET): New. (ix86_handle_option): Handle -mvpclmulqdq, move cx6 to flags2. * config.gcc: Include vpclmulqdqintrin.h. * config/i386/cpuid.h: Handle bit_VPCLMULQDQ. * config/i386/driver-i386.c (host_detect_local_cpu): Handle -mvpclmulqdq. * config/i386/i386-builtin.def (__builtin_ia32_vpclmulqdq_v2di, __builtin_ia32_vpclmulqdq_v4di, __builtin_ia32_vpclmulqdq_v8di): New. * config/i386/i386-c.c (__VPCLMULQDQ__): New. * config/i386/i386.c (isa2_opts): Add -mcx16. (isa_opts): Add -mpclmulqdq, remove -mcx16. (ix86_option_override_internal): Move mcx16 to flags2. (ix86_valid_target_attribute_inner_p): Add vpclmulqdq. (ix86_expand_builtin): Handle OPTION_MASK_ISA_VPCLMULQDQ. * config/i386/i386.h (TARGET_VPCLMULQDQ, TARGET_VPCLMULQDQ_P): New. * config/i386/i386.opt: Add mvpclmulqdq, move mcx16 to flags2. * config/i386/immintrin.h: Include vpclmulqdqintrin.h. * config/i386/sse.md (vpclmulqdq_<mode>): New pattern. * config/i386/vpclmulqdqintrin.h (_mm512_clmulepi64_epi128, _mm_clmulepi64_epi128, _mm256_clmulepi64_epi128): New intrinsics. * doc/invoke.texi: Add -mvpclmulqdq. gcc/testsuite/ * gcc.target/i386/avx-1.c: Handle new intrinsics. * gcc.target/i386/sse-13.c: Ditto. * gcc.target/i386/sse-23.c: Ditto. * gcc.target/i386/avx512-check.h: Handle bit_VPCLMULQDQ. * gcc.target/i386/avx512f-vpclmulqdq-2.c: New test. * gcc.target/i386/avx512vl-vpclmulqdq-2.c: Ditto. * gcc.target/i386/vpclmulqdq.c: Ditto. * gcc.target/i386/i386.exp (check_effective_target_vpclmulqdq): New. From-SVN: r255850
Julia Koval committed -
2017-12-20 Tom de Vries <tom@codesourcery.com> PR middle-end/83423 * config/i386/i386.c (ix86_static_chain): Move DECL_STATIC_CHAIN test ... * calls.c (rtx_for_static_chain): ... here. New function. * calls.h (rtx_for_static_chain): Declare. * builtins.c (expand_builtin_setjmp_receiver): Use rtx_for_static_chain instead of targetm.calls.static_chain. * df-scan.c (df_get_entry_block_def_set): Same. From-SVN: r255849
Tom de Vries committed -
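A plausible shape for the new helper, inferred from the ChangeLog above (a sketch under that assumption, not a quote of the patch):

    rtx
    rtx_for_static_chain (const_tree fndecl_or_type, bool incoming_p)
    {
      /* The DECL_STATIC_CHAIN test moves here from ix86_static_chain, so
         every caller sees a null chain for functions that do not use one.  */
      if (DECL_P (fndecl_or_type) && !DECL_STATIC_CHAIN (fndecl_or_type))
        return NULL_RTX;
      return targetm.calls.static_chain (fndecl_or_type, incoming_p);
    }

expand_builtin_setjmp_receiver and df_get_entry_block_def_set then call this wrapper instead of the target hook directly, per the ChangeLog.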
From-SVN: r255848
GCC Administrator committed
- 19 Dec, 2017 (5 commits)
/cp 2017-12-19 Paolo Carlini <paolo.carlini@oracle.com> PR c++/82593 * decl.c (check_array_designated_initializer): Not static. * cp-tree.h (check_array_designated_initializer): Declare. * typeck2.c (process_init_constructor_array): Call the latter. * parser.c (cp_parser_initializer_list): Check the return value of require_potential_rvalue_constant_expression. /testsuite 2017-12-19 Paolo Carlini <paolo.carlini@oracle.com> PR c++/82593 * g++.dg/cpp0x/desig2.C: New. * g++.dg/cpp0x/desig3.C: Likewise. * g++.dg/cpp0x/desig4.C: Likewise. From-SVN: r255845
Paolo Carlini committed -
PR c++/83394 - always_inline vs. noinline no longer diagnosed PR c++/83322 - ICE: tree check: expected class ‘type’, have ‘exceptional’ gcc/cp/ChangeLog: PR c++/83394 PR c++/83322 * decl2.c (cplus_decl_attributes): Look up member functions in the scope of their class. gcc/testsuite/ChangeLog: PR c++/83394 * g++.dg/Wattributes-3.C: New test. * g++.dg/Wattributes-4.C: New test. * g++.dg/Wattributes-5.C: New test. From-SVN: r255844
Martin Sebor committed -
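A minimal example, hypothetical but in the spirit of the new Wattributes tests, of the member-function case that is diagnosed again once attributes are looked up in the class scope:

    struct S
    {
      __attribute__ ((always_inline)) inline void f ();
    };

    // With the fix, -Wattributes again warns that noinline conflicts with
    // the earlier always_inline declaration of S::f.
    __attribute__ ((noinline)) void S::f () { }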
PR target/82975 * gcc.dg/pr82975.c: Only add -mtune=cortex-a57 on arm*/aarch64* targets. From-SVN: r255843
Jakub Jelinek committed -
2017-12-19 Tom de Vries <tom@codesourcery.com> PR tree-optimization/83493 * graphite-isl-ast-to-gimple.c (translate_isl_ast_node_for): Unshare ub and lb. From-SVN: r255842
Tom de Vries committed -
From-SVN: r255841
François Dumont committed