1. 16 Nov, 2019 23 commits
    • Print the type of alias check in a dump message · b4d1b635
      This patch prints a message to say how an alias check is being
      implemented.
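
      For context, a hedged sketch of the kind of loop that needs such a
      check (illustrative only, not taken from the testsuite): the
      vectoriser cannot prove that X and Y are disjoint, so it emits a
      runtime overlap test and keeps a scalar fallback loop.

          void
          f (int *x, int *y, int n)
          {
            for (int i = 0; i < n; ++i)
              x[i] = y[i] + 1;  /* vector code is safe only if X and Y
                                   do not overlap within the loop */
          }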
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* tree-data-ref.c (create_intersect_range_checks_index)
      	(create_intersect_range_checks): Print dump messages.
      
      gcc/testsuite/
      	* gcc.dg/vect/vect-alias-check-1.c: Test for the type of alias check.
      	* gcc.dg/vect/vect-alias-check-8.c: Likewise.
      	* gcc.dg/vect/vect-alias-check-9.c: Likewise.
      	* gcc.dg/vect/vect-alias-check-10.c: Likewise.
      	* gcc.dg/vect/vect-alias-check-11.c: Likewise.
      	* gcc.dg/vect/vect-alias-check-12.c: Likewise.
      	* gcc.dg/vect/vect-alias-check-13.c: Likewise.
      	* gcc.dg/vect/vect-alias-check-14.c: Likewise.
      	* gcc.dg/vect/vect-alias-check-15.c: Likewise.
      	* gcc.dg/vect/vect-alias-check-16.c: Likewise.
      	* gcc.dg/vect/vect-alias-check-17.c: Likewise.
      
      From-SVN: r278353
      Richard Sandiford committed
    • Dump the list of merged alias pairs · cad984b2
      This patch dumps the final (merged) list of alias pairs.  It also adds:
      
      - WAW and RAW versions of vect-alias-check-8.c
      - a "well-ordered" version of vect-alias-check-9.c (i.e. all reads
        before any writes)
      - a test with mixed steps in the same alias pair
      
      I also tweaked the test value in vect-alias-check-9.c so that the
      result was less likely to be accidentally correct if the alias
      isn't honoured.
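
      As a hedged illustration of those dependence kinds (not one of the
      new tests): the same loop can form a RAW or a WAR pair depending on
      which way the pointers overlap, which is why the kind is recorded
      per pair rather than assumed.

          void
          f (int *x, int *y, int n)
          {
            for (int i = 0; i < n; ++i)
              x[i] = y[i] + 1;
          }
          /* If y == x - 1, each load reads a value stored one iteration
             earlier (read-after-write); if y == x + 1, each store
             overwrites a value loaded one iteration earlier
             (write-after-read).  */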
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* tree-data-ref.c (dump_alias_pair): New function.
      	(prune_runtime_alias_test_list): Use it to dump each merged alias pair.
      
      gcc/testsuite/
      	* gcc.dg/vect/vect-alias-check-8.c: Test for the RAW flag.
      	* gcc.dg/vect/vect-alias-check-9.c: Test for the ARBITRARY flag.
      	(TEST_VALUE): Use a higher value for early iterations.
      	* gcc.dg/vect/vect-alias-check-14.c: New test.
      	* gcc.dg/vect/vect-alias-check-15.c: Likewise.
      	* gcc.dg/vect/vect-alias-check-16.c: Likewise.
      	* gcc.dg/vect/vect-alias-check-17.c: Likewise.
      
      From-SVN: r278352
      Richard Sandiford committed
    • Record whether a dr_with_seg_len contains mixed steps · 52c29905
      prune_runtime_alias_test_list can merge dr_with_seg_len_pair_ts that
      have different steps for the first reference or different steps for the
      second reference.  This patch adds a flag to record that.
      
      I don't know whether the change to create_intersect_range_checks_index
      fixes anything in practice.  It would have to be a corner case if so,
      since at present we only merge two alias pairs if either the first or
      the second references are identical and only the other references differ.
      And the vectoriser uses VF-based segment lengths only if both references
      in a pair have the same step.  Either way, it still seems wrong to use
      DR_STEP when it doesn't represent all checks that have been merged into
      the pair.
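
      A rough sketch of the mixed-steps situation (hypothetical example,
      not from the testsuite): both stores are checked against the same
      load, so if the two pairs are merged, the store side of the merged
      pair covers references with different steps and no single DR_STEP
      describes it.

          void
          f (int *x, int *y, int n)
          {
            for (int i = 0; i < n; ++i)
              {
                x[i * 2] = y[i] + 1;  /* step 8 on the store side */
                x[i * 3] = y[i] + 2;  /* step 12 on the store side */
              }
          }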
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* tree-data-ref.h (DR_ALIAS_MIXED_STEPS): New flag.
      	* tree-data-ref.c (prune_runtime_alias_test_list): Set it when
      	merging data references with different steps.
      	(create_intersect_range_checks_index): Take a
      	dr_with_seg_len_pair_t instead of two dr_with_seg_lens.
      	Bail out if DR_ALIAS_MIXED_STEPS is set.
      	(create_intersect_range_checks): Take a dr_with_seg_len_pair_t
      	instead of two dr_with_seg_lens.  Update call to
      	create_intersect_range_checks_index.
      	(create_runtime_alias_checks): Update call accordingly.
      
      From-SVN: r278351
      Richard Sandiford committed
    • Add flags to dr_with_seg_len_pair_t · e9acf80c
      This patch adds a bunch of flags to dr_with_seg_len_pair_t,
      for use by later patches.  The update to tree-loop-distribution.c
      is conservatively correct, but might be tweakable later.
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* tree-data-ref.h (DR_ALIAS_RAW, DR_ALIAS_WAR, DR_ALIAS_WAW)
      	(DR_ALIAS_ARBITRARY, DR_ALIAS_SWAPPED, DR_ALIAS_UNSWAPPED): New flags.
      	(dr_with_seg_len_pair_t::sequencing): New enum.
      	(dr_with_seg_len_pair_t::flags): New member variable.
      	(dr_with_seg_len_pair_t::dr_with_seg_len_pair_t): Take a sequencing
      	parameter and initialize the flags member variable.
      	* tree-loop-distribution.c (compute_alias_check_pairs): Update
      	call accordingly.
      	* tree-vect-data-refs.c (vect_prune_runtime_alias_test_list): Likewise.
      	Ensure the two data references in an alias pair are in statement
      	order, if there is a defined order.
      	* tree-data-ref.c (prune_runtime_alias_test_list): Use
      	DR_ALIAS_SWAPPED and DR_ALIAS_UNSWAPPED to record whether we've
      	swapped the references in a dr_with_seg_len_pair_t.  OR together
      	the flags when merging two dr_with_seg_len_pair_ts.  After merging,
      	try to restore the original dr_with_seg_len order, updating the
      	flags if that fails.
      
      From-SVN: r278350
      Richard Sandiford committed
    • Delay swapping data refs in prune_runtime_alias_test_list · 97602450
      prune_runtime_alias_test_list swapped dr_as between two dr_with_seg_len
      pairs before finally deciding whether to merge them.  Bailing out later
      would therefore leave the pairs in an incorrect state.
      
      IMO a better fix would be to split this out into a subroutine that
      produces a temporary dr_with_seg_len on success, rather than changing
      an existing one in-place.  It would then be easy to merge both the dr_as
      and dr_bs if we wanted to, rather than requiring one of them to be equal.
      But here I tried to do something that could be backported if necessary.
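
      A minimal sketch of the suggested shape, using a stand-in type
      rather than dr_with_seg_len and a hypothetical merge helper:

          struct seg_desc { long start, len; };

          /* Hypothetical helper: compute the merged description into *OUT,
             failing without side effects if the accesses cannot be
             combined.  (Merging rules elided.)  */
          static bool
          merge_into (struct seg_desc *out, const struct seg_desc *a,
                      const struct seg_desc *b)
          {
            out->start = a->start < b->start ? a->start : b->start;
            out->len = a->len + b->len;
            return true;
          }

          static bool
          try_merge (struct seg_desc *a, const struct seg_desc *b)
          {
            struct seg_desc tmp;
            if (!merge_into (&tmp, a, b))
              return false;   /* bail out: *A is untouched */
            *a = tmp;         /* commit only on success */
            return true;
          }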
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* tree-data-ref.c (prune_runtime_alias_test_list): Delay
      	swapping the dr_as based on init values until we've decided
      	whether to merge them.
      
      From-SVN: r278349
      Richard Sandiford committed
    • Move canonicalisation of dr_with_seg_len_pair_ts · 1fb2b0f6
      The two users of tree-data-ref's runtime alias checks both canonicalise
      the order of the dr_with_seg_lens in a pair before passing them to
      prune_runtime_alias_test_list.  It's more convenient for later patches
      if prune_runtime_alias_test_list does that itself.
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* tree-data-ref.c (prune_runtime_alias_test_list): Sort the
      	two accesses in each dr_with_seg_len_pair_t before trying to
      	combine separate dr_with_seg_len_pair_ts.
      	* tree-loop-distribution.c (compute_alias_check_pairs): Don't do
      	that here.
      	* tree-vect-data-refs.c (vect_prune_runtime_alias_test_list): Likewise.
      
      From-SVN: r278348
      Richard Sandiford committed
    • [AArch64] Add scatter stores for partial SVE modes · 37a3662f
      This patch adds support for scatter stores of partial vectors,
      where the vector base or offset elements can be wider than the
      elements being stored.
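
      A hedged example of what this enables (modelled loosely on the new
      scatter_store tests, not copied from them): storing 8-bit elements
      through 64-bit offsets, so the offset elements are much wider than
      the data being stored.

          void
          f (signed char *dst, long *index, signed char *src, int n)
          {
            for (int i = 0; i < n; ++i)
              dst[index[i]] = src[i];  /* candidate for an ST1B scatter
                                          with 64-bit offsets */
          }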
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* config/aarch64/aarch64-sve.md
      	(scatter_store<SVE_FULL_SD:mode><v_int_equiv>): Extend to...
      	(scatter_store<SVE_24:mode><v_int_container>): ...this.
      	(mask_scatter_store<SVE_FULL_S:mode><v_int_equiv>): Extend to...
      	(mask_scatter_store<SVE_4:mode><v_int_equiv>): ...this.
      	(mask_scatter_store<SVE_FULL_D:mode><v_int_equiv>): Extend to...
      	(mask_scatter_store<SVE_2:mode><v_int_equiv>): ...this.
      	(*mask_scatter_store<mode><v_int_container>_<su>xtw_unpacked): New
      	pattern.
      	(*mask_scatter_store<SVE_FULL_D:mode><v_int_equiv>_sxtw): Extend to...
      	(*mask_scatter_store<SVE_2:mode><v_int_equiv>_sxtw): ...this.
      	(*mask_scatter_store<SVE_FULL_D:mode><v_int_equiv>_uxtw): Extend to...
      	(*mask_scatter_store<SVE_2:mode><v_int_equiv>_uxtw): ...this.
      
      gcc/testsuite/
      	* gcc.target/aarch64/sve/scatter_store_1.c (TEST_LOOP): Start at 0.
      	(TEST_ALL): Add tests for 8-bit and 16-bit elements.
      	* gcc.target/aarch64/sve/scatter_store_2.c: Update accordingly.
      	* gcc.target/aarch64/sve/scatter_store_3.c (TEST_LOOP): Start at 0.
      	(TEST_ALL): Add tests for 8-bit and 16-bit elements.
      	* gcc.target/aarch64/sve/scatter_store_4.c: Update accordingly.
      	* gcc.target/aarch64/sve/scatter_store_5.c (TEST_LOOP): Start at 0.
      	(TEST_ALL): Add tests for 8-bit, 16-bit and 32-bit elements.
      	* gcc.target/aarch64/sve/scatter_store_8.c: New test.
      	* gcc.target/aarch64/sve/scatter_store_9.c: Likewise.
      
      From-SVN: r278347
      Richard Sandiford committed
    • [AArch64] Pattern-match SVE extending gather loads · 87a80d27
      This patch pattern-matches a partial gather load followed by a sign or
      zero extension into an extending gather load.  (The partial gather load
      is already an extending load; we just don't rely on the upper bits of
      the elements.)
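
      A hedged sketch in the spirit of the new gather_load_extend tests:
      a gather of bytes zero-extended to 32 bits, where the extension can
      now be folded into the gather load rather than emitted separately.

          void
          f (unsigned int *dst, unsigned char *src, unsigned int *index,
             int n)
          {
            for (int i = 0; i < n; ++i)
              dst[i] = src[index[i]];  /* gather and zero-extend in one
                                          load */
          }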
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* config/aarch64/iterators.md (SVE_2BHSI, SVE_2HSDI, SVE_4BHI)
      	(SVE_4HSI): New mode iterators.
      	(ANY_EXTEND2): New code iterator.
      	* config/aarch64/aarch64-sve.md
      	(@aarch64_gather_load_<ANY_EXTEND:optab><VNx4_WIDE:mode><VNx4_NARROW:mode>):
      	Extend to...
      	(@aarch64_gather_load_<ANY_EXTEND:optab><SVE_4HSI:mode><SVE_4BHI:mode>):
      	...this, handling extension to partial modes as well as full modes.
      	Describe the extension as a predicated rather than unpredicated
      	extension.
      	(@aarch64_gather_load_<ANY_EXTEND:optab><VNx2_WIDE:mode><VNx2_NARROW:mode>):
      	Likewise extend to...
      	(@aarch64_gather_load_<ANY_EXTEND:optab><SVE_2HSDI:mode><SVE_2BHSI:mode>):
      	...this, making the same adjustments.
      	(*aarch64_gather_load_<ANY_EXTEND:optab><VNx2_WIDE:mode><VNx2_NARROW:mode>_sxtw):
      	Likewise extend to...
      	(*aarch64_gather_load_<ANY_EXTEND:optab><SVE_2HSDI:mode><SVE_2BHSI:mode>_sxtw):
      	...this, making the same adjustments.
      	(*aarch64_gather_load_<ANY_EXTEND:optab><VNx2_WIDE:mode><VNx2_NARROW:mode>_uxtw):
      	Likewise extend to...
      	(*aarch64_gather_load_<ANY_EXTEND:optab><SVE_2HSDI:mode><SVE_2BHSI:mode>_uxtw):
      	...this, making the same adjustments.
      	(*aarch64_gather_load_<ANY_EXTEND:optab><SVE_2HSDI:mode><SVE_2BHSI:mode>_<ANY_EXTEND2:su>xtw_unpacked):
      	New pattern.
      	(*aarch64_ldff1_gather<mode>_sxtw): Canonicalize to a constant
      	extension predicate.
      	(@aarch64_ldff1_gather_<ANY_EXTEND:optab><VNx4_WIDE:mode><VNx4_NARROW:mode>)
      	(@aarch64_ldff1_gather_<ANY_EXTEND:optab><VNx2_WIDE:mode><VNx2_NARROW:mode>)
      	(*aarch64_ldff1_gather_<ANY_EXTEND:optab><VNx2_WIDE:mode><VNx2_NARROW:mode>_uxtw):
      	Describe the extension as a predicated rather than unpredicated
      	extension.
      	(*aarch64_ldff1_gather_<ANY_EXTEND:optab><VNx2_WIDE:mode><VNx2_NARROW:mode>_sxtw):
      	Likewise.  Canonicalize to a constant extension predicate.
      	* config/aarch64/aarch64-sve-builtins-base.cc
      	(svld1_gather_extend_impl::expand): Add an extra predicate for
      	the extension.
      	(svldff1_gather_extend_impl::expand): Likewise.
      
      gcc/testsuite/
      	* gcc.target/aarch64/sve/gather_load_extend_1.c: New test.
      	* gcc.target/aarch64/sve/gather_load_extend_2.c: Likewise.
      	* gcc.target/aarch64/sve/gather_load_extend_3.c: Likewise.
      	* gcc.target/aarch64/sve/gather_load_extend_4.c: Likewise.
      	* gcc.target/aarch64/sve/gather_load_extend_5.c: Likewise.
      	* gcc.target/aarch64/sve/gather_load_extend_6.c: Likewise.
      	* gcc.target/aarch64/sve/gather_load_extend_7.c: Likewise.
      	* gcc.target/aarch64/sve/gather_load_extend_8.c: Likewise.
      	* gcc.target/aarch64/sve/gather_load_extend_9.c: Likewise.
      	* gcc.target/aarch64/sve/gather_load_extend_10.c: Likewise.
      	* gcc.target/aarch64/sve/gather_load_extend_11.c: Likewise.
      	* gcc.target/aarch64/sve/gather_load_extend_12.c: Likewise.
      
      From-SVN: r278346
      Richard Sandiford committed
    • [AArch64] Add gather loads for partial SVE modes · f8186eea
      This patch adds support for gather loads of partial vectors,
      where the vector base or offset elements can be wider than the
      elements being loaded.
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* config/aarch64/iterators.md (SVE_24, SVE_2, SVE_4): New mode
      	iterators.
      	* config/aarch64/aarch64-sve.md
      	(gather_load<SVE_FULL_SD:mode><v_int_equiv>): Extend to...
      	(gather_load<SVE_24:mode><v_int_container>): ...this.
      	(mask_gather_load<SVE_FULL_S:mode><v_int_equiv>): Extend to...
      	(mask_gather_load<SVE_4:mode><v_int_container>): ...this.
      	(mask_gather_load<SVE_FULL_D:mode><v_int_equiv>): Extend to...
      	(mask_gather_load<SVE_2:mode><v_int_container>): ...this.
      	(*mask_gather_load<SVE_2:mode><v_int_container>_<su>xtw_unpacked):
      	New pattern.
      	(*mask_gather_load<SVE_FULL_D:mode><v_int_equiv>_sxtw): Extend to...
      	(*mask_gather_load<SVE_2:mode><v_int_equiv>_sxtw): ...this.
      	Allow the nominal extension predicate to be different from the
      	load predicate.
      	(*mask_gather_load<SVE_FULL_D:mode><v_int_equiv>_uxtw): Extend to...
      	(*mask_gather_load<SVE_2:mode><v_int_equiv>_uxtw): ...this.
      
      gcc/testsuite/
      	* gcc.target/aarch64/sve/gather_load_1.c (TEST_LOOP): Start at 0.
      	(TEST_ALL): Add tests for 8-bit and 16-bit elements.
      	* gcc.target/aarch64/sve/gather_load_2.c: Update accordingly.
      	* gcc.target/aarch64/sve/gather_load_3.c (TEST_LOOP): Start at 0.
      	(TEST_ALL): Add tests for 8-bit and 16-bit elements.
      	* gcc.target/aarch64/sve/gather_load_4.c: Update accordingly.
      	* gcc.target/aarch64/sve/gather_load_5.c (TEST_LOOP): Start at 0.
      	(TEST_ALL): Add tests for 8-bit, 16-bit and 32-bit elements.
      	* gcc.target/aarch64/sve/gather_load_6.c: Add
      	--param aarch64-sve-compare-costs=0.
      	(TEST_LOOP): Start at 0.
      	* gcc.target/aarch64/sve/gather_load_7.c: Add
      	--param aarch64-sve-compare-costs=0.
      	* gcc.target/aarch64/sve/gather_load_8.c: New test.
      	* gcc.target/aarch64/sve/gather_load_9.c: Likewise.
      	* gcc.target/aarch64/sve/mask_gather_load_6.c: Add
      	--param aarch64-sve-compare-costs=0.
      
      From-SVN: r278345
      Richard Sandiford committed
    • [AArch64] Add truncation for partial SVE modes · 2d56600c
      This patch adds support for "truncating" to a partial SVE vector from
      either a full SVE vector or a wider partial vector.  This truncation is
      actually a no-op and so should have zero cost in the vector cost model.
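
      The no-op property follows from the layout: in an unpacked vector,
      each narrow element already sits in the low bits of its wider
      container, and partial modes do not rely on the upper bits.  A
      hedged example of a loop that exercises this (not the literal new
      test):

          void
          f (short *dst, int *src, int n)
          {
            for (int i = 0; i < n; ++i)
              dst[i] = (short) src[i];  /* the narrowing step needs no
                                           instruction */
          }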
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* config/aarch64/aarch64-sve.md
      	(trunc<SVE_HSDI:mode><SVE_PARTIAL_I:mode>2): New pattern.
      	* config/aarch64/aarch64.c (aarch64_integer_truncation_p): New
      	function.
      	(aarch64_sve_adjust_stmt_cost): Call it.
      
      gcc/testsuite/
      	* gcc.target/aarch64/sve/mask_struct_load_1.c: Add
      	--param aarch64-sve-compare-costs=0.
      	* gcc.target/aarch64/sve/mask_struct_load_2.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_load_3.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_load_4.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_load_5.c: Likewise.
      	* gcc.target/aarch64/sve/pack_1.c: Likewise.
      	* gcc.target/aarch64/sve/truncate_1.c: New test.
      
      From-SVN: r278344
      Richard Sandiford committed
    • [AArch64] Pattern-match SVE extending loads · 217ccab8
      This patch pattern-matches a partial SVE load followed by a sign or zero
      extension into an extending load.  (The partial load is already an
      extending load; we just don't rely on the upper bits of the elements.)
      
      Nothing yet uses the extra LDFF1 and LDNF1 combinations, but it seemed
      more consistent to provide them, since I needed to update the pattern
      to use a predicated extension anyway.
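
      A hedged example of the pattern being matched (in the spirit of the
      new load_extend tests): the LD1H that implements the partial load
      into 32-bit containers already zero-extends, so the separate
      extension disappears.

          void
          f (unsigned int *dst, unsigned short *src, int n)
          {
            for (int i = 0; i < n; ++i)
              dst[i] = src[i];
          }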
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* config/aarch64/aarch64-sve.md
      	(@aarch64_load_<ANY_EXTEND:optab><VNx8_WIDE:mode><VNx8_NARROW:mode>)
      	(@aarch64_load_<ANY_EXTEND:optab><VNx4_WIDE:mode><VNx4_NARROW:mode>)
      	(@aarch64_load_<ANY_EXTEND:optab><VNx2_WIDE:mode><VNx2_NARROW:mode>):
      	Combine into...
      	(@aarch64_load_<ANY_EXTEND:optab><SVE_HSDI:mode><SVE_PARTIAL_I:mode>):
      	...this new pattern, handling extension to partial modes as well
      	as full modes.  Describe the extension as a predicated rather than
      	unpredicated extension.
      	(@aarch64_ld<fn>f1_<ANY_EXTEND:optab><VNx8_WIDE:mode><VNx8_NARROW:mode>)
      	(@aarch64_ld<fn>f1_<ANY_EXTEND:optab><VNx4_WIDE:mode><VNx4_NARROW:mode>)
      	(@aarch64_ld<fn>f1_<ANY_EXTEND:optab><VNx2_WIDE:mode><VNx2_NARROW:mode>):
      	Combine into...
      	(@aarch64_ld<fn>f1_<ANY_EXTEND:optab><SVE_HSDI:mode><SVE_PARTIAL_I:mode>):
      	...this new pattern, handling extension to partial modes as well
      	as full modes.  Describe the extension as a predicated rather than
      	unpredicated extension.
      	* config/aarch64/aarch64-sve-builtins.cc
      	(function_expander::use_contiguous_load_insn): Add an extra
      	predicate for extending loads.
      	* config/aarch64/aarch64.c (aarch64_extending_load_p): New function.
      	(aarch64_sve_adjust_stmt_cost): Likewise.
      	(aarch64_add_stmt_cost): Use aarch64_sve_adjust_stmt_cost to adjust
      	the cost of SVE vector stmts.
      
      gcc/testsuite/
      	* gcc.target/aarch64/sve/load_extend_1.c: New test.
      	* gcc.target/aarch64/sve/load_extend_2.c: Likewise.
      	* gcc.target/aarch64/sve/load_extend_3.c: Likewise.
      	* gcc.target/aarch64/sve/load_extend_4.c: Likewise.
      	* gcc.target/aarch64/sve/load_extend_5.c: Likewise.
      	* gcc.target/aarch64/sve/load_extend_6.c: Likewise.
      	* gcc.target/aarch64/sve/load_extend_7.c: Likewise.
      	* gcc.target/aarch64/sve/load_extend_8.c: Likewise.
      	* gcc.target/aarch64/sve/load_extend_9.c: Likewise.
      	* gcc.target/aarch64/sve/load_extend_10.c: Likewise.
      	* gcc.target/aarch64/sve/reduc_4.c: Add
      	--param aarch64-sve-compare-costs=0.
      
      From-SVN: r278343
      Richard Sandiford committed
    • [AArch64] Add sign and zero extension for partial SVE modes · e58703e2
      This patch adds support for extending from partial SVE modes
      to both full vector modes and wider partial modes.
      
      Some tests now need --param aarch64-sve-compare-costs=0 to force
      the original full-vector code.
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* config/aarch64/iterators.md (SVE_HSDI): New mode iterator.
      	(narrower_mask): Handle VNx4HI, VNx2HI and VNx2SI.
      	* config/aarch64/aarch64-sve.md
      	(<ANY_EXTEND:optab><SVE_PARTIAL_I:mode><SVE_HSDI:mode>2): New pattern.
      	(*<ANY_EXTEND:optab><SVE_PARTIAL_I:mode><SVE_HSDI:mode>2): Likewise.
      	(@aarch64_pred_sxt<SVE_FULL_HSDI:mode><SVE_PARTIAL_I:mode>): Update
      	comment.  Avoid new narrower_mask ambiguity.
      	(@aarch64_cond_sxt<SVE_FULL_HSDI:mode><SVE_PARTIAL_I:mode>): Likewise.
      	(*cond_uxt<mode>_2): Update comment.
      	(*cond_uxt<mode>_any): Likewise.
      
      gcc/testsuite/
      	* gcc.target/aarch64/sve/cost_model_1.c: Expect the loop to be
      	vectorized with bytes stored in 32-bit containers.
      	* gcc.target/aarch64/sve/extend_1.c: New test.
      	* gcc.target/aarch64/sve/extend_2.c: New test.
      	* gcc.target/aarch64/sve/extend_3.c: New test.
      	* gcc.target/aarch64/sve/extend_4.c: New test.
      	* gcc.target/aarch64/sve/load_const_offset_3.c: Add
      	--param aarch64-sve-compare-costs=0.
      	* gcc.target/aarch64/sve/mask_struct_store_1.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_store_1_run.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_store_2.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_store_2_run.c: Likewise.
      	* gcc.target/aarch64/sve/unpack_unsigned_1.c: Likewise.
      	* gcc.target/aarch64/sve/unpack_unsigned_1_run.c: Likewise.
      
      From-SVN: r278342
      Richard Sandiford committed
    • [AArch64] Add autovec support for partial SVE vectors · cc68f7c2
      This patch adds the bare minimum needed to support autovectorisation of
      partial SVE vectors, namely moves and integer addition.  Later patches
      add more interesting cases.
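
      A hedged sketch of why partial vectors matter here (loosely the
      mixed_size situation): with unpacked vectors, the 16-bit elements
      can live in 32-bit containers, so both statements get the same
      number of lanes and the loop vectorizes as one loop.

          void
          f (int *x, short *y, int n)
          {
            for (int i = 0; i < n; ++i)
              {
                x[i] += 1;  /* e.g. VNx4SI */
                y[i] += 2;  /* e.g. VNx4HI: 16-bit elements in 32-bit
                               containers */
              }
          }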
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* config/aarch64/aarch64-modes.def: Define partial SVE vector
      	float modes.
      	* config/aarch64/aarch64-protos.h (aarch64_sve_pred_mode): New
      	function.
      	* config/aarch64/aarch64.c (aarch64_classify_vector_mode): Handle the
      	new vector float modes.
      	(aarch64_sve_container_bits): New function.
      	(aarch64_sve_pred_mode): Likewise.
      	(aarch64_get_mask_mode): Use it.
      	(aarch64_sve_element_int_mode): Handle structure modes and partial
      	modes.
      	(aarch64_sve_container_int_mode): New function.
      	(aarch64_vectorize_related_mode): Return SVE modes when given
      	SVE modes.  Handle partial modes, taking the preferred number
      	of units from the size of the given mode.
      	(aarch64_hard_regno_mode_ok): Allow partial modes to be stored
      	in registers.
      	(aarch64_expand_sve_ld1rq): Use the mode form of aarch64_sve_pred_mode.
      	(aarch64_expand_sve_const_vector): Handle partial SVE vectors.
      	(aarch64_split_sve_subreg_move): Use the mode form of
      	aarch64_sve_pred_mode.
      	(aarch64_secondary_reload): Handle partial modes in the same way
      	as full big-endian vectors.
      	(aarch64_vector_mode_supported_p): Allow partial SVE vectors.
      	(aarch64_autovectorize_vector_modes): Try unpacked SVE vectors,
      	merging with the Advanced SIMD modes.  If two modes have the
      	same size, try the Advanced SIMD mode first.
      	(aarch64_simd_valid_immediate): Use the container rather than
      	the element mode for INDEX constants.
      	(aarch64_simd_vector_alignment): Make the alignment of partial
      	SVE vector modes the same as their minimum size.
      	(aarch64_evpc_sel): Use the mode form of aarch64_sve_pred_mode.
      	* config/aarch64/aarch64-sve.md (mov<SVE_FULL:mode>): Extend to...
      	(mov<SVE_ALL:mode>): ...this.
      	(movmisalign<SVE_FULL:mode>): Extend to...
      	(movmisalign<SVE_ALL:mode>): ...this.
      	(*aarch64_sve_mov<mode>_le): Rename to...
      	(*aarch64_sve_mov<mode>_ldr_str): ...this.
      	(*aarch64_sve_mov<SVE_FULL:mode>_be): Rename and extend to...
      	(*aarch64_sve_mov<SVE_ALL:mode>_no_ldr_str): ...this.  Handle
      	partial modes regardless of endianness.
      	(aarch64_sve_reload_be): Rename to...
      	(aarch64_sve_reload_mem): ...this and enable for little-endian.
      	Use aarch64_sve_pred_mode to get the appropriate predicate mode.
      	(@aarch64_pred_mov<SVE_FULL:mode>): Extend to...
      	(@aarch64_pred_mov<SVE_ALL:mode>): ...this.
      	(*aarch64_sve_mov<SVE_FULL:mode>_subreg_be): Extend to...
      	(*aarch64_sve_mov<SVE_ALL:mode>_subreg_be): ...this.
      	(@aarch64_sve_reinterpret<SVE_FULL:mode>): Extend to...
      	(@aarch64_sve_reinterpret<SVE_ALL:mode>): ...this.
      	(*aarch64_sve_reinterpret<SVE_FULL:mode>): Extend to...
      	(*aarch64_sve_reinterpret<SVE_ALL:mode>): ...this.
      	(maskload<SVE_FULL:mode><vpred>): Extend to...
      	(maskload<SVE_ALL:mode><vpred>): ...this.
      	(maskstore<SVE_FULL:mode><vpred>): Extend to...
      	(maskstore<SVE_ALL:mode><vpred>): ...this.
      	(vec_duplicate<SVE_FULL:mode>): Extend to...
      	(vec_duplicate<SVE_ALL:mode>): ...this.
      	(*vec_duplicate<SVE_FULL:mode>_reg): Extend to...
      	(*vec_duplicate<SVE_ALL:mode>_reg): ...this.
      	(sve_ld1r<SVE_FULL:mode>): Extend to...
      	(sve_ld1r<SVE_ALL:mode>): ...this.
      	(vec_series<SVE_FULL_I:mode>): Extend to...
      	(vec_series<SVE_I:mode>): ...this.
      	(*vec_series<SVE_FULL_I:mode>_plus): Extend to...
      	(*vec_series<SVE_I:mode>_plus): ...this.
      	(@aarch64_pred_sxt<SVE_FULL_HSDI:mode><SVE_PARTIAL_I:mode>): Avoid
      	new VPRED ambiguity.
      	(@aarch64_cond_sxt<SVE_FULL_HSDI:mode><SVE_PARTIAL_I:mode>): Likewise.
      	(add<SVE_FULL_I:mode>3): Extend to...
      	(add<SVE_I:mode>3): ...this.
      	* config/aarch64/iterators.md (SVE_ALL, SVE_I): New mode iterators.
      	(Vetype, Vesize, VEL, Vel, vwcore): Handle partial SVE vector modes.
      	(VPRED, vpred): Likewise.
      	(Vctype): New iterator.
      	(vw): Remove SVE modes.
      
      gcc/testsuite/
      	* gcc.target/aarch64/sve/mixed_size_1.c: New test.
      	* gcc.target/aarch64/sve/mixed_size_2.c: Likewise.
      	* gcc.target/aarch64/sve/mixed_size_3.c: Likewise.
      	* gcc.target/aarch64/sve/mixed_size_4.c: Likewise.
      	* gcc.target/aarch64/sve/mixed_size_5.c: Likewise.
      
      From-SVN: r278341
      Richard Sandiford committed
    • [AArch64] Tweak gcc.target/aarch64/sve/clastb_8.c · 7f333599
      clastb_8.c was using scan-tree-dump-times to check for fully-masked
      loops, which made it sensitive to the number of times we try to
      vectorize.
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/testsuite/
      	* gcc.target/aarch64/sve/clastb_8.c: Use assembly tests to
      	check for fully-masked loops.
      
      From-SVN: r278340
      Richard Sandiford committed
    • [AArch64] Replace SVE_PARTIAL with SVE_PARTIAL_I · 6544cb52
      Another renaming, this time to make way for partial/unpacked
      float modes.
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* config/aarch64/iterators.md (SVE_PARTIAL): Rename to...
      	(SVE_PARTIAL_I): ...this.
      	* config/aarch64/aarch64-sve.md: Apply the above renaming throughout.
      
      From-SVN: r278339
      Richard Sandiford committed
    • [AArch64] Add "FULL" to SVE mode iterator names · f75cdd2c
      An upcoming patch will make more use of partial/unpacked SVE vectors.
      We then need a distinction between mode iterators that include partial
      modes and those that only include "full" modes.  This patch prepares
      for that by adding "FULL" to the names of iterators that only select
      full modes.  There should be no change in behaviour.
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* config/aarch64/iterators.md (SVE_ALL): Rename to...
      	(SVE_FULL): ...this.
      	(SVE_I): Rename to...
      	(SVE_FULL_I): ...this.
      	(SVE_F): Rename to...
      	(SVE_FULL_F): ...this.
      	(SVE_BHSI): Rename to...
      	(SVE_FULL_BHSI): ...this.
      	(SVE_HSD): Rename to...
      	(SVE_FULL_HSD): ...this.
      	(SVE_HSDI): Rename to...
      	(SVE_FULL_HSDI): ...this.
      	(SVE_HSF): Rename to...
      	(SVE_FULL_HSF): ...this.
      	(SVE_SD): Rename to...
      	(SVE_FULL_SD): ...this.
      	(SVE_SDI): Rename to...
      	(SVE_FULL_SDI): ...this.
      	(SVE_SDF): Rename to...
      	(SVE_FULL_SDF): ...this.
      	(SVE_S): Rename to...
      	(SVE_FULL_S): ...this.
      	(SVE_D): Rename to...
      	(SVE_FULL_D): ...this.
      	* config/aarch64/aarch64-sve.md: Apply the above renaming throughout.
      	* config/aarch64/aarch64-sve2.md: Likewise.
      
      From-SVN: r278338
      Richard Sandiford committed
    • [AArch64] Enable VECT_COMPARE_COSTS by default for SVE · eb23241b
      This patch enables VECT_COMPARE_COSTS by default for SVE, both so
      that we can compare SVE against Advanced SIMD and so that (with future
      patches) we can compare multiple SVE vectorisation approaches against
      each other.  It also adds a target-specific --param to control this.
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* config/aarch64/aarch64.opt (--param=aarch64-sve-compare-costs):
      	New option.
      	* doc/invoke.texi: Document it.
      	* config/aarch64/aarch64.c (aarch64_autovectorize_vector_modes):
      	By default, return VECT_COMPARE_COSTS for SVE.
      
      gcc/testsuite/
      	* gcc.target/aarch64/sve/reduc_3.c: Split multi-vector cases out
      	into...
      	* gcc.target/aarch64/sve/reduc_3_costly.c: ...this new test,
      	passing -fno-vect-cost-model for them.
      	* gcc.target/aarch64/sve/slp_6.c: Add -fno-vect-cost-model.
      	* gcc.target/aarch64/sve/slp_7.c,
      	* gcc.target/aarch64/sve/slp_7_run.c: Split multi-vector cases out
      	into...
      	* gcc.target/aarch64/sve/slp_7_costly.c,
      	* gcc.target/aarch64/sve/slp_7_costly_run.c: ...these new tests,
      	passing -fno-vect-cost-model for them.
      	* gcc.target/aarch64/sve/while_7.c: Add -fno-vect-cost-model.
      	* gcc.target/aarch64/sve/while_9.c: Likewise.
      
      From-SVN: r278337
      Richard Sandiford committed
    • Optionally pick the cheapest loop_vec_info · bcc7e346
      This patch adds a mode in which the vectoriser tries each available
      base vector mode and picks the one with the lowest cost.  The new
      behaviour is selected by autovectorize_vector_modes.
      
      The patch keeps the current behaviour of preferring a VF of
      loop->simdlen over any larger or smaller VF, regardless of costs
      or target preferences.
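
      A loose sketch of that selection (stand-in types and a hypothetical
      analyze_with_mode function, not the vectoriser's real interfaces;
      the real code also honours loop->simdlen as noted above):

          struct analysis { int ok; long inside_cost; };

          static struct analysis analyze_with_mode (int mode);

          static int
          pick_cheapest (const int *modes, int n)
          {
            int best_mode = -1;
            long best_cost = 0;
            for (int i = 0; i < n; ++i)
              {
                struct analysis a = analyze_with_mode (modes[i]);
                if (a.ok && (best_mode < 0 || a.inside_cost < best_cost))
                  {
                    best_mode = modes[i];
                    best_cost = a.inside_cost;
                  }
              }
            return best_mode;
          }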
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* target.h (VECT_COMPARE_COSTS): New constant.
      	* target.def (autovectorize_vector_modes): Return a bitmask of flags.
      	* doc/tm.texi: Regenerate.
      	* targhooks.h (default_autovectorize_vector_modes): Update accordingly.
      	* targhooks.c (default_autovectorize_vector_modes): Likewise.
      	* config/aarch64/aarch64.c (aarch64_autovectorize_vector_modes):
      	Likewise.
      	* config/arc/arc.c (arc_autovectorize_vector_modes): Likewise.
      	* config/arm/arm.c (arm_autovectorize_vector_modes): Likewise.
      	* config/i386/i386.c (ix86_autovectorize_vector_modes): Likewise.
      	* config/mips/mips.c (mips_autovectorize_vector_modes): Likewise.
      	* tree-vectorizer.h (_loop_vec_info::vec_outside_cost)
      	(_loop_vec_info::vec_inside_cost): New member variables.
      	* tree-vect-loop.c (_loop_vec_info::_loop_vec_info): Initialize them.
      	(vect_better_loop_vinfo_p, vect_joust_loop_vinfos): New functions.
      	(vect_analyze_loop): When autovectorize_vector_modes returns
      	VECT_COMPARE_COSTS, try vectorizing the loop with each available
      	vector mode and picking the one with the lowest cost.
      	(vect_estimate_min_profitable_iters): Record the computed costs
      	in the loop_vec_info.
      
      From-SVN: r278336
      Richard Sandiford committed
    • Extend can_duplicate_and_interleave_p to mixed-size vectors · f884cd2f
      This patch makes can_duplicate_and_interleave_p cope with mixtures of
      vector sizes, by using queries based on get_vectype_for_scalar_type
      instead of directly querying GET_MODE_SIZE (vinfo->vector_mode).
      
      int_mode_for_size is now the first check we do for a candidate mode,
      so it seemed better to restrict it to MAX_FIXED_MODE_SIZE.  This avoids
      unnecessary work and avoids trying to create scalar types that the
      target might not support.
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* tree-vectorizer.h (can_duplicate_and_interleave_p): Take an
      	element type rather than an element mode.
      	* tree-vect-slp.c (can_duplicate_and_interleave_p): Likewise.
      	Use get_vectype_for_scalar_type to query the natural types
      	for a given element type rather than basing everything on
      	GET_MODE_SIZE (vinfo->vector_mode).  Limit int_mode_for_size
      	query to MAX_FIXED_MODE_SIZE.
      	(duplicate_and_interleave): Update call accordingly.
      	* tree-vect-loop.c (vectorizable_reduction): Likewise.
      
      From-SVN: r278335
      Richard Sandiford committed
    • Apply maximum nunits for BB SLP · 9b75f56d
      The BB vectoriser picked vector types in the same way as the loop
      vectoriser: it picked a vector mode/size for the region and then
      based all the vector types off that choice.  This meant we could
      end up trying to use vector types that had too many elements for
      the group size.
      
      The main part of this patch is therefore about passing the SLP
      group size down to routines like get_vectype_for_scalar_type and
      ensuring that each vector type in the SLP tree is chosen wrt the
      group size.  That part in itself is pretty easy and mechanical.
      
      The main warts are:
      
      (1) We normally pick a STMT_VINFO_VECTYPE for data references at an
          early stage (vect_analyze_data_refs).  However, nothing in the
          BB vectoriser relied on this, or on the min_vf calculated from it.
          I couldn't see anything other than vect_recog_bool_pattern that
          tried to access the vector type before the SLP tree is built.
      
      (2) It's possible for the same statement to be used in groups of
          different sizes.  Taking the group size into account meant that
          we could try to pick different vector types for the same statement.
      
          This problem should go away with the move to doing everything on
          SLP trees, where presumably we would attach the vector type to the
          SLP node rather than the stmt_vec_info.  Until then, the patch just
          uses a first-come, first-served approach.
      
      (3) A similar problem exists for grouped data references, where
          different statements in the same dataref group could be used
          in SLP nodes that have different group sizes.  The patch copes
          with that by making sure that all vector types in a dataref
          group remain consistent.
      
      The patch means that:
      
          void
          f (int *x, short *y)
          {
            x[0] += y[0];
            x[1] += y[1];
            x[2] += y[2];
            x[3] += y[3];
          }
      
      now produces:
      
              ldr     q0, [x0]
              ldr     d1, [x1]
              saddw   v0.4s, v0.4s, v1.4h
              str     q0, [x0]
              ret
      
      instead of:
      
              ldrsh   w2, [x1]
              ldrsh   w3, [x1, 2]
              fmov    s0, w2
              ldrsh   w2, [x1, 4]
              ldrsh   w1, [x1, 6]
              ins     v0.s[1], w3
              ldr     q1, [x0]
              ins     v0.s[2], w2
              ins     v0.s[3], w1
              add     v0.4s, v0.4s, v1.4s
              str     q0, [x0]
              ret
      
      Unfortunately it also means we start to vectorise
      gcc.target/i386/pr84101.c for -m32.  That seems like a target
      cost issue though; see PR92265 for details.
      
      2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>
      
      gcc/
      	* tree-vectorizer.h (vect_get_vector_types_for_stmt): Take an
      	optional maximum nunits.
      	(get_vectype_for_scalar_type): Likewise.  Also declare a form that
      	takes an slp_tree.
      	(get_mask_type_for_scalar_type): Take an optional slp_tree.
      	(vect_get_mask_type_for_stmt): Likewise.
      	* tree-vect-data-refs.c (vect_analyze_data_refs): Don't store
      	the vector type in STMT_VINFO_VECTYPE for BB vectorization.
      	* tree-vect-patterns.c (vect_recog_bool_pattern): Use
      	vect_get_vector_types_for_stmt instead of STMT_VINFO_VECTYPE
      	to get an assumed vector type for data references.
      	* tree-vect-slp.c (vect_update_shared_vectype): New function.
      	(vect_update_all_shared_vectypes): Likewise.
      	(vect_build_slp_tree_1): Pass the group size to
      	vect_get_vector_types_for_stmt.  Use vect_update_shared_vectype
      	for BB vectorization.
      	(vect_build_slp_tree_2): Call vect_update_all_shared_vectypes
      	before building the vector from scalars.
      	(vect_analyze_slp_instance): Pass the group size to
      	get_vectype_for_scalar_type.
      	(vect_slp_analyze_node_operations_1): Don't recompute the vector
      	types for BB vectorization here; just handle the case in which
      	we deferred the choice for booleans.
      	(vect_get_constant_vectors): Pass the slp_tree to
      	get_vectype_for_scalar_type.
      	* tree-vect-stmts.c (vect_prologue_cost_for_slp_op): Likewise.
      	(vectorizable_call): Likewise.
      	(vectorizable_simd_clone_call): Likewise.
      	(vectorizable_conversion): Likewise.
      	(vectorizable_shift): Likewise.
      	(vectorizable_operation): Likewise.
      	(vectorizable_comparison): Likewise.
      	(vect_is_simple_cond): Take the slp_tree as argument and
      	pass it to get_vectype_for_scalar_type.
      	(vectorizable_condition): Update call accordingly.
      	(get_vectype_for_scalar_type): Take a group_size argument.
      	For BB vectorization, limit the vector to that number
      	of elements.  Also define an overload that takes an slp_tree.
      	(get_mask_type_for_scalar_type): Add an slp_tree argument and
      	pass it to get_vectype_for_scalar_type.
      	(vect_get_vector_types_for_stmt): Add a group_size argument
      	and pass it to get_vectype_for_scalar_type.  Don't use the
      	cached vector type for BB vectorization if a group size is given.
      	Handle data references in that case.
      	(vect_get_mask_type_for_stmt): Take an slp_tree argument and
      	pass it to get_mask_type_for_scalar_type.
      
      gcc/testsuite/
      	* gcc.dg/vect/bb-slp-4.c: Expect the block to be vectorized
      	with -fno-vect-cost-model.
      	* gcc.dg/vect/bb-slp-bool-1.c: New test.
      	* gcc.target/aarch64/vect_mixed_sizes_14.c: Likewise.
      	* gcc.target/i386/pr84101.c: XFAIL for -m32.
      
      From-SVN: r278334
      Richard Sandiford committed
    • Fix nonspec_time when there is no cached value. · 23ff8c05
      	* ipa-inline.h (do_estimate_edge_time): Add nonspec_time
      	parameter.
      	(estimate_edge_time): Use it.
      	* ipa-inline-analysis.c (do_estimate_edge_time): Add
      	ret_nonspec_time parameter.
      
      From-SVN: r278333
      Jan Hubicka committed
    • Implement the <tuple> part of C++20 p1032 Misc constexpr bits. · 6d1402f0
      2019-11-15  Edward Smith-Rowland  <3dw4rd@verizon.net>
      
      	Implement the <tuple> part of C++20 p1032 Misc constexpr bits.
      	* include/std/tuple (_Head_base, _Tuple_impl(allocator_arg_t,...))
      	(_M_assign, tuple(allocator_arg_t,...), _Inherited, operator=, _M_swap)
      	(swap, pair(piecewise_construct_t,...)): Constexpr.
      	* (__uses_alloc0::_Sink::operator=, __uses_alloc_t): Constexpr.
      	* testsuite/20_util/tuple/cons/constexpr_allocator_arg_t.cc: New test.
      	* testsuite/20_util/tuple/constexpr_swap.cc : New test.
      	* testsuite/20_util/uses_allocator/69293_neg.cc: Extra error for C++20.
      	* testsuite/20_util/uses_allocator/cons_neg.cc: Extra error for C++20.
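
      A minimal illustration of what the constexpr-qualification allows,
      assuming -std=c++2a (a sketch in the style of the new tests, not
      copied from them):

          #include <tuple>

          constexpr bool
          test_swap ()
          {
            std::tuple<int, double> a{1, 2.0}, b{3, 4.0};
            a.swap (b);  // constexpr since P1032
            return std::get<0> (a) == 3 && std::get<1> (b) == 2.0;
          }
          static_assert (test_swap ());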
      
      From-SVN: r278331
      Edward Smith-Rowland committed
    • Daily bump. · 97e4a5ee
      From-SVN: r278328
      GCC Administrator committed
  2. 15 Nov, 2019 17 commits
    • libstdc++: Fix <stop_token> and improve tests · e73ca078
      	* include/std/stop_token: Reduce header dependencies by including
      	internal headers.
      	(stop_token::swap(stop_token&), swap(stop_token&, stop_token&)):
      	Define.
      	(operator!=(const stop_token&, const stop_token&)): Fix return value.
      	(stop_token::_Stop_cb::_Stop_cb(Cb&&)): Use std::forward instead of
      	std::move.
      	(stop_token::_Stop_state_t) [_GLIBCXX_HAS_GTHREADS]: Use lock_guard
      	instead of unique_lock.
      	[!_GLIBCXX_HAS_GTHREADS]: Do not use mutex.
      	(stop_token::stop_token(_Stop_state)): Change parameter to lvalue
      	reference.
      	(stop_source): Remove unnecessary using-declarations for names only
      	used once.
      	(swap(stop_source&, stop_source&)): Define.
      	(stop_callback(const stop_token&, _Cb&&))
      	(stop_callback(stop_token&&, _Cb&&)): Replace lambdas with a named
      	function. Use std::forward instead of std::move. Run callbacks if a
      	stop request has already been made.
      	(stop_source::_M_execute()): Remove.
      	(stop_source::_S_execute(_Stop_cb*)): Define.
      	* include/std/version (__cpp_lib_jthread): Define conditionally.
      	* testsuite/30_threads/stop_token/stop_callback.cc: New test.
      	* testsuite/30_threads/stop_token/stop_source.cc: New test.
      	* testsuite/30_threads/stop_token/stop_token.cc: Enable test for
      	immediate execution of callback.
      
      From-SVN: r278325
      Jonathan Wakely committed
    • Diagnose duplicate C2x standard attributes. · d5fbe5e0
      For each of the attributes currently included in C2x, it has a
      constraint that the attribute shall appear at most once in each
      attribute list (attribute-list being what appear between a single [[
      and ]]).
      
      This patch implements that check.  As the corresponding check in the
      C++ front end (cp_parser_check_std_attribute) makes violations into
      errors, I made them into errors, with the same wording, for C as well.
      
      There is an existing check in the case of the fallthrough attribute,
      with a warning rather than an error, in attribute_fallthrough_p.  That
      is more general, as it also covers __attribute__ ((fallthrough)) and
      the case of [[fallthrough]] [[fallthrough]] (multiple attribute-lists
      in a single attribute-specifier-sequence), which is not a constraint
      violation.  To avoid some [[fallthrough, fallthrough]] being diagnosed
      twice, the check I added avoids adding duplicate attributes to the
      list.
      
      Bootstrapped with no regressions on x86_64-pc-linux-gnu.
      
      gcc/c:
      	* c-parser.c (c_parser_std_attribute_specifier): Diagnose
      	duplicate standard attributes.
      
      gcc/testsuite:
      	* gcc.dg/c2x-attr-deprecated-4.c, gcc.dg/c2x-attr-fallthrough-4.c,
      	gcc.dg/c2x-attr-maybe_unused-4.c: New tests.
      
      From-SVN: r278324
      Joseph Myers committed
    • typeck.c (cp_truthvalue_conversion): Add tsubst_flags_t parameter and use it in calls... · 2ab340fe
      /cp
      2019-11-15  Paolo Carlini  <paolo.carlini@oracle.com>
      
      	* typeck.c (cp_truthvalue_conversion): Add tsubst_flags_t parameter
      	and use it in calls; also pass the location_t of the expression to
      	cp_build_binary_op and c_common_truthvalue_conversion.
      	* rtti.c (build_dynamic_cast_1): Adjust call.
      	* cvt.c (ocp_convert): Likewise.
      	* cp-gimplify.c (cp_fold): Likewise.
      	* cp-tree.h (cp_truthvalue_conversion): Update declaration.
      
      /testsuite
      2019-11-15  Paolo Carlini  <paolo.carlini@oracle.com>
      
      	* g++.dg/warn/Walways-true-1.C: Check locations too.
      	* g++.dg/warn/Walways-true-2.C: Likewise.
      	* g++.dg/warn/Walways-true-3.C: Likewise.
      	* g++.dg/warn/Waddress-1.C: Check additional location.
      
      From-SVN: r278320
      Paolo Carlini committed
    • Forgot to change the date range. · f982d12a
      From-SVN: r278318
      Edward Smith-Rowland committed
    • Implement the default_searcher part of C++20 p1032 Misc constexpr bits. · 12536431
      2019-11-15  Edward Smith-Rowland  <3dw4rd@verizon.net>
      
      	Implement the default_searcher part of C++20 p1032 Misc constexpr bits.
      	* include/std/functional
      	(default_searcher, default_searcher::operator()): Constexpr.
      	* testsuite/20_util/function_objects/constexpr_searcher.cc: New.
      
      From-SVN: r278317
      Edward Smith-Rowland committed
    • testmain.exp: link against GOLIBS · ae0b0fc6
          
      Patch by Maciej W. Rozycki.

      Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/207458
      
      From-SVN: r278316
      Ian Lance Taylor committed
    • libstdc++: Implement LWG 3149 for std::default_constructible · a31517cb
      The change approved in Belfast did not actually rename the concept from
      std::default_constructible to std::default_initializable, even though
      that was intended. That is expected to be done soon as a separate issue,
      so I'm implementing that now too.
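
      A sketch of the LWG 3149 effect, based on the issue's motivating
      example: S1 can be default-initialized, but S1{} would
      copy-list-initialize its member through an explicit default
      constructor, so the renamed concept now rejects it.

          #include <concepts>

          struct S0 { explicit S0 () = default; };
          struct S1 { S0 x; };  // aggregate; S1{} is ill-formed

          static_assert (std::default_initializable<S0>);
          static_assert (!std::default_initializable<S1>);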
      
      	* include/bits/iterator_concepts.h (weakly_incrementable): Adjust.
      	* include/std/concepts (default_constructible): Rename to
      	default_initializable and require default-list-initialization and
      	default-initialization to be valid (LWG 3149).
      	(semiregular): Adjust to new name.
      	* testsuite/std/concepts/concepts.lang/concept.defaultconstructible/
      	1.cc: Rename directory to concept.defaultinitializable and adjust to
      	new name.
      	* testsuite/std/concepts/concepts.lang/concept.defaultinitializable/
      	lwg3149.cc: New test.
      	* testsuite/util/testsuite_iterators.h (test_range): Adjust.
      
      From-SVN: r278314
      Jonathan Wakely committed
    • libstdc++: Implement LWG 3070 in path::lexically_relative · 01eb211b
      	* src/c++17/fs_path.cc [_GLIBCXX_FILESYSTEM_IS_WINDOWS]
      	(is_disk_designator): New helper function.
      	(path::_Parser::root_path()): Use is_disk_designator.
      	(path::lexically_relative(const path&)): Implement resolution of
      	LWG 3070.
      	* testsuite/27_io/filesystem/path/generation/relative.cc: Check with
      	path components that look like a root-name.
      
      From-SVN: r278313
      Jonathan Wakely committed
    • m68k: add musl support · 838fd641
      Add the dynamic linker name and fix a type name to use the public name
      instead of the glibc internal name.
      
      gcc/ChangeLog:
      
      2019-11-15  Szabolcs Nagy  <szabolcs.nagy@arm.com>
      
      	* config/m68k/linux.h (MUSL_DYNAMIC_LINKER): Define.
      
      libgcc/ChangeLog:
      
      2019-11-15  Szabolcs Nagy  <szabolcs.nagy@arm.com>
      
      	* config/m68k/linux-unwind.h (struct uw_ucontext): Use sigset_t instead
      	of __sigset_t.
      
      From-SVN: r278312
      Szabolcs Nagy committed
    • Support C2x [[maybe_unused]] attribute. · 97cc1187
      This patch adds support for the C2x [[maybe_unused]] attribute, using
      the same handler as for GNU __attribute__ ((unused)).
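
      A hedged example of the attribute in use (the new tests are along
      these lines); it suppresses unused warnings just as
      __attribute__ ((unused)) would.

          [[maybe_unused]] static void helper (void) {}  /* no -Wunused-function */

          void
          g (void)
          {
            [[maybe_unused]] int tmp = 0;  /* no -Wunused-variable */
          }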
      
      As with other such attribute support, I think turning certain warnings
      into pedwarns for usage in cases where that is a constraint violation
      can be addressed later as a bug fix, as can the C2x constraint for
      various standard attributes that they do not appear more than once
      inside a single [[]].
      
      However, the warnings that appear in c2x-attr-maybe_unused-1.c (that
      the attribute is ignored on member declarations) need to remain as
      warnings not pedwarns, since C2x does permit the attribute there.  (Or
      they could be silenced, on the basis that GCC doesn't have warnings
      for unused struct and union members so it's completely harmless that
      it's ignoring an attribute that might do something useful with another
      compiler that does have such warnings.)
      
      Bootstrapped with no regressions on x86_64-pc-linux-gnu.
      
      gcc/c:
      	* c-decl.c (std_attribute_table): Add maybe_unused.
      
      gcc/testsuite:
      	* gcc.dg/c2x-attr-maybe_unused-1.c,
      	gcc.dg/c2x-attr-maybe_unused-2.c,
      	gcc.dg/c2x-attr-maybe_unused-3.c: New tests.
      
      From-SVN: r278310
      Joseph Myers committed
    • MAINTAINERS: Change my email address as maintainer. · a91eb234
      ChangeLog:
      
      2019-11-15  Kelvin Nilsen  <kelvin@gcc.gnu.org>
      
      	* MAINTAINERS: Change my email address as maintainer.
      
      From-SVN: r278309
      Kelvin Nilsen committed
    • microblaze: fix PR65649 · 66f9ccd5
      microblaze-linux-musl build fails without this.
      
      (This is a rebase of an earlier patch posted on bugzilla.)
      
      gcc/ChangeLog:
      
      2019-11-15  Nick Clifton  <nickc@redhat.com>
      	    Szabolcs Nagy  <szabolcs.nagy@arm.com>
      
      	PR target/65649
      	* config/microblaze/microblaze.c (print_operand): Print value as long.
      
      Co-Authored-By: Szabolcs Nagy <szabolcs.nagy@arm.com>
      
      From-SVN: r278308
      Nick Clifton committed
    • ipa-inline.c (edge_badness, [...]): Revert accidental commit. · 03f00a6d
      
      	* ipa-inline.c (edge_badness, inline_small_functions): Revert
      	accidental commit.
      
      From-SVN: r278307
      Jan Hubicka committed
    • [amdgcn] Unfix registers for frame pointer · 969089ff
      Allow the registers used for the frame pointer to be used for other purposes
      if the frame pointer is not being used.
      
      2019-11-15  Kwok Cheung Yeung  <kcy@codesourcery.com>
      
      	gcc/
      	* config/gcn/gcn.h (FIXED_REGISTERS): Unfix frame pointer.
      	(CALL_USED_REGISTERS): Make frame pointer callee-saved.
      
      From-SVN: r278306
      Kwok Cheung Yeung committed
    • [amdgcn] Update lower bounds for the number of registers in non-leaf kernels · 87fdbe69
      Reduce the lower limits on the number of registers requested by non-leaf
      kernels to help improve CU occupancy.
      
      2019-11-15  Kwok Cheung Yeung  <kcy@codesourcery.com>
      
      	gcc/
      	* config/gcn/gcn.c (MAX_NORMAL_SGPR_COUNT, MAX_NORMAL_VGPR_COUNT): New.
      	(gcn_conditional_register_usage): Use constants in place of hard-coded
      	values.
      	(gcn_hsa_declare_function_name): Set lower bound for number of
      	SGPRs/VGPRs in non-leaf kernels to MAX_NORMAL_SGPR_COUNT and
      	MAX_NORMAL_VGPR_COUNT.
      
      From-SVN: r278305
      Kwok Cheung Yeung committed
    • ipa: Remove stray declaration · 1ca59cbe
      2019-11-15  Martin Jambor  <mjambor@suse.cz>
      
      	* ipa-utils.h (ipa_remove_useless_jump_functions): Remove stray
      	declaration.
      
      From-SVN: r278303
      Martin Jambor committed
    • [amdgcn] Restrict registers available to non-kernel functions · 342f9464
      Restrict the number of SGPRs and VGPRs available to non-kernel functions
      to improve compute-unit occupancy with multiple threads.
      
      2019-11-15  Kwok Cheung Yeung  <kcy@codesourcery.com>
      
      	gcc/
      	* config/gcn/gcn.c (default_requested_args): New.
      	(gcn_parse_amdgpu_hsa_kernel_attribute): Initialize requested args
      	set with default_requested_args.
      	(gcn_conditional_register_usage): Limit register usage of non-kernel
      	functions.  Reassign fixed registers if a non-standard set of args is
      	requested.
      	* config/gcn/gcn.h (FIXED_REGISTERS): Fix registers according to ABI.
      
      From-SVN: r278301
      Kwok Cheung Yeung committed