15 Aug, 2019 (29 commits)
2019-08-15  Martin Liska  <mliska@suse.cz>

    * tree-ssa-dce.c (propagate_necessity): We can no longer reach
    operators with no arguments here.
    (eliminate_unnecessary_stmts): Likewise.

From-SVN: r274529

Martin Liska committed

2019-08-15  Richard Biener  <rguenther@suse.de>

c-family/
    * c-common.c (c_stddef_cpp_builtins): When the GIMPLE FE is
    enabled, define __SIZETYPE__.

    * gcc.dg/pr80170.c: Adjust to use __SIZETYPE__.

From-SVN: r274528

Richard Biener committed

From-SVN: r274527

Uros Bizjak committed

    * config/i386/i386-features.c (general_scalar_chain::convert_insn)
    <case COMPARE>: Revert 2019-08-14 change.
    (convertible_comparison_p): Revert 2019-08-14 change.
    Return false for (TARGET_64BIT || mode != DImode).

From-SVN: r274526

Uros Bizjak committed

From-SVN: r274525

Aldy Hernandez committed

In this PR we were passing an ordinary non-built-in function to targetm.vectorize.builtin_md_vectorized_function, which is only supposed to handle BUILT_IN_MD.

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
    PR middle-end/91444
    * tree-vect-stmts.c (vectorizable_call): Check that the function
    is a BUILT_IN_MD function before passing it to
    targetm.vectorize.builtin_md_vectorized_function.

From-SVN: r274524

Richard Sandiford committed

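[Editor's sketch of the guard described above, assuming GCC's fndecl_built_in_p helper and the documented hook signature; the exact code in r274524 may differ.]

    /* Only built-in functions of class BUILT_IN_MD may be handed to the
       target hook; ordinary functions must be skipped.  */
    if (callee && fndecl_built_in_p (callee, BUILT_IN_MD))
      fndecl = targetm.vectorize.builtin_md_vectorized_function
        (callee, vectype_out, vectype_in);
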
This patch adds an exported function for testing whether a mode is an SVE mode. The ACLE will make more use of it, but there's already one place that can benefit.

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
    * config/aarch64/aarch64-protos.h (aarch64_sve_mode_p): Declare.
    * config/aarch64/aarch64.c (aarch64_sve_mode_p): New function.
    (aarch64_select_early_remat_modes): Use it.

From-SVN: r274523

Richard Sandiford committed

aarch64_simd_vector_alignment was only giving predicates 16-bit alignment in VLA mode, not VLS mode. I think the problem is latent because we can't yet create an ABI predicate type, but it seemed worth fixing in a standalone patch rather than as part of the main ACLE series. The ACLE patches have tests for this.

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
    * config/aarch64/aarch64.c (aarch64_simd_vector_alignment): Return
    16 for SVE predicates even if they are fixed-length.

From-SVN: r274522

Richard Sandiford committed

SVE defines an assembly alias:

   MOV pa.B, pb/Z, pc.B  ->  AND pa.B, pb/Z, pc.B, pc.B

Our and<mode>3 pattern was instead using the functionally-equivalent:

   AND pa.B, pb/Z, pb.B, pc.B
                   ^^^^

This patch duplicates pc.B instead so that the alias can be seen in disassembly. I wondered about using the alias in the pattern instead, but using AND explicitly seems to fit better with the pattern name and surrounding code.

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
    * config/aarch64/aarch64-sve.md (and<PRED_ALL:mode>3): Make the
    operand order match the MOV /Z alias.

From-SVN: r274521

Richard Sandiford committed

This patch makes us always pass an explicit vector pattern to aarch64_output_sve_cnt_immediate, rather than assuming it's ALL. The ACLE patches need to be able to pass in other values.

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
    * config/aarch64/aarch64.c (aarch64_output_sve_cnt_immediate): Take
    the vector pattern as an aarch64_svpattern argument.  Update the
    overloaded caller accordingly.
    (aarch64_output_sve_scalar_inc_dec): Update call accordingly.
    (aarch64_output_sve_vector_inc_dec): Likewise.

From-SVN: r274520

Richard Sandiford committed

aarch64_add_offset contains code to decompose all SVE VL-based constants into native operations. The worst-case fallback is to load the number of SVE elements into a register and use a general multiplication. This patch improves that fallback by reusing expand_mult if can_create_pseudo_p, rather than emitting a MULT pattern directly. In order to increase the chances of being able to use a simple add-and-shift, the patch also tries to compute VG * the lowest set bit of the multiplier, rather than always using CNTD as the basis for the multiplication path. This is tested by the ACLE patches but is really an independent improvement.

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
    * config/aarch64/aarch64.c (aarch64_add_offset): In the fallback
    multiplication case, try to compute VG * (lowest set bit) directly
    rather than always basing the multiplication on VG.  Use
    expand_mult for the multiplication if we can.

gcc/testsuite/
    * gcc.target/aarch64/sve/loop_add_4.c: Expect 10 INCWs and INCDs
    rather than 8.

From-SVN: r274519

Richard Sandiford committed

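[Editor's note: the lowest-set-bit trick above, as a standalone sketch; "decompose" is a hypothetical helper, not the patch's code. Factoring the multiplier m as (m & -m) * rest leaves an odd residual factor, which shift-and-add sequences can handle more often than the full multiplier.]

    #include <cstdint>

    // Decompose nonzero m into (1 << shift) * rest, where (1 << shift)
    // is the lowest set bit of m and rest is odd.  m * VG can then be
    // formed as (VG << shift) * rest, keeping the multiplication small.
    void decompose (uint64_t m, unsigned &shift, uint64_t &rest)
    {
      shift = __builtin_ctzll (m);   // position of the lowest set bit
      rest = m >> shift;             // remaining odd factor
    }
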
The scalar addition patterns allowed all the VL constants that ADDVL and ADDPL allow, but wrote the instructions as INC or DEC if possible (i.e. adding or subtracting a number of elements * [1, 16] when the source and target registers are the same). That works for the cases that the autovectoriser needs, but there are a few constants that INC and DEC can handle but ADDPL and ADDVL can't. E.g.:

   inch x0, all, mul #9

is not a multiple of the number of bytes in an SVE register, and so can't use ADDVL. It represents 36 times the number of bytes in an SVE predicate, putting it outside the range of ADDPL. This patch therefore adds separate alternatives for INC and DEC, tied to a new Uai constraint. It also adds an explicit "scalar" or "vector" to the function names, to avoid a clash with the existing support for vector INC and DEC.

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
    * config/aarch64/aarch64-protos.h
    (aarch64_sve_scalar_inc_dec_immediate_p): Declare.
    (aarch64_sve_inc_dec_immediate_p): Rename to...
    (aarch64_sve_vector_inc_dec_immediate_p): ...this.
    (aarch64_output_sve_addvl_addpl): Take a single rtx argument.
    (aarch64_output_sve_scalar_inc_dec): Declare.
    (aarch64_output_sve_inc_dec_immediate): Rename to...
    (aarch64_output_sve_vector_inc_dec): ...this.
    * config/aarch64/aarch64.c (aarch64_sve_scalar_inc_dec_immediate_p)
    (aarch64_output_sve_scalar_inc_dec): New functions.
    (aarch64_output_sve_addvl_addpl): Remove the base and offset
    arguments.  Only handle true ADDVL and ADDPL instructions;
    don't emit an INC or DEC.
    (aarch64_sve_inc_dec_immediate_p): Rename to...
    (aarch64_sve_vector_inc_dec_immediate_p): ...this.
    (aarch64_output_sve_inc_dec_immediate): Rename to...
    (aarch64_output_sve_vector_inc_dec): ...this.  Update call to
    aarch64_sve_vector_inc_dec_immediate_p.
    * config/aarch64/predicates.md (aarch64_sve_scalar_inc_dec_immediate)
    (aarch64_sve_plus_immediate): New predicates.
    (aarch64_pluslong_operand): Accept aarch64_sve_plus_immediate
    rather than aarch64_sve_addvl_addpl_immediate.
    (aarch64_sve_inc_dec_immediate): Rename to...
    (aarch64_sve_vector_inc_dec_immediate): ...this.  Update call to
    aarch64_sve_vector_inc_dec_immediate_p.
    (aarch64_sve_add_operand): Update accordingly.
    * config/aarch64/constraints.md (Uai): New constraint.
    (vsi): Update call to aarch64_sve_vector_inc_dec_immediate_p.
    * config/aarch64/aarch64.md (add<GPI:mode>3): Don't force the
    second operand into a register if it satisfies
    aarch64_sve_plus_immediate.
    (*add<GPI:mode>3_aarch64, *add<GPI:mode>3_poly_1): Add an
    alternative for Uai.  Update calls to
    aarch64_output_sve_addvl_addpl.
    * config/aarch64/aarch64-sve.md (add<mode>3): Call
    aarch64_output_sve_vector_inc_dec instead of
    aarch64_output_sve_inc_dec_immediate.

From-SVN: r274518

Richard Sandiford committed

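[Editor's check of the arithmetic in the example above, writing vq for the number of 128-bit granules in a vector; the [-32, 31] ADDVL/ADDPL immediate range is an assumption stated for context.]

    // A vector holds 16*vq bytes and 8*vq halfwords; a predicate holds
    // 2*vq bytes.  "inch x0, all, mul #9" adds 9 * 8*vq = 72*vq.
    constexpr int vec_bytes = 16, halfwords = 8, pred_bytes = 2;
    constexpr int added = 9 * halfwords;      // 72 (times vq)
    static_assert (added % vec_bytes != 0);   // 4.5 vectors: no ADDVL
    static_assert (added / pred_bytes == 36); // 36 > 31: outside ADDPL
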
The current SVE REV patterns follow the AArch64 scheme, in which UNSPEC_REV<NN> reverses elements within an <NN>-bit granule. E.g. UNSPEC_REV64 on VNx8HI reverses the four 16-bit elements within each 64-bit granule. The native SVE scheme is the other way around: UNSPEC_REV64 is seen as an operation on 64-bit elements, with REVB swapping bytes within the elements, REVH swapping halfwords, and so on. This fits SVE more naturally because the operation can then be predicated per <NN>-bit granule/element. Making the patterns use the Advanced SIMD scheme was more natural when all we cared about were permutes, since we could then use the source and target of the permute in their original modes. However, the ACLE does need patterns that follow the native scheme, treating them as operations on integer elements. This patch defines the patterns that way instead and updates the existing uses to match. This also brings in a couple of helper routines from the ACLE branch.

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
    * config/aarch64/iterators.md (UNSPEC_REVB, UNSPEC_REVH)
    (UNSPEC_REVW): New constants.
    (elem_bits): New mode attribute.
    (SVE_INT_UNARY): New int iterator.
    (optab): Handle UNSPEC_REV[BHW].
    (sve_int_op): New int attribute.
    (min_elem_bits): Handle VNx16QI and the predicate modes.
    * config/aarch64/aarch64-sve.md (*aarch64_sve_rev64<mode>)
    (*aarch64_sve_rev32<mode>, *aarch64_sve_rev16vnx16qi): Delete.
    (@aarch64_pred_<SVE_INT_UNARY:optab><SVE_I:mode>): New pattern.
    * config/aarch64/aarch64.c (aarch64_sve_data_mode): New function.
    (aarch64_sve_int_mode, aarch64_sve_rev_unspec): Likewise.
    (aarch64_split_sve_subreg_move): Use UNSPEC_REV[BHW] instead of
    unspecs based on the total width of the reversed data.
    (aarch64_evpc_rev_local): Likewise (for SVE only).  Use a
    reinterpret followed by a subreg on big-endian targets.

gcc/testsuite/
    * gcc.target/aarch64/sve/revb_1.c: Restrict to little-endian targets.
    Avoid including stdint.h.
    * gcc.target/aarch64/sve/revh_1.c: Likewise.
    * gcc.target/aarch64/sve/revw_1.c: Likewise.
    * gcc.target/aarch64/sve/revb_2.c: New big-endian test.
    * gcc.target/aarch64/sve/revh_2.c: Likewise.
    * gcc.target/aarch64/sve/revw_2.c: Likewise.

From-SVN: r274517

Richard Sandiford committed

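[Editor's illustration of the native scheme described above, as a scalar model: REVB on 64-bit elements swaps the bytes inside each element while the elements themselves stay in place.]

    #include <cstdint>

    // REVB with .D elements, modelled one lane at a time: each 64-bit
    // element has its bytes reversed; lane order is unchanged.
    void revb_d (uint64_t *lanes, int n)
    {
      for (int i = 0; i < n; ++i)
        lanes[i] = __builtin_bswap64 (lanes[i]);
    }
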
This patch makes the floating-point conditional FMA patterns provide the same /z alternatives as the integer patterns added by a previous patch. We can handle cases in which individual inputs are allocated to the same register as the output, so we don't need to force all registers to be different.

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>
            Kugan Vivekanandarajah  <kugan.vivekanandarajah@linaro.org>

gcc/
    * config/aarch64/aarch64-sve.md
    (*cond_<SVE_COND_FP_TERNARY:optab><SVE_F:mode>_any): Add /z
    alternatives in which one of the inputs is in the same register
    as the output.

gcc/testsuite/
    * gcc.target/aarch64/sve/cond_mla_5.c: Allow FMAD as well as FMLA
    and FMSB as well as FMLS.

Co-Authored-By: Kugan Vivekanandarajah <kuganv@linaro.org>

From-SVN: r274516

Richard Sandiford committed

We use EXT both to implement vec_extract for large indices and as a permute. In both cases we can use MOVPRFX to handle the case in which the first input and output can't be tied.

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
    * config/aarch64/aarch64-sve.md (*vec_extract<mode><Vel>_ext)
    (*aarch64_sve_ext<mode>): Add MOVPRFX alternatives.

gcc/testsuite/
    * gcc.target/aarch64/sve/ext_2.c: Expect a MOVPRFX.
    * gcc.target/aarch64/sve/ext_3.c: New test.

From-SVN: r274515

Richard Sandiford committed

The floating-point subtraction patterns don't need to handle subtraction of constants, since those go through the addition patterns instead. There was a missing MOVPRFX alternative for FSUBR though.

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
    * config/aarch64/aarch64-sve.md (*sub<SVE_F:mode>3): Remove immediate
    FADD and FSUB alternatives.  Add a MOVPRFX alternative for FSUBR.

From-SVN: r274514

Richard Sandiford committed

FABD and some immediate instructions were missing MOVPRFX alternatives. This is tested by the ACLE patches but is really an independent improvement.

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>
            Kugan Vivekanandarajah  <kugan.vivekanandarajah@linaro.org>

gcc/
    * config/aarch64/aarch64-sve.md (add<SVE_I:mode>3, sub<SVE_I:mode>3)
    (<LOGICAL:optab><SVE_I:mode>3, *add<SVE_F:mode>3, *mul<SVE_F:mode>3)
    (*fabd<SVE_F:mode>3): Add more MOVPRFX alternatives.

Co-Authored-By: Kugan Vivekanandarajah <kuganv@linaro.org>

From-SVN: r274513

Richard Sandiford committed

This patch makes us use reversed SVE shifts when the first operand can't be tied to the output but the second can. This is tested more thoroughly by the ACLE patches but is really an independent improvement.

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>
            Prathamesh Kulkarni  <prathamesh.kulkarni@linaro.org>

gcc/
    * config/aarch64/aarch64-sve.md (*v<ASHIFT:optab><SVE_I:mode>3):
    Add an alternative that uses reversed shifts.

gcc/testsuite/
    * gcc.target/aarch64/sve/shift_1.c: Accept reversed shifts.

Co-Authored-By: Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org>

From-SVN: r274512

Richard Sandiford committed

The neoversen1 tuning struct gives better performance on the Cortex-A76, so use that. The only difference from the current tuning is the function and label alignment settings. This gives about 1.3% improvement on SPEC2006 int and 0.3% on SPEC2006 fp.

    * config/aarch64/aarch64-cores.def (cortex-a76): Use neoversen1
    tuning struct.

From-SVN: r274511

Kyrylo Tkachov committed

This will be tested by the ACLE patches, but it's really an independent improvement.

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
    * config/aarch64/aarch64-sve.md (aarch64_<su>abd<mode>_3): Add
    a commutativity marker.

From-SVN: r274510

Richard Sandiford committed

This patch uses predicated MLA, MLS, MAD and MSB to implement conditional "FMA"s on integers. This also requires providing the unpredicated optabs (fma and fnma) since otherwise tree-ssa-math-opts.c won't try to use the conditional forms. We still want to use shifts and adds in preference to multiplications, so the patch makes the optab expanders check for that. The tests cover floating-point types too, which are already handled, and which were already tested to some extent by gcc.dg/vect.

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>
            Kugan Vivekanandarajah  <kugan.vivekanandarajah@linaro.org>

gcc/
    * config/aarch64/aarch64-protos.h (aarch64_prepare_sve_int_fma)
    (aarch64_prepare_sve_cond_int_fma): Declare.
    * config/aarch64/aarch64.c (aarch64_convert_mult_to_shift)
    (aarch64_prepare_sve_int_fma): New functions.
    (aarch64_prepare_sve_cond_int_fma): Likewise.
    * config/aarch64/aarch64-sve.md
    (cond_<SVE_INT_BINARY:optab><SVE_I:mode>): Add a "@" marker.
    (fma<SVE_I:mode>4, cond_fma<SVE_I:mode>, *cond_fma<SVE_I:mode>_2)
    (*cond_fma<SVE_I:mode>_4, *cond_fma<SVE_I:mode>_any)
    (fnma<SVE_I:mode>4, cond_fnma<SVE_I:mode>, *cond_fnma<SVE_I:mode>_2)
    (*cond_fnma<SVE_I:mode>_4, *cond_fnma<SVE_I:mode>_any): New patterns.
    (*madd<mode>): Rename to...
    (*fma<mode>4): ...this.
    (*msub<mode>): Rename to...
    (*fnma<mode>4): ...this.

gcc/testsuite/
    * gcc.target/aarch64/sve/cond_mla_1.c: New test.
    * gcc.target/aarch64/sve/cond_mla_1_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_mla_2.c: Likewise.
    * gcc.target/aarch64/sve/cond_mla_2_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_mla_3.c: Likewise.
    * gcc.target/aarch64/sve/cond_mla_3_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_mla_4.c: Likewise.
    * gcc.target/aarch64/sve/cond_mla_4_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_mla_5.c: Likewise.
    * gcc.target/aarch64/sve/cond_mla_5_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_mla_6.c: Likewise.
    * gcc.target/aarch64/sve/cond_mla_6_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_mla_7.c: Likewise.
    * gcc.target/aarch64/sve/cond_mla_7_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_mla_8.c: Likewise.
    * gcc.target/aarch64/sve/cond_mla_8_run.c: Likewise.

Co-Authored-By: Kugan Vivekanandarajah <kuganv@linaro.org>

From-SVN: r274509

Richard Sandiford committed

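[Editor's sketch of the kind of loop these patterns target, modelled loosely on the cond_mla tests listed above; not one of the actual testcases.]

    // A conditional integer multiply-add: with the new patterns the
    // true branch can become a single predicated MLA/MAD on SVE.
    void cond_mla (int *__restrict r, const int *a,
                   const int *b, const int *c, int n)
    {
      for (int i = 0; i < n; ++i)
        r[i] = a[i] > 0 ? b[i] * c[i] + r[i] : r[i];
    }
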
This patch lets us use the immediate forms of FADD, FSUB, FSUBR, FMUL, FMAXNM and FMINNM for conditional arithmetic. (We already use them for normal unconditional arithmetic.)

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>
            Kugan Vivekanandarajah  <kugan.vivekanandarajah@linaro.org>

gcc/
    * config/aarch64/aarch64.c (aarch64_print_vector_float_operand):
    Print 2.0 naturally.
    (aarch64_sve_float_mul_immediate_p): Return true for 2.0.
    * config/aarch64/predicates.md
    (aarch64_sve_float_negated_arith_immediate): New predicate,
    renamed from aarch64_sve_float_arith_with_sub_immediate.
    (aarch64_sve_float_arith_with_sub_immediate): Test for both
    positive and negative constants.
    (aarch64_sve_float_arith_with_sub_operand): Redefine as a register
    or an aarch64_sve_float_arith_with_sub_immediate.
    * config/aarch64/constraints.md (vsN): Use
    aarch64_sve_float_negated_arith_immediate.
    * config/aarch64/iterators.md (SVE_COND_FP_BINARY_I1): New int
    iterator.
    (sve_pred_fp_rhs2_immediate): New int attribute.
    * config/aarch64/aarch64-sve.md
    (cond_<SVE_COND_FP_BINARY:optab><SVE_F:mode>): Use
    sve_pred_fp_rhs1_operand and sve_pred_fp_rhs2_operand.
    (*cond_<SVE_COND_FP_BINARY_I1:optab><SVE_F:mode>_2_const)
    (*cond_<SVE_COND_FP_BINARY_I1:optab><SVE_F:mode>_any_const)
    (*cond_add<SVE_F:mode>_2_const, *cond_add<SVE_F:mode>_any_const)
    (*cond_sub<mode>_3_const, *cond_sub<mode>_any_const): New patterns.

gcc/testsuite/
    * gcc.target/aarch64/sve/cond_fadd_1.c: New test.
    * gcc.target/aarch64/sve/cond_fadd_1_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fadd_2.c: Likewise.
    * gcc.target/aarch64/sve/cond_fadd_2_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fadd_3.c: Likewise.
    * gcc.target/aarch64/sve/cond_fadd_3_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fadd_4.c: Likewise.
    * gcc.target/aarch64/sve/cond_fadd_4_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fsubr_1.c: Likewise.
    * gcc.target/aarch64/sve/cond_fsubr_1_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fsubr_2.c: Likewise.
    * gcc.target/aarch64/sve/cond_fsubr_2_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fsubr_3.c: Likewise.
    * gcc.target/aarch64/sve/cond_fsubr_3_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fsubr_4.c: Likewise.
    * gcc.target/aarch64/sve/cond_fsubr_4_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fmaxnm_1.c: Likewise.
    * gcc.target/aarch64/sve/cond_fmaxnm_1_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fmaxnm_2.c: Likewise.
    * gcc.target/aarch64/sve/cond_fmaxnm_2_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fmaxnm_3.c: Likewise.
    * gcc.target/aarch64/sve/cond_fmaxnm_3_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fmaxnm_4.c: Likewise.
    * gcc.target/aarch64/sve/cond_fmaxnm_4_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fminnm_1.c: Likewise.
    * gcc.target/aarch64/sve/cond_fminnm_1_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fminnm_2.c: Likewise.
    * gcc.target/aarch64/sve/cond_fminnm_2_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fminnm_3.c: Likewise.
    * gcc.target/aarch64/sve/cond_fminnm_3_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fminnm_4.c: Likewise.
    * gcc.target/aarch64/sve/cond_fminnm_4_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fmul_1.c: Likewise.
    * gcc.target/aarch64/sve/cond_fmul_1_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fmul_2.c: Likewise.
    * gcc.target/aarch64/sve/cond_fmul_2_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fmul_3.c: Likewise.
    * gcc.target/aarch64/sve/cond_fmul_3_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fmul_4.c: Likewise.
    * gcc.target/aarch64/sve/cond_fmul_4_run.c: Likewise.

Co-Authored-By: Kugan Vivekanandarajah <kuganv@linaro.org>

From-SVN: r274508

Richard Sandiford committed

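[Editor's sketch of conditional arithmetic that can now use an immediate form, modelled on the cond_fadd tests above; hypothetical source, not one of the testcases. SVE's FADD immediate form accepts 0.5 and 1.0.]

    void cond_fadd_imm (float *__restrict r, const float *a,
                        const int *p, int n)
    {
      // Where p[i] holds, compute a[i] + 1.0f; the addition can stay
      // as a predicated FADD with an immediate operand.
      for (int i = 0; i < n; ++i)
        r[i] = p[i] ? a[i] + 1.0f : a[i];
    }
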
This patch extends the FABD support so that it handles conditional arithmetic. We're relying on combine for this, since there's no associated IFN_COND_* (yet?).

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>
            Kugan Vivekanandarajah  <kugan.vivekanandarajah@linaro.org>

gcc/
    * config/aarch64/aarch64-sve.md (*aarch64_cond_abd<SVE_F:mode>_2)
    (*aarch64_cond_abd<SVE_F:mode>_3)
    (*aarch64_cond_abd<SVE_F:mode>_any): New patterns.

gcc/testsuite/
    * gcc.target/aarch64/sve/cond_fabd_1.c: New test.
    * gcc.target/aarch64/sve/cond_fabd_1_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fabd_2.c: Likewise.
    * gcc.target/aarch64/sve/cond_fabd_2_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fabd_3.c: Likewise.
    * gcc.target/aarch64/sve/cond_fabd_3_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fabd_4.c: Likewise.
    * gcc.target/aarch64/sve/cond_fabd_4_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_fabd_5.c: Likewise.
    * gcc.target/aarch64/sve/cond_fabd_5_run.c: Likewise.

Co-Authored-By: Kugan Vivekanandarajah <kuganv@linaro.org>

From-SVN: r274507

Richard Sandiford committed

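[Editor's sketch of the conditional FABD shape described above; hypothetical source, not one of the cond_fabd testcases.]

    void cond_fabd (double *__restrict r, const double *a,
                    const double *b, const int *p, int n)
    {
      // Where p[i] holds, compute |a[i] - b[i]|; combine can fuse the
      // subtract, the absolute value and the predicate into one FABD.
      for (int i = 0; i < n; ++i)
        r[i] = p[i] ? __builtin_fabs (a[i] - b[i]) : r[i];
    }
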
This patch extends the [SU]ABD support so that it handles conditional arithmetic. We're relying on combine for this, since there's no associated IFN_COND_* (yet?).

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>
            Kugan Vivekanandarajah  <kugan.vivekanandarajah@linaro.org>

gcc/
    * config/aarch64/aarch64-sve.md (*aarch64_cond_<su>abd<mode>_2)
    (*aarch64_cond_<su>abd<mode>_any): New patterns.

gcc/testsuite/
    * gcc.target/aarch64/sve/cond_abd_1.c: New test.
    * gcc.target/aarch64/sve/cond_abd_1_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_abd_2.c: Likewise.
    * gcc.target/aarch64/sve/cond_abd_2_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_abd_3.c: Likewise.
    * gcc.target/aarch64/sve/cond_abd_3_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_abd_4.c: Likewise.
    * gcc.target/aarch64/sve/cond_abd_4_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_abd_5.c: Likewise.
    * gcc.target/aarch64/sve/cond_abd_5_run.c: Likewise.

Co-Authored-By: Kugan Vivekanandarajah <kuganv@linaro.org>

From-SVN: r274506

Richard Sandiford committed

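[Editor's sketch of the integer analogue; hypothetical source, not one of the cond_abd testcases.]

    void cond_uabd (unsigned *__restrict r, const unsigned *a,
                    const unsigned *b, const int *p, int n)
    {
      // A guarded absolute difference: a candidate for predicated UABD.
      for (int i = 0; i < n; ++i)
        r[i] = p[i] ? (a[i] > b[i] ? a[i] - b[i] : b[i] - a[i]) : r[i];
    }
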
This patch adds support for IFN_COND shifts left and shifts right. This is mostly mechanical, but since we try to handle conditional operations in the same way as unconditional operations in match.pd, we need to support IFN_COND shifts by scalars as well as vectors. E.g.:

    IFN_COND_SHL (cond, a, { 1, 1, ... }, fallback)

and:

    IFN_COND_SHL (cond, a, 1, fallback)

are the same operation, with:

    (for shiftrotate (lrotate rrotate lshift rshift)
     ...
     /* Prefer vector1 << scalar to vector1 << vector2
        if vector2 is uniform.  */
     (for vec (VECTOR_CST CONSTRUCTOR)
      (simplify
       (shiftrotate @0 vec@1)
       (with { tree tem = uniform_vector_p (@1); }
        (if (tem)
         (shiftrotate @0 { tem; }))))))

preferring the latter. The patch copes with this by extending create_convert_operand_from to handle scalar-to-vector conversions.

2019-08-15  Richard Sandiford  <richard.sandiford@arm.com>
            Prathamesh Kulkarni  <prathamesh.kulkarni@linaro.org>

gcc/
    * internal-fn.def (IFN_COND_SHL, IFN_COND_SHR): New internal
    functions.
    * internal-fn.c (FOR_EACH_CODE_MAPPING): Handle shifts.
    * match.pd (UNCOND_BINARY, COND_BINARY): Likewise.
    * optabs.def (cond_ashl_optab, cond_ashr_optab, cond_lshr_optab):
    New optabs.
    * optabs.h (create_convert_operand_from): Expand comment.
    * optabs.c (maybe_legitimize_operand): Allow implicit broadcasts
    when mapping scalar rtxes to vector operands.
    * config/aarch64/iterators.md (SVE_INT_BINARY): Add ashift,
    ashiftrt and lshiftrt.
    (sve_int_op, sve_int_op_rev, sve_pred_int_rhs2_operand): Handle them.
    * config/aarch64/aarch64-sve.md (*cond_<optab><mode>_2_const)
    (*cond_<optab><mode>_any_const): New patterns.

gcc/testsuite/
    * gcc.target/aarch64/sve/cond_shift_1.c: New test.
    * gcc.target/aarch64/sve/cond_shift_1_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_2.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_2_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_3.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_3_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_4.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_4_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_5.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_5_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_6.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_6_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_7.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_7_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_8.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_8_run.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_9.c: Likewise.
    * gcc.target/aarch64/sve/cond_shift_9_run.c: Likewise.

Co-Authored-By: Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org>

From-SVN: r274505

Richard Sandiford committed

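[Editor's sketch of a conditional shift by a uniform scalar, the case the match.pd rule above canonicalizes; hypothetical source, not one of the cond_shift testcases.]

    void cond_shl (int *__restrict r, const int *a, const int *p, int n)
    {
      // A conditional shift by a uniform amount; with IFN_COND_SHL the
      // vectoriser can keep this as a single predicated LSL.
      for (int i = 0; i < n; ++i)
        r[i] = p[i] ? a[i] << 3 : a[i];
    }
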
2019-08-15  Martin Liska  <mliska@suse.cz>

    PR ipa/91438
    * cgraph.c (cgraph_node::remove): When setting n->origin = NULL
    for all nested functions, reset also next_nested.

From-SVN: r274504

Martin Liska committed

2019-08-15  Martin Liska  <mliska@suse.cz>

    * cgraph.c (cgraph_node::verify_node): Verify origin, nested
    and next_nested.

From-SVN: r274503

Martin Liska committed

2019-08-15  Martin Liska  <mliska@suse.cz>

    PR ipa/91404
    * passes.c (order): Remove.
    (uid_hash_t): Likewise.
    (remove_cgraph_node_from_order): Remove from set
    of pointers (cgraph_node *).
    (insert_cgraph_node_to_order): New.
    (duplicate_cgraph_node_to_order): New.
    (do_per_function_toporder): Register all 3 cgraph hooks.
    Skip removed_nodes now as we know about all of them.

From-SVN: r274502

Martin Liska committed

Daily bump.

From-SVN: r274501

GCC Administrator committed
14 Aug, 2019 (11 commits)
gcc/testsuite/ChangeLog:
    * gcc.dg/strlenopt-73.c: Restrict 128-bit tests to i386.

From-SVN: r274495

Martin Sebor committed

The std::make_unique function wasn't added until C++14, and neither was the std::complex_literals namespace.

gcc/cp:
    PR c++/91436
    * name-lookup.c (get_std_name_hint): Fix min_dialect field for
    complex_literals and make_unique entries.

gcc/testsuite:
    PR c++/91436
    * g++.dg/lookup/missing-std-include-5.C: Limit test to C++14 and up.
    * g++.dg/lookup/missing-std-include-6.C: Don't check make_unique in
    test that runs for C++11.
    * g++.dg/lookup/missing-std-include-8.C: Check make_unique here.

From-SVN: r274492

Jonathan Wakely committed

This non-standard extension is redundant and unused by the library.

    * include/std/type_traits (__is_nullptr_t): Add deprecated attribute.

From-SVN: r274491

Jonathan Wakely committed

i386-expand.c (ix86_expand_vector_init_one_nonzero): Use vector_set path for TARGET_MMX_WITH_SSE && TARGET_SSE4_1.

    * config/i386/i386-expand.c (ix86_expand_vector_init_one_nonzero)
    <case E_V8QImode>: Use vector_set path for
    TARGET_MMX_WITH_SSE && TARGET_SSE4_1.
    (ix86_expand_vector_init_one_nonzero) <case E_V8QImode>: Do not
    widen for TARGET_MMX_WITH_SSE && TARGET_SSE4_1.

From-SVN: r274490

Uros Bizjak committed

2019-08-14  Christophe Lyon  <christophe.lyon@linaro.org>

    * gcc.c-torture/execute/noinit-attribute.c: Fix typo.

From-SVN: r274489

Christophe Lyon committed

2019-08-14  Edward Smith-Rowland  <3dw4rd@verizon.net>

Implement C++20 p0879 - Constexpr for swap and swap related functions.

    * include/std/version (__cpp_lib_constexpr_swap_algorithms):
    New macro.
    * include/bits/algorithmfwd.h (__cpp_lib_constexpr_swap_algorithms):
    New macro.
    (iter_swap, make_heap, next_permutation, partial_sort_copy, pop_heap)
    (prev_permutation, push_heap, reverse, rotate, sort_heap, swap)
    (swap_ranges, nth_element, partial_sort, sort): Add constexpr.
    * include/bits/move.h (swap): Add constexpr.
    * include/bits/stl_algo.h (__move_median_to_first, __reverse, reverse)
    (__gcd, __rotate, rotate, __partition, __heap_select)
    (__partial_sort_copy, partial_sort_copy, __unguarded_partition)
    (__unguarded_partition_pivot, __partial_sort, __introsort_loop)
    (__sort, __introselect, __chunk_insertion_sort, next_permutation)
    (prev_permutation, partition, partial_sort, nth_element, sort)
    (__iter_swap::iter_swap, iter_swap, swap_ranges): Add constexpr.
    * include/bits/stl_algobase.h (__iter_swap::iter_swap, iter_swap)
    (swap_ranges): Add constexpr.
    * include/bits/stl_heap.h (__push_heap, push_heap, __adjust_heap)
    (__pop_heap, pop_heap, __make_heap, make_heap, __sort_heap)
    (sort_heap): Add constexpr.
    * include/std/type_traits (swap): Add constexpr.
    * testsuite/25_algorithms/headers/algorithm/synopsis.cc: Add
    constexpr.
    * testsuite/25_algorithms/iter_swap/constexpr.cc: New test.
    * testsuite/25_algorithms/make_heap/constexpr.cc: New test.
    * testsuite/25_algorithms/next_permutation/constexpr.cc: New test.
    * testsuite/25_algorithms/nth_element/constexpr.cc: New test.
    * testsuite/25_algorithms/partial_sort/constexpr.cc: New test.
    * testsuite/25_algorithms/partial_sort_copy/constexpr.cc: New test.
    * testsuite/25_algorithms/partition/constexpr.cc: New test.
    * testsuite/25_algorithms/pop_heap/constexpr.cc: New test.
    * testsuite/25_algorithms/prev_permutation/constexpr.cc: New test.
    * testsuite/25_algorithms/push_heap/constexpr.cc: New test.
    * testsuite/25_algorithms/reverse/constexpr.cc: New test.
    * testsuite/25_algorithms/rotate/constexpr.cc: New test.
    * testsuite/25_algorithms/sort/constexpr.cc: New test.
    * testsuite/25_algorithms/sort_heap/constexpr.cc: New test.
    * testsuite/25_algorithms/swap/constexpr.cc: New test.
    * testsuite/25_algorithms/swap_ranges/constexpr.cc: New test.

From-SVN: r274488

Edward Smith-Rowland committed

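[Editor's sketch of what P0879 enables: sorting inside a constant expression. Assumes -std=c++2a and a libstdc++ with this patch.]

    #include <algorithm>
    #include <array>

    constexpr std::array<int, 5> sorted ()
    {
      std::array<int, 5> a{3, 1, 4, 1, 5};
      std::sort (a.begin (), a.end ());  // constexpr as of P0879
      return a;
    }

    // The whole sort runs at compile time.
    static_assert (sorted ()[0] == 1 && sorted ()[4] == 5);
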
2019-08-14  Bernd Edlinger  <bernd.edlinger@hotmail.de>

    * builtins.c (expand_builtin_init_descriptor): Set memory alignment.

From-SVN: r274487

Bernd Edlinger committed

gcc/testsuite/ChangeLog:
    PR tree-optimization/91294
    * gcc.dg/strlenopt-44.c: Adjust tested result.
    * gcc.dg/strlenopt-70.c: Avoid exercising unimplemented optimization.
    * gcc.dg/strlenopt-73.c: New test.
    * gcc.dg/strlenopt-74.c: New test.
    * gcc.dg/strlenopt-75.c: New test.
    * gcc.dg/strlenopt-76.c: New test.
    * gcc.dg/strlenopt-77.c: New test.

gcc/ChangeLog:
    PR tree-optimization/91294
    * tree-ssa-strlen.c (handle_store): Avoid treating lower bound of
    source length as exact.

From-SVN: r274486

Martin Sebor committed

    * parser.c (cp_parser_postfix_open_square_expression): Don't warn
    about a deprecated comma here.  Pass warn_comma_subscript down to
    cp_parser_expression.
    (cp_parser_expression): New bool parameter.  Warn about uses of
    a comma operator within a subscripting expression.
    (cp_parser_skip_to_closing_square_bracket): Revert to pre-r274121
    state.
    (cp_parser_skip_to_closing_square_bracket_1): Remove.

    * g++.dg/cpp2a/comma5.C: New test.

Co-Authored-By: Marek Polacek <polacek@redhat.com>

From-SVN: r274483

Jakub Jelinek committed

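[Editor's example of the construct being diagnosed (C++20 deprecates a top-level comma inside a subscript, P1161); parenthesizing the comma expression avoids the warning.]

    int f (int *a, int i, int j)
    {
      int x = a[i, j];    // deprecated in C++20: comma inside []
      int y = a[(i, j)];  // OK: explicitly a comma expression
      return x + y;
    }
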
Similar to what already exists for TI msp430 or in TI compilers for arm, this patch adds support for the "noinit" attribute. It is convenient for embedded targets where the user wants to keep the value of some data when the program is restarted: such variables are not zero-initialized. It is mostly a helper/shortcut to placing variables in a dedicated section.

It's probably desirable to add the following chunk to the GNU linker:

diff --git a/ld/emulparams/armelf.sh b/ld/emulparams/armelf.sh
index 272a8bc..9555cec 100644
--- a/ld/emulparams/armelf.sh
+++ b/ld/emulparams/armelf.sh
@@ -10,7 +10,19 @@ OTHER_TEXT_SECTIONS='*(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx)'
 OTHER_BSS_SYMBOLS="${CREATE_SHLIB+PROVIDE (}__bss_start__ = .${CREATE_SHLIB+)};"
 OTHER_BSS_END_SYMBOLS="${CREATE_SHLIB+PROVIDE (}_bss_end__ = .${CREATE_SHLIB+)}; ${CREATE_SHLIB+PROVIDE (}__bss_end__ = .${CREATE_SHLIB+)};"
 OTHER_END_SYMBOLS="${CREATE_SHLIB+PROVIDE (}__end__ = .${CREATE_SHLIB+)};"
-OTHER_SECTIONS='.note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) }'
+OTHER_SECTIONS='
+.note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) }
+ /* This section contains data that is not initialised during load
+    *or* application reset.  */
+ .noinit (NOLOAD) :
+ {
+   . = ALIGN(2);
+   PROVIDE (__noinit_start = .);
+   *(.noinit)
+   . = ALIGN(2);
+   PROVIDE (__noinit_end = .);
+ }
+'

so that the noinit section has the "NOLOAD" flag. I added a testcase in gcc.c-torture/execute, gated by the new noinit effective-target. Finally, I tested on arm-eabi, but not on msp430 for which I do not have the environment.

gcc/ChangeLog:
2019-08-14  Christophe Lyon  <christophe.lyon@linaro.org>

    * doc/extend.texi: Add "noinit" attribute documentation.
    * doc/sourcebuild.texi: Add noinit effective target documentation.
    * varasm.c (default_section_type_flags): Add support for "noinit"
    section.
    (default_elf_select_section): Add support for "noinit" attribute.
    * config/msp430/msp430.c (msp430_attribute_table): Remove "noinit"
    entry.

gcc/c-family/ChangeLog:
2019-08-14  Christophe Lyon  <christophe.lyon@linaro.org>

    * c-attribs.c (c_common_attribute_table): Add "noinit" entry.
    Add exclusion with "section" attribute.
    (attr_noinit_exclusions): New table.
    (handle_noinit_attribute): New function.

gcc/testsuite/ChangeLog:
2019-08-14  Christophe Lyon  <christophe.lyon@linaro.org>

    * lib/target-supports.exp (check_effective_target_noinit): New proc.
    * gcc.c-torture/execute/noinit-attribute.c: New test.

From-SVN: r274482

Christophe Lyon committed

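[Editor's sketch of typical use of the new attribute, per the description above; hypothetical variable name.]

    /* Survives an application reset: the variable is placed in .noinit
       and is not zero-initialised by the C runtime.  Per the patch it
       cannot have an initializer and excludes the "section" attribute.  */
    int boot_count __attribute__ ((noinit));
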
2019-08-14  Richard Biener  <rguenther@suse.de>
            Uroš Bizjak  <ubizjak@gmail.com>

    PR target/91154
    * config/i386/i386-features.h (scalar_chain::scalar_chain): Add
    mode arguments.
    (scalar_chain::smode): New member.
    (scalar_chain::vmode): Likewise.
    (dimode_scalar_chain): Rename to...
    (general_scalar_chain): ...this.
    (general_scalar_chain::general_scalar_chain): Take mode arguments.
    (timode_scalar_chain::timode_scalar_chain): Initialize scalar_chain
    base with TImode and V1TImode.
    * config/i386/i386-features.c (scalar_chain::scalar_chain): Adjust.
    (general_scalar_chain::vector_const_cost): Adjust for SImode chains.
    (general_scalar_chain::compute_convert_gain): Likewise.  Add
    {S,U}{MIN,MAX} support.
    (general_scalar_chain::replace_with_subreg): Use vmode/smode.
    (general_scalar_chain::make_vector_copies): Likewise.  Handle
    non-DImode chains appropriately.
    (general_scalar_chain::convert_reg): Likewise.
    (general_scalar_chain::convert_op): Likewise.
    (general_scalar_chain::convert_insn): Likewise.  Add
    fatal_insn_not_found if the result is not recognized.
    (convertible_comparison_p): Pass in the scalar mode and use that.
    (general_scalar_to_vector_candidate_p): Likewise.  Rename from
    dimode_scalar_to_vector_candidate_p.  Add {S,U}{MIN,MAX} support.
    (scalar_to_vector_candidate_p): Remove by inlining into single
    caller.
    (general_remove_non_convertible_regs): Rename from
    dimode_remove_non_convertible_regs.
    (remove_non_convertible_regs): Remove by inlining into single
    caller.
    (convert_scalars_to_vector): Handle SImode and DImode chains in
    addition to TImode chains.
    * config/i386/i386.md (<maxmin><MAXMIN_IMODE>3): New expander.
    (*<maxmin><MAXMIN_IMODE>3_1): New insn-and-split.
    (*<maxmin>di3_doubleword): Likewise.

    * gcc.target/i386/pr91154.c: New testcase.
    * gcc.target/i386/minmax-3.c: Likewise.
    * gcc.target/i386/minmax-4.c: Likewise.
    * gcc.target/i386/minmax-5.c: Likewise.
    * gcc.target/i386/minmax-6.c: Likewise.
    * gcc.target/i386/minmax-1.c: Add -mno-stv.
    * gcc.target/i386/minmax-2.c: Likewise.

Co-Authored-By: Uros Bizjak <ubizjak@gmail.com>

From-SVN: r274481

Richard Biener committed