- 08 May, 2018 23 commits
gcc/ChangeLog: 2018-05-08 Kelvin Nilsen <kelvin@gcc.gnu.org> * doc/extend.texi (PowerPC Built-in Functions): Rename this subsection. (Basic PowerPC Built-in Functions): The new name of the subsection previously known as "PowerPC Built-in Functions". (Basic PowerPC Built-in Functions Available on all Configurations): New subsubsection. (Basic PowerPC Built-in Functions Available on ISA 2.05): New subsubsection. (Basic PowerPC Built-in Functions Available on ISA 2.06): New subsubsection. (Basic PowerPC Built-in Functions Available on ISA 2.07): New subsubsection. (Basic PowerPC Built-in Functions Available on ISA 3.0): New subsubsection. From-SVN: r260048
Kelvin Nilsen committed -
PR target/85693 * gcc.target/i386/pr85693.c: New test. From-SVN: r260047
Uros Bizjak committed -
* include/bits/regex_automaton.h (_NFA_base::_M_paren_stack, _NFA): Use normal std::vector even in Debug Mode. From-SVN: r260046
Jonathan Wakely committed -
PR target/85683 * config/i386/i386.md: Add peepholes for mem {+,-,&,|,^}= x; mem != 0 after cmpelim optimization. * gcc.target/i386/pr49095.c: Add -masm=att to dg-options. Add scan-assembler-times checks verifying that, except for the [fh]*xor functions, no function uses any load instructions. From-SVN: r260045
Jakub Jelinek committed -
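A minimal sketch, assuming this is the shape of source the new peepholes target: a read-modify-write on memory whose result is then tested against zero, so the compare can reuse the flags already produced by the memory operation.

  extern void g (void);

  void
  f (int *p, int x)
  {
    *p += x;        /* mem += x: arithmetic performed directly on memory.  */
    if (*p != 0)    /* mem != 0: after the peephole, this test can reuse the
                       flags set by the addition instead of reloading *p.  */
      g ();
  }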
Restore the behaviour in GCC 8 and earlier where _GLIBCXX_USE_FLOAT128 is not defined when configure detects support is missing. This avoids having three states where the macro is either 1, 0, or undefined. PR libstdc++/85672 * include/Makefile.am [!ENABLE_FLOAT128]: Change c++config.h entry to #undef _GLIBCXX_USE_FLOAT128 instead of defining it to zero. * include/Makefile.in: Regenerate. * include/bits/c++config (_GLIBCXX_USE_FLOAT128): Move definition within conditional block. From-SVN: r260043
Jonathan Wakely committed -
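A minimal sketch (not libstdc++ code) of why collapsing the macro back to two states matters: the preprocessor treats an undefined macro and a macro defined to 0 identically under #if, but not under #ifdef, so a three-state macro invites inconsistent checks.

  #if _GLIBCXX_USE_FLOAT128
  /* Reached only when the macro is defined to a non-zero value.  */
  typedef __float128 widest_float;
  #endif

  #ifdef _GLIBCXX_USE_FLOAT128
  /* Reached whenever the macro is defined at all -- including when it is
     defined to 0 -- which is exactly the state this change removes.  */
  #endif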
2018-05-08 Olga Makhotina <olga.makhotina@intel.com> gcc/ * config.gcc: Support "goldmont". * config/i386/driver-i386.c (host_detect_local_cpu): Detect "goldmont". * config/i386/i386-c.c (ix86_target_macros_internal): Handle PROCESSOR_GOLDMONT. * config/i386/i386.c (m_GOLDMONT): Define. (processor_target_table): Add "goldmont". (PTA_GOLDMONT): Define. (ix86_lea_outperforms): Add TARGET_GOLDMONT. (get_builtin_code_for_version): Handle PROCESSOR_GOLDMONT. (fold_builtin_cpu): Add M_INTEL_GOLDMONT. (fold_builtin_cpu): Add "goldmont". (ix86_add_stmt_cost): Add TARGET_GOLDMONT. (ix86_option_override_internal): Add "goldmont". * config/i386/i386.h (processor_costs): Define TARGET_GOLDMONT. (processor_type): Add PROCESSOR_GOLDMONT. * config/i386/i386.md: Add CPU "glm". * config/i386/glm.md: New file. * config/i386/x86-tune.def: Add m_GOLDMONT. * doc/invoke.texi: Add goldmont as x86 -march=/-mtune= CPU type. libgcc/ * config/i386/cpuinfo.h (processor_types): Add INTEL_GOLDMONT. * config/i386/cpuinfo.c (get_intel_cpu): Detect Goldmont. gcc/testsuite/ * gcc.target/i386/builtin_target.c: Test goldmont. * gcc.target/i386/funcspec-56.inc: Tests for arch=goldmont and arch=silvermont. From-SVN: r260042
Olga Makhotina committed -
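Since fold_builtin_cpu now knows the "goldmont" name, the new CPU should be detectable through the existing __builtin_cpu_* interface; a small hedged example (the built-ins themselves predate this patch):

  #include <stdio.h>

  int
  main (void)
  {
    __builtin_cpu_init ();
    if (__builtin_cpu_is ("goldmont"))
      puts ("Running on an Intel Goldmont core");
    return 0;
  }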
PR target/85572 * config/i386/i386.c (ix86_expand_sse2_abs): Handle E_V2DImode and E_V4DImode. * config/i386/sse.md (abs<mode>2): Use VI_AVX2 iterator instead of VI1248_AVX512VL_AVX512BW. Handle V2DImode and V4DImode if not TARGET_AVX512VL using ix86_expand_sse2_abs. Formatting fixes. * g++.dg/other/sse2-pr85572-1.C: New test. * g++.dg/other/sse2-pr85572-2.C: New test. * g++.dg/other/sse4-pr85572-1.C: New test. * g++.dg/other/avx2-pr85572-1.C: New test. From-SVN: r260041
Jakub Jelinek committed -
PR target/85317 * config/i386/i386.c (ix86_fold_builtin): Handle IX86_BUILTIN_{,P}MOVMSK{PS,PD,B}{,128,256}. * gcc.target/i386/pr85317.c: New test. * gcc.target/i386/avx2-vpmovmskb-2.c (avx2_test): Add asm volatile optimization barrier to avoid optimizing away the expected insn. From-SVN: r260040
Jakub Jelinek committed -
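A minimal sketch of the kind of call the new ix86_fold_builtin handling should let the compiler evaluate at compile time, assuming the usual behaviour of folding builtins with constant arguments (the intrinsics below are the standard x86 ones, not taken from the patch):

  #include <immintrin.h>

  int
  movemask_of_constant (void)
  {
    /* Element 1 is negative and element 0 is positive, so the sign-bit
       mask is binary 10; the whole call should now fold to the constant 2.  */
    __m128d v = _mm_set_pd (-1.0, 1.0);
    return _mm_movemask_pd (v);
  }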
PR target/85480 * config/i386/sse.md (ssequaterinsnmode): New mode attribute. (*<extract_type>_vinsert<shuffletype><extract_suf>_0): New pattern. * gcc.target/i386/avx512dq-pr85480-1.c: New test. * gcc.target/i386/avx512dq-pr85480-2.c: New test. From-SVN: r260039
Jakub Jelinek committed -
2018-05-08 Richard Sandiford <richard.sandiford@linaro.org> gcc/testsuite/ * g++.dg/other/sve_const_pred_1.C: Rename to... * g++.target/aarch64/sve/const_pred_1.C: ...this. Remove aarch64 target selectors and explicit -march options. * g++.dg/other/sve_const_pred_2.C: Rename to... * g++.target/aarch64/sve/const_pred_2.C: ...this and adjust likewise. * g++.dg/other/sve_const_pred_3.C: Rename to... * g++.target/aarch64/sve/const_pred_3.C: ...this and adjust likewise. * g++.dg/other/sve_const_pred_4.C: Rename to... * g++.target/aarch64/sve/const_pred_4.C: ...this and adjust likewise. * g++.dg/other/sve_tls_2.C: Rename to... * g++.target/aarch64/sve/tls_2.C: ...this and adjust likewise. * g++.dg/other/sve_vcond_1.C: Rename to... * g++.target/aarch64/sve/vcond_1.C: ...this and adjust likewise. * g++.dg/other/sve_vcond_1_run.C: Rename to... * g++.target/aarch64/sve/vcond_1_run.C: ...this and adjust likewise. From-SVN: r260038
Richard Sandiford committed -
2018-05-08 Richard Sandiford <richard.sandiford@linaro.org> gcc/testsuite/ PR testsuite/85586 * gcc.dg/vect/pr85586.c: Restrict LOOP VECTORIZED test to !vect_no_align. From-SVN: r260036
Richard Sandiford committed -
2018-05-08 Paolo Carlini <paolo.carlini@oracle.com> PR c++/57429 * g++.dg/cpp0x/deleted14.C: New. From-SVN: r260035
Paolo Carlini committed -
* configure.host: Add RISC-V support. * Makefile.am: Likewise. * Makefile.in: Regenerate. * src/riscv/ffi.c, src/riscv/ffitarget.h, src/riscv/sysv.S: New files. From-SVN: r260033
Andreas Schwab committed -
There are a number of places in parsecpu.awk where I've managed to get the operator precedence between ! and 'in' incorrect (! binds more tightly). In most cases this just makes a consistency test ineffective, but in a few cases it means we fail to correctly diagnose errors by the user (for example, when passing an invalid cpu or architecture name to configure). This patch fixes all the cases I could find, based on searching for all uses of the two operators in the same expression. The tweak to the API of check_fpu is to bring it into line with the other check functions - it now returns the result rather than printing it directly. The caller now does the printing, in the same way that the chkarch and chkcpu commands do. PR target/85658 * config/arm/parsecpu.awk (check_cpu): Fix operator precedence. (check_arch): Likewise. (check_fpu): Return the result rather than printing it. (end arch): Fix operator precedence. (end cpu): Likewise. (END): Print the result from check_fpu. From-SVN: r260032
Richard Earnshaw committed -
This patch adds SVE patterns that combine a PTRUE-predicated comparison with a separate AND. The main benefit is for optimising ANDs with the loop predicate, as in the testcase. However, one of the potential drawbacks is that it triggers even for cases in which two naturally-parallel comparisons are ANDed together. Whether that's a win or a loss will depend on the schedule, but it has the potential to be a win more often than a loss. The combine patterns are undeniably ugly. One way of getting around them would be to allow 1->1 "splits" when combining 2 instructions, as well as 1->2 splits when combining more than 2 instructions (although that wouldn't really be a split). Another would be to have a way of defining target-specific rtx simplifications. branches/ARM/sve-branch has a prototype implementation of that, but it would need some clean-up before being ready to submit. It would also be good to make it closer to the match.pd style. Until then, I think what the combine patterns are doing is the "correct" implementation given the current infrastructure. 2018-05-08 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * config/aarch64/aarch64-sve.md (*pred_cmp<cmp_op><mode>_combine) (*pred_cmp<cmp_op><mode>, *fcm<cmp_op><mode>_and_combine) (*fcmuo<mode>_and_combine, *fcm<cmp_op><mode>_and) (*fcmuo<mode>_and): New patterns. gcc/testsuite/ * gcc.target/aarch64/sve/vcond_6.c: Do not expect any ANDs. XFAIL the BIC test. * gcc.target/aarch64/sve/vcond_7.c: New test. * gcc.target/aarch64/sve/vcond_7_run.c: Likewise. Co-Authored-By: Alan Hayward <alan.hayward@arm.com> Co-Authored-By: David Sherwood <david.sherwood@arm.com> From-SVN: r260031
Richard Sandiford committed -
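A minimal sketch, assuming it mirrors the shape of the new vcond_7.c test (which is not reproduced here), of a loop where two comparisons feed one AND that must also be combined with the loop's governing predicate; the new patterns fold that AND into the predicated compares:

  void
  select_in_range (int *__restrict dst, const int *__restrict a,
                   const int *__restrict b, int n)
  {
    for (int i = 0; i < n; ++i)
      dst[i] = (a[i] > 0 && b[i] < 16) ? a[i] : b[i];
  }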
2018-05-08 Paolo Carlini <paolo.carlini@oracle.com> PR c++/70563 * g++.dg/cpp0x/sfinae62.C: New. From-SVN: r260030
Paolo Carlini committed -
This patch rewrites the SVE comparison handling so that it uses UNSPEC_MERGE_PTRUE for comparisons that are known to be predicated on a PTRUE, for consistency with other patterns. Specific unspecs are then only needed for truly predicated floating-point comparisons, such as those used in the expansion of UNEQ for flag_trapping_math. The patch also makes sure that the comparison expanders attach a REG_EQUAL note to instructions that use UNSPEC_MERGE_PTRUE, so passes can use that as an alternative to the unspec pattern. (This happens automatically for optabs. The problem was that this code emits instruction patterns directly.) No specific benefit on its own, but it lays the groundwork for the next patch. 2018-05-08 Richard Sandiford <richard.sandiford@linaro.org> gcc/ * config/aarch64/iterators.md (UNSPEC_COND_LO, UNSPEC_COND_LS) (UNSPEC_COND_HI, UNSPEC_COND_HS, UNSPEC_COND_UO): Delete. (SVE_INT_CMP, SVE_FP_CMP): New code iterators. (cmp_op, sve_imm_con): New code attributes. (SVE_COND_INT_CMP, imm_con): Delete. (cmp_op): Remove above unspecs from int attribute. * config/aarch64/aarch64-sve.md (*vec_cmp<cmp_op>_<mode>): Rename to... (*cmp<cmp_op><mode>): ...this. Use UNSPEC_MERGE_PTRUE instead of comparison-specific unspecs. (*vec_cmp<cmp_op>_<mode>_ptest): Rename to... (*cmp<cmp_op><mode>_ptest): ...this and adjust likewise. (*vec_cmp<cmp_op>_<mode>_cc): Rename to... (*cmp<cmp_op><mode>_cc): ...this and adjust likewise. (*vec_fcm<cmp_op><mode>): Rename to... (*fcm<cmp_op><mode>): ...this and adjust likewise. (*vec_fcmuo<mode>): Rename to... (*fcmuo<mode>): ...this and adjust likewise. (*pred_fcm<cmp_op><mode>): New pattern. * config/aarch64/aarch64.c (aarch64_emit_unop, aarch64_emit_binop) (aarch64_emit_sve_ptrue_op, aarch64_emit_sve_ptrue_op_cc): New functions. (aarch64_unspec_cond_code): Remove handling of LTU, GTU, LEU, GEU and UNORDERED. (aarch64_gen_unspec_cond, aarch64_emit_unspec_cond): Delete. (aarch64_emit_sve_predicated_cond): New function. (aarch64_expand_sve_vec_cmp_int): Use aarch64_emit_sve_ptrue_op_cc. (aarch64_emit_unspec_cond_or): Replace with... (aarch64_emit_sve_or_conds): ...this new function. Use aarch64_emit_sve_ptrue_op for the individual comparisons and aarch64_emit_binop to OR them together. (aarch64_emit_inverted_unspec_cond): Replace with... (aarch64_emit_sve_inverted_cond): ...this new function. Use aarch64_emit_sve_ptrue_op for the comparison and aarch64_emit_unop to invert the result. (aarch64_expand_sve_vec_cmp_float): Update after the above changes. Use aarch64_emit_sve_ptrue_op for native comparisons. From-SVN: r260029
Richard Sandiford committed -
sve/vcond_6.c was effectively testing a three-input logical operation, since the result of BINOP needed to be ANDed with the loop predicate before loading src[i]. This patch makes it really test a binary operation instead. A later patch will add (and optimise) the three-operand case. 2018-05-08 Richard Sandiford <richard.sandiford@linaro.org> gcc/testsuite/ * gcc.target/aarch64/sve/vcond_6.c (LOOP): Unconditionally load from src[i]. From-SVN: r260028
Richard Sandiford committed -
2018-05-08 Paolo Carlini <paolo.carlini@oracle.com> PR c++/80691 * g++.dg/cpp0x/narrowing1.C: New. From-SVN: r260027
Paolo Carlini committed -
2018-05-08 Richard Biener <rguenther@suse.de> PR bootstrap/85571 config/ * bootstrap-lto-noplugin.mk: Disable compare. * bootstrap-lto.mk: Supply contrib/compare-lto for do-compare. contrib/ * compare-lto: New script derived from compare-debug. From-SVN: r260026
Richard Biener committed -
2018-05-08 Richard Biener <rguenther@suse.de> PR middle-end/85588 * gcc.dg/torture/pr85574.c: Rename to... * gcc.dg/torture/pr85588.c: ... this. From-SVN: r260024
Richard Biener committed -
2018-05-08 Thomas Koenig <tkoenig@gcc.gnu.org> PR fortran/54613 * check.c (gfc_check_minmaxloc): Remove error for BACK not being implemented. Use gfc_logical_4_kind for BACK. * simplify.c (min_max_choose): Add optional argument back_val. Handle it. (simplify_minmaxloc_to_scalar): Add argument back_val. Pass back_val to min_max_choose. (simplify_minmaxloc_to_nodim): Likewise. (simplify_minmaxloc_to_array): Likewise. (gfc_simplify_minmaxloc): Add argument back, handle it. Pass back_val to specific simplification functions. (gfc_simplify_minloc): Remove ATTRIBUTE_UNUSED from argument back, pass it on to gfc_simplify_minmaxloc. (gfc_simplify_maxloc): Likewise. * trans-intrinsic.c (gfc_conv_intrinsic_minmaxloc): Adjust comment. If BACK is true, use greater or equal (or lesser or equal) instead of greater (or lesser). Mark the condition of having found a value which exceeds the limit as unlikely. 2018-05-08 Thomas Koenig <tkoenig@gcc.gnu.org> PR fortran/54613 * m4/iforeach-s.m4: Remove assertion that back is zero. * m4/iforeach.m4: Likewise. Remove leading 'do' before implementation start. * m4/ifunction-s.m4: Remove assertion that back is zero. * m4/ifunction.m4: Likewise. Remove for loop if HAVE_BACK_ARG is defined. * m4/maxloc0.m4: Reorganize loops. Split loops between >= and >, depending on whether back is true. Mark the condition of having found a value which exceeds the limit as unlikely. * m4/minloc0.m4: Likewise. * m4/maxloc1.m4: Likewise. * m4/minloc1.m4: Likewise. * m4/maxloc1s.m4: Handle back argument. * m4/minloc1s.m4: Likewise. * m4/maxloc2s.m4: Remove assertion that back is zero. Remove special handling of loop start. Handle back argument. * m4/minloc2s.m4: Likewise. * generated/iall_i1.c: Regenerated. * generated/iall_i16.c: Regenerated. * generated/iall_i2.c: Regenerated. * generated/iall_i4.c: Regenerated. * generated/iall_i8.c: Regenerated. * generated/iany_i1.c: Regenerated. * generated/iany_i16.c: Regenerated. * generated/iany_i2.c: Regenerated. * generated/iany_i4.c: Regenerated. * generated/iany_i8.c: Regenerated. * generated/iparity_i1.c: Regenerated. * generated/iparity_i16.c: Regenerated. * generated/iparity_i2.c: Regenerated. * generated/iparity_i4.c: Regenerated. * generated/iparity_i8.c: Regenerated. * generated/maxloc0_16_i1.c: Regenerated. * generated/maxloc0_16_i16.c: Regenerated. * generated/maxloc0_16_i2.c: Regenerated. * generated/maxloc0_16_i4.c: Regenerated. * generated/maxloc0_16_i8.c: Regenerated. * generated/maxloc0_16_r10.c: Regenerated. * generated/maxloc0_16_r16.c: Regenerated. * generated/maxloc0_16_r4.c: Regenerated. * generated/maxloc0_16_r8.c: Regenerated. * generated/maxloc0_16_s1.c: Regenerated. * generated/maxloc0_16_s4.c: Regenerated. * generated/maxloc0_4_i1.c: Regenerated. * generated/maxloc0_4_i16.c: Regenerated. * generated/maxloc0_4_i2.c: Regenerated. * generated/maxloc0_4_i4.c: Regenerated. * generated/maxloc0_4_i8.c: Regenerated. * generated/maxloc0_4_r10.c: Regenerated. * generated/maxloc0_4_r16.c: Regenerated. * generated/maxloc0_4_r4.c: Regenerated. * generated/maxloc0_4_r8.c: Regenerated. * generated/maxloc0_4_s1.c: Regenerated. * generated/maxloc0_4_s4.c: Regenerated. * generated/maxloc0_8_i1.c: Regenerated. * generated/maxloc0_8_i16.c: Regenerated. * generated/maxloc0_8_i2.c: Regenerated. * generated/maxloc0_8_i4.c: Regenerated. * generated/maxloc0_8_i8.c: Regenerated. * generated/maxloc0_8_r10.c: Regenerated. * generated/maxloc0_8_r16.c: Regenerated. * generated/maxloc0_8_r4.c: Regenerated. * generated/maxloc0_8_r8.c: Regenerated.
* generated/maxloc0_8_s1.c: Regenerated. * generated/maxloc0_8_s4.c: Regenerated. * generated/maxloc1_16_i1.c: Regenerated. * generated/maxloc1_16_i16.c: Regenerated. * generated/maxloc1_16_i2.c: Regenerated. * generated/maxloc1_16_i4.c: Regenerated. * generated/maxloc1_16_i8.c: Regenerated. * generated/maxloc1_16_r10.c: Regenerated. * generated/maxloc1_16_r16.c: Regenerated. * generated/maxloc1_16_r4.c: Regenerated. * generated/maxloc1_16_r8.c: Regenerated. * generated/maxloc1_16_s1.c: Regenerated. * generated/maxloc1_16_s4.c: Regenerated. * generated/maxloc1_4_i1.c: Regenerated. * generated/maxloc1_4_i16.c: Regenerated. * generated/maxloc1_4_i2.c: Regenerated. * generated/maxloc1_4_i4.c: Regenerated. * generated/maxloc1_4_i8.c: Regenerated. * generated/maxloc1_4_r10.c: Regenerated. * generated/maxloc1_4_r16.c: Regenerated. * generated/maxloc1_4_r4.c: Regenerated. * generated/maxloc1_4_r8.c: Regenerated. * generated/maxloc1_4_s1.c: Regenerated. * generated/maxloc1_4_s4.c: Regenerated. * generated/maxloc1_8_i1.c: Regenerated. * generated/maxloc1_8_i16.c: Regenerated. * generated/maxloc1_8_i2.c: Regenerated. * generated/maxloc1_8_i4.c: Regenerated. * generated/maxloc1_8_i8.c: Regenerated. * generated/maxloc1_8_r10.c: Regenerated. * generated/maxloc1_8_r16.c: Regenerated. * generated/maxloc1_8_r4.c: Regenerated. * generated/maxloc1_8_r8.c: Regenerated. * generated/maxloc1_8_s1.c: Regenerated. * generated/maxloc1_8_s4.c: Regenerated. * generated/maxloc2_16_s1.c: Regenerated. * generated/maxloc2_16_s4.c: Regenerated. * generated/maxloc2_4_s1.c: Regenerated. * generated/maxloc2_4_s4.c: Regenerated. * generated/maxloc2_8_s1.c: Regenerated. * generated/maxloc2_8_s4.c: Regenerated. * generated/maxval_i1.c: Regenerated. * generated/maxval_i16.c: Regenerated. * generated/maxval_i2.c: Regenerated. * generated/maxval_i4.c: Regenerated. * generated/maxval_i8.c: Regenerated. * generated/maxval_r10.c: Regenerated. * generated/maxval_r16.c: Regenerated. * generated/maxval_r4.c: Regenerated. * generated/maxval_r8.c: Regenerated. * generated/minloc0_16_i1.c: Regenerated. * generated/minloc0_16_i16.c: Regenerated. * generated/minloc0_16_i2.c: Regenerated. * generated/minloc0_16_i4.c: Regenerated. * generated/minloc0_16_i8.c: Regenerated. * generated/minloc0_16_r10.c: Regenerated. * generated/minloc0_16_r16.c: Regenerated. * generated/minloc0_16_r4.c: Regenerated. * generated/minloc0_16_r8.c: Regenerated. * generated/minloc0_16_s1.c: Regenerated. * generated/minloc0_16_s4.c: Regenerated. * generated/minloc0_4_i1.c: Regenerated. * generated/minloc0_4_i16.c: Regenerated. * generated/minloc0_4_i2.c: Regenerated. * generated/minloc0_4_i4.c: Regenerated. * generated/minloc0_4_i8.c: Regenerated. * generated/minloc0_4_r10.c: Regenerated. * generated/minloc0_4_r16.c: Regenerated. * generated/minloc0_4_r4.c: Regenerated. * generated/minloc0_4_r8.c: Regenerated. * generated/minloc0_4_s1.c: Regenerated. * generated/minloc0_4_s4.c: Regenerated. * generated/minloc0_8_i1.c: Regenerated. * generated/minloc0_8_i16.c: Regenerated. * generated/minloc0_8_i2.c: Regenerated. * generated/minloc0_8_i4.c: Regenerated. * generated/minloc0_8_i8.c: Regenerated. * generated/minloc0_8_r10.c: Regenerated. * generated/minloc0_8_r16.c: Regenerated. * generated/minloc0_8_r4.c: Regenerated. * generated/minloc0_8_r8.c: Regenerated. * generated/minloc0_8_s1.c: Regenerated. * generated/minloc0_8_s4.c: Regenerated. * generated/minloc1_16_i1.c: Regenerated. * generated/minloc1_16_i16.c: Regenerated. * generated/minloc1_16_i2.c: Regenerated. 
* generated/minloc1_16_i4.c: Regenerated. * generated/minloc1_16_i8.c: Regenerated. * generated/minloc1_16_r10.c: Regenerated. * generated/minloc1_16_r16.c: Regenerated. * generated/minloc1_16_r4.c: Regenerated. * generated/minloc1_16_r8.c: Regenerated. * generated/minloc1_16_s1.c: Regenerated. * generated/minloc1_16_s4.c: Regenerated. * generated/minloc1_4_i1.c: Regenerated. * generated/minloc1_4_i16.c: Regenerated. * generated/minloc1_4_i2.c: Regenerated. * generated/minloc1_4_i4.c: Regenerated. * generated/minloc1_4_i8.c: Regenerated. * generated/minloc1_4_r10.c: Regenerated. * generated/minloc1_4_r16.c: Regenerated. * generated/minloc1_4_r4.c: Regenerated. * generated/minloc1_4_r8.c: Regenerated. * generated/minloc1_4_s1.c: Regenerated. * generated/minloc1_4_s4.c: Regenerated. * generated/minloc1_8_i1.c: Regenerated. * generated/minloc1_8_i16.c: Regenerated. * generated/minloc1_8_i2.c: Regenerated. * generated/minloc1_8_i4.c: Regenerated. * generated/minloc1_8_i8.c: Regenerated. * generated/minloc1_8_r10.c: Regenerated. * generated/minloc1_8_r16.c: Regenerated. * generated/minloc1_8_r4.c: Regenerated. * generated/minloc1_8_r8.c: Regenerated. * generated/minloc1_8_s1.c: Regenerated. * generated/minloc1_8_s4.c: Regenerated. * generated/minloc2_16_s1.c: Regenerated. * generated/minloc2_16_s4.c: Regenerated. * generated/minloc2_4_s1.c: Regenerated. * generated/minloc2_4_s4.c: Regenerated. * generated/minloc2_8_s1.c: Regenerated. * generated/minloc2_8_s4.c: Regenerated. * generated/minval_i1.c: Regenerated. * generated/minval_i16.c: Regenerated. * generated/minval_i2.c: Regenerated. * generated/minval_i4.c: Regenerated. * generated/minval_i8.c: Regenerated. * generated/minval_r10.c: Regenerated. * generated/minval_r16.c: Regenerated. * generated/minval_r4.c: Regenerated. * generated/minval_r8.c: Regenerated. * generated/norm2_r10.c: Regenerated. * generated/norm2_r16.c: Regenerated. * generated/norm2_r4.c: Regenerated. * generated/norm2_r8.c: Regenerated. * generated/parity_l1.c: Regenerated. * generated/parity_l16.c: Regenerated. * generated/parity_l2.c: Regenerated. * generated/parity_l4.c: Regenerated. * generated/parity_l8.c: Regenerated. * generated/product_c10.c: Regenerated. * generated/product_c16.c: Regenerated. * generated/product_c4.c: Regenerated. * generated/product_c8.c: Regenerated. * generated/product_i1.c: Regenerated. * generated/product_i16.c: Regenerated. * generated/product_i2.c: Regenerated. * generated/product_i4.c: Regenerated. * generated/product_i8.c: Regenerated. * generated/product_r10.c: Regenerated. * generated/product_r16.c: Regenerated. * generated/product_r4.c: Regenerated. * generated/product_r8.c: Regenerated. * generated/sum_c10.c: Regenerated. * generated/sum_c16.c: Regenerated. * generated/sum_c4.c: Regenerated. * generated/sum_c8.c: Regenerated. * generated/sum_i1.c: Regenerated. * generated/sum_i16.c: Regenerated. * generated/sum_i2.c: Regenerated. * generated/sum_i4.c: Regenerated. * generated/sum_i8.c: Regenerated. * generated/sum_r10.c: Regenerated. * generated/sum_r16.c: Regenerated. * generated/sum_r4.c: Regenerated. * generated/sum_r8.c: Regenerated. 2018-05-08 Thomas Koenig <tkoenig@gcc.gnu.org> PR fortran/54613 * gfortran.dg/minmaxloc_12.f90: New test case. * gfortran.dg/minmaxloc_13.f90: New test case. From-SVN: r260023
Thomas Koenig committed -
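A minimal sketch (not the generated libgfortran code) of what BACK changes: with BACK true the comparison becomes greater-or-equal, so among equal maxima the last position wins instead of the first.

  /* 1-based position of the maximum of a[0..n-1], honouring BACK.  */
  int
  maxloc_1d (const int *a, int n, bool back)
  {
    int pos = 0, best = a[0];
    for (int i = 1; i < n; ++i)
      if (back ? a[i] >= best : a[i] > best)
        {
          best = a[i];
          pos = i;
        }
    return pos + 1;   /* Fortran indices start at 1.  */
  }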
From-SVN: r260021
GCC Administrator committed
-
- 07 May, 2018 17 commits
* decl2.c (determine_visibility): Don't mess with template arguments from the containing scope. (vague_linkage_p): Check DECL_ABSTRACT_P before looking at a 'tor thunk. From-SVN: r260017
Jason Merrill committed -
https://gcc.gnu.org/ml/gcc-patches/2018-05/msg00299.html gcc/cp/ Remove fno-for-scope * cp-tree.h (DECL_ERROR_REPORTED, DECL_DEAD_FOR_LOCAL) (DECL_HAS_SHADOWED_FOR_VAR_P, DECL_SHADOWED_FOR_VAR) (SET_DECL_SHADOWED_FOR_VAR): Delete. (decl_shadowed_for_var_lookup, decl_shadowed_for_var_insert) (check_for_out_of_scope_variable, init_shadowed_var_for_decl): Don't declare. * name-lookup.h (struct cp_binding_level): Remove dead_vars_from_for field. * cp-lang.c (cp_init_ts): Delete. (LANG_HOOKS_INIT_TS): Override to cp_common_init_ts. * cp-objcp-common.c (shadowed_var_for_decl): Delete. (decl_shadowed_for_var_lookup, decl_shadowed_for_var_insert) (init_shadowed_var_for_decl): Delete. * decl.c (poplevel): Remove shadowed for var handling. (cxx_init_decl_processing): Remove -ffor-scope deprecation. * name-lookup.c (find_local_binding): Remove shadowed for var handling. (check_local_shadow): Likewise. (check_for_out_of_scope_variable): Delete. * parser.c (cp_parser_primary_expression): Remove shadowed for var handling. * pt.c (tsubst_decl): Remove DECL_DEAD_FOR_LOCAL setting. * semantics.c (begin_for_scope): Always have a scope. (begin_for_stmt, finish_for_stmt): Remove ARM-for scope handling. (begin_range_for_stmt, finish_id_expression): Likewise. gcc/ * doc/invoke.texi (C++ Dialect Options): Remove -ffor-scope. * doc/extend.texi (Deprecated Features): Remove -fno-for-scope (Backwards Compatibility): Likewise. c-family/ * c.opt (ffor-scope): Remove functionality, issue warning. gcc/objcp/ * objcp-lang.c (objcxx_init_ts): Don't call init_shadowed_var_for_decl. gcc/testsuite/ * g++.dg/cpp0x/range-for10.C: Delete. * g++.dg/ext/forscope1.C: Delete. * g++.dg/ext/forscope2.C: Delete. * g++.dg/template/for1.C: Delete. From-SVN: r260015
Nathan Sidwell committed -
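For reference, a minimal sketch of the extension being retired: under the pre-ISO (ARM) rules that -fno-for-scope emulated, the for-init variable remained visible after the loop.

  void
  f ()
  {
    for (int i = 0; i < 10; ++i)
      ;
    i = 0;   // accepted only under the removed -fno-for-scope semantics;
             // with this change it is unconditionally ill-formed
  }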
* tree.c (vla_type_p): New. * typeck2.c (store_init_value, split_nonconstant_init_1): Check it rather than array_of_runtime_bound_p. From-SVN: r260012
Jason Merrill committed -
* doc/xml/manual/using.xml (table.cmd_options): Document that the C++17 Filesystem implementation also needs -lstdc++fs. From-SVN: r260011
Jonathan Wakely committed -
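A minimal usage sketch matching the documented requirement: with this libstdc++, a C++17 program using std::filesystem still needs the extra library on the link line, e.g. g++ -std=c++17 prog.cc -lstdc++fs.

  #include <filesystem>
  #include <iostream>

  int
  main ()
  {
    std::cout << std::filesystem::current_path().string() << '\n';
  }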
scanner.c (preprocessor_line): Call linemap_add after a line directive that changes the current filename. * scanner.c (preprocessor_line): Call linemap_add after a line directive that changes the current filename. * gfortran.dg/linefile.f90: New test. From-SVN: r260010
Jeff Law committed -
By performing the /= operation on a named local variable instead of a temporary, the copy made for the return value can be elided. PR libstdc++/85671 * include/bits/fs_path.h (operator/): Permit copy elision. * include/experimental/bits/fs_path.h (operator/): Likewise. From-SVN: r260009
Jonathan Wakely committed -
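A minimal sketch of the pattern described above (not the exact libstdc++ source, and using a hypothetical stand-in type): returning a named local lets the compiler apply named-return-value optimisation, whereas appending to a temporary inside the return expression forces an extra copy or move.

  #include <string>

  /* Hypothetical stand-in for std::filesystem::path, only to keep the
     sketch self-contained.  */
  struct path
  {
    std::string s;
    path& operator/=(const path& p) { s += '/'; s += p.s; return *this; }
  };

  path
  operator/(const path& lhs, const path& rhs)
  {
    path result (lhs);   /* Named local: NRVO can construct it directly in
                            the caller's return slot.  */
    result /= rhs;
    return result;
  }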
2018-05-07 Edward Smith-Rowland <3dw4rd@verizon.net> Moar PR libstdc++/80506 * include/bits/random.tcc (gamma_distribution::__generate_impl()): Fix magic number used in loop condition. Actually put the file in. Don't know what my problem is today... From-SVN: r260008
Edward Smith-Rowland committed -
2018-05-07 Amaan Cheval <amaan.cheval@gmail.com> * config.host (x86_64-*-rtems*): Build crti.o and crtn.o. From-SVN: r260007
Amaan Cheval committed -
2018-05-07 Edward Smith-Rowland <3dw4rd@verizon.net> Moar PR libstdc++/80506 * include/bits/random.tcc (gamma_distribution::__generate_impl()): Fix magic number used in loop condition. From-SVN: r260004
Edward Smith-Rowland committed -
From-SVN: r260003
Edward Smith-Rowland committed -
From-SVN: r260002
Edward Smith-Rowland committed -
2018-05-07 Edward Smith-Rowland <3dw4rd@verizon.net> Moar PR libstdc++/80506 * include/bits/random.tcc (gamma_distribution::__generate_impl()): Fix magic number used in loop condition. From-SVN: r260001
Edward Smith-Rowland committed -
2018-05-07 Luis Machado <luis.machado@linaro.org> PR bootstrap/85681 Revert: 2018-05-07 Luis Machado <luis.machado@linaro.org> * config/aarch64/aarch64-protos.h (cpu_prefetch_tune) <prefetch_dynamic_strides>: New const bool field. * config/aarch64/aarch64.c (generic_prefetch_tune): Update to include prefetch_dynamic_strides. (exynosm1_prefetch_tune): Likewise. (thunderxt88_prefetch_tune): Likewise. (thunderx_prefetch_tune): Likewise. (thunderx2t99_prefetch_tune): Likewise. (qdf24xx_prefetch_tune): Likewise. Set prefetch_dynamic_strides to false. (aarch64_override_options_internal): Update to set PARAM_PREFETCH_DYNAMIC_STRIDES. * doc/invoke.texi (prefetch-dynamic-strides): Document new option. * params.def (PARAM_PREFETCH_DYNAMIC_STRIDES): New. * params.h (PARAM_PREFETCH_DYNAMIC_STRIDES): Define. * tree-ssa-loop-prefetch.c (should_issue_prefetch_p): Account for prefetch-dynamic-strides setting. 2018-05-07 Luis Machado <luis.machado@linaro.org> * config/aarch64/aarch64-protos.h (cpu_prefetch_tune) <minimum_stride>: New const int field. * config/aarch64/aarch64.c (generic_prefetch_tune): Update to include minimum_stride field. (exynosm1_prefetch_tune): Likewise. (thunderxt88_prefetch_tune): Likewise. (thunderx_prefetch_tune): Likewise. (thunderx2t99_prefetch_tune): Likewise. (qdf24xx_prefetch_tune): Likewise. Set minimum_stride to 2048. (aarch64_override_options_internal): Update to set PARAM_PREFETCH_MINIMUM_STRIDE. * doc/invoke.texi (prefetch-minimum-stride): Document new option. * params.def (PARAM_PREFETCH_MINIMUM_STRIDE): New. * params.h (PARAM_PREFETCH_MINIMUM_STRIDE): Define. * tree-ssa-loop-prefetch.c (should_issue_prefetch_p): Return false if stride is constant and is below the minimum stride threshold. From-SVN: r260000
Luis Machado committed -
From-SVN: r259999
Luis Machado committed -
2018-05-07 Luis Machado <luis.machado@linaro.org> * config/aarch64/aarch64.c (qdf24xx_prefetch_tune) <l2_cache_size>: Set to 512. From-SVN: r259998
Luis Machado committed -
The following patch adds an option to control software prefetching of memory references with non-constant/unknown strides. Currently we prefetch these references if the pass thinks there is benefit to doing so. But, since this is all based on heuristics, it's not always the case that we end up with better performance. For Falkor there is also the problem of conflicts with the hardware prefetcher, so we need to be more conservative in terms of what we issue software prefetch hints for. This also aligns GCC with what LLVM does for Falkor. Similarly to the previous patch, the defaults guarantee no change in behavior for other targets and architectures. 2018-05-07 Luis Machado <luis.machado@linaro.org> gcc/ * config/aarch64/aarch64-protos.h (cpu_prefetch_tune) <prefetch_dynamic_strides>: New const bool field. * config/aarch64/aarch64.c (generic_prefetch_tune): Update to include prefetch_dynamic_strides. (exynosm1_prefetch_tune): Likewise. (thunderxt88_prefetch_tune): Likewise. (thunderx_prefetch_tune): Likewise. (thunderx2t99_prefetch_tune): Likewise. (qdf24xx_prefetch_tune): Likewise. Set prefetch_dynamic_strides to false. (aarch64_override_options_internal): Update to set PARAM_PREFETCH_DYNAMIC_STRIDES. * doc/invoke.texi (prefetch-dynamic-strides): Document new option. * params.def (PARAM_PREFETCH_DYNAMIC_STRIDES): New. * params.h (PARAM_PREFETCH_DYNAMIC_STRIDES): Define. * tree-ssa-loop-prefetch.c (should_issue_prefetch_p): Account for prefetch-dynamic-strides setting. From-SVN: r259996
Luis Machado committed -
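A minimal sketch of the kind of reference the new parameter governs: the stride is a run-time value, so whether the pass emits a prefetch hint for it is now controlled by the prefetch-dynamic-strides parameter (switched off for qdf24xx/Falkor above) rather than always being attempted.

  void
  scale_strided (double *a, long n, long stride)
  {
    /* The stride is unknown at compile time; with prefetch-dynamic-strides
       disabled such references are left to the hardware prefetcher.  */
    for (long i = 0; i < n; ++i)
      a[i * stride] *= 2.0;
  }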
This patch adds a new option to control the minimum stride, for a memory reference, above which the loop prefetch pass may issue software prefetch hints. There are two motivations: * Make the pass less aggressive, only issuing prefetch hints for bigger strides that are more likely to benefit from prefetching. I've noticed a case in cpu2017 where we were issuing thousands of hints, for example. * For processors that have a hardware prefetcher, like Falkor, it allows the loop prefetch pass to defer prefetching of smaller (less than the threshold) strides to the hardware prefetcher instead. This prevents conflicts between the software prefetcher and the hardware prefetcher. I've noticed a considerable reduction in the number of prefetch hints and slightly positive performance numbers. This aligns GCC and LLVM in terms of prefetch behavior for Falkor. The default settings should guarantee no changes for existing targets. Those are free to tweak the settings as necessary. 2018-05-07 Luis Machado <luis.machado@linaro.org> Introduce option to limit software prefetching to known constant strides above a specific threshold with the goal of preventing conflicts with a hardware prefetcher. gcc/ * config/aarch64/aarch64-protos.h (cpu_prefetch_tune) <minimum_stride>: New const int field. * config/aarch64/aarch64.c (generic_prefetch_tune): Update to include minimum_stride field. (exynosm1_prefetch_tune): Likewise. (thunderxt88_prefetch_tune): Likewise. (thunderx_prefetch_tune): Likewise. (thunderx2t99_prefetch_tune): Likewise. (qdf24xx_prefetch_tune): Likewise. Set minimum_stride to 2048. (aarch64_override_options_internal): Update to set PARAM_PREFETCH_MINIMUM_STRIDE. * doc/invoke.texi (prefetch-minimum-stride): Document new option. * params.def (PARAM_PREFETCH_MINIMUM_STRIDE): New. * params.h (PARAM_PREFETCH_MINIMUM_STRIDE): Define. * tree-ssa-loop-prefetch.c (should_issue_prefetch_p): Return false if stride is constant and is below the minimum stride threshold. From-SVN: r259995
Luis Machado committed
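A companion sketch for the constant-stride case: with prefetch-minimum-stride set to 2048 for Falkor as above, a small constant stride such as the 64-byte one below falls under the threshold and is left to the hardware prefetcher.

  struct record { double key; char pad[56]; };   /* 64-byte stride.  */

  double
  sum_keys (const record *r, long n)
  {
    double s = 0.0;
    for (long i = 0; i < n; ++i)
      s += r[i].key;   /* Constant stride below the 2048-byte threshold:
                          no software prefetch hint is issued.  */
    return s;
  }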
-