- 01 Oct, 2018 13 commits
-
-
This patch adds a requirement that the outgoing-arguments area of a function be at least 8 bytes when using stack-clash protection and alloca.

By using this condition we can avoid a check in the alloca code and so have smaller and simpler code there.

A simplified version of the AArch64 stack frame is:

   +-----------------------+
   |                       |
   |                       |
   |                       |
   +-----------------------+
   |          LR           |
   +-----------------------+
   |          FP           |
   +-----------------------+
   |  dynamic allocations  | ---- expanding area which will push the outgoing
   +-----------------------+      args down during each allocation.
   |        padding        |
   +-----------------------+
   |  outgoing stack args  | ---- safety buffer of 8 bytes (aligned)
   +-----------------------+

By always defining an outgoing-argument area, alloca(0) is effectively safe to probe at $sp due to the reserved buffer being there.  It will never corrupt the stack.

This is also safe for alloca(x) where x is 0 or x % page_size == 0.  The former is the same case as alloca(0), while the latter is safe because any allocation pushes the outgoing stack args down:

   |          FP           |
   +-----------------------+
   |                       |
   |  dynamic allocations  | ---- alloca (x)
   |                       |
   +-----------------------+
   |        padding        |
   +-----------------------+
   |  outgoing stack args  | ---- safety buffer of 8 bytes (aligned)
   +-----------------------+

This means that when you probe for the residual, if it is 0 you will again just probe into the outgoing stack args range, which we know is non-zero (at least 8 bytes).

gcc/

        PR target/86486
        * config/aarch64/aarch64.h (STACK_CLASH_MIN_BYTES_OUTGOING_ARGS,
        STACK_DYNAMIC_OFFSET): New.
        * config/aarch64/aarch64.c (aarch64_layout_frame): Update
        outgoing args size.
        (aarch64_stack_clash_protection_alloca_probe_range,
        TARGET_STACK_CLASH_PROTECTION_ALLOCA_PROBE_RANGE): New.

gcc/testsuite/

        PR target/86486
        * gcc.target/aarch64/stack-check-alloca-1.c: New.
        * gcc.target/aarch64/stack-check-alloca-10.c: New.
        * gcc.target/aarch64/stack-check-alloca-2.c: New.
        * gcc.target/aarch64/stack-check-alloca-3.c: New.
        * gcc.target/aarch64/stack-check-alloca-4.c: New.
        * gcc.target/aarch64/stack-check-alloca-5.c: New.
        * gcc.target/aarch64/stack-check-alloca-6.c: New.
        * gcc.target/aarch64/stack-check-alloca-7.c: New.
        * gcc.target/aarch64/stack-check-alloca-8.c: New.
        * gcc.target/aarch64/stack-check-alloca-9.c: New.
        * gcc.target/aarch64/stack-check-alloca.h: New.
        * gcc.target/aarch64/stack-check-14.c: New.
        * gcc.target/aarch64/stack-check-15.c: New.

From-SVN: r264751
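As an illustration (a minimal sketch, not one of the new stack-check-alloca-*.c tests, whose contents are not reproduced here; consume() and the flag usage are assumptions), the kind of function affected:

    #include <alloca.h>

    void consume (void *);

    void
    f (unsigned n)
    {
      /* With -fstack-clash-protection, this dynamic allocation is probed.
         Because the outgoing-argument area is now at least 8 bytes, a
         residual of 0 can simply be probed at SP with no extra check.  */
      char *p = alloca (n);
      consume (p);
    }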
Tamar Christina committed -
This patch adds a hook to tell the mid-end about the probing requirements of the target.  On AArch64 we allow a specific range for which no probing needs to be done.  This same range is also the amount that will have to be probed up when a probe is needed after dropping the stack.

Defining this probe comes with the extra requirement that the outgoing arguments size of any function that uses alloca and stack clash be at the very least 8 bytes.  With this invariant we can skip doing the zero checks for alloca and save some code.

A simplified version of the AArch64 stack frame is:

   +-----------------------+
   |                       |
   |                       |
   |                       |
   +-----------------------+
   |          LR           |
   +-----------------------+
   |          FP           |
   +-----------------------+
   |  dynamic allocations  | -\     probe range hook affects these
   +-----------------------+ --\    and ensures that outgoing stack
   |        padding        |   --   args is always >= 8 when alloca.
   +-----------------------+  ---/  Which means it's always safe to probe
   |  outgoing stack args  | -/     at SP
   +-----------------------+

This allows us to generate better code than without the hook without affecting other targets.

With this patch I am also removing the stack_clash_protection_final_dynamic_probe hook, which was added specifically for AArch64 but is no longer needed.

gcc/

        PR target/86486
        * explow.c (anti_adjust_stack_and_probe_stack_clash): Support custom
        probe ranges.
        * target.def (stack_clash_protection_alloca_probe_range): New.
        (stack_clash_protection_final_dynamic_probe): Remove.
        * targhooks.h (default_stack_clash_protection_alloca_probe_range): New.
        (default_stack_clash_protection_final_dynamic_probe): Remove.
        * targhooks.c: Likewise.
        * doc/tm.texi.in (TARGET_STACK_CLASH_PROTECTION_ALLOCA_PROBE_RANGE): New.
        (TARGET_STACK_CLASH_PROTECTION_FINAL_DYNAMIC_PROBE): Remove.
        * doc/tm.texi: Regenerate.

From-SVN: r264750
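For reference, a sketch of what a target implementation of the new hook could look like, using the AArch64 names listed in the commit above; the return type, body and 1024-byte value are assumptions made for illustration, not code copied from the patch:

    /* Return the range below SP for which no probing is needed and at
       whose end a probe is emitted when required (assumed here to be the
       1 KB ABI buffer described above).  */
    static HOST_WIDE_INT
    aarch64_stack_clash_protection_alloca_probe_range (void)
    {
      return 1024;
    }

    #define TARGET_STACK_CLASH_PROTECTION_ALLOCA_PROBE_RANGE \
      aarch64_stack_clash_protection_alloca_probe_range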
Tamar Christina committed -
This patch adds basic support for SVE stack clash protection.  It is a first implementation and will use a loop to do the probing and stack adjustments.

An example sequence is:

        .cfi_startproc
        mov     x15, sp
        cntb    x16, all, mul #11
        add     x16, x16, 304
        .cfi_def_cfa_register 15
.SVLPSPL0:
        cmp     x16, 61440
        b.lt    .SVLPEND0
        sub     sp, sp, 61440
        str     xzr, [sp, 0]
        sub     x16, x16, 61440
        b       .SVLPSPL0
.SVLPEND0:
        sub     sp, sp, x16
        .cfi_escape 0xf,0xc,0x8f,0,0x92,0x2e,0,0x8,0x58,0x1e,0x23,0xb0,0x2,0x22

for a 64KB guard size, and for a 4KB guard size:

        .cfi_startproc
        mov     x15, sp
        cntb    x16, all, mul #11
        add     x16, x16, 304
        .cfi_def_cfa_register 15
.SVLPSPL0:
        cmp     x16, 3072
        b.lt    .SVLPEND0
        sub     sp, sp, 3072
        str     xzr, [sp, 0]
        sub     x16, x16, 3072
        b       .SVLPSPL0
.SVLPEND0:
        sub     sp, sp, x16
        .cfi_escape 0xf,0xc,0x8f,0,0x92,0x2e,0,0x8,0x58,0x1e,0x23,0xb0,0x2,0x22

This has about the same semantics as alloca, except we prioritize the common case where no probe is required.  We also change the amount we adjust the stack and the probing interval to be the nearest value to `guard size - abi buffer` that fits in the 12-bit shifted immediate used by cmp.

While this means we probe a bit more often than strictly required, in practice the number of SVE vectors you would need to spill to enter the loop at all is significant, and even more so to iterate it more than once.

gcc/

        PR target/86486
        * config/aarch64/aarch64-protos.h
        (aarch64_output_probe_sve_stack_clash): New.
        * config/aarch64/aarch64.c (aarch64_output_probe_sve_stack_clash,
        aarch64_clamp_to_uimm12_shift): New.
        (aarch64_allocate_and_probe_stack_space): Add SVE specific section.
        * config/aarch64/aarch64.md (probe_sve_stack_clash): New.

gcc/testsuite/

        PR target/86486
        * gcc.target/aarch64/stack-check-prologue-16.c: New test.
        * gcc.target/aarch64/stack-check-cfa-3.c: New test.
        * gcc.target/aarch64/sve/struct_vect_24.c: New test.
        * gcc.target/aarch64/sve/struct_vect_24_run.c: New test.

From-SVN: r264749
Tamar Christina committed -
Since stack clash protection depends on the LR being saved for non-leaf functions, this patch adds an assert so that we notice if this ever changes.

gcc/

        PR target/86486
        * config/aarch64/aarch64.c (aarch64_layout_frame): Add assert.

From-SVN: r264748
Tamar Christina committed -
This patch implements the use of the stack clash mitigation for AArch64.  On AArch64 we expect both the probing interval and the guard size to be 64KB, and we enforce them to always be equal.  We also probe up by 1024 bytes in the general case when a probe is required.

AArch64 has the following probing conditions:

 1a) Any initial adjustment less than 63KB requires no probing.  An
     ABI-defined safe buffer of 1KB is used and a page size of 64KB is
     assumed.

  b) Any final adjustment residual requires a probe at SP + 1KB.  We know
     this to be safe since you would have done at least one page worth of
     allocations already to get to that point.

  c) Any final adjustment more than remainder (total allocation amount)
     larger than 1K - LR offset requires a probe at SP.

     The safe buffer mentioned in 1a is maintained by the storing of FP/LR.
     In the case of -fomit-frame-pointer we can still count on LR being
     stored if the function makes a call, even if it's a tail call.  The
     AArch64 frame layout code guarantees this, and tests have been added to
     check against this particular case.

 2) Any allocation larger than one page size is done in increments of page
    size and probed up by 1KB, leaving the residuals.

 3a) Any residual for the initial adjustment that is less than
     guard-size - 1KB requires no probing.

Essentially this is a sliding window.  The probing range determines the ABI safe buffer and the amount to be probed up.  Incrementally allocating less than the probing thresholds, e.g. in recursive functions, will not be an issue as the storing of LR counts as a probe.

                            +-------------------+
                            |  ABI SAFE REGION  |
                  +------------------------------
                  |         |                   |
                  |         |                   |
                  |         |                   |
                  |         |                   |
                  |         |                   |
                  |         |                   |
 maximum amount   |         |                   |
 not needing a    |         |                   |
 probe            |         |                   |
                  |         |                   |
                  |         |                   |
                  |         |                   |
                  |         |                   |
 Probe offset when|   ----------------------------
 probe is required|         |                   |
                  +-------- +-------------------+ -------- Point of first probe
                            |  ABI SAFE REGION  |
                            ---------------------
                            |                   |
                            |                   |
                            |                   |

Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.  The target was tested with stack clash on and off by default.  The GLIBC testsuite was also run with stack clash on by default, with no new regressions.

Co-Authored-By: Richard Sandiford <richard.sandiford@linaro.org>
Co-Authored-By: Tamar Christina <tamar.christina@arm.com>

From-SVN: r264747
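As a hedged illustration (not one of the added tests; consume() and the buffer size are invented), a frame that crosses the thresholds described above:

    void consume (char *);

    void
    big_frame (void)
    {
      /* Compiled with -fstack-clash-protection and a 64KB guard, a frame
         this large is allocated in page-sized increments, each probed 1KB
         above SP; a frame under 63KB would need no explicit probe.  */
      char buf[128 * 1024];
      consume (buf);
    }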
Jeff Law committed -
Currently some target-supports checks such as vect_int cache their results in a manner that would cause them not to be rechecked when running the same tests against a different variant in a multi-variant run.  This causes tests to be skipped or run when they shouldn't be.

There is already an existing caching mechanism in place that does the caching correctly, but presumably it wasn't used because some of these tests originally only contained static data, e.g. they only checked whether the target is aarch64*-*-* etc.

This patch changes every function that needs to do any caching at all to use check_cached_effective_target, which will cache per variant instead of globally.

For those tests that already parameterize over et_index I have created check_cached_effective_target_indexed to handle this common case by creating a list containing the property name and the current value of et_index.

These changes result in a much simpler implementation for most tests and a large reduction in lines for target-supports.exp.

Regtested on aarch64-none-elf, x86_64-pc-linux-gnu, powerpc64-unknown-linux-gnu and arm-none-eabi with no testsuite errors.  The difference will depend on your site.exp.  On arm we get about 4500 new test cases and on aarch64 the low 10s.  On PowerPC and x86_64 there are no changes, as expected, since the default exp for these just tests the default configuration.

What this means for new target checks is that they should always use either check_cached_effective_target or check_cached_effective_target_indexed if the result of the check is to be cached.  As an example the new vect_int looks like

    proc check_effective_target_vect_int { } {
        return [check_cached_effective_target_indexed <name> {
            expr { <condition> }}]
    }

The debug information that was once there is now all hidden in check_cached_effective_target (called from check_cached_effective_target_indexed), and so the only thing you are required to do is give it a unique cache name and a condition.  The condition doesn't need to be an if statement, so simple boolean expressions are enough here:

    [istarget i?86-*-*] || [istarget x86_64-*-*]
    || ([istarget powerpc*-*-*]
        && ![istarget powerpc-*-linux*paired*])
    || ...

From-SVN: r264745
Tamar Christina committed -
2018-10-01  MCC CS  <deswurstes@users.noreply.github.com>

        PR tree-optimization/87261
        * match.pd: Remove trailing whitespace.
        Add (x & y) | ~(x | y) -> ~(x ^ y),
        (~x | y) ^ (x ^ y) -> x | ~y and
        (x ^ y) | ~(x | y) -> ~(x & y).

        * gcc.dg/pr87261.c: New test.

From-SVN: r264744
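The three new simplifications can be seen on integer expressions like the ones below (a sketch; the function names are invented and this is not necessarily the content of gcc.dg/pr87261.c):

    int f1 (int x, int y) { return (x & y) | ~(x | y); }  /* folds to ~(x ^ y) */
    int f2 (int x, int y) { return (~x | y) ^ (x ^ y); }  /* folds to x | ~y   */
    int f3 (int x, int y) { return (x ^ y) | ~(x | y); }  /* folds to ~(x & y) */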
MCC CS committed -
        * c-ada-spec.c (get_underlying_decl): Get to the main type variant.
        (dump_ada_node): Add const keyword.

From-SVN: r264738
Eric Botcazou committed -
Avoid constants ending up in the limm field for particular instructions when compiling for size.

gcc/
xxxx-xx-xx  Claudiu Zissulescu  <claziss@synopsys.com>

        * config/arc/arc.md (*add_n): Clean up pattern, update instruction
        constraints.
        (ashlsi3_insn): Update instruction constraints.
        (ashrsi3_insn): Likewise.
        (rotrsi3): Likewise.
        (add_shift): Likewise.
        * config/arc/constraints.md (Csz): New 32 bit constraint.  It
        avoids placing in the limm field small constants which could
        otherwise end up in a small instruction.

testsuite/
xxxx-xx-xx  Claudiu Zissulescu  <claziss@synopsys.com>

        * gcc.target/arc/tph_addx.c: New test.

From-SVN: r264737
Claudiu Zissulescu committed -
gcc/
Claudiu Zissulescu  <claziss@synopsys.com>

        * config/arc/arc.md (maddsidi4_split): Don't use dmac if the
        destination register is not odd-even.
        (umaddsidi4_split): Likewise.

gcc/testsuite/
Claudiu Zissulescu  <claziss@synopsys.com>

        * gcc.target/arc/tmac-3.c: New file.

From-SVN: r264736
Claudiu Zissulescu committed -
tree-inline.c (expand_call_inline): Store origin of fn in BLOCK_ABSTRACT_ORIGIN for the inline BLOCK.

2018-10-01  Richard Biener  <rguenther@suse.de>

        * tree-inline.c (expand_call_inline): Store origin of fn in
        BLOCK_ABSTRACT_ORIGIN for the inline BLOCK.
        * tree.c (block_ultimate_origin): Simplify and do some checking.

From-SVN: r264734
Richard Biener committed -
for  gcc/ada/ChangeLog

        * gcc-interface/lang-specs.h (default_compilers): When given
        fcompare-debug-second, adjust auxbase like cc1, and pass gnatd_A.
        * gcc-interface/misc.c (flag_compare_debug): Remove variable.
        (gnat_post_options): Do not set it.
        * lib-writ.adb (flag_compare_debug): Remove import.
        (Write_ALI): Do not test it.

From-SVN: r264732
Alexandre Oliva committed -
From-SVN: r264731
GCC Administrator committed
-
- 30 Sep, 2018 8 commits
-
-
        * config/i386/mmx.md (EMMS): New int iterator.
        (emms): New int attribute.
        (mmx_<emms>): Macroize insn from *mmx_emms and *mmx_femms using
        EMMS int iterator.  Explicitly declare clobbers.
        (mmx_emms): Remove expander.
        (mmx_femms): Ditto.
        * config/i386/predicates.md (emms_operation): Remove predicate.
        (vzeroall_pattern): New predicate.
        (vzeroupper_pattern): Rename from vzeroupper_operation.
        * config/i386/i386.c (ix86_avx_u128_mode_after): Use
        vzeroupper_pattern and vzeroall_pattern predicates.

From-SVN: r264727
Uros Bizjak committed -
re PR rtl-optimization/86939 (IRA incorrectly creates an interference between a pseudo register and a hard register)

gcc/
        PR rtl-optimization/86939
        * ira-lives.c (make_hard_regno_born): Rename from this...
        (make_hard_regno_live): ... to this.  Remove update to conflict
        information.  Update function comment.
        (make_hard_regno_dead): Add conflict information update.  Update
        function comment.
        (make_object_born): Rename from this...
        (make_object_live): ... to this.  Remove update to conflict
        information.  Update function comment.
        (make_object_dead): Add conflict information update.  Update
        function comment.
        (mark_pseudo_regno_live): Call make_object_live.
        (mark_pseudo_regno_subword_live): Likewise.
        (mark_hard_reg_dead): Update function comment.
        (mark_hard_reg_live): Call make_hard_regno_live.
        (process_bb_node_lives): Likewise.
        * lra-lives.c (make_hard_regno_born): Rename from this...
        (make_hard_regno_live): ... to this.  Remove update to conflict
        information.  Remove now unneeded check_pic_pseudo_p argument.
        Update function comment.
        (make_hard_regno_dead): Add check_pic_pseudo_p argument and add
        update to conflict information.  Update function comment.
        (mark_pseudo_live): Remove update to conflict information.  Update
        function comment.
        (mark_pseudo_dead): Add conflict information update.
        (mark_regno_live): Call make_hard_regno_live.
        (mark_regno_dead): Call make_hard_regno_dead with new argument.
        (process_bb_lives): Call make_hard_regno_live and
        make_hard_regno_dead.

From-SVN: r264726
Peter Bergner committed -
2018-09-30  Paul Thomas  <pault@gcc.gnu.org>

        PR fortran/87359
        * trans-array.c (gfc_is_reallocatable_lhs): Correct the problem
        introduced by r264358, which prevented components of associate
        names from being reallocated on assignment.

2018-09-30  Paul Thomas  <pault@gcc.gnu.org>

        PR fortran/87359
        * gfortran.dg/associate_40.f90 : New test.

From-SVN: r264725
Paul Thomas committed -
2018-09-30  Paul Thomas  <pault@gcc.gnu.org>

        PR fortran/70752
        PR fortran/72709
        * trans-array.c (gfc_conv_scalarized_array_ref): If this is a
        deferred type and the info->descriptor is present, use the
        info->descriptor.
        (gfc_conv_array_ref): If the se expr is a descriptor type, pass
        it as 'decl' rather than the symbol backend_decl.
        (gfc_array_allocate): If the se string_length is a component
        reference, fix it and use it for the expression string length
        if the latter is not a variable type.  If it is a variable, do
        an assignment.  Make use of component ref string lengths to set
        the descriptor 'span'.
        (gfc_conv_expr_descriptor): For pointer assignment, do not set
        the span field if gfc_get_array_span returns zero.
        * trans.c (get_array_span): If the upper bound of a character
        type is zero, use the descriptor span if available.

2018-09-30  Paul Thomas  <pault@gcc.gnu.org>

        PR fortran/70752
        PR fortran/72709
        * gfortran.dg/deferred_character_25.f90 : New test.
        * gfortran.dg/deferred_character_26.f90 : New test.
        * gfortran.dg/deferred_character_27.f90 : New test to verify that
        PR82617 remains fixed.

From-SVN: r264724
Paul Thomas committed -
www.oracle.com
* doc/xml/manual/messages.xml: Switch link to www.oracle.com to https. From-SVN: r264723
Gerald Pfeifer committed -
* doc/xml/manual/policy_data_structures_biblio.xml: Update link to Microsoft Component Model Object Technologies. From-SVN: r264722
Gerald Pfeifer committed -
2018-09-30  Paul Thomas  <pault@gcc.gnu.org>

        PR fortran/70149
        * trans-decl.c (gfc_get_symbol_decl): A deferred character
        length pointer that is initialized needs the string length to
        be initialized as well.

2018-09-30  Paul Thomas  <pault@gcc.gnu.org>

        PR fortran/70149
        * gfortran.dg/deferred_character_24.f90 : New test.

From-SVN: r264721
Paul Thomas committed -
From-SVN: r264720
GCC Administrator committed
-
- 29 Sep, 2018 6 commits
-
-
When passing and returning BLKmode values in 2 integer registers, use 1 TImode register instead of 2 DImode registers.  Otherwise, V1TImode may be used to move and store such BLKmode values, which prevents RTL optimizations.

gcc/

        PR target/87370
        * config/i386/i386.c (construct_container): Use TImode for
        BLKmode values in 2 integer registers.

gcc/testsuite/

        PR target/87370
        * gcc.target/i386/pr87370.c: New test.

From-SVN: r264716
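A sketch of the kind of 16-byte aggregate this affects (the actual pr87370.c test is not reproduced here): such a value passed and returned in two integer registers is now represented as a single TImode register rather than a DImode pair.

    struct s { long long a, b; };   /* 16-byte BLKmode value */

    struct s
    pass_through (struct s x)
    {
      return x;   /* passed and returned in an integer register pair */
    }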
H.J. Lu committed -
2018-09-29  Paul Thomas  <pault@gcc.gnu.org>

        PR fortran/65667
        * trans-expr.c (gfc_trans_assignment_1): If there is dependency,
        fix the rse stringlength.

2018-09-29  Paul Thomas  <pault@gcc.gnu.org>

        PR fortran/65667
        * gfortran.dg/dependency_52.f90 : New test.

From-SVN: r264715
Paul Thomas committed -
        * builtins.c (unterminated_array): Pass in c_strlen_data * to
        c_strlen rather than just a tree *.
        (c_strlen): Change NONSTR argument to a c_strlen_data pointer.
        Update recursive calls appropriately.  If caller did not provide
        a suitable data pointer, create a local one.  When a
        non-terminated string is discovered, bubble up information about
        the string via the c_strlen_data object.
        * builtins.h (c_strlen): Update prototype.
        (c_strlen_data): New structure.
        * gimple-fold.c (get_range_strlen): Update calls to c_strlen.
        For a type 2 call, if c_strlen indicates a non-terminated string
        use the length of the non-terminated string.
        (gimple_fold_builtin_stpcpy): Update calls to c_strlen.

From-SVN: r264712
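A hedged example of the "non-terminated string" case that c_strlen_data now reports back to callers (illustration only, not taken from the patch):

    #include <string.h>

    const char a[3] = { 'a', 'b', 'c' };   /* no terminating NUL */

    size_t
    bad_len (void)
    {
      /* strlen here reads past the array; the new bookkeeping lets callers
         such as get_range_strlen see the array and its non-string length.  */
      return strlen (a);
    }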
Jeff Law committed -
        PR target/87467
        * config/i386/avx512fintrin.h (_mm512_abs_pd, _mm512_mask_abs_pd):
        Use __m512d type for __A argument rather than __m512.

        * gcc.target/i386/avx512f-abspd-1.c (SIZE): Divide by two.
        (CALC): Use double instead of float.
        (TEST): Adjust to test _mm512_abs_pd and _mm512_mask_abs_pd
        rather than _mm512_abs_ps and _mm512_mask_abs_ps.

From-SVN: r264711
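With the corrected prototype, a plain use of the intrinsic looks like this (a usage sketch, compiled with -mavx512f):

    #include <immintrin.h>

    __m512d
    absolute (__m512d x)
    {
      /* The argument is now __m512d, matching the double-precision result.  */
      return _mm512_abs_pd (x);
    }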
Jakub Jelinek committed -
        * doc/xml/gnu/fdl-1.3.xml: The Free Software Foundation web site
        now uses https.  Also omit the unnecessary trailing slash.
        * doc/xml/gnu/gpl-3.0.xml: Ditto.

From-SVN: r264710
Gerald Pfeifer committed -
From-SVN: r264709
GCC Administrator committed
-
- 28 Sep, 2018 13 commits
-
-
* match.pd (simple_comparison): Don't optimize if either operand is a function pointer when target needs function pointer canonicalization. From-SVN: r264705
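A hedged sketch of the kind of comparison involved (an invented example, not the PR testcase): comparisons with a function-pointer operand must be left alone on targets such as hppa that canonicalize function pointers before they can be compared.

    extern void handler (void);

    int
    is_handler (void (*p) (void))
    {
      /* Folding this comparison without canonicalizing the function
         pointers first could change the result on such targets.  */
      return p == handler;
    }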
John David Anglin committed -
Now that e.g. ASM_CPU_POWER5_SPEC is always "-mpower5" it is clearer and easier to just write that directly.

        * config/rs6000/driver-rs6000.c (asm_names): Adjust the entries for
        power5 .. power9 to remove indirection.
        * config/rs6000/rs6000.h (ASM_CPU_POWER5_SPEC, ASM_CPU_POWER6_SPEC,
        ASM_CPU_POWER7_SPEC, ASM_CPU_POWER8_SPEC, ASM_CPU_POWER9_SPEC,
        ASM_CPU_476_SPEC): Delete.
        (ASM_CPU_SPEC): Adjust.
        (EXTRA_SPECS): Delete asm_cpu_power5, asm_cpu_power6,
        asm_cpu_power7, asm_cpu_power8, asm_cpu_power9, asm_cpu_476.

From-SVN: r264704
Segher Boessenkool committed -
Every supported assembler supports these instructions.  Committing.

        * config.in: Delete HAVE_AS_DCI.
        * config/powerpcspe/powerpcspe.h: Treat HAVE_AS_DCI as always true.
        * config/rs6000/rs6000.h: Ditto.
        * configure.ac: Delete HAVE_AS_DCI.
        * configure: Regenerate.

From-SVN: r264703
Segher Boessenkool committed -
All supported assemblers know lwsync, so we never need to generate this instruction using the .long escape hatch.

        * config.in (HAVE_AS_LWSYNC): Delete.
        * config/powerpcspe/powerpcspe.h (TARGET_LWSYNC_INSTRUCTION): Delete.
        * config/powerpcspe/sync.md (*lwsync): Always generate lwsync, never
        do it as a .long .
        * config/rs6000/rs6000.h (TARGET_LWSYNC_INSTRUCTION): Delete.
        * config/rs6000/sync.md (*lwsync): Always generate lwsync, never do
        it as a .long .
        * configure.ac: Delete HAVE_AS_LWSYNC.
        * configure: Regenerate.

From-SVN: r264702
Segher Boessenkool committed -
        * calls.c (expand_call): Try to do a tail call for thunks at -O0 too.
        * cgraph.h (struct cgraph_thunk_info): Add indirect_offset.
        (cgraph_node::create_thunk): Add indirect_offset parameter.
        (thunk_adjust): Likewise.
        * cgraph.c (cgraph_node::create_thunk): Add indirect_offset
        parameter and initialize the corresponding field with it.
        (cgraph_node::dump): Dump indirect_offset field.
        * cgraphclones.c (duplicate_thunk_for_node): Deal with
        indirect_offset.
        * cgraphunit.c (cgraph_node::analyze): Be prepared for external
        thunks.
        (thunk_adjust): Add indirect_offset parameter and deal with it.
        (cgraph_node::expand_thunk): Deal with the indirect_offset field and
        pass it to thunk_adjust.  Do not call the target hook if it's
        non-zero or if the thunk is external or local.  Fix formatting.  Do
        not chain the RESULT_DECL to BLOCK_VARS.  Pass the static chain to
        the target, if any, in the GIMPLE representation.
        * ipa-icf.c (sem_function::equals_wpa): Deal with indirect_offset.
        * lto-cgraph.c (lto_output_node): Write indirect_offset field.
        (input_node): Read indirect_offset field.
        * tree-inline.c (expand_call_inline): Pass indirect_offset field in
        the call to thunk_adjust.
        * tree-nested.c (struct nesting_info): Add thunk_p field.
        (create_nesting_tree): Set it.
        (convert_all_function_calls): Copy static chain from targets to
        thunks.
        (finalize_nesting_tree_1): Return early for thunks.
        (unnest_nesting_tree_1): Do not finalize thunks.
        (gimplify_all_functions): Do not gimplify thunks.

cp/
        * method.c (use_thunk): Adjust call to cgraph_node::create_thunk.

ada/
        * gcc-interface/decl.c (is_cplusplus_method): Do not require C++
        convention on Interfaces.
        * gcc-interface/trans.c (Subprogram_Body_to_gnu): Try to create a
        bona-fide thunk and hand it over to the middle-end.
        (get_controlling_type): New function.
        (use_alias_for_thunk_p): Likewise.
        (thunk_labelno): New static variable.
        (make_covariant_thunk): New function.
        (maybe_make_gnu_thunk): Likewise.
        * gcc-interface/utils.c (finish_subprog_decl): Set DECL_CONTEXT of
        the result DECL here instead of...
        (end_subprog_body): ...here.

Co-Authored-By: Pierre-Marie de Rodat <derodat@adacore.com>

From-SVN: r264701
Eric Botcazou committed -
functions.h (__foreign_iterator_aux3(const _Safe_iterator<>&, const _InputIter&, const _InputIter&, __true_type)): Use empty() rather than begin() == end().

2018-09-28  François Dumont  <fdumont@gcc.gnu.org>

        * include/debug/functions.h
        (__foreign_iterator_aux3(const _Safe_iterator<>&, const _InputIter&,
        const _InputIter&, __true_type)): Use empty() rather than
        begin() == end().

From-SVN: r264699
François Dumont committed -
gcc/ChangeLog: * opt-suggestions.c (option_proposer::build_option_suggestions): Release "option_values". From-SVN: r264698
David Malcolm committed -
As noted at Cauldron, dumpfile.c currently emits "note: " for all kinds of dump message, so that (after filtering) there's no distinction between MSG_OPTIMIZED_LOCATIONS vs MSG_NOTE vs MSG_MISSED_OPTIMIZATION in the textual output.

This patch changes dumpfile.c so that the "note: " varies to show which MSG_* was used, with the string prefix matching that used for filtering in -fopt-info, hence e.g.

    directive_unroll_3.f90:24:0: optimized: loop unrolled 7 times

and:

    pr19210-1.c:24:3: missed: missed loop optimization: niters analysis ends up with assumptions.

The patch adds "dg-optimized" and "dg-missed" directives for use in the testsuite for matching these (with -fopt-info on stderr; they don't help for dumpfile output).

The patch also converts the various problem-reporting dump messages in coverage.c:get_coverage_counts to use MSG_MISSED_OPTIMIZATION rather than MSG_OPTIMIZED_LOCATIONS, as the docs call out "optimized" as "information when an optimization is successfully applied", whereas "missed" is for "information about missed optimizations", and problems with profile data seem to me to fall much more into the latter category than the former.  Doing so requires converting a few tests from using "-fopt-info" (which is implicitly "-fopt-info-optimized-optall") to getting the "missed" optimizations.  Changing them to "-fopt-info-missed" added lots of noise from the vectorizer, so I changed these tests to use "-fopt-info-missed-ipa".

gcc/ChangeLog:
        * coverage.c (get_coverage_counts): Convert problem-reporting dump
        messages from MSG_OPTIMIZED_LOCATIONS to MSG_MISSED_OPTIMIZATION.
        * dumpfile.c (kind_as_string): New function.
        (dump_loc): Rather than a hardcoded prefix of "note: ", use
        kind_as_string to vary the prefix based on dump_kind.
        (selftest::test_capture_of_dump_calls): Update for above.

gcc/testsuite/ChangeLog:
        * c-c++-common/unroll-1.c: Update expected output from "note" to
        "optimized".
        * c-c++-common/unroll-2.c: Likewise.
        * c-c++-common/unroll-3.c: Likewise.
        * g++.dg/tree-ssa/dom-invalid.C: Update expected output from
        dg-message to dg-missed.  Convert param from -fopt-info to
        -fopt-info-missed-ipa.
        * g++.dg/tree-ssa/pr81408.C: Update expected output from dg-message
        to dg-missed.
        * g++.dg/vect/slp-pr56812.cc: Update expected output from dg-message
        to dg-optimized.
        * gcc.dg/pr26570.c: Update expected output from dg-message to
        dg-missed.  Convert param from -fopt-info to -fopt-info-missed-ipa.
        * gcc.dg/pr32773.c: Likewise.
        * gcc.dg/tree-ssa/pr19210-1.c: Update expected output from
        dg-message to dg-missed.
        * gcc.dg/unroll-2.c: Update expected output from dg-message to
        dg-optimized.
        * gcc.dg/vect/nodump-vect-opt-info-1.c: Likewise.  Convert param
        from -fopt-info to -fopt-info-vec.
        * gfortran.dg/directive_unroll_1.f90: Update expected output from
        "note" to "optimized".
        * gfortran.dg/directive_unroll_2.f90: Likewise.
        * gfortran.dg/directive_unroll_3.f90: Likewise.
        * gnat.dg/unroll4.adb: Likewise.
        * lib/gcc-dg.exp (dg-optimized): New procedure.
        (dg-missed): New procedure.

From-SVN: r264697
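For example, a loop like the one below, compiled with -O3 -fopt-info, now reports its successful optimization with an "optimized:" prefix rather than "note:" (an illustrative file, not one of the testsuite files touched above; the message wording is approximate):

    /* scale.c: compile with -O3 -fopt-info (diagnostics go to stderr).  */
    void
    scale (float *a, int n)
    {
      for (int i = 0; i < n; i++)
        a[i] = a[i] * 2.0f;
    }

    /* Before: <file>:<line>:<col>: note: loop vectorized
       After:  <file>:<line>:<col>: optimized: loop vectorized  */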
David Malcolm committed -
As reported in <https://gcc.gnu.org/ml/gcc-patches/2018-09/msg01684.html>, some fp-int-convert tests fail after my fix for PR c/87390, in Arm / AArch64 configurations where _Float16 uses excess precision by default.  The issue is comparisons of the results of a conversion by assignment (compile-time or run-time) from integer to floating-point with the original integer value; previously this would compare against an implicit compile-time conversion to the target type, but now, for C11 and later, it compares against an implicit compile-time conversion to a possibly wider evaluation format.  This is fixed by adding casts to the test so that the comparison is with a value converted explicitly to the target type at compile time, without any use of a wider evaluation format.

        PR c/87390
        * gcc.dg/torture/fp-int-convert.h (TEST_I_F_VAL): Convert
        integer values explicitly to target type for comparison.

From-SVN: r264696
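A simplified sketch of the fix (not the actual TEST_I_F_VAL macro): the explicit cast forces both sides of the comparison into the target floating-point type, so a wider evaluation format for _Float16 cannot change the outcome.

    extern void abort (void);

    void
    check (int i)
    {
      volatile _Float16 f = i;
      if (f != (_Float16) i)  /* cast added; without it the comparison may
                                 be carried out in the wider format */
        abort ();
    }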
Joseph Myers committed -
        * config/i386/i386.h (SSE_REGNO): Fix check for FIRST_REX_SSE_REG.
        (GET_SSE_REGNO): Rename from SSE_REGNO.  Update all uses for rename.

From-SVN: r264695
Uros Bizjak committed -
        * config/i386/i386.h (CC_REGNO): Remove FPSR_REGS.
        * config/i386/i386.c (ix86_fixed_condition_code_regs): Use
        INVALID_REGNUM instead of FPSR_REG.
        (ix86_md_asm_adjust): Do not clobber FPSR_REG.
        * config/i386/i386.md: Update comment of FP compares.
        (fldenv): Do not clobber FPSR_REG.

From-SVN: r264694
Uros Bizjak committed -
From-SVN: r264693
Steve Ellcey committed -
re PR testsuite/87433 (gcc.dg/zero_bits_compound-1.c and gcc.target/aarch64/ashltidisi.c tests fail after combine two to two instruction patch on aarch64)

2018-09-28  Steve Ellcey  <sellcey@cavium.com>

        PR testsuite/87433
        * gcc.target/aarch64/ashltidisi.c: Expect 3 asr instructions
        instead of 4.

From-SVN: r264692
Steve Ellcey committed
-