- 04 Mar, 2020 23 commits
gcc/analyzer/ChangeLog: * engine.cc (worklist::worklist): Remove unused field m_eg. (class viz_callgraph_edge): Remove unused field m_call_sedge. (class viz_callgraph): Remove unused field m_sg. * exploded-graph.h (worklist::m_eg): Remove unused field.
David Malcolm committed -
The discussion of iterator_traits<volatile T*>::value_type and the example with three template arguments related to an earlier version of the patch, not the one committed. Also improve the comment on __memcmpable. * include/bits/cpp_type_traits.h (__memcpyable): Fix comment.
Jonathan Wakely committed -
PR87560 reports an ICE when a test case is compiled with -mpower9-vector and -mno-altivec. This patch terminates compilation with an error when this combination (and other unreasonable ones) is requested. Bootstrapped and tested on powerpc64le-unknown-linux-gnu with no regressions. The reported error is now: f951: Error: '-mno-altivec' turns off '-mpower9-vector' 2020-03-02 Bill Schmidt <wschmidt@linux.ibm.com> PR target/87560 * rs6000-cpus.def (OTHER_ALTIVEC_MASKS): New #define. * rs6000.c (rs6000_disable_incompatible_switches): Add table entry for OPTION_MASK_ALTIVEC.
Bill Schmidt committed -
Building a zTPF cross currently fails when building libstdc++, complaining that __UINTPTR_TYPE__ is missing. Fixed by including the glibc-stdint.h header. 2020-03-04 Andreas Krebbel <krebbel@linux.ibm.com> * config.gcc: Include the glibc-stdint.h header for zTPF.
Andreas Krebbel committed -
On zTPF we must not use floating-point registers. gcc/ChangeLog: 2020-03-04 Andreas Krebbel <krebbel@linux.ibm.com> * config/s390/s390.c (s390_secondary_memory_needed): Disallow direct FPR-GPR copies. (s390_register_info_gprtofpr): Disallow GPR content to be saved in FPRs.
Andreas Krebbel committed -
libgcc is supposed to be built with the trace-skip flags and branch targets. Add a zTPF makefile fragment and the -mtpf-trace-skip option. libgcc/ChangeLog: 2020-03-04 Andreas Krebbel <krebbel@linux.ibm.com> * config.host: Include the new makefile fragment. * config/s390/t-tpf: New file.
Andreas Krebbel committed -
The zTPF OS implements a tracing facility for function entry and exit which uses global flags and trace function addresses. The addresses of the flags as well as the trace functions are currently hard-coded in the zTPF-specific GCC parts of the IBM Z back-end. With this patch these addresses can be changed at compile time using the new command-line options. For convenience, one additional command-line option (-mtpf-trace-skip) implements a new set of hard-coded addresses. gcc/ChangeLog: 2020-03-04 Andreas Krebbel <krebbel@linux.ibm.com> * config/s390/s390.c (s390_emit_prologue): Specify the 2 new operands to the prologue_tpf expander. (s390_emit_epilogue): Likewise. (s390_option_override_internal): Do error checking and setup for the new options. * config/s390/tpf.h (TPF_TRACE_PROLOGUE_CHECK) (TPF_TRACE_EPILOGUE_CHECK, TPF_TRACE_PROLOGUE_TARGET) (TPF_TRACE_EPILOGUE_TARGET, TPF_TRACE_PROLOGUE_SKIP_TARGET) (TPF_TRACE_EPILOGUE_SKIP_TARGET): New macro definitions. * config/s390/tpf.md ("prologue_tpf", "epilogue_tpf"): Add two new operands for the check flag and the branch target. * config/s390/tpf.opt ("mtpf-trace-hook-prologue-check") ("mtpf-trace-hook-prologue-target") ("mtpf-trace-hook-epilogue-check") ("mtpf-trace-hook-epilogue-target", "mtpf-trace-skip"): New options. * doc/invoke.texi: Document -mtpf-trace-skip option. The other options are for debugging purposes and will not be documented here.
Andreas Krebbel committed -
* gcc.target/i386/pr91623.c: Add -fcommon in order to re-trigger the needed code for the test-case which was added in r10-2910-g9151048d.
Martin Liska committed -
In the following testcase we emit wrong debug info for the karg parameter in the DW_TAG_inlined_subroutine into main. The problem is that the karg PARM_DECL is DECL_BY_REFERENCE and thus in the IL has const K & type, but in the source just const K. When the function is inlined, we create a VAR_DECL for it, but don't set DECL_BY_REFERENCE, so when emitting DW_AT_location, we treat it like a const K & typed variable, but it has DW_AT_abstract_origin which has just the const K type and thus the debugger thinks the variable has const K type. Fixed by copying the DECL_BY_REFERENCE flag. Not doing it in copy_decl_for_dup_finish, because copy_decl_no_change already copies that flag through copy_node and in copy_result_decl_to_var it is undesirable, as we handle DECL_BY_REFERENCE in that case instead by changing the type. 2020-03-04 Jakub Jelinek <jakub@redhat.com> PR debug/93888 * tree-inline.c (copy_decl_to_var): Copy DECL_BY_REFERENCE flag. * g++.dg/guality/pr93888.C: New test.
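As a rough illustration (a hypothetical sketch, not the committed g++.dg/guality/pr93888.C testcase), the situation arises for a class parameter that is passed by invisible reference and then inlined:

    struct K
    {
      K (int x) : i (x) {}
      K (const K &k) : i (k.i) {}  // non-trivial copy ctor: karg becomes DECL_BY_REFERENCE
      int i;
    };

    static int
    callee (const K karg)  // const K & in the IL, const K in the source
    {
      return karg.i;       // when inlined into main, the debugger should show karg as const K
    }

    int
    main ()
    {
      K k (42);
      return callee (k) - 42;
    }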
Jakub Jelinek committed -
The following patch attempts to avoid dangerous overflows in the various push_partial_def HOST_WIDE_INT computations. This is achieved by performing the subtraction offset2i - offseti in the push_partial_def function and by doing some tweaks before that. If a constant store (non-CONSTRUCTOR) is too large (perhaps just a hypothetical case), native_encode_expr would fail for it, but we don't necessarily need to fail right away; instead we can treat it like a non-constant store, and if it is already shadowed, we can ignore it. Otherwise, if it is at most 64 bytes, the caller ensured that there is a range overlap, and push_partial_def ensures the load is at most 64 bytes, I think we should be fine: the offset (relative to the load) can only be from -64*8+1 to 64*8-1 and the size at most 64*8, so there is no risk of overflowing HOST_WIDE_INT computations. For CONSTRUCTOR (or non-constant) stores, those can indeed be arbitrarily large; the caller just checks that both the absolute offset and size fit into signed HWI. But we store the same bytes in that case over and over (both in the {} case where it is all 0, and in the hypothetical future case where we handle in push_partial_def also memset (, 123, )), so we can tweak the write range for our purposes. For a {} store we could just cap it at the start offset and/or offset+size because all the bits are 0, but I wrote it in anticipation of the memset case, and so the relative offset can now be down to -7 and similarly the size can grow up to 64 bytes + 14 bits, all this trying to preserve the offset difference % BITS_PER_UNIT or the end as well. 2020-03-04 Jakub Jelinek <jakub@redhat.com> * tree-ssa-sccvn.c (vn_walk_cb_data::push_partial_def): Add offseti argument. Change pd argument so that it can be modified. Turn constant non-CONSTRUCTOR store into non-constant if it is too large. Adjust offset and size of CONSTRUCTOR or non-constant store to avoid overflows. (vn_walk_cb_data::vn_walk_cb_data, vn_reference_lookup_3): Adjust callers.
Jakub Jelinek committed -
Pointers eventually need intermediate conversions in code generation. Allowing them is much easier than fending them off since niter and scev expansion easily drag those in. 2020-03-04 Richard Biener <rguenther@suse.de> PR tree-optimization/93964 * graphite-isl-ast-to-gimple.c (gcc_expression_from_isl_ast_expr_id): Add intermediate conversion for pointer to integer converts. * graphite-scop-detection.c (assign_parameter_index_in_region): Relax assert. * gcc.dg/graphite/pr93964.c: New testcase.
Richard Biener committed -
PR c/93886 PR c/93887 * doc/invoke.texi: Clarify --help=language and --help=common interaction.
Martin Liska committed -
* method.c: Wrap array in ctor with braces in order to silence clang warnings.
Martin Liska committed -
When a function returns void or the return value is ignored, ass_var is NULL_TREE. The tail recursion handling generally assumes DCE has been performed and so doesn't expect to encounter useless assignments after the call, and expects them to be part of the return value adjustment that needs to be changed into tail recursion additions/multiplications. process_assignment does some verification and has a way to tell the caller to try to move dead or whatever other stmts that don't participate in the return value modifications before it is returned. For binary rhs assignments it is just fine, neither op0 nor op1 will be NULL_TREE and thus if *ass_var is NULL_TREE, it will not match, but unary rhs is handled by only setting op0 to rhs1 and setting op1 to NULL_TREE. And at this point, NULL_TREE == NULL_TREE, and thus we think e.g. the c_2 = -e_3(D); dead stmt is actually a return value modification, so we queue it as a multiplication and then create a void type SSA_NAME accumulator for it and ICE shortly after. Fixed by making sure the op1 == *ass_var comparison is done only if *ass_var is non-NULL. 2020-03-04 Jakub Jelinek <jakub@redhat.com> PR tree-optimization/94001 * tree-tailcall.c (process_assignment): Before comparing op1 to *ass_var, verify *ass_var is non-NULL. * gcc.dg/pr94001.c: New test.
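A hypothetical reduction of the problematic shape (the committed gcc.dg/pr94001.c may differ; this assumes the dead statement survives until the tail-recursion pass runs):

    void
    bar (int x)
    {
      if (x)
        {
          bar (x - 1);   /* recursive call returning void, so ass_var is NULL_TREE */
          int e;
          int c = -e;    /* dead unary stmt: op1 == NULL_TREE wrongly matched *ass_var */
        }
    }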
Jakub Jelinek committed -
The last code-gen change for LTGT didn't consider the situation of a cbranch with LTGT; branch instructions only support a few compare codes. gcc/ChangeLog * config/riscv/riscv.c (riscv_emit_float_compare): Use NE to compare the result of the IOR. gcc/testsuite/ChangeLog * gcc.dg/pr93995.c: New.
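A cbranch with LTGT can arise, for example, from __builtin_islessgreater (a hypothetical sketch of the affected shape, possibly not the exact pr93995 testcase):

    int
    foo (float a, float b)
    {
      if (__builtin_islessgreater (a, b))  /* LTGT compare feeding a conditional branch */
        return 1;
      return 0;
    }

With the fix, the comparison is expanded as two ordered compares whose IOR is then branched on via NE against zero.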
Kito Cheng committed -
My GCC 9 patch for C++20 P0846R0 (ADL and function templates) tweaked cp_parser_template_name to only return an identifier if name lookup didn't find anything. In the deduce4.C case it means that we now return an OVERLOAD. That means that cp_parser_template_id will call lookup_template_function, thereby producing a TEMPLATE_ID_EXPR with unknown_type_node. Previously, we created a TEMPLATE_ID_EXPR with no type, making it type-dependent. What we have now is no longer type-dependent. And so, when we call finish_call_expr after we've parsed "foo<int>(10)", even though we're in a template, we still do the normal processing, thus perform overload resolution. When adding the template candidate foo we need to deduce the template arguments, and that is where things go downhill. When fn_type_unification sees that we have explicit template arguments, but they aren't complete, it will use them to substitute the function type. So we substitute e.g. "void <T33d> (U)". But the explicit template argument was for a different parameter so we don't actually substitute anything. But the problem here was that we reduced the template level of 'U' anyway. So then when we're actually deducing the template arguments via type_unification_real, we fail in unify: 22932 if (TEMPLATE_TYPE_LEVEL (parm) 22933 != template_decl_level (tparm)) 22934 /* The PARM is not one we're trying to unify. Just check 22935 to see if it matches ARG. */ because 'parm' has been reduced but 'tparm' has not yet. Therefore we shouldn't reduce the template level of template parameters when tf_partial (aka template argument deduction substitution) is in effect. But we can only return after performing the cp_build_qualified_type etc. business; otherwise things break horribly. 2020-03-03 Jason Merrill <jason@redhat.com> Marek Polacek <polacek@redhat.com> PR c++/90505 - mismatch in template argument deduction. * pt.c (tsubst): Don't reduce the template level of template parameters when tf_partial. * g++.dg/template/deduce4.C: New test. * g++.dg/template/deduce5.C: New test. * g++.dg/template/deduce6.C: New test. * g++.dg/template/deduce7.C: New test.
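A sketch of the shape at issue (hypothetical; the committed deduce4.C may differ):

    template <typename T, typename U>
    void foo (U) {}

    template <typename V>
    void bar ()
    {
      foo<int> (10);  // explicit args incomplete: T is given, U must be deduced
    }

    int
    main ()
    {
      bar<char> ();
    }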
Marek Polacek committed -
When deciding whether to perform the memset optimization in ranges::fill_n, we were crucially neglecting to check that the output pointer's value type is a byte type. This patch adds such a check to the problematic condition in ranges::fill_n. At the same time, this patch relaxes the overly conservative __is_byte<_Tp>::__value check that requires the fill type be a byte type. It's overly conservative because it means we won't enable the memset optimization in an example like char c[100]; ranges::fill(c, 37); where the fill type is deduced to be int. Rather than requiring that the fill type be a byte type, it seems safe to just require the fill type be an integral type, which is what this patch does. libstdc++-v3/ChangeLog: PR libstdc++/94017 * include/bits/ranges_algobase.h (__fill_n_fn::operator()): Refine condition for when to use memset, making sure to additionally check that the output pointer's value type is a non-volatile byte type. Instead of requiring that the fill type is a byte type, just require that it's an integral type. * testsuite/20_util/specialized_algorithms/uninitialized_fill/94017.cc: New test. * testsuite/20_util/specialized_algorithms/uninitialized_fill_n/94017.cc: New test. * testsuite/25_algorithms/fill/94013.cc: Uncomment part that was blocked by PR 94017. * testsuite/25_algorithms/fill/94017.cc: New test. * testsuite/25_algorithms/fill_n/94017.cc: New test.
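A hypothetical illustration of both halves of the change:

    #include <algorithm>

    int
    main ()
    {
      int a[4] = {};
      // Fill type char is a byte type, but the output value type is int:
      // dispatching to memset here would be wrong, hence the new check.
      std::ranges::fill_n (a, 4, 'x');

      char c[100];
      std::ranges::fill (c, 37);  // fill type int: memset is safe and now enabled
    }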
Patrick Palka committed -
This adds support for move-only input iterators in the ranges::uninitialized_* algorithms defined in <memory>, as per LWG 3355. The only changes needed are to add calls to std::move in the appropriate places and to use operator- instead of ranges::distance, because the latter cannot be used with a move-only iterator that has a sized sentinel, as is the case here. (This issue with ranges::distance is LWG 3392.) libstdc++-v3/ChangeLog: LWG 3355 The memory algorithms should support move-only input iterators introduced by P1207 * include/bits/ranges_uninitialized.h (__uninitialized_copy_fn::operator()): Use std::move to avoid attempting to copy __ifirst, which could be a move-only input iterator. Use operator- instead of ranges::distance to compute distance from a sized sentinel. (__uninitialized_copy_n_fn::operator()): Likewise. (__uninitialized_move_fn::operator()): Likewise. (__uninitialized_move_n_fn::operator()): Likewise. (__uninitialized_destroy_fn::operator()): Use std::move to avoid attempting to copy __first. (__uninitialized_destroy_n_fn::operator()): Likewise. * testsuite/20_util/specialized_algorithms/destroy/constrained.cc: Augment test. * .../specialized_algorithms/uninitialized_copy/constrained.cc: Likewise. * .../specialized_algorithms/uninitialized_move/constrained.cc: Likewise.
Patrick Palka committed -
This adds a testsuite range type whose end() is a sized sentinel to <testsuite_iterators.h>, which will be used in the tests that verify LWG 3355. libstdc++-v3/ChangeLog: * testsuite/util/testsuite_iterators.h (test_range::get_iterator): Make protected instead of private. (test_sized_range_sized_sent): New.
Patrick Palka committed -
This adds a move-only testsuite iterator wrapper to <testsuite_iterators.h> which will be used in the tests for LWG 3355. The tests for LWG 3389 and 3390 are adjusted to use this new iterator wrapper. libstdc++-v3/ChangeLog: * testsuite/util/testsuite_iterators.h (input_iterator_wrapper_nocopy): New testsuite iterator. * testsuite/24_iterators/counted_iterator/lwg3389.cc: Use it. * testsuite/24_iterators/move_iterator/lwg3390.cc: Likewise.
Patrick Palka committed -
We are passing a value type as the first argument to is_nothrow_assignable_v, but the result of that is inevitably false. Since this predicate is a part of the condition that guards the corresponding optimizations for these algorithms, this bug means these optimizations are never used. We should be passing a reference type to is_nothrow_assignable_v instead. libstdc++-v3/ChangeLog: * include/bits/ranges_uninitialized.h (uninitialized_copy_fn::operator()): Pass a reference type as the first argument to is_nothrow_assignable_v. (uninitialized_copy_fn::operator()): Likewise. (uninitialized_move_fn::operator()): Likewise. Return an in_out_result with the input iterator stripped of its move_iterator. (uninitialized_move_n_fn::operator()): Likewise. (uninitialized_fill_fn::operator()): Pass a reference type as the first argument to is_nothrow_assignable_v. (uninitialized_fill_n_fn::operator()): Likewise.
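A short demonstration of why the value-type query could never hold (standard trait behavior, not code from the patch):

    #include <type_traits>

    // Assigning to a scalar prvalue is ill-formed, so this is always false:
    static_assert (!std::is_nothrow_assignable_v<int, int>);

    // The reference-type query is the meaningful one:
    static_assert (std::is_nothrow_assignable_v<int &, int>);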
Patrick Palka committed -
gcc/cp * coroutines.cc (captures_temporary): Strip component_ref to its base object. gcc/testsuite * g++.dg/coroutines/torture/co-await-15-capture-comp-ref.C: New test.
JunMa committed -
GCC Administrator committed
- 03 Mar, 2020 16 commits
Several algorithms check the is_trivially_copyable trait to decide whether to dispatch to memmove or memcmp as an optimization. Since r271435 (CWG DR 2094) the trait is true for volatile-qualified scalars, but we can't use memmove or memcmp when the type is volatile. We need to also check for volatile types. This is complicated by the fact that in C++20 (but not earlier standards) iterator_traits<volatile T*>::value_type is T, so we can't just check whether the value_type is volatile. The solution in this patch is to introduce new traits __memcpyable and __memcmpable which combine into a single trait the checks for pointers, the value types being the same, and the type being trivially copyable but not volatile-qualified. PR libstdc++/94013 * include/bits/cpp_type_traits.h (__memcpyable, __memcmpable): New traits to control when to use memmove and memcmp optimizations. (__is_nonvolatile_trivially_copyable): New helper trait. * include/bits/ranges_algo.h (__lexicographical_compare_fn): Do not use memcmp optimization with volatile data. * include/bits/ranges_algobase.h (__equal_fn): Use __memcmpable. (__copy_or_move, __copy_or_move_backward): Use __memcpyable. * include/bits/stl_algobase.h (__copy_move_a2): Use __memcpyable. (__copy_move_backward_a2): Likewise. (__equal_aux1): Use __memcmpable. (__lexicographical_compare_aux): Do not use memcmp optimization with volatile data. * testsuite/25_algorithms/copy/94013.cc: New test. * testsuite/25_algorithms/copy_backward/94013.cc: New test. * testsuite/25_algorithms/equal/94013.cc: New test. * testsuite/25_algorithms/fill/94013.cc: New test. * testsuite/25_algorithms/lexicographical_compare/94013.cc: New test. * testsuite/25_algorithms/move/94013.cc: New test. * testsuite/25_algorithms/move_backward/94013.cc: New test.
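A sketch of the kind of case the new traits must reject (illustrative only):

    #include <algorithm>

    volatile int src[4] = {1, 2, 3, 4};
    volatile int dst[4];

    int
    main ()
    {
      // is_trivially_copyable is true for volatile int since CWG DR 2094,
      // but the copy must use element-wise volatile accesses, not memmove.
      std::copy (src, src + 4, dst);
    }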
Jonathan Wakely committed -
We ICE on the following testcase since I've added the SAVE_EXPR-like constexpr handling where the TARGET_EXPR initializer (and cleanup) is evaluated only once (because it might have side-effects like new or delete expressions in it). The problem is if the TARGET_EXPR (but I guess in theory SAVE_EXPR too) initializer is *non_constant_p. We still remember the result, but not the fact that it is *non_constant_p. Normally that wouldn't be a big problem: if something is *non_constant_p, we only or into it and so the whole expression will be non-constant too. Except in the builtins handling, we try to evaluate the arguments with non_constant_p pointing into a dummy1 bool which we ignore. This is because some builtins might fold into a constant even if they don't have a constexpr argument. Unfortunately if we evaluate the TARGET_EXPR first in the argument of such a builtin and then once again, we don't set *non_constant_p. So, either we don't remember the TARGET_EXPR/SAVE_EXPR result if it wasn't constant, like the following patch does, or we could remember it, but in some way that would make it clear that it is non-constant (e.g. by pushing into the global->values a SAVE_EXPR, SAVE_EXPR entry, and perhaps for TARGET_EXPR not remembering it on TARGET_EXPR_SLOT but the TARGET_EXPR itself and similarly pushing TARGET_EXPR, TARGET_EXPR, and if we see those after the lookup, diagnose + set *non_constant_p). Or we could perhaps during the builtin argument evaluation push expressions into a different save_expr vec and undo them afterwards. 2020-03-03 Jakub Jelinek <jakub@redhat.com> PR c++/93998 * constexpr.c (cxx_eval_constant_expression) <case TARGET_EXPR, case SAVE_EXPR>: Don't record anything if *non_constant_p is true. * g++.dg/ext/pr93998.C: New test.
Jakub Jelinek committed -
Unified syntax has been the official syntax for thumb1 assembly for over 10 years now. It's time we made preparations for that becoming the default in the assembler. But before we can start doing that we really need to clean up some laggards from the olden days. Libgcc support for thumb1 is one such example. This patch converts all of the legacy (disjoint) syntax that I could find over to unified code. The identification was done by using a trick version of gas that defaulted to unified mode and then faulted if legacy syntax was encountered. The code produced was then compared against the old code to check for differences. One such difference does exist, but that is because in unified syntax 'movs rd, rn' is encoded as 'lsls rd, rn, #0', rather than 'adds rd, rn, #0'; but that is a deliberate change that was introduced because the lsls encoding more closely reflects the behaviour of 'movs' in arm state (where only some of the condition flags are modified). * config/arm/bpabi-v6m.S (aeabi_lcmp): Convert thumb1 code to unified syntax. (aeabi_ulcmp, aeabi_ldivmod, aeabi_uldivmod): Likewise. (aeabi_frsub, aeabi_cfcmpeq, aeabi_fcmpeq): Likewise. (aeabi_fcmp, aeabi_drsub, aeabi_cdrcmple): Likewise. (aeabi_cdcmpeq, aeabi_dcmpeq, aeabi_dcmp): Likewise. * config/arm/lib1funcs.S (Lend_fde): Convert thumb1 code to unified syntax. (divsi3, modsi3): Likewise. (clzdi2, ctzsi2): Likewise. * config/arm/libunwind.S (restore_core_regs): Convert thumb1 code to unified syntax. (UNWIND_WRAPPER): Likewise.
Richard Earnshaw committed -
This patch is part of a series adding support for Armv8.6-A features. It implements intrinsics to convert between bfloat16 and float32 formats. gcc/ChangeLog: * config/arm/arm_bf16.h (vcvtah_f32_bf16, vcvth_bf16_f32): New. * config/arm/arm_neon.h (vcvt_f32_bf16, vcvtq_low_f32_bf16): New. (vcvtq_high_f32_bf16, vcvt_bf16_f32): New. (vcvtq_low_bf16_f32, vcvtq_high_bf16_f32): New. * config/arm/arm_neon_builtins.def (vbfcvt, vbfcvt_high): New entries. (vbfcvtv4sf, vbfcvtv4sf_high): Likewise. * config/arm/iterators.md (VBFCVT, VBFCVTM): New mode iterators. (V_bf_low, V_bf_cvt_m): New mode attributes. * config/arm/neon.md (neon_vbfcvtv4sf<VBFCVT:mode>): New. (neon_vbfcvtv4sf_highv8bf, neon_vbfcvtsf): New. (neon_vbfcvt<VBFCVT:mode>, neon_vbfcvt_highv8bf): New. (neon_vbfcvtbf_cvtmode<mode>, neon_vbfcvtbf): New * config/arm/unspecs.md (UNSPEC_BFCVT, UNSPEC_BFCVT_HIG): New. gcc/testsuite/ChangeLog: * gcc.target/arm/simd/bf16_cvt_1.c: New test.
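Hypothetical usage of the new intrinsics (the exact option spelling, e.g. -march=armv8.2-a+bf16 -mfloat-abi=hard, is an assumption):

    #include <arm_neon.h>

    float32_t
    scalar_cvt (bfloat16_t b)
    {
      return vcvtah_f32_bf16 (b);     /* widen one bfloat16 to float32 */
    }

    float32x4_t
    vector_cvt (bfloat16x8_t v)
    {
      return vcvtq_low_f32_bf16 (v);  /* widen the low four bfloat16 lanes */
    }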
Dennis Zhang committed -
As noted in LWG 3410 the specification in the C++20 draft performs more iterator comparisons than necessary when the end of either range is reached. Our implementation followed that specification. This removes the redundant comparisons so that we do no unnecessary work as soon as we find that we've reached the end of either range. The odd-looking return statement is because it generates better code than the original version that copied the global constants. * include/bits/stl_algobase.h (lexicographical_compare_three_way): Avoid redundant iterator comparisons (LWG 3410).
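For reference, a minimal use of the algorithm (C++20):

    #include <algorithm>
    #include <compare>

    int
    main ()
    {
      int a[] = {1, 2, 3};
      int b[] = {1, 2, 4};
      auto r = std::lexicographical_compare_three_way (a, a + 3, b, b + 3);
      return r < 0 ? 0 : 1;  // a is less at the first difference
    }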
Jonathan Wakely committed -
As mentioned in the PR and discussed on IRC, the following patch is the patch that fixes the originally reported issue. What we have there, because of the premature bitfield comparison -> BIT_FIELD_REF optimization, is: s$s4_19 = 0; s.s4 = s$s4_19; _10 = BIT_FIELD_REF <s, 8, 0>; _13 = _10 & 8; and no other s fields are initialized. If they were all initialized with constants, then my earlier PR93582 bitfield handling patches would handle it already, but if at least one bit we ignore after the BIT_AND_EXPR masking is not initialized or is initialized earlier to something non-constant, we aren't able to look through it until combine, which is too late for the warnings on the dead code. This patch handles BIT_AND_EXPR where the first operand is an SSA_NAME initialized with a memory load and the second operand is an INTEGER_CST, by trying a partial def lookup after pushing the ranges of 0 bits in the mask as artificial initializers. In the above case on little-endian, we push an offset 0 size 3 {} partial def and offset 4 size 4 (the result is unsigned char) and then perform normal partial def handling. My initial version of the patch failed miserably during bootstrap, because data->finish (...) called vn_reference_lookup_or_insert_for_pieces which I believe tried to remember the masked value rather than the real one for the reference, or for a failed lookup visit_reference_op_load called vn_reference_insert. The following version makes sure we aren't calling either of those functions in the masked case, as we don't know anything better about the reference from whatever has been discovered when the load stmt has been visited; the patch just calls vn_nary_op_insert_stmt on failure with the lhs (apparently calling it with the INTEGER_CST doesn't work). 2020-03-03 Jakub Jelinek <jakub@redhat.com> PR tree-optimization/93582 * tree-ssa-sccvn.h (vn_reference_lookup): Add mask argument. * tree-ssa-sccvn.c (struct vn_walk_cb_data): Add mask and masked_result members, initialize them in the constructor and if mask is non-NULL, artificially push_partial_def {} for the portions of the mask that contain zeros. (vn_walk_cb_data::finish): If mask is non-NULL, set masked_result to val and return (void *)-1. Formatting fix. (vn_reference_lookup_pieces): Adjust vn_walk_cb_data initialization. Formatting fix. (vn_reference_lookup): Add mask argument. If non-NULL, don't call fully_constant_vn_reference_p nor vn_reference_lookup_1 and return data.masked_result. (visit_nary_op): Handle BIT_AND_EXPR of a memory load and INTEGER_CST mask. (visit_stmt): Formatting fix. * gcc.dg/tree-ssa/pr93582-10.c: New test. * gcc.dg/pr93582.c: New test. * gcc.c-torture/execute/pr93582.c: New test.
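A hypothetical source-level shape matching the GIMPLE above (the committed pr93582 tests may differ); s4 sits at bit 3, hence the mask of 8:

    struct S { int s1 : 1, s2 : 1, s3 : 1, s4 : 1, s5 : 1; };

    int
    foo (void)
    {
      struct S s;
      s.s4 = 0;          /* only this bitfield is initialized */
      return s.s4 != 0;  /* folded early to (BIT_FIELD_REF <s, 8, 0> & 8) != 0 */
    }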
Jakub Jelinek committed -
This fixes a common mistake of removing a store that looks redundant but is not, because it changes the dynamic type of the memory and thus makes a difference for following loads with TBAA. 2020-03-03 Richard Biener <rguenther@suse.de> PR tree-optimization/93946 * alias.h (refs_same_for_tbaa_p): Declare. * alias.c (refs_same_for_tbaa_p): New function. * tree-ssa-alias.c (ao_ref_alias_set): For a NULL ref return zero. * tree-ssa-scopedtables.h (avail_exprs_stack::lookup_avail_expr): Add output argument giving access to the hashtable entry. * tree-ssa-scopedtables.c (avail_exprs_stack::lookup_avail_expr): Likewise. * tree-ssa-dom.c: Include alias.h. (dom_opt_dom_walker::optimize_stmt): Validate TBAA state before removing redundant store. * tree-ssa-sccvn.h (vn_reference_s::base_set): New member. (ao_ref_init_from_vn_reference): Adjust prototype. (vn_reference_lookup_pieces): Likewise. (vn_reference_insert_pieces): Likewise. * tree-ssa-sccvn.c: Track base alias set in addition to alias set everywhere. (eliminate_dom_walker::eliminate_stmt): Also check base alias set when removing redundant stores. (visit_reference_op_store): Likewise. * dse.c (record_store): Adjust validity check for redundant store removal. * gcc.dg/torture/pr93946-1.c: New testcase. * gcc.dg/torture/pr93946-2.c: Likewise.
Richard Biener committed -
In Fedora we configure GCC with --with-arch=zEC12 --with-tune=z13 right now and furthermore redhat-rpm-config adds to rpm packages -march=zEC12 -mtune=z13 options (among others). While looking at the git compilation, I've been surprised that -O2 actually behaves differently from -O2 -mtune=z13 in this configuration, and indeed, seems --with-tune= is completely ignored on s390 if --with-arch= is specified. i386 had the same problem, but got that fixed in 2006, see PR26877. The thing is that for tune, we add -mtune=%(VALUE) only if neither -mtune= nor -march= is present, but as arch is processed first, it adds -march=%(VALUE) first and then -march= is always present and so -mtune= is never added. By reordering it in OPTION_DEFAULT_SPECS, we process tune first, add the default -mtune=%(VALUE) if -mtune= or -march= isn't seen, and then add -march=%(VALUE) if -march= isn't seen. It is true that cc1 etc. will be then invoked with -mtune=z13 -march=zEC12, but like if the user specifies it in that order, it should still use z13 tuning and zEC12 ISA set. 2020-03-03 Jakub Jelinek <jakub@redhat.com> PR target/26877 * config/s390/s390.h (OPTION_DEFAULT_SPECS): Reorder.
Jakub Jelinek committed -
The following testcase ICEs in a cross to riscv64-linux. The problem is that we have a DImode integral constant (which doesn't fit into SImode) that is pushed into a constant pool, and we later access just the first half of it using a MEM. When plus_constant is called on such a MEM, if the constant has a mode, we verify the mode, but if it doesn't, we don't, and we ICE later on when we think the CONST_INT is a valid SImode constant. 2020-03-03 Jakub Jelinek <jakub@redhat.com> PR rtl-optimization/94002 * explow.c (plus_constant): Punt if cst has VOIDmode and get_pool_mode is different from mode. * gcc.dg/pr94002.c: New test.
Jakub Jelinek committed -
All of ARC's small-data addressing uses the address-scaling feature of the load/store instructions (i.e., the address is made of a general pointer plus a shifted offset; the shift amount depends on the addressing mode). This patch checks whether the offset of an address fits the scaled constraint; if so, a small-data access is generated. This patch fixes the execute/pr93249 failure. gcc/ xxxx-xx-xx Claudiu Zissulescu <claziss@synopsys.com> * config/arc/arc.c (leigitimate_small_data_address_p): Check if an address has an offset which fits the scaling constraint for a load/store operation. (legitimate_scaled_address_p): Update to use leigitimate_small_data_address_p. (arc_print_operand): Likewise. (arc_legitimate_address_p): Likewise. (legitimate_small_data_address_p): Likewise. Signed-off-by: Claudiu Zissulescu <claziss@gmail.com>
Claudiu Zissulescu committed -
With the refurbishing of ARC600's accumulator support, the mlo_operand doesn't reflect the proper low accumulator register for the newer ARCv2 accumulator used by the fma instructions. Hence, replace it with the accl_operand predicate. gcc/ xxxx-xx-xx Claudiu Zissulescu <claziss@synopsys.com> * config/arc/arc.md (fmasf4_fpu): Use accl_operand predicate. (fnmasf4_fpu): Likewise.
Claudiu Zissulescu committed -
Early expand ADDDI3 and SUBDI3 for better code gen. gcc/ xxxx-xx-xx Claudiu Zissulescu <claziss@synopsys.com> * config/arc/arc.md (adddi3): Early expand the 64bit operation into 32bit ops. (subdi3): Likewise. (adddi3_i): Remove pattern. (subdi3_i): Likewise.
Claudiu Zissulescu committed -
gcc/ xxxx-xx-xx Claudiu Zissulescu <claziss@synopsys.com> * config/arc/arc.md (eh_return): Add length info.
Claudiu Zissulescu committed -
2020-03-03 Jakub Jelinek <jakub@redhat.com> PR tree-optimization/93927 * gcc.c-torture/compile/pr93927-1.c: New test. * gcc.c-torture/compile/pr93927-2.c: New test.
Jakub Jelinek committed -
gcc/cp * coroutines.cc (finish_co_await_expr): Build co_await_expr with unknown_type_node. (finish_co_yield_expr): Ditto. * pt.c (type_dependent_expression_p): Set co_await/yield_expr with unknown type as dependent. gcc/testsuite * g++.dg/coroutines/torture/co-await-14-template-traits.C: New test.
JunMa committed -
GCC Administrator committed
- 02 Mar, 2020 1 commit
The note about duplicates attached to analyzer diagnostics feels like an implementation detail; it's likely just noise from the perspective of an end-user. This patch disables it by default, introducing a flag to re-enable it. gcc/analyzer/ChangeLog: * analyzer.opt (fanalyzer-show-duplicate-count): New option. * diagnostic-manager.cc (diagnostic_manager::emit_saved_diagnostic): Use the above to guard the printing of the duplicate count. gcc/ChangeLog: * doc/invoke.texi (-fanalyzer-show-duplicate-count): New. gcc/testsuite/ChangeLog: * gcc.dg/analyzer/CVE-2005-1689-dedupe-issue.c: Add -fanalyzer-show-duplicate-count.
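For example, the note can be re-enabled explicitly (hypothetical invocation):

    gcc -fanalyzer -fanalyzer-show-duplicate-count test.c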
David Malcolm committed