- 04 Mar, 2020 9 commits
The last code generation change for LTGT didn't consider the situation of a cbranch with LTGT; branches only support a few comparison codes.

gcc/ChangeLog

	* config/riscv/riscv.c (riscv_emit_float_compare): Use NE to
	compare the result of IOR.

gcc/testsuite/ChangeLog

	* gcc.dg/pr93995.c: New.
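A source-level sketch of the lowering described above (illustrative only; hypothetical C, not the expander's actual output):

	int cmp_ltgt (double a, double b)
	{
	  int lt = a < b;
	  int gt = a > b;
	  return (lt | gt) != 0;  /* the cbranch now tests NE on the IOR result */
	}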
Kito Cheng committed -
My GCC 9 patch for C++20 P0846R0 (ADL and function templates) tweaked cp_parser_template_name to only return an identifier if name lookup didn't find anything.  In the deduce4.C case it means that we now return an OVERLOAD.  That means that cp_parser_template_id will call lookup_template_function, thereby producing a TEMPLATE_ID_EXPR with unknown_type_node.  Previously, we created a TEMPLATE_ID_EXPR with no type, making it type-dependent.  What we have now is no longer type-dependent.

And so, when we call finish_call_expr after we've parsed "foo<int>(10)", even though we're in a template, we still do the normal processing and thus perform overload resolution.  When adding the template candidate foo we need to deduce the template arguments, and that is where things go downhill.  When fn_type_unification sees that we have explicit template arguments, but they aren't complete, it will use them to substitute the function type.  So we substitute e.g. "void <T33d> (U)".  But the explicit template argument was for a different parameter, so we don't actually substitute anything.  The problem here was that we reduced the template level of 'U' anyway.  So then, when we're actually deducing the template arguments via type_unification_real, we fail in unify:

  22932     if (TEMPLATE_TYPE_LEVEL (parm)
  22933         != template_decl_level (tparm))
  22934       /* The PARM is not one we're trying to unify.  Just check
  22935          to see if it matches ARG.  */

because 'parm' has been reduced but 'tparm' has not been.  Therefore we shouldn't reduce the template level of template parameters when tf_partial is set, i.e. during template argument deduction substitution.  But we can only return after performing the cp_build_qualified_type etc. business, otherwise things break horribly.

2020-03-03  Jason Merrill  <jason@redhat.com>
	    Marek Polacek  <polacek@redhat.com>

	PR c++/90505 - mismatch in template argument deduction.
	* pt.c (tsubst): Don't reduce the template level of template
	parameters when tf_partial.

	* g++.dg/template/deduce4.C: New test.
	* g++.dg/template/deduce5.C: New test.
	* g++.dg/template/deduce6.C: New test.
	* g++.dg/template/deduce7.C: New test.
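A sketch of the shape of code this affects (an assumed illustration, not the actual deduce4.C): inside a template, foo<int>(10) names an overload set, the explicit argument list is incomplete, and U must still be deduced from the call argument.

	template<typename T, typename U> void foo (U);

	template<typename T>
	void bar ()
	{
	  foo<int> (10);   // T given explicitly, U still to be deduced
	}

	template void bar<char> ();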
Marek Polacek committed -
When deciding whether to perform the memset optimization in ranges::fill_n, we were crucially neglecting to check that the output pointer's value type is a byte type.  This patch adds such a check to the problematic condition in ranges::fill_n.

At the same time, this patch relaxes the overly conservative __is_byte<_Tp>::__value check that requires the fill type be a byte type.  It's overly conservative because it means we won't enable the memset optimization in the following example

  char c[100];
  ranges::fill(c, 37);

because the fill type is deduced to be int here.  Rather than requiring that the fill type be a byte type, it seems safe to just require the fill type be an integral type, which is what this patch does.

libstdc++-v3/ChangeLog:

	PR libstdc++/94017
	* include/bits/ranges_algobase.h (__fill_n_fn::operator()): Refine
	condition for when to use memset, making sure to additionally check
	that the output pointer's value type is a non-volatile byte type.
	Instead of requiring that the fill type is a byte type, just require
	that it's an integral type.
	* testsuite/20_util/specialized_algorithms/uninitialized_fill/94017.cc:
	New test.
	* testsuite/20_util/specialized_algorithms/uninitialized_fill_n/94017.cc:
	New test.
	* testsuite/25_algorithms/fill/94013.cc: Uncomment part that was
	blocked by PR 94017.
	* testsuite/25_algorithms/fill/94017.cc: New test.
	* testsuite/25_algorithms/fill_n/94017.cc: New test.
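For contrast, a minimal illustration of the direction the missing check guards against (assumed, not the actual 94017 test): the fill value is a byte type, but the output's value type is not, so a byte-wise memset would write the wrong values.

	#include <algorithm>

	void g ()
	{
	  int a[4];
	  // Each element must become 5; a byte-wise memset of 5 would
	  // instead produce 0x05050505 in every element.
	  std::ranges::fill (a, static_cast<unsigned char> (5));
	}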
Patrick Palka committed -
This adds support for move-only input iterators in the ranges::uninitialized_* algorithms defined in <memory>, as per LWG 3355.  The only changes needed are to add calls to std::move in the appropriate places and to use operator- instead of ranges::distance, because the latter cannot be used with a move-only iterator that has a sized sentinel, as is the case here.  (This issue with ranges::distance is LWG 3392.)

libstdc++-v3/ChangeLog:

	LWG 3355 The memory algorithms should support move-only input
	iterators introduced by P1207
	* include/bits/ranges_uninitialized.h
	(__uninitialized_copy_fn::operator()): Use std::move to avoid
	attempting to copy __ifirst, which could be a move-only input
	iterator.  Use operator- instead of ranges::distance to compute
	distance from a sized sentinel.
	(__uninitialized_copy_n_fn::operator()): Likewise.
	(__uninitialized_move_fn::operator()): Likewise.
	(__uninitialized_move_n_fn::operator()): Likewise.
	(__uninitialized_destroy_fn::operator()): Use std::move to avoid
	attempting to copy __first.
	(__uninitialized_destroy_n_fn::operator()): Likewise.
	* testsuite/20_util/specialized_algorithms/destroy/constrained.cc:
	Augment test.
	* .../specialized_algorithms/uninitialized_copy/constrained.cc:
	Likewise.
	* .../specialized_algorithms/uninitialized_move/constrained.cc:
	Likewise.
Patrick Palka committed -
This adds a testsuite range type whose end() is a sized sentinel to <testsuite_iterators.h>, which will be used in the tests that verify LWG 3355.

libstdc++-v3/ChangeLog:

	* testsuite/util/testsuite_iterators.h (test_range::get_iterator):
	Make protected instead of private.
	(test_sized_range_sized_sent): New.
Patrick Palka committed -
This adds a move-only testsuite iterator wrapper to <testsuite_iterators.h> which will be used in the tests for LWG 3355.  The tests for LWG 3389 and 3390 are adjusted to use this new iterator wrapper.

libstdc++-v3/ChangeLog:

	* testsuite/util/testsuite_iterators.h
	(input_iterator_wrapper_nocopy): New testsuite iterator.
	* testsuite/24_iterators/counted_iterator/lwg3389.cc: Use it.
	* testsuite/24_iterators/move_iterator/lwg3390.cc: Likewise.
Patrick Palka committed -
We are passing a value type as the first argument to is_nothrow_assignable_v, but the result of that is inevitably false.  Since this predicate is a part of the condition that guards the corresponding optimizations for these algorithms, this bug means these optimizations are never used.  We should be passing a reference type to is_nothrow_assignable_v instead.

libstdc++-v3/ChangeLog:

	* include/bits/ranges_uninitialized.h
	(uninitialized_copy_fn::operator()): Pass a reference type as the
	first argument to is_nothrow_assignable_v.
	(uninitialized_copy_fn::operator()): Likewise.
	(uninitialized_move_fn::operator()): Likewise.  Return an
	in_out_result with the input iterator stripped of its move_iterator.
	(uninitialized_move_n_fn::operator()): Likewise.
	(uninitialized_fill_fn::operator()): Pass a reference type as the
	first argument to is_nothrow_assignable_v.
	(uninitialized_fill_n_fn::operator()): Likewise.
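The distinction can be seen directly with the trait itself (a minimal illustration): assigning to a prvalue scalar is ill-formed, so the value-type query is always false, while the reference-type query reflects whether the elements are really nothrow-assignable.

	#include <type_traits>

	static_assert(!std::is_nothrow_assignable_v<int, int>);   // always false
	static_assert( std::is_nothrow_assignable_v<int&, int>);  // the intended query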
Patrick Palka committed -
gcc/cp
	* coroutines.cc (captures_temporary): Strip component_ref to its
	base object.

gcc/testsuite
	* g++.dg/coroutines/torture/co-await-15-capture-comp-ref.C: New test.
JunMa committed -
GCC Administrator committed
- 03 Mar, 2020 16 commits
Several algorithms check the is_trivially_copyable trait to decide whether to dispatch to memmove or memcmp as an optimization.  Since r271435 (CWG DR 2094) the trait is true for volatile-qualified scalars, but we can't use memmove or memcmp when the type is volatile.  We need to also check for volatile types.

This is complicated by the fact that in C++20 (but not earlier standards) iterator_traits<volatile T*>::value_type is T, so we can't just check whether the value_type is volatile.

The solution in this patch is to introduce new traits __memcpyable and __memcmpable which combine into a single trait the checks for pointers, the value types being the same, and the type being trivially copyable but not volatile-qualified.

	PR libstdc++/94013
	* include/bits/cpp_type_traits.h (__memcpyable, __memcmpable): New
	traits to control when to use memmove and memcmp optimizations.
	(__is_nonvolatile_trivially_copyable): New helper trait.
	* include/bits/ranges_algo.h (__lexicographical_compare_fn): Do not
	use memcmp optimization with volatile data.
	* include/bits/ranges_algobase.h (__equal_fn): Use __memcmpable.
	(__copy_or_move, __copy_or_move_backward): Use __memcpyable.
	* include/bits/stl_algobase.h (__copy_move_a2): Use __memcpyable.
	(__copy_move_backward_a2): Likewise.
	(__equal_aux1): Use __memcmpable.
	(__lexicographical_compare_aux): Do not use memcmp optimization with
	volatile data.
	* testsuite/25_algorithms/copy/94013.cc: New test.
	* testsuite/25_algorithms/copy_backward/94013.cc: New test.
	* testsuite/25_algorithms/equal/94013.cc: New test.
	* testsuite/25_algorithms/fill/94013.cc: New test.
	* testsuite/25_algorithms/lexicographical_compare/94013.cc: New test.
	* testsuite/25_algorithms/move/94013.cc: New test.
	* testsuite/25_algorithms/move_backward/94013.cc: New test.
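A minimal illustration of why the extra check is needed (assuming a compiler that implements DR 2094): the trait no longer rules out volatile scalars, yet copying them must still be done with element-wise accesses rather than memmove.

	#include <algorithm>
	#include <type_traits>

	static_assert(std::is_trivially_copyable_v<volatile int>);  // true since DR 2094

	void f (volatile int *src, volatile int *dst)
	{
	  // Must perform one volatile load and store per element; lowering
	  // this to memmove would elide the volatile accesses.
	  std::copy (src, src + 4, dst);
	}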
Jonathan Wakely committed -
We ICE on the following testcase since I've added the SAVE_EXPR-like constexpr handling where the TARGET_EXPR initializer (and cleanup) is evaluated only once (because it might have side-effects like new or delete expressions in it).

The problem arises if the TARGET_EXPR (or, in theory, SAVE_EXPR) initializer is *non_constant_p.  We still remember the result, but not the fact that it is *non_constant_p.  Normally that wouldn't be a big problem: if something is *non_constant_p, we only or into it, and so the whole expression will be non-constant too.  Except in the builtins handling, we try to evaluate the arguments with non_constant_p pointing into a dummy1 bool which we ignore.  This is because some builtins might fold into a constant even if they don't have a constexpr argument.  Unfortunately, if we evaluate the TARGET_EXPR first in the argument of such a builtin and then once again, we don't set *non_constant_p.

So, either we don't remember the TARGET_EXPR/SAVE_EXPR result if it wasn't constant, like the following patch does, or we could remember it, but in some way that makes it clear that it is non-constant (e.g. by pushing into the global->values a SAVE_EXPR, SAVE_EXPR entry, and perhaps for TARGET_EXPR not remembering it on TARGET_EXPR_SLOT but on the TARGET_EXPR itself, similarly pushing TARGET_EXPR, TARGET_EXPR, and if we see those after the lookup, diagnosing + setting *non_constant_p).  Or we could perhaps, during the builtin argument evaluation, push expressions into a different save_expr vec and undo them afterwards.

2020-03-03  Jakub Jelinek  <jakub@redhat.com>

	PR c++/93998
	* constexpr.c (cxx_eval_constant_expression) <case TARGET_EXPR,
	case SAVE_EXPR>: Don't record anything if *non_constant_p is true.

	* g++.dg/ext/pr93998.C: New test.
Jakub Jelinek committed -
Unified syntax has been the official syntax for thumb1 assembly for over 10 years now.  It's time we made preparations for that becoming the default in the assembler.  But before we can start doing that we really need to clean up some laggards from the olden days.  Libgcc support for thumb1 is one such example.

This patch converts all of the legacy (disjoint) syntax that I could find over to unified code.  The identification was done by using a trick version of gas that defaulted to unified mode, which then faults if legacy syntax is encountered.  The code produced was then compared against the old code to check for differences.  One such difference does exist, but that is because in unified syntax 'movs rd, rn' is encoded as 'lsls rd, rn, #0' rather than 'adds rd, rn, #0'; but that is a deliberate change that was introduced because the lsls encoding more closely reflects the behaviour of 'movs' in arm state (where only some of the condition flags are modified).

	* config/arm/bpabi-v6m.S (aeabi_lcmp): Convert thumb1 code to
	unified syntax.
	(aeabi_ulcmp, aeabi_ldivmod, aeabi_uldivmod): Likewise.
	(aeabi_frsub, aeabi_cfcmpeq, aeabi_fcmpeq): Likewise.
	(aeabi_fcmp, aeabi_drsub, aeabi_cdrcmple): Likewise.
	(aeabi_cdcmpeq, aeabi_dcmpeq, aeabi_dcmp): Likewise.
	* config/arm/lib1funcs.S (Lend_fde): Convert thumb1 code to unified
	syntax.
	(divsi3, modsi3): Likewise.
	(clzdi2, ctzsi2): Likewise.
	* config/arm/libunwind.S (restore_core_regs): Convert thumb1 code to
	unified syntax.
	(UNWIND_WRAPPER): Likewise.
Richard Earnshaw committed -
This patch is part of a series adding support for Armv8.6-A features.  It implements intrinsics to convert between bfloat16 and float32 formats.

gcc/ChangeLog:

	* config/arm/arm_bf16.h (vcvtah_f32_bf16, vcvth_bf16_f32): New.
	* config/arm/arm_neon.h (vcvt_f32_bf16, vcvtq_low_f32_bf16): New.
	(vcvtq_high_f32_bf16, vcvt_bf16_f32): New.
	(vcvtq_low_bf16_f32, vcvtq_high_bf16_f32): New.
	* config/arm/arm_neon_builtins.def (vbfcvt, vbfcvt_high): New entries.
	(vbfcvtv4sf, vbfcvtv4sf_high): Likewise.
	* config/arm/iterators.md (VBFCVT, VBFCVTM): New mode iterators.
	(V_bf_low, V_bf_cvt_m): New mode attributes.
	* config/arm/neon.md (neon_vbfcvtv4sf<VBFCVT:mode>): New.
	(neon_vbfcvtv4sf_highv8bf, neon_vbfcvtsf): New.
	(neon_vbfcvt<VBFCVT:mode>, neon_vbfcvt_highv8bf): New.
	(neon_vbfcvtbf_cvtmode<mode>, neon_vbfcvtbf): New.
	* config/arm/unspecs.md (UNSPEC_BFCVT, UNSPEC_BFCVT_HIG): New.

gcc/testsuite/ChangeLog:

	* gcc.target/arm/simd/bf16_cvt_1.c: New test.
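A possible usage sketch for two of the intrinsics listed above (the exact option spelling is an assumption; it needs an Arm target with bf16 enabled, e.g. something like -march=armv8.6-a+bf16 with a suitable FPU/ABI):

	#include <arm_neon.h>

	/* Widen the low four bf16 lanes to f32.  */
	float32x4_t widen_low (bfloat16x8_t v) { return vcvtq_low_f32_bf16 (v); }

	/* Narrow four f32 lanes to bf16.  */
	bfloat16x4_t narrow (float32x4_t v) { return vcvt_bf16_f32 (v); }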
Dennis Zhang committed -
As noted in LWG 3410 the specification in the C++20 draft performs more iterator comparisons than necessary when the end of either range is reached.  Our implementation followed that specification.

This removes the redundant comparisons so that we do no unnecessary work as soon as we find that we've reached the end of either range.  The odd-looking return statement is because it generates better code than the original version that copied the global constants.

	* include/bits/stl_algobase.h (lexicographical_compare_three_way):
	Avoid redundant iterator comparisons (LWG 3410).
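A sketch of the shape of such a loop (an illustration of the idea, not the libstdc++ implementation): each step compares each iterator against its end at most once, and reaching either end finishes immediately.

	#include <compare>

	template<typename I1, typename I2, typename Cmp>
	auto
	lex_compare_3way (I1 f1, I1 l1, I2 f2, I2 l2, Cmp comp)
	  -> decltype(comp (*f1, *f2))
	{
	  using Ord = decltype(comp (*f1, *f2));
	  while (f1 != l1)
	    {
	      if (f2 == l2)
		return Ord::greater;       // second range is a proper prefix
	      if (auto c = comp (*f1, *f2); c != 0)
		return c;
	      ++f1; ++f2;
	    }
	  return f2 == l2 ? Ord::equivalent : Ord::less;
	}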
Jonathan Wakely committed -
As mentioned in the PR and discussed on IRC, the following patch is the patch that fixes the originally reported issue.

We get there because of the premature bitfield comparison -> BIT_FIELD_REF optimization:

  s$s4_19 = 0;
  s.s4 = s$s4_19;
  _10 = BIT_FIELD_REF <s, 8, 0>;
  _13 = _10 & 8;

and no other s fields are initialized.  If they were all initialized with constants, then my earlier PR93582 bitfield handling patches would handle it already, but if at least one bit we ignore after the BIT_AND_EXPR masking is not initialized or is initialized earlier to something non-constant, we aren't able to look through it until combine, which is too late for the warnings on the dead code.

This patch handles BIT_AND_EXPR where the first operand is an SSA_NAME initialized with a memory load and the second operand is an INTEGER_CST, by trying a partial def lookup after pushing the ranges of 0 bits in the mask as artificial initializers.  In the above case on little-endian, we push an offset 0 size 3 {} partial def and offset 4 size 4 (the result is unsigned char) and then perform normal partial def handling.

My initial version of the patch failed miserably during bootstrap, because data->finish (...) called vn_reference_lookup_or_insert_for_pieces, which I believe tried to remember the masked value rather than the real one for the reference, or, for a failed lookup, visit_reference_op_load called vn_reference_insert.  The following version makes sure we aren't calling either of those functions in the masked case, as we don't know anything better about the reference from whatever has been discovered when the load stmt has been visited; the patch just calls vn_nary_op_insert_stmt on failure with the lhs (apparently calling it with the INTEGER_CST doesn't work).

2020-03-03  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/93582
	* tree-ssa-sccvn.h (vn_reference_lookup): Add mask argument.
	* tree-ssa-sccvn.c (struct vn_walk_cb_data): Add mask and
	masked_result members, initialize them in the constructor and if
	mask is non-NULL, artificially push_partial_def {} for the portions
	of the mask that contain zeros.
	(vn_walk_cb_data::finish): If mask is non-NULL, set masked_result to
	val and return (void *)-1.  Formatting fix.
	(vn_reference_lookup_pieces): Adjust vn_walk_cb_data initialization.
	Formatting fix.
	(vn_reference_lookup): Add mask argument.  If non-NULL, don't call
	fully_constant_vn_reference_p nor vn_reference_lookup_1 and return
	data.mask_result.
	(visit_nary_op): Handle BIT_AND_EXPR of a memory load and
	INTEGER_CST mask.
	(visit_stmt): Formatting fix.

	* gcc.dg/tree-ssa/pr93582-10.c: New test.
	* gcc.dg/pr93582.c: New test.
	* gcc.c-torture/execute/pr93582.c: New test.
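One possible source shape behind GIMPLE like the above (an assumption for illustration only, not the actual pr93582 test): only the s4 bit of the containing byte is initialized, and the test is folded early into a byte-wide BIT_FIELD_REF load masked with 8.

	struct S { unsigned char s1:1, s2:1, s3:1, s4:1, s5:4; };

	int
	f (void)
	{
	  struct S s;
	  s.s4 = 0;
	  return s.s4 != 0;  /* reads the whole byte and tests bit 3 */
	}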
Jakub Jelinek committed -
This fixes a common mistake of removing a store that looks redundant but is not, because it changes the dynamic type of the memory and thus makes a difference for following loads with TBAA.

2020-03-03  Richard Biener  <rguenther@suse.de>

	PR tree-optimization/93946
	* alias.h (refs_same_for_tbaa_p): Declare.
	* alias.c (refs_same_for_tbaa_p): New function.
	* tree-ssa-alias.c (ao_ref_alias_set): For a NULL ref return zero.
	* tree-ssa-scopedtables.h (avail_exprs_stack::lookup_avail_expr):
	Add output argument giving access to the hashtable entry.
	* tree-ssa-scopedtables.c (avail_exprs_stack::lookup_avail_expr):
	Likewise.
	* tree-ssa-dom.c: Include alias.h.
	(dom_opt_dom_walker::optimize_stmt): Validate TBAA state before
	removing redundant store.
	* tree-ssa-sccvn.h (vn_reference_s::base_set): New member.
	(ao_ref_init_from_vn_reference): Adjust prototype.
	(vn_reference_lookup_pieces): Likewise.
	(vn_reference_insert_pieces): Likewise.
	* tree-ssa-sccvn.c: Track base alias set in addition to alias set
	everywhere.
	(eliminate_dom_walker::eliminate_stmt): Also check base alias set
	when removing redundant stores.
	(visit_reference_op_store): Likewise.
	* dse.c (record_store): Adjust validity check for redundant store
	removal.

	* gcc.dg/torture/pr93946-1.c: New testcase.
	* gcc.dg/torture/pr93946-2.c: Likewise.
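A hedged illustration of the class of store being protected here (not the pr93946 testcase): the second store writes exactly the same bits as the first, but it changes the dynamic type of the storage, so TBAA-based disambiguation of later float loads may depend on it.

	void
	f (void *p)
	{
	  *(int *) p = 0;       /* dynamic type of *p is now int */
	  *(float *) p = 0.0f;  /* same bit pattern, but the dynamic type becomes
				   float; removing this store as "redundant"
				   would be wrong */
	}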
Richard Biener committed -
In Fedora we configure GCC with --with-arch=zEC12 --with-tune=z13 right now, and furthermore redhat-rpm-config adds -march=zEC12 -mtune=z13 options (among others) to rpm packages.  While looking at the git compilation, I've been surprised that -O2 actually behaves differently from -O2 -mtune=z13 in this configuration, and indeed, it seems --with-tune= is completely ignored on s390 if --with-arch= is specified.

i386 had the same problem, but got that fixed in 2006, see PR26877.  The thing is that for tune, we add -mtune=%(VALUE) only if neither -mtune= nor -march= is present, but as arch is processed first, it adds -march=%(VALUE) first, and then -march= is always present and so -mtune= is never added.  By reordering it in OPTION_DEFAULT_SPECS, we process tune first, add the default -mtune=%(VALUE) if -mtune= or -march= isn't seen, and then add -march=%(VALUE) if -march= isn't seen.  It is true that cc1 etc. will then be invoked with -mtune=z13 -march=zEC12, but, just as if the user had specified them in that order, it will still use z13 tuning and the zEC12 ISA set.

2020-03-03  Jakub Jelinek  <jakub@redhat.com>

	PR target/26877
	* config/s390/s390.h (OPTION_DEFAULT_SPECS): Reorder.
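A sketch of the idea (modelled on the i386 fix for PR26877; not the literal s390.h text): with "tune" listed first, the default -mtune is injected unless either -mtune= or -march= was given, and the default -march is injected unless -march= was given.

	#define OPTION_DEFAULT_SPECS					\
	  { "tune", "%{!mtune=*:%{!march=*:-mtune=%(VALUE)}}" },	\
	  { "arch", "%{!march=*:-march=%(VALUE)}" }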
Jakub Jelinek committed -
The following testcase ICEs in a cross to riscv64-linux.  The problem is that we have a DImode integral constant (that doesn't fit into SImode), which is pushed into a constant pool, and later just the first half of it is accessed using a MEM.  When plus_constant is called on such a MEM, if the constant has a mode, we verify the mode, but if it doesn't, we don't, and we ICE later on when we think the CONST_INT is a valid SImode constant.

2020-03-03  Jakub Jelinek  <jakub@redhat.com>

	PR rtl-optimization/94002
	* explow.c (plus_constant): Punt if cst has VOIDmode and
	get_pool_mode is different from mode.

	* gcc.dg/pr94002.c: New test.
Jakub Jelinek committed -
All of ARC's small data addressing uses the address-scaling feature of the load/store instructions (i.e., the address is made of a general pointer plus a shifted offset; the shift amount depends on the addressing mode).  This patch checks whether the offset of an address fits the scaled constraint.  If so, a small data access is generated.

This patch fixes the pr93249 execution failure.

gcc/
xxxx-xx-xx  Claudiu Zissulescu  <claziss@synopsys.com>

	* config/arc/arc.c (legitimate_small_data_address_p): Check if an
	address has an offset which fits the scaling constraint for a
	load/store operation.
	(legitimate_scaled_address_p): Update use of
	legitimate_small_data_address_p.
	(arc_print_operand): Likewise.
	(arc_legitimate_address_p): Likewise.
	(legitimate_small_data_address_p): Likewise.

Signed-off-by: Claudiu Zissulescu <claziss@gmail.com>
Claudiu Zissulescu committed -
With the refurbishing of ARC600's accumulator support, the mlo_operand doesn't reflect the proper low accumulator register for the newer ARCv2 accumulator used by the fma instructions.  Hence, replace it with the accl_operand predicate.

gcc/
xxxx-xx-xx  Claudiu Zissulescu  <claziss@synopsys.com>

	* config/arc/arc.md (fmasf4_fpu): Use accl_operand predicate.
	(fnmasf4_fpu): Likewise.
Claudiu Zissulescu committed -
Early expand ADDDI3 and SUBDI3 for better code gen.

gcc/
xxxx-xx-xx  Claudiu Zissulescu  <claziss@synopsys.com>

	* config/arc/arc.md (adddi3): Early expand the 64bit operation into
	32bit ops.
	(subdi3): Likewise.
	(adddi3_i): Remove pattern.
	(subdi3_i): Likewise.
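What expanding the 64bit operation into 32bit ops amounts to, sketched in C rather than RTL (illustrative only, assuming 32-bit unsigned):

	unsigned long long
	add64 (unsigned a_lo, unsigned a_hi, unsigned b_lo, unsigned b_hi)
	{
	  unsigned lo = a_lo + b_lo;
	  unsigned carry = lo < a_lo;           /* carry out of the low word */
	  unsigned hi = a_hi + b_hi + carry;
	  return ((unsigned long long) hi << 32) | lo;
	}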
Claudiu Zissulescu committed -
gcc/
xxxx-xx-xx  Claudiu Zissulescu  <claziss@synopsys.com>

	* config/arc/arc.md (eh_return): Add length info.
Claudiu Zissulescu committed -
2020-03-03  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/93927
	* gcc.c-torture/compile/pr93927-1.c: New test.
	* gcc.c-torture/compile/pr93927-2.c: New test.
Jakub Jelinek committed -
gcc/cp
	* coroutines.cc (finish_co_await_expr): Build co_await_expr with
	unknown_type_node.
	(finish_co_yield_expr): Ditto.
	* pt.c (type_dependent_expression_p): Set co_await/yield_expr with
	unknown type as dependent.

gcc/testsuite
	* g++.dg/coroutines/torture/co-await-14-template-traits.C: New test.
JunMa committed -
GCC Administrator committed
- 02 Mar, 2020 15 commits
The note about duplicates attached to analyzer diagnostics feels like an implementation detail; it's likely just noise from the perspective of an end-user.  This patch disables it by default, introducing a flag to re-enable it.

gcc/analyzer/ChangeLog:

	* analyzer.opt (fanalyzer-show-duplicate-count): New option.
	* diagnostic-manager.cc
	(diagnostic_manager::emit_saved_diagnostic): Use the above to guard
	the printing of the duplicate count.

gcc/ChangeLog:

	* doc/invoke.texi (-fanalyzer-show-duplicate-count): New.

gcc/testsuite/ChangeLog:

	* gcc.dg/analyzer/CVE-2005-1689-dedupe-issue.c: Add
	-fanalyzer-show-duplicate-count.
David Malcolm committed -
gcc/ChangeLog: * doc/invoke.texi (Static Analyzer Options): Add -Wanalyzer-stale-setjmp-buffer to the list of options enabled by -fanalyzer.
David Malcolm committed -
PR analyzer/93959 reported that g++.dg/analyzer/malloc.C was failing with no output on Solaris.

The issue is that <stdlib.h> there has "using std::free;", converting all the "free" calls to std::free, which fails the name-matching via is_named_call_p.

This patch implements an is_std_named_call_p variant of is_named_call_p to check for the name within "std", and uses it in sm-malloc.cc to check for std::malloc, std::calloc, and std::free.

gcc/analyzer/ChangeLog:

	PR analyzer/93959
	* analyzer.cc (is_std_function_p): New function.
	(is_std_named_call_p): New function.
	* analyzer.h (is_std_named_call_p): New decl.
	* sm-malloc.cc (malloc_state_machine::on_stmt): Check for "std::"
	variants when checking for malloc, calloc and free.

gcc/testsuite/ChangeLog:

	PR analyzer/93959
	* g++.dg/analyzer/cstdlib-2.C: New test.
	* g++.dg/analyzer/cstdlib.C: New test.
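A minimal illustration of why matching only the unqualified name misses these calls (a sketch, not the Solaris header itself):

	#include <cstdlib>
	using std::free;          // Solaris <stdlib.h> effectively does this

	void
	f (void *p)
	{
	  free (p);               // this call resolves to std::free, not ::free
	}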
David Malcolm committed -
In the absence of specific comment on the handling of closures I'd implemented something more than was intended (extending the lifetime of lambda capture-by-copy vars to the duration of the coro).

After discussion at WG21 in February and by email, the correct handling is to treat the closure "this" pointer the same way as for a regular one, and thus it is the user's responsibility to ensure that the lambda capture object has suitable lifetime for the coroutine.  It is noted that users frequently get this wrong, so it would be a good thing to revisit for C++23.

This patch removes the additional copying behaviour for lambda capture-by-copy vars.

gcc/cp/ChangeLog:

2020-03-02  Iain Sandoe  <iain@sandoe.co.uk>

	* coroutines.cc (struct local_var_info): Adjust to remove the
	reference to the captured var, and just to note that this is a
	lambda capture proxy.
	(transform_local_var_uses): Handle lambda captures specially.
	(struct param_frame_data): Add a visited set.
	(register_param_uses): Also check for param uses in lambda capture
	proxies.
	(struct local_vars_frame_data): Remove captures list.
	(register_local_var_uses): Handle lambda capture proxies by noting
	and bypassing them.
	(morph_fn_to_coro): Update to remove lifetime extension of lambda
	capture-by-copy vars.

gcc/testsuite/ChangeLog:

2020-03-02  Iain Sandoe  <iain@sandoe.co.uk>
	    Jun Ma  <JunMa@linux.alibaba.com>

	* g++.dg/coroutines/torture/class-05-lambda-capture-copy-local.C:
	* g++.dg/coroutines/torture/lambda-09-init-captures.C: New test.
	* g++.dg/coroutines/torture/lambda-10-mutable.C: New test.
Iain Sandoe committed -
The *movstrict<mode>_1 insn pattern allows only general registers, so we have to reject modes not suitable for general regs in the corresponding movstrict<mode> expander.

	PR target/93997
	* config/i386/i386.md (movstrict<mode>): Allow only registers with
	VALID_INT_MODE_P modes.

testsuite/ChangeLog:

	PR target/93997
	* gcc.target/i386/pr93997.c: New test.
Uros Bizjak committed -
gcc/testsuite/ChangeLog:

	PR tree-optimization/92982
	* gcc.dg/strlenopt-94.c: New test.
Martin Sebor committed -
One missing bit from r10-6656.  The docs and target-supports.exp already handle -std=gnu++20.

2020-03-02  Marek Polacek  <polacek@redhat.com>

	PR c++/93958 - add missing -std=gnu++20.
	* c.opt: Add -std=gnu++20.
Marek Polacek committed -
	PR fortran/93486
	* module.c: Increase size of variables used to read module names
	when loading interfaces from module files to permit cases where the
	name is the concatenation of a module and submodule name.

	* gfortran.dg/pr93486.f90: New test.
Andrew Benson committed -
The new 25_algorithms/lexicographical_compare/93972.cc test fails on targets where char is unsigned, revealing an existing regression with the std::__memcmp helper that had gone unnoticed in std::lexicographical_compare.  When comparing char and unsigned char, the memcmp optimisation is enabled, but the new std::__memcmp function fails to compile for mismatched types.

	PR libstdc++/93972
	* include/bits/stl_algobase.h (__memcmp): Allow pointer types to
	differ.
	* testsuite/25_algorithms/lexicographical_compare/uchar.cc: New test.
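A minimal illustration of the affected case (the memcmp path is only taken for byte-sized unsigned value types, so this needs a target where plain char is unsigned):

	#include <algorithm>

	bool
	f (const char *p, const unsigned char *q)
	{
	  // Mixed char / unsigned char ranges hit the memcmp optimisation,
	  // which previously instantiated __memcmp with mismatched pointers.
	  return std::lexicographical_compare (p, p + 4, q, q + 4);
	}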
Jonathan Wakely committed -
The key property of this alias is not that it may be an empty type, but that the type argument may not be used.  The fact it's replaced by an empty type is just an implementation detail.  The name was also backwards with respect to the bool argument.  This patch changes the name to better reflect its purpose.

	* include/std/ranges (__detail::__maybe_empty_t): Rename to
	__maybe_present_t.
	(__adaptor::_RangeAdaptor, join_view, split_view): Use new name.
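Roughly, the alias looks like this (a sketch, not the exact libstdc++ definition; _Empty stands for whatever empty placeholder type the library uses):

	#include <type_traits>

	template<bool _Present, typename _Tp>
	  using __maybe_present_t = std::conditional_t<_Present, _Tp, _Empty>;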
Jonathan Wakely committed -
In general, we need to manage the lifetime of compiler-generated awaitable instances in the coroutine frame, since these must persist across suspension points.  However, it is quite possible that the user might provide the awaitable instances, either as function params or as a local variable.  We will already generate a frame entry for these as required.

At present, under this circumstance, we are duplicating these awaitables, initialising a second frame copy for them (which we then subsequently destroy manually after the suspension point).  That's not efficient - so an undesirable thinko in the first place.  However, there is also an actual bug; if the compiler elects to elide the copy (which is perfectly legal), it does not have visibility of the manual management of the post-suspend destruction - this subsequently leads to double-free errors.

The solution is not to make the second copy (as noted, params and local vars already have frame copies with managed lifetimes).

gcc/cp/ChangeLog:

2020-03-02  Iain Sandoe  <iain@sandoe.co.uk>

	* coroutines.cc (build_co_await): Do not build frame proxy vars when
	the co_await expression is a function parameter or local var.
	(co_await_expander): Do not initialise a frame var with itself.
	(transform_await_expr): Only substitute the awaitable frame var if
	it's needed.
	(register_awaits): Do not make frame copies for param or local vars
	that are awaitables.

gcc/testsuite/ChangeLog:

2020-03-02  Iain Sandoe  <iain@sandoe.co.uk>

	* g++.dg/coroutines/torture/func-params-09-awaitable-parms.C: New
	test.
	* g++.dg/coroutines/torture/local-var-5-awaitable.C: New test.
Iain Sandoe committed -
Add support for V64DFmode addition, and V64DImode min, max.  There's no direct hardware support for these, so we use regular vector instructions and separate lane shift instructions.

Also add support for V64QI and V64HI reductions.  Some of these require additional extends and truncates, because AMD GCN has 32-bit vector lanes.

2020-03-02  Andrew Stubbs  <ams@codesourcery.com>

	gcc/
	* config/gcn/gcn-valu.md (dpp_move<mode>): New.
	(reduc_insn): Use 'U' and 'B' operand codes.
	(reduc_<reduc_op>_scal_<mode>): Allow all types.
	(reduc_<reduc_op>_scal_v64di): Delete.
	(*<reduc_op>_dpp_shr_<mode>): Allow all 1reg types.
	(*plus_carry_dpp_shr_v64si): Change to ...
	(*plus_carry_dpp_shr_<mode>): ... this and allow all 1reg int types.
	(mov_from_lane63_v64di): Change to ...
	(mov_from_lane63_<mode>): ... this, and allow all 64-bit modes.
	* config/gcn/gcn.c (gcn_expand_dpp_shr_insn): Increase buffer size.
	Support UNSPEC_MOV_DPP_SHR output formats.
	(gcn_expand_reduc_scalar): Add "use_moves" reductions.
	Add "use_extends" reductions.
	(print_operand_address): Add 'I' and 'U' codes.
	* config/gcn/gcn.md (unspec): Add UNSPEC_MOV_DPP_SHR.
Andrew Stubbs committed -
* gcc.target/arm/fuse-caller-save.c: Update expected output.
Jeff Law committed -
Segher Boessenkool committed
* include/bits/ranges_algo.h (shift_right): Add 'typename' to dependent type.
Jonathan Wakely committed