1. 02 Feb, 2018 3 commits
  2. 01 Feb, 2018 26 commits
    • Change accidentally omitted from revision 257280. · 90bf9487
      From-SVN: r257313
      Ian Lance Taylor committed
    • math: adjust compilation flags, use them when testing · 28f3c814
          
          We were using special compilation flags for the math package, but we
          weren't using them when testing.  That meant that our tests were not
          checking the real code we were providing.  Fix that.
          
          Fixing that revealed that we were not using a good set of flags, or at
          least were not using flags that let the tests pass.  Adjust the flags
          to stop using -funsafe-math-optimizations on x86.  Instead always use
          -ffp-contract=off -fno-math-errno -fno-trapping-math for all targets.
          
          Fixes golang/go#23647
          
          Reviewed-on: https://go-review.googlesource.com/91355
      
      From-SVN: r257312
      Ian Lance Taylor committed
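      The contraction issue that -ffp-contract=off addresses can be illustrated in Go (an illustrative sketch; the values are hypothetical, chosen so that fusing a*b+c into one rounding changes the result):

      ```go
      package main

      import (
      	"fmt"
      	"math"
      )

      func main() {
      	// a*b is exactly 1 - 2^-54, which rounds to exactly 1 in float64.
      	a := 1 + 0x1p-27
      	b := 1 - 0x1p-27
      	// The explicit float64 conversion forces the intermediate rounding,
      	// mimicking -ffp-contract=off: the result is 0.
      	unfused := float64(a*b) - 1
      	// math.FMA rounds only once, mimicking a contracted a*b+c: -2^-54.
      	fused := math.FMA(a, b, -1)
      	fmt.Println(unfused, fused, unfused == fused)
      }
      ```

      A test comparing such a result against an exact expectation passes or fails depending on whether the compiler contracted the expression, which is why the math package tests need contraction disabled.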
    • re PR c++/84125 (ICE in generic lambda when static_assert argument is implicitly converted to bool) · d15f0fa7
      	PR c++/84125
      	* typeck.c (build_address): Relax the assert when
      	processing_template_decl.
      
      	* g++.dg/cpp1y/lambda-generic-84125.C: New test.
      
      From-SVN: r257311
      Marek Polacek committed
    • PR 83975 Associate target with non-constant character length · ae976c33
      When associating a variable of type character, if the length of the
      target isn't known at compile time, generate an error. See PR 83344
      for more details.
      
      Regtested on x86_64-pc-linux-gnu.
      
      gcc/fortran/ChangeLog:
      
      2018-02-01  Janne Blomqvist  <jb@gcc.gnu.org>
      
      	PR 83975
      	PR 83344
      	* resolve.c (resolve_assoc_var): Generate an error if
      	target length unknown.
      
      From-SVN: r257310
      Janne Blomqvist committed
    • PR c++/84126 - ICE with variadic generic lambda · bfa28724
      	PR c++/84036
      	PR c++/82249
      	* pt.c (tsubst_pack_expansion): Handle function parameter_packs in
      	PACK_EXPANSION_EXTRA_ARGS.
      
      From-SVN: r257307
      Jason Merrill committed
    • re PR target/56010 (Powerpc, -mcpu=native and -mtune=native use the wrong name for target 7450) · 6a92e053
      	PR target/56010
      	PR target/83743
      	* config/rs6000/driver-rs6000.c: #include "diagnostic.h".
      	#include "opts.h".
      	(rs6000_supported_cpu_names): New static variable.
      	(linux_cpu_translation_table): Likewise.
      	(elf_platform) <cpu>: Define new static variable and use it.
      	Translate kernel AT_PLATFORM name to canonical name if needed.
      	Error if platform name is unknown.
      
      From-SVN: r257305
      Peter Bergner committed
    • re PR middle-end/84089 (FAIL: g++.dg/cpp1y/lambda-generic-x.C -std=gnu++14… · 177a9700
      re PR middle-end/84089 (FAIL: g++.dg/cpp1y/lambda-generic-x.C  -std=gnu++14 (internal compiler error))
      
      	PR target/84089
      	* config/pa/predicates.md (base14_operand): Handle E_VOIDmode.
      
      From-SVN: r257304
      Aldy Hernandez committed
    • re PR target/84128 (i686: Stack spilling in -fstack-clash-protection prologue neglects %esp change) · 89e06365
      	PR target/84128
      	* config/i386/i386.c (release_scratch_register_on_entry): Add new
      	OFFSET and RELEASE_VIA_POP arguments.  Use SP+OFFSET to restore
      	the scratch if RELEASE_VIA_POP is false.
      	(ix86_adjust_stack_and_probe_stack_clash): Un-constify SIZE.
      	If we have to save a temporary register, decrement SIZE appropriately.
      	Pass new arguments to release_scratch_register_on_entry.
      	(ix86_adjust_stack_and_probe): Likewise.
      	(ix86_emit_probe_stack_range): Pass new arguments to
      	release_scratch_register_on_entry.
      
      	PR target/84128
      	* gcc.target/i386/pr84128.c: New test.
      
      From-SVN: r257303
      Jeff Law committed
    • re PR rtl-optimization/84157 ([nvptx] ICE: RTL check: expected code 'reg', have 'lshiftrt') · ff814010
      	PR rtl-optimization/84157
      	* combine.c (change_zero_ext): Use REG_P predicate in
      	front of HARD_REGISTER_P predicate.
      
      From-SVN: r257302
      Uros Bizjak committed
    • avr.c (avr_option_override): Move disabling of -fdelete-null-pointer-checks to... · 19416210
      gcc/
      	* config/avr/avr.c (avr_option_override): Move disabling of
      	-fdelete-null-pointer-checks to...
      	* common/config/avr/avr-common.c (avr_option_optimization_table):
      	...here.
      testsuite/
      	* gcc.dg/tree-ssa/vrp111.c (dg-options): Add
      	-fdelete-null-pointer-checks.
      
      From-SVN: r257301
      Georg-Johann Lay committed
    • compiler: omit field name for embedded fields in reflection string · 4d0bf3e1
          
          This matches the gc compiler.
          
          The test case was sent for the master repo as
          https://golang.org/cl/91138.
          
          Fixes golang/go#23620
          
          Reviewed-on: https://go-review.googlesource.com/91139
      
      From-SVN: r257300
      Ian Lance Taylor committed
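      The behavior can be observed through the reflect package (a minimal sketch; the type and field names here are made up for illustration):

      ```go
      package main

      import (
      	"fmt"
      	"reflect"
      )

      // base is a hypothetical type used as an embedded field below.
      type base struct{ n int }

      func main() {
      	t := reflect.TypeOf(struct {
      		base  // embedded: the type's string form omits the field name
      		m int
      	}{})
      	f, _ := t.FieldByName("base")
      	fmt.Println(f.Anonymous) // true: reflect marks embedded fields
      	fmt.Println(t.String())  // the field name "base" is not repeated here
      }
      ```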
    • net: declare lib_getaddrinfo as returning int32 · fc876f22
          
          Otherwise on a 64-bit system we will read the 32-bit value as a 64-bit
          value.  Since getaddrinfo returns negative numbers as error values,
          these will be interpreted as numbers like 0xfffffffe rather than -2,
          and the comparisons with values like syscall.EAI_NONAME will fail.
          
          Fixes golang/go#23645
          
          Reviewed-on: https://go-review.googlesource.com/91296
      
      From-SVN: r257299
      Ian Lance Taylor committed
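      The misread can be demonstrated directly in Go (an illustrative sketch of the width mismatch, not the actual net package code):

      ```go
      package main

      import "fmt"

      func main() {
      	// getaddrinfo returns a C int (32 bits); EAI_NONAME is -2, whose
      	// bit pattern is 0xfffffffe.  If the libcall is declared as
      	// returning a 64-bit integer, the value is read with no sign
      	// extension:
      	const raw = 0xfffffffe               // bit pattern of int32(-2)
      	misread := int64(uint64(raw))        // 64-bit read: 4294967294
      	correct := int64(int32(uint32(raw))) // declare int32, then widen: -2
      	fmt.Println(misread, correct)        // 4294967294 -2
      }
      ```

      A comparison like `misread == -2` is always false, which is how the errno-style checks against syscall.EAI_NONAME failed.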
    • re PR c++/83796 (Abstract classes allowed to be instantiated when initialised as… · 73b7d28f
      re PR c++/83796 (Abstract classes allowed to be instantiated when initialised as default parameter to function or constructor)
      
      /cp
      2018-02-01  Paolo Carlini  <paolo.carlini@oracle.com>
      
      	PR c++/83796
      	* call.c (convert_like_real): If we're initializing from {}, explicitly
      	call abstract_virtuals_error_sfinae.
      
      /testsuite
      2018-02-01  Paolo Carlini  <paolo.carlini@oracle.com>
      
      	PR c++/83796
      	* g++.dg/cpp0x/abstract-default1.C: New.
      
      From-SVN: r257298
      Paolo Carlini committed
    • Use range info in split_constant_offset (PR 81635) · 3ae12932
      This patch implements the original suggestion for fixing PR 81635:
      use range info in split_constant_offset to see whether a conversion
      of a wrapping type can be split.  The range info problem described in:
      
          https://gcc.gnu.org/ml/gcc-patches/2017-08/msg01002.html
      
      seems to have been fixed.
      
      The patch is part 1.  There needs to be a follow-on patch to handle:
      
        for (unsigned int i = 0; i < n; i += 4)
          {
            ...[i + 2]...
            ...[i + 3]...
      
      which the old SCEV test handles, but which the range check doesn't.
      At the moment we record that the low two bits of "i" are clear,
      but we still end up with a maximum range of 0xffffffff rather than
      0xfffffffc.
      
      2018-01-31  Richard Sandiford  <richard.sandiford@linaro.org>
      
      gcc/
      	PR tree-optimization/81635
      	* tree-data-ref.c (split_constant_offset_1): For types that
      	wrap on overflow, try to use range info to prove that wrapping
      	cannot occur.
      
      gcc/testsuite/
      	PR tree-optimization/81635
      	* gcc.dg/vect/bb-slp-pr81635-1.c: New test.
      	* gcc.dg/vect/bb-slp-pr81635-2.c: Likewise.
      
      From-SVN: r257296
      Richard Sandiford committed
    • [PR83370][AARCH64]Use tighter register constraint for sibcall patterns. · d677263e
      In the aarch64 backend, the ip0/ip1 registers can be used in the
      prologue/epilogue as temporary registers.
      
      When the compiler performs sibcall optimization, it may use the
      ip0/ip1 registers to hold the address of an indirect function call.
      However, those two registers might be clobbered by the epilogue code,
      which makes the last sibcall instruction invalid.
      
      The patch here renames the register class CALLER_SAVE_REGS to
      TAILCALL_ADDR_REGS to reflect its usage, and removes the IP registers
      from this class.
      
      gcc/
      
      2018-02-01  Renlin Li  <renlin.li@arm.com>
      
      	PR target/83370
      	* config/aarch64/aarch64.c (aarch64_class_max_nregs): Handle
      	TAILCALL_ADDR_REGS.
      	(aarch64_register_move_cost): Likewise.
      	* config/aarch64/aarch64.h (reg_class): Rename CALLER_SAVE_REGS to
      	TAILCALL_ADDR_REGS.
      	(REG_CLASS_NAMES): Likewise.
      	(REG_CLASS_CONTENTS): Rename CALLER_SAVE_REGS to
      	TAILCALL_ADDR_REGS. Remove IP registers.
      	* config/aarch64/aarch64.md (Ucs): Update register constraint.
      
      gcc/testsuite/
      
      2018-02-01  Richard Sandiford  <richard.sandiford@linaro.org>
      
      	PR target/83370
      	* gcc.target/aarch64/pr83370.c: New.
      
      From-SVN: r257294
      Renlin Li committed
    • domwalk.h (dom_walker::dom_walker): Add additional constructor for specifying… · dc3b4a20
      domwalk.h (dom_walker::dom_walker): Add additional constructor for specifying RPO order and allow NULL for that.
      
      2018-02-01  Richard Biener  <rguenther@suse.de>
      
      	* domwalk.h (dom_walker::dom_walker): Add additional constructor
      	for specifying RPO order and allow NULL for that.
      	* domwalk.c (dom_walker::dom_walker): Likewise.
      	(dom_walker::walk): Handle NULL RPO order.
      	* tree-into-ssa.c (rewrite_dom_walker): Do not walk dom children
      	in RPO order.
      	(rewrite_update_dom_walker): Likewise.
      	(mark_def_dom_walker): Likewise.
      
      	* gcc.dg/graphite/pr35356-1.c: Adjust.
      
      From-SVN: r257293
      Richard Biener committed
    • [AArch64] Fix SVE testsuite failures for ILP32 (PR 83846) · 0c64497d
      The SVE tests are split into code-quality compile tests and runtime
      tests.  A lot of the former are geared towards LP64.  It would be
      possible (but tedious!) to mark up every line that is expected to work
      only for LP64, but I think it would be a constant source of problems.
      
      Since the code has not been tuned for ILP32 yet, I think the best
      thing is to select only the runtime tests for that combination.
      They all pass on aarch64-elf and aarch64_be-elf except vec-cond-[34].c,
      which are unsupported due to the lack of fenv support.
      
      The patch also replaces uses of built-in types with stdint.h types
      where possible.  (This excludes tests that change the endianness,
      since we can't assume that system header files work in that case.)
      
      2018-02-01  Richard Sandiford  <richard.sandiford@linaro.org>
      
      gcc/testsuite/
      	PR testsuite/83846
      	* gcc.target/aarch64/sve/aarch64-sve.exp: Only do *_run tests
      	for ILP32.
      	* gcc.target/aarch64/sve/clastb_2_run.c (main): Use TYPE instead
      	of hard-coding the choice.
      	* gcc.target/aarch64/sve/clastb_4_run.c (main): Likewise.
      	* gcc.target/aarch64/sve/clastb_5_run.c (main): Likewise.
      	* gcc.target/aarch64/sve/clastb_3_run.c (main): Likewise.  Generalize
      	memset call.
      	* gcc.target/aarch64/sve/const_pred_1.C: Include stdint.h and use
      	stdint.h types.
      	* gcc.target/aarch64/sve/const_pred_2.C: Likewise.
      	* gcc.target/aarch64/sve/const_pred_3.C: Likewise.
      	* gcc.target/aarch64/sve/const_pred_4.C: Likewise.
      	* gcc.target/aarch64/sve/load_const_offset_2.c: Likewise.
      	* gcc.target/aarch64/sve/logical_1.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_load_1.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_load_2.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_load_3.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_load_4.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_load_5.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_load_6.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_load_7.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_load_8.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_store_1.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_store_2.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_store_3.c: Likewise.
      	* gcc.target/aarch64/sve/mask_struct_store_4.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_1.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_2.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_2_run.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_3.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_3_run.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_4.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_4_run.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_7.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_8.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_8_run.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_9.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_9_run.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_10.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_10_run.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_11.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_11_run.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_12.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_12_run.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_13.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_13_run.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_14.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_18.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_19.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_20.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_21.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_22.c: Likewise.
      	* gcc.target/aarch64/sve/struct_vect_23.c: Likewise.
      	* gcc.target/aarch64/sve/popcount_1.c (popcount_64): Use
      	__builtin_popcountll rather than __builtin_popcountl.
      
      Reviewed-by: James Greenhalgh <james.greenhalgh@arm.com>
      
      From-SVN: r257290
      Richard Sandiford committed
    • [AArch64] Handle SVE subregs that are effectively REVs · 002092be
      Subreg reads should be equivalent to storing the inner register to
      memory and loading the appropriate memory bytes back, with subreg
      writes doing the reverse.  For the reasons explained in the comments,
      this isn't what happens for big-endian SVE if we simply reinterpret
      one vector register as having a different element size, so the
      conceptual store and load is needed in the general case.
      
      However, that obviously produces poor code if we do it too often.
      The patch therefore adds a pattern for handling the operation in
      registers.  This copes with the important case of a VIEW_CONVERT
      created by tree-vect-slp.c:duplicate_and_interleave.
      
      It might make sense to tighten the predicates in aarch64-sve.md so
      that such subregs are not allowed as operands to most instructions,
      but that's future work.
      
      This fixes the sve/slp_*.c tests on aarch64_be.
      
      2018-02-01  Richard Sandiford  <richard.sandiford@linaro.org>
      
      gcc/
      	* config/aarch64/aarch64-protos.h (aarch64_split_sve_subreg_move)
      	(aarch64_maybe_expand_sve_subreg_move): Declare.
      	* config/aarch64/aarch64.md (UNSPEC_REV_SUBREG): New unspec.
      	* config/aarch64/predicates.md (aarch64_any_register_operand): New
      	predicate.
      	* config/aarch64/aarch64-sve.md (mov<mode>): Optimize subreg moves
      	that are semantically a reverse operation.
      	(*aarch64_sve_mov<mode>_subreg_be): New pattern.
      	* config/aarch64/aarch64.c (aarch64_maybe_expand_sve_subreg_move)
      	(aarch64_replace_reg_mode, aarch64_split_sve_subreg_move): New
      	functions.
      	(aarch64_can_change_mode_class): For big-endian, forbid changes
      	between two SVE modes if they have different element sizes.
      
      Reviewed-by: James Greenhalgh <james.greenhalgh@arm.com>
      
      From-SVN: r257289
      Richard Sandiford committed
    • [AArch64] Prefer LD1RQ for big-endian SVE · 8179efe0
      This patch deals with cases in which a CONST_VECTOR contains a
      repeating bit pattern that is wider than one element but narrower
      than 128 bits.  The current code:
      
      * treats the repeating pattern as a single element
      * uses the associated LD1R to load and replicate it (such as LD1RD
        for 64-bit patterns)
      * uses a subreg to cast the result back to the original vector type
      
      The problem is that for big-endian targets, the final cast is
      effectively a form of element reverse.  E.g. say we're using LD1RD to load
      16-bit elements, with h being the high parts and l being the low parts:
      
                                     +-----+-----+-----+-----+-----+----
                               lanes |  0  |  1  |  2  |  3  |  4  | ...
                                     +-----+-----+-----+-----+-----+----
           memory              bytes |h0 l0 h1 l1 h2 l2 h3 l3 h0 l0 ....
                                     +----------------------------------
                                       V  V  V  V  V  V  V  V
                           ----------+-----------------------+
          register         ....      |           0           |
           after           ----------+-----------------------+  lsb
           LD1RD           .... h3 l3 h0 l0 h1 l1 h2 l2 h3 l3|
                           ----------------------------------+
      
                           ----+-----+-----+-----+-----+-----+
          expected         ... |  4  |  3  |  2  |  1  |  0  |
          register         ----+-----+-----+-----+-----+-----+  lsb
          contents         .... h0 l0 h3 l3 h2 l2 h1 l1 h0 l0|
                           ----------------------------------+
      
      A later patch fixes the handling of general subregs to account
      for this, but it means that we need to do a REV instruction
      after the load.  It seems better to use LD1RQ[BHW] on a 128-bit
      pattern instead, since that gets the endianness right without
      a separate fixup instruction.
      
      2018-02-01  Richard Sandiford  <richard.sandiford@linaro.org>
      
      gcc/
      	* config/aarch64/aarch64.c (aarch64_expand_sve_const_vector): Prefer
      	the TImode handling for big-endian targets.
      
      gcc/testsuite/
      	* gcc.target/aarch64/sve/slp_2.c: Expect LD1RQ to be used instead
      	of LD1R[HWD] for multi-element constants on big-endian targets.
      	* gcc.target/aarch64/sve/slp_3.c: Likewise.
      	* gcc.target/aarch64/sve/slp_4.c: Likewise.
      
      Reviewed-by: James Greenhalgh <james.greenhalgh@arm.com>
      
      From-SVN: r257288
      Richard Sandiford committed
    • [AArch64] Use all SVE LD1RQ variants · 947b1372
      The fallback way of handling a repeated 128-bit constant vector for SVE
      is to force the 128 bits to the constant pool and use LD1RQ to load it.
      Previously the code always used the byte variant of LD1RQ (LD1RQB),
      with a preceding BSWAP for big-endian targets.  However, that BSWAP
      doesn't handle all cases correctly.
      
      The simplest fix seemed to be to use the LD1RQ appropriate for the
      element size.
      
      This helps to fix some of the sve/slp_*.c tests for aarch64_be,
      although a later patch is needed as well.
      
      2018-02-01  Richard Sandiford  <richard.sandiford@linaro.org>
      
      gcc/
      	* config/aarch64/aarch64-sve.md (sve_ld1rq): Replace with...
      	(*sve_ld1rq<Vesize>): ... this new pattern.  Handle all element sizes,
      	not just bytes.
      	* config/aarch64/aarch64.c (aarch64_expand_sve_widened_duplicate):
      	Remove BSWAP handling for big-endian targets and use the form of
      	LD1RQ appropriate for the mode.
      
      gcc/testsuite/
      	* gcc.target/aarch64/sve/slp_2.c: Expect LD1RQD rather than LD1RQB.
      	* gcc.target/aarch64/sve/slp_3.c: Expect LD1RQW rather than LD1RQB.
      	* gcc.target/aarch64/sve/slp_4.c: Expect LD1RQH rather than LD1RQB.
      
      Reviewed-by: James Greenhalgh <james.greenhalgh@arm.com>
      
      From-SVN: r257287
      Richard Sandiford committed
    • [AArch64] Generalise aarch64_simd_valid_immediate for SVE · f9093f23
      The current aarch64_simd_valid_immediate code predates the move
      to the new CONST_VECTOR representation, so for variable-length SVE
      it only handles duplicates of single elements, rather than duplicates
      of repeating patterns.
      
      This patch removes the restriction.  It means that the validity
      of a duplicated constant depends only on the bit pattern, not on
      the mode used to represent it.
      
      The patch is needed by a later big-endian fix.
      
      2018-02-01  Richard Sandiford  <richard.sandiford@linaro.org>
      
      gcc/
      	* config/aarch64/aarch64.c (aarch64_simd_valid_immediate): Handle
      	all CONST_VECTOR_DUPLICATE_P vectors, not just those with a single
      	duplicated element.
      
      Reviewed-by: James Greenhalgh <james.greenhalgh@arm.com>
      
      From-SVN: r257286
      Richard Sandiford committed
    • [AArch64] Tighten aarch64_secondary_reload condition (PR 83845) · 9a1b9cb4
      aarch64_secondary_reload enforced a secondary reload via
      aarch64_sve_reload_be for memory and pseudo registers, but failed
      to do the same for subregs of pseudo registers.  To avoid this and
      any similar problems, the patch instead tests for things that the move
      patterns handle directly; if the operand isn't one of those, we should
      use the reload pattern instead.
      
      The patch fixes an ICE in sve/mask_struct_store_3.c for aarch64_be,
      where the bogus target description was (rightly) causing LRA to cycle.
      
      2018-02-01  Richard Sandiford  <richard.sandiford@linaro.org>
      
      gcc/
      	PR target/83845
      	* config/aarch64/aarch64.c (aarch64_secondary_reload): Tighten
      	check for operands that need to go through aarch64_sve_reload_be.
      
      Reviewed-by: James Greenhalgh <james.greenhalgh@arm.com>
      
      From-SVN: r257285
      Richard Sandiford committed
    • re PR tree-optimization/81661 (ICE in gimplify_modify_expr, at gimplify.c:5638) · 31b6733b
      	PR tree-optimization/81661
      	PR tree-optimization/84117
      	* tree-eh.h (rewrite_to_non_trapping_overflow): Declare.
      	* tree-eh.c: Include gimplify.h.
      	(find_trapping_overflow, replace_trapping_overflow,
      	rewrite_to_non_trapping_overflow): New functions.
      	* tree-vect-loop.c: Include tree-eh.h.
      	(vect_get_loop_niters): Use rewrite_to_non_trapping_overflow.
      	* tree-data-ref.c: Include tree-eh.h.
      	(get_segment_min_max): Use rewrite_to_non_trapping_overflow.
      
      	* gcc.dg/pr81661.c: New test.
      	* gfortran.dg/pr84117.f90: New test.
      
      From-SVN: r257284
      Jakub Jelinek committed
    • PR 83705 Repeat with large values · eae4d8fb
      This patch fixes the regression by increasing the limit where we fall
      back to runtime to 2**28 elements, which is the same limit where
      previous releases failed. The are still bugs in the runtime
      evaluation, so in many cases longer characters will still fail, so
      print a warning message.
      
      Regtested on x86_64-pc-linux-gnu.
      
      gcc/fortran/ChangeLog:
      
      2018-02-01  Janne Blomqvist  <jb@gcc.gnu.org>
      
      	PR fortran/83705
      	* simplify.c (gfc_simplify_repeat): Increase limit for deferring
      	to runtime, print a warning message.
      
      gcc/testsuite/ChangeLog:
      
      2018-02-01  Janne Blomqvist  <jb@gcc.gnu.org>
      
      	PR fortran/83705
      	* gfortran.dg/repeat_7.f90: Catch warning message.
      
      From-SVN: r257281
      Janne Blomqvist committed
    • compiler: check for nil receiver in value method · 22149e37
          
          We already dereference the pointer to copy the value, but if the
          method does not use the value then the pointer dereference may be
          optimized away.  Do an explicit nil check so that we get the panic
          that is required.
          
          Fixes golang/go#19806
          
          Reviewed-on: https://go-review.googlesource.com/91275
      
      	* go.go-torture/execute/printnil.go: New test.
      
      From-SVN: r257280
      Ian Lance Taylor committed
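      The required behavior can be demonstrated with a small Go program (a hypothetical example; the type and method names are made up):

      ```go
      package main

      import "fmt"

      type flag bool

      // Value method whose body never uses its receiver: without an explicit
      // nil check, the compiler may optimize away the copy of *p and with it
      // the dereference that is required to panic.
      func (f flag) note() string { return "called" }

      func main() {
      	var p *flag // nil
      	defer func() {
      		fmt.Println("recovered:", recover() != nil) // recovered: true
      	}()
      	p.note() // must panic: calling a value method dereferences p
      	fmt.Println("not reached")
      }
      ```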
    • Daily bump. · ee249a76
      From-SVN: r257279
      GCC Administrator committed
  3. 31 Jan, 2018 11 commits