Commit e60b1e23 by Tamar Christina

middle-end: Fix logical shift truncation (PR rtl-optimization/91838)

This fixes fallout from a patch I submitted two years ago which allowed
simplify-rtx to fold a logical right shift by offset a followed by one by
offset b into a single shift by (a + b).

However, this can generate a shift whose count ends up equal to the size of
the shift mode, which is undefined behavior on most platforms.
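
As a source-level illustration (my own sketch in C++, not GCC code, and only a
scalar analogy of the RTL transformation): the two-shift form is well defined,
while the naively folded form shifts by the full width.

/* Sketch only.  The two-shift form always yields 0 and each count is in
   range; the folded equivalent x >> (16 + 16) shifts a 32-bit value by 32,
   which is undefined behavior in C++ just as the equivalent RTL is on most
   targets.  */
#include <cstdint>

uint32_t two_shifts (uint32_t x)
{
  return (x >> 16) >> 16;   /* always 0, both counts are in range */
}

/* Naively folded form -- out of range for a 32-bit value:
uint32_t folded (uint32_t x)
{
  return x >> 32;
}
*/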

This patch changes the code to truncate the result to 0 if the shift amount
goes out of range.  Before my earlier patch this truncation happened in
combine when it saw the two shifts.  However, since we now fold them here,
combine never gets a chance to truncate them.
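
A minimal scalar sketch of the rule (my own illustration, assuming a
hypothetical 32-bit mode; the real change is the simplify-rtx.c hunk below):

/* Sketch only: mirrors the new LSHIFTRT handling.  If the combined count is
   out of range for the mode, fold straight to 0 instead of emitting an
   undefined full-width shift.  */
#include <cstdint>

uint32_t fold_two_lshiftrt (uint32_t x, unsigned a, unsigned b)
{
  unsigned count = a + b;
  if (count >= 32)      /* out of range for the mode: truncate to 0 */
    return 0;
  return x >> count;    /* otherwise a single, in-range shift */
}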

The issue mostly affects GCC 8 and 9, since on GCC 10 the back end knows how
to deal with this shift constant, but it is better to do the right thing in
simplify-rtx regardless.

Note that this doesn't take care of the arithmetic-shift case, where the
constant could instead be replaced with MODE_BITS (mode) - 1, but that's not
a regression, so it is punted for now.
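
For reference, a sketch of that arithmetic-shift variant (again my own
illustration, not part of this patch): an out-of-range combined count could
be clamped to MODE_BITS (mode) - 1, which leaves only the replicated sign bit.

/* Sketch only: hypothetical handling of ASHIFTRT for a 32-bit mode.
   Arithmetic right shifts saturate once every bit is a copy of the sign
   bit, so clamping the count to 31 preserves the semantics of the two
   original shifts.  */
#include <cstdint>

int32_t fold_two_ashiftrt (int32_t x, unsigned a, unsigned b)
{
  unsigned count = a + b;
  if (count >= 32)
    count = 31;         /* MODE_BITS (mode) - 1 */
  return x >> count;    /* arithmetic shift on GCC targets */
}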

gcc/ChangeLog:

	PR rtl-optimization/91838
	* simplify-rtx.c (simplify_binary_operation_1): Update LSHIFTRT case
	to truncate if allowed or reject combination.

gcc/testsuite/ChangeLog:

	PR rtl-optimization/91838
	* g++.dg/pr91838.C: New test.
gcc/simplify-rtx.c
@@ -3647,9 +3647,21 @@ simplify_binary_operation_1 (enum rtx_code code, machine_mode mode,
 	{
 	  rtx tmp = gen_int_shift_amount
 	    (inner_mode, INTVAL (XEXP (SUBREG_REG (op0), 1)) + INTVAL (op1));
-	  tmp = simplify_gen_binary (code, inner_mode,
-				     XEXP (SUBREG_REG (op0), 0),
-				     tmp);
+
+	  /* Combine would usually zero out the value when combining two
+	     local shifts and the range becomes larger or equal to the mode.
+	     However since we fold away one of the shifts here combine won't
+	     see it so we should immediately zero the result if it's out of
+	     range.  */
+	  if (code == LSHIFTRT
+	      && INTVAL (tmp) >= GET_MODE_BITSIZE (inner_mode))
+	    tmp = const0_rtx;
+	  else
+	    tmp = simplify_gen_binary (code,
+				       inner_mode,
+				       XEXP (SUBREG_REG (op0), 0),
+				       tmp);
+
 	  return lowpart_subreg (int_mode, tmp, inner_mode);
 	}
gcc/testsuite/g++.dg/pr91838.C (new file)
/* { dg-do compile } */
/* { dg-additional-options "-O2" } */
/* { dg-skip-if "" { *-*-* } {-std=c++98} } */
using T = unsigned char; // or ushort, or uint
using V [[gnu::vector_size(8)]] = T;
V f(V x) {
  /* Each element is shifted by its full bit width; after the fix this must
     fold to a zero vector (hence the pxor scan below).  */
  return x >> 8 * sizeof(T);
}
/* { dg-final { scan-assembler {pxor\s+%xmm0,\s+%xmm0} { target x86_64-*-* } } } */