Commit a2f9e6e3 by Richard Earnshaw

[arm] Avoid using negative offsets for 'immediate' addresses when compiling for Thumb2

Thumb2 code now uses the Arm implementation of legitimize_address.
That code has a case to handle addresses that are absolute CONST_INT
values, which is a common use case in deeply embedded targets (e.g.
void *p = (void*)0x12345678).  Since Thumb-2 has a very limited range
of negative offsets from a base register, we want to avoid forming a
CSE base that will then be used with a negative offset.

This was reported upstream originally in
https://gcc.gnu.org/ml/gcc-help/2019-10/msg00122.html

For example,

#include <stdint.h>

void test1(void) {
  volatile uint32_t * const p = (uint32_t *) 0x43fe1800;

  p[3] = 1;
  p[4] = 2;
  p[1] = 3;
  p[7] = 4;
  p[0] = 6;
}
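
For concreteness, here is a minimal standalone sketch of the
base/index split that arm_legitimize_address performs on one of these
absolute addresses.  This is not GCC code: bit_count is a plain C
stand-in for GCC's helper of the same name, and target_thumb2 is a
stand-in for the TARGET_THUMB2 flag.

#include <stdint.h>
#include <stdio.h>

/* Population count, standing in for GCC's bit_count.  */
static int bit_count(uint32_t x) {
  int n = 0;
  for (; x != 0; x &= x - 1)  /* clear the lowest set bit */
    n++;
  return n;
}

int main(void) {
  const uint32_t addr = 0x43fe1800 + 3 * 4;  /* &p[3] in test1 above */
  const int bits = 12;                       /* SImode: 12-bit offset field */
  const uint32_t mask = (1u << bits) - 1;
  uint32_t base = addr & ~mask;              /* 0x43fe1000 */
  int32_t index = (int32_t)(addr & mask);    /* 0x80c = 2060 */
  const int target_thumb2 = 1;               /* stand-in for TARGET_THUMB2 */

  /* The patched heuristic: only rewrite to a denser base constant and
     a negative index in Arm state; Thumb-2 negative offsets are too
     limited for this to pay off.  */
  if (!target_thumb2 && bit_count(base) > (32 - bits) / 2) {
    base |= mask;   /* 0x43fe1fff */
    index -= mask;  /* -2035 */
  }

  printf("base = 0x%08x, index = %d\n", (unsigned) base, (int) index);
  return 0;
}

With target_thumb2 set this prints base = 0x43fe1000, index = 2060;
clearing it reproduces the old behaviour, base = 0x43fe1fff,
index = -2035.  Both pairs appear literally in the assembly below.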

With the new code, instead of:

        ldr     r3, .L2
        subw    r2, r3, #2035
        movs    r1, #1
        str     r1, [r2]
        subw    r2, r3, #2031
        movs    r1, #2
        str     r1, [r2]
        subw    r2, r3, #2043
        movs    r1, #3
        str     r1, [r2]
        subw    r2, r3, #2019
        movs    r1, #4
        subw    r3, r3, #2047
        str     r1, [r2]
        movs    r2, #6
        str     r2, [r3]
        bx      lr


we now get:

        ldr     r3, .L2
        movs    r2, #1
        str     r2, [r3, #2060]
        movs    r2, #2
        str     r2, [r3, #2064]
        movs    r2, #3
        str     r2, [r3, #2052]
        movs    r2, #4
        str     r2, [r3, #2076]
        movs    r2, #6
        str     r2, [r3, #2048]
        bx      lr
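
The payoff is visible in the encodings: a Thumb-2 str accepts a
12-bit positive immediate offset, so 2048 through 2076 all fit
directly against the base 0x43fe1000 and every store is a single
instruction.  Negative offsets only get an 8-bit immediate, so values
like -2035 from the old base 0x43fe1fff were out of range and each
store first needed a subw (which does have a 12-bit immediate) to
compute the address.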


	* config/arm/arm.c (arm_legitimize_address): Don't form negative
	offsets from a CONST_INT address when TARGET_THUMB2.

From-SVN: r277677

2019-10-31  Richard Earnshaw  <rearnsha@arm.com>

	* config/arm/arm.c (arm_legitimize_address): Don't form negative
	offsets from a CONST_INT address when TARGET_THUMB2.

2019-10-31  Richard Earnshaw  <rearnsha@arm.com>

	* config/arm/arm.md (add_not_cin): New insn.
	(add_not_shift_cin): Likewise.
@@ -9039,17 +9039,20 @@ arm_legitimize_address (rtx x, rtx orig_x, machine_mode mode)
 	  HOST_WIDE_INT mask, base, index;
 	  rtx base_reg;
 
-	  /* ldr and ldrb can use a 12-bit index, ldrsb and the rest can only
-	     use a 8-bit index.  So let's use a 12-bit index for SImode only and
-	     hope that arm_gen_constant will enable ldrb to use more bits. */
+	  /* LDR and LDRB can use a 12-bit index, ldrsb and the rest can
+	     only use a 8-bit index.  So let's use a 12-bit index for
+	     SImode only and hope that arm_gen_constant will enable LDRB
+	     to use more bits. */
 	  bits = (mode == SImode) ? 12 : 8;
 	  mask = (1 << bits) - 1;
 	  base = INTVAL (x) & ~mask;
 	  index = INTVAL (x) & mask;
-	  if (bit_count (base & 0xffffffff) > (32 - bits)/2)
+	  if (TARGET_ARM && bit_count (base & 0xffffffff) > (32 - bits)/2)
 	    {
-	      /* It'll most probably be more efficient to generate the base
-		 with more bits set and use a negative index instead. */
+	      /* It'll most probably be more efficient to generate the
+		 base with more bits set and use a negative index instead.
+		 Don't do this for Thumb as negative offsets are much more
+		 limited.  */
 	      base |= mask;
 	      index -= mask;
 	    }
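
A note on the threshold in the unchanged condition: for SImode,
(32 - bits)/2 is 10, and the example base 0x43fe1000 has 11 bits set,
which is why the old code took the negative-index path in Thumb-2 as
well.  With the TARGET_ARM guard the rewrite still fires in Arm
state, where ldr/str immediate offsets are symmetric (up to +/-4095)
and a base constant with more bits set can be cheaper for
arm_gen_constant to build.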