Commit e0f8b6a0 by Sofiane Naci

[AARCH64] Fix __clear_cache.

From-SVN: r195203
parent 93aea671
2013-01-15 Sofiane Naci <sofiane.naci@arm.com>
* config/aarch64/sync-cache.c (__aarch64_sync_cache_range): Update
loop start address for cache clearing.
2013-01-14  Georg-Johann Lay  <avr@gjlay.de>
	* config/avr/lib1funcs.S: Remove trailing blanks.
@@ -39,7 +39,10 @@ __aarch64_sync_cache_range (const void *base, const void *end)
      instruction cache fetches the updated data.  'end' is exclusive,
      as per the GNU definition of __clear_cache.  */
 
-  for (address = base; address < (const char *) end; address += dcache_lsize)
+  /* Make the start address of the loop cache aligned.  */
+  address = (const char*) ((unsigned long) base & ~ (dcache_lsize - 1));
+
+  for (address; address < (const char *) end; address += dcache_lsize)
     asm volatile ("dc\tcvau, %0"
                   :
                   : "r" (address)
@@ -47,7 +50,10 @@ __aarch64_sync_cache_range (const void *base, const void *end)
   asm volatile ("dsb\tish" : : : "memory");
 
-  for (address = base; address < (const char *) end; address += icache_lsize)
+  /* Make the start address of the loop cache aligned.  */
+  address = (const char*) ((unsigned long) base & ~ (icache_lsize - 1));
+
+  for (address; address < (const char *) end; address += icache_lsize)
     asm volatile ("ic\tivau, %0"
                   :
                   : "r" (address)