Combining Software Cache Partitioning and Loop Tiling for Effective Shared Cache Management
Abstract
One of the biggest challenges in multicore platforms is shared cache management, especially for data-dominant applications. Two commonly used approaches for increasing shared cache utilization are cache partitioning and loop tiling. However, state-of-the-art compilers lack efficient cache partitioning and loop tiling methods for two reasons. First, cache partitioning and loop tiling are strongly coupled, and thus addressing them separately is simply not effective. Second, cache partitioning and loop tiling must be tailored to the details of the target shared cache architecture and the memory characteristics of the co-running workloads. To the best of our knowledge, this is the first time that a methodology provides (1) a theoretical foundation for the above-mentioned cache management mechanisms and (2) a unified framework to orchestrate these two mechanisms in tandem (not separately). Our approach lowers the number of main memory accesses by an order of magnitude while keeping the number of arithmetic/addressing instructions at a minimal level. We motivate this work by showcasing that cache partitioning, loop tiling, data array layouts, shared cache architecture details (i.e., cache size and associativity), and the memory reuse patterns of the executing tasks must be addressed together as one problem when a (near-)optimal solution is requested. To this end, we present a search space exploration analysis in which our proposal offers a vast reduction in the required search space.
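The coupling the abstract describes can be illustrated with a minimal loop tiling sketch. This is not the paper's method, only a generic example: the `TILE` value of 16 is a hypothetical choice standing in for a tile size that, in the paper's setting, would be derived jointly from the cache partition size, the associativity, and the arrays' reuse patterns.

```c
#include <string.h>

#define N 64
#define TILE 16 /* hypothetical tile size; in practice it must match the
                   cache partition assigned to this task, not be a fixed
                   constant */

/* Reference: straightforward (untiled) matrix multiply. */
static void mm_naive(const double A[N][N], const double B[N][N],
                     double C[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double s = 0.0;
            for (int k = 0; k < N; k++)
                s += A[i][k] * B[k][j];
            C[i][j] = s;
        }
}

/* Tiled multiply: the loop nest is restructured so that each
   TILE x TILE working set is reused from the cache before eviction,
   cutting main memory accesses at the cost of extra loop overhead. */
static void mm_tiled(const double A[N][N], const double B[N][N],
                     double C[N][N]) {
    memset(C, 0, sizeof(double) * N * N);
    for (int ii = 0; ii < N; ii += TILE)
        for (int kk = 0; kk < N; kk += TILE)
            for (int jj = 0; jj < N; jj += TILE)
                for (int i = ii; i < ii + TILE; i++)
                    for (int k = kk; k < kk + TILE; k++) {
                        double a = A[i][k]; /* reused across the j loop */
                        for (int j = jj; j < jj + TILE; j++)
                            C[i][j] += a * B[k][j];
                    }
}
```

A tile that fits the full cache but not the partition granted by cache partitioning will still thrash the co-runners' data, which is why the two mechanisms cannot be tuned independently.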
Publication Date
2018-05-22
Publication Title
ACM Transactions on Embedded Computing Systems
Embargo Period
2023-04-15
Recommended Citation
Kelefouras, V., Keramidas, G., & Voros, N. (2018) 'Combining Software Cache Partitioning and Loop Tiling for Effective Shared Cache Management', ACM Transactions on Embedded Computing Systems, 17(3), pp. 1-25. Available at: https://doi.org/10.1145/3202663