Computing systems frequently must execute a mix of interactive,
real-time applications and background computation.
To guarantee responsiveness, the interactive and
background applications are often run on completely disjoint
sets of resources to ensure performance isolation. These practices
are expensive in terms of battery life, power, and capital
expenditures. In this paper, we evaluate the potential of hardware
cache partitioning mechanisms and policies to provide
energy-efficient operating environments by running foreground
and background tasks simultaneously while mitigating performance
degradation. We evaluate these tradeoffs using real
multicore hardware that supports cache partitioning and energy
measurement. We find that for modern, multi-threaded
benchmarks, cache partitioning reduces energy more effectively
than naive cache sharing in a race-to-halt scenario for only a
limited number of application pairings.
However, in contexts where a constant stream of background
work is available, a dynamically adaptive cache partitioning
policy effectively increases background application
throughput while preserving foreground application performance.