Abstract:
Embedded systems across many industries will continue to become more complex, performing new functions and requiring more computing power. This increased demand for processing power is best met by running applications on a multi-core platform, which offers potential savings in Size, Weight, and Power (SWaP), cost, and development effort. However, running applications of different criticalities on a shared platform remains an open problem because of shared-cache interference. If the shared cache is not managed properly, application performance can degrade, with consequences ranging from user inconvenience to catastrophic damage. In this thesis we investigate how to manage shared-cache spatial interference among concurrently executing applications. We study a dual-criticality system in which applications are either 'critical' or 'non-critical'. Critical applications are managed to achieve an upper bound on their execution time; non-critical applications are managed to maximise their performance (i.e., to reduce their average execution time). A flexible simulation framework was developed to support research on memory-hierarchy management in a multi-core, mixed-criticality application context. A key component of the framework is a highly parameterisable cache simulation model: the cache size and management policy (including the address mapping, write, and replacement policies) can be changed by the system designer to carry out design-space exploration. A distinguishing feature of our framework is its extensibility, which allows the co-simulation of abstract models (developed in C++ and SystemC) with hardware implementations (developed in Hardware Description Languages). This feature can potentially shorten the development lifecycle of new designs. Cache partitioning was chosen as the means of managing inter-application cache interference: it prevents interference by isolating applications from one another.
Dynamic Cache Partitioning (DCP) allows partition sizes to change according to applications' needs: as an application requires more cache space, its partition can be grown while the partitions of applications that need less space are shrunk. DCP features were added to our framework. DCP's key components are a partitioning scheme, a hit/miss curve estimator, and a capacity allocation algorithm. The capacity allocation algorithm assigns cache space to applications based on their space requirements, their criticalities, and the expected overall system performance. To achieve this goal, a previously developed cache-line allocation algorithm was extended to improve throughput. We also developed a new allocation algorithm to maximise cache utilisation for the applications.