Abstract:
People interact with a diverse range of embedded systems on a daily basis. Such systems are found in vehicles, robots, and biomedical devices. An embedded system is safety-critical if its failure to operate correctly can lead to dangerous situations that endanger people’s lives or cause financial losses. Safety-critical systems must be certified against strict safety standards before use. Fairly recently, the concept of the cyber-physical system (CPS) has emerged, which emphasizes the tight integration between embedded systems and their physical environment. A CPS must deliver its computed outputs on time, according to its system specifications. Thus, a key to building a successful CPS is to understand the timing behavior of its computations and physical environment. The use of high-performance and low-power multi-core processors is becoming popular in embedded systems, triggering the need for embedded programmers to become parallel programming experts. Parallel programming is challenging because programmers must have the requisite skills, experience, and knowledge to avoid common parallel programming traps and pitfalls. Programmers need a strong comprehension of parallelism to develop robust parallel programs without being overwhelmed by their complexity. Programmers must also be aware of the data dependencies in their programs and choose appropriate solutions to manage those dependencies. This thesis makes inroads into the future development of CPSs and attempts to unify various research fields by proposing the ForeC language, the ForeMC framework, and the ForeCast static timing analyzer. The ForeC language enables the deterministic, parallel, and reactive programming of parallel architectures. The synchronous semantics of ForeC is designed to greatly simplify the understanding and debugging of parallel programs.
ForeC allows programmers to express many forms of parallel patterns while ensuring that ForeC programs can be compiled efficiently for parallel execution and be amenable to static timing analysis. ForeC’s main innovation is its shared variable semantics, which provides thread isolation and deterministic thread communication. All ForeC programs are correct by construction and deadlock-free because mutual exclusion constructs are not needed. Benchmarks on an Intel desktop quad-core processor reveal that ForeC programs can achieve an average speedup of 3.22× compared with 2.83× for the same programs written in OpenMP (a popular multi-threading framework). On a Xilinx embedded quad-core processor, ForeC programs can achieve an average speedup of 3.49× compared with 2.25× for the same programs written in Esterel (a classic synchronous language). Many CPSs are now composed of components with various timing requirements, e.g., hard, soft, or non-real-time requirements. The synchronous abstraction, on which ForeC is based, stipulates that programs always react instantaneously to new environmental inputs. This means that all synchronous programs have hard real-time requirements, which precludes the use of soft or non-real-time threads. The ForeMC framework relaxes the synchronous abstraction to allow the definition of threads with hard, soft, or non-real-time requirements. For the first time, the ForeMC framework allows the synchronous language community to venture into the realm of mixed-criticality systems. With size, weight, and power concerns becoming more important, the proposed hybrid (static and dynamic) scheduling approach achieves high processor utilization while maintaining execution fairness among the threads. Simulation-based benchmarks are performed to assess the performance of the hybrid scheduling approach on multi-processor systems.
Benchmarks reveal that the ForeMC framework can schedule up to 15% more task sets and achieve an average of 5.38% better system utilization than a completely dynamic scheduling approach based on earliest deadline first (EDF). Tasks are scheduled more fairly under ForeMC and achieve consistently higher execution frequencies, but require more preemptions than under an EDF-based approach. The correctness of a CPS depends on its ability to complete computations in a timely manner. It is therefore critical to analyze the execution times of the programs in a CPS. The ForeCast static timing analyzer verifies the timing requirements of parallel programs by using the reachability technique to compute their worst-case execution times (WCETs). While traversing the program’s control-flow graph, ForeCast simultaneously resolves the execution times of each core due to instruction execution, shared bus delays, synchronization delays, and thread scheduling behavior. We describe the ability to trade off analysis precision against analysis time by changing the level of abstraction. Benchmarks on a Xilinx embedded multi-core processor reveal that ForeCast can compute the WCETs of large multi-core programs with an average over-estimation of only 2%. By increasing the level of abstraction, the analysis time for the largest benchmark program (with 43,695 reachable states) is reduced by a factor of 342, to only 7 seconds. This thesis concludes with a vision of an integrated tool chain combining ForeC, ForeMC, and ForeCast that helps to simplify the CPS development process.