Article - Oct 16, 1999.

Next - Previous - Home

The Iridium Experience - Mimosa and the Iridium Kernel

Mimosa is a piece of code, about 500 lines long, that serves as a library for solving static-priority scheduling problems. The name is an acronym: Mimosa is a Mixed-Mode Schedulability Analyzer. Mimosa's input is an array of records; each record describes one static-priority real-time scheduling task. The array is entered in priority order, with the first row being the highest-priority task. A task description includes the period, deadline, phase, and a bitmap specifying preemption relationships between this task and lower-priority tasks. A few other fields, such as "phase-lock" or "drifting" (phase), describe relative task phasing. Mimosa can describe and analyze absolutely any preemption relationship between tasks, and thus it analyzes all "mixed" preemption "modes".

Mimosa was written in June-July of 1996, when I arrived at Motorola Satcom in Chandler, AZ. I had no idea that I'd need something like Mimosa, but my supervisor wanted me to analyze the Iridium workload, and frankly I was very lucky to get it working in such a short time. If I hadn't gotten Mimosa working, I would probably have been fired! And since there is no other tool on earth like Mimosa, I wonder if Motorola understood just what I had built for them. I'm not positive that Mimosa is 100% correct, but it has been tested with one of the most thorough rate-monotonic test suites (probably the only rate-monotonic test suite) in existence.

Once a task system array has been constructed, Mimosa can answer many questions. Some of the simplest are: (a) What is the utilization? (b) Is this set of tasks worst-case schedulable? (c) What scaling factor makes the task system impossible to schedule?

Mimosa applies the ideas developed at the University of York (see the reference materials on my reading list). Internally, Mimosa has an engine that performs an "icalc", the fundamental unit of static-priority analysis. An "icalc", or Interference Calculation, computes the maximum amount of time that a higher-priority task could ever delay a lower-priority task in some time interval [0, t]. By repeated invocation of the icalc subroutine, worst-case bounds can be derived for the response time of any task. A typical task system of N tasks requires O(N^2) icalcs to analyze. Mimosa can perform about 300,000 icalcs per second on a SPARC-5/85 processor, so Mimosa is very fast. This speed is needed: with 40 tasks and a steepest-descent optimization algorithm, the icalc function can easily be called 1000 * 40 * 40 = 1.6 million times to answer a complex schedulability question.

Mimosa Examples

The only way to learn a new software library is to write programs with it. Our first example verifies the famous Liu-Layland theorem for real-time scheduling.

// liu.cxx - Liu/Layland worst-case system of two tasks.
#include <cstdio>
#include "Mimosa.hxx"

int main()
{
   double util;
   int feasible;
   const int tsize = 2;
   Task T[tsize];

   Mimosa_overflow = 5000;     // Quit analyzing after 5000 ticks.

   //          C    P     T     D  CPr Pok Loc Shd Name
   T[0].Task(414,   0, 1000, 1000,  1,  1,  1,  1, "T[0]");
   T[1].Task(586,   0, 1414, 1414,  1,  1,  1,  1, "T[1]");

   util = Task::Utilization(T, tsize);
   feasible = Task::Feasible(T, tsize);
   Task::PrintSystem(T, tsize, 1);
   printf("Task system U = %f has %d feasible tasks\n", util, feasible);
   return 0;
}
This program prints the following output.
Task Name     Comp    Period   Deadline Utilization    Shed   Response
     T[0]      414     1000       1000   0.414000       1        414
     T[1]      586     1414       1414   0.414427       1       1000
Task system U = 0.828427 has 2 feasible tasks
Mimosa operates on integer values only, to avoid roundoff problems. As you can see, this task system has a utilization of 0.828427 ~= 2(sqrt(2) - 1), and the response time of the second task, T[1], is exactly 1000. With some experimentation, you can find that increasing the computation time of either T[0] or T[1] makes the response of T[1] become -1 (meaning T[1] is not schedulable). Thus, this is the 2-task worst-case example given by the Liu-Layland theorem. The computation times, periods, and deadlines are related by factors of sqrt(2), but these were rounded to 4-digit values so that Mimosa could operate on them.

Mimosa has a 1-dimensional optimizer attached to it, so you can invoke the schedulability routines as a subroutine and search for knowledge about a task system. The optimizer takes a task system and a scaling function. Some canned scaling functions are included; these scale periods, deadlines, or execution times. These scaling functions make it easy to find breakdown utilizations [Lehoczky89].

Early in the design of the Iridium payload software, the systems engineers wanted to know if the system would be overloaded. We estimated the real-time workloads (intra-satellite handoffs, inter-satellite handoffs, bus (avionics) operations, etc.) and fed them into Mimosa. Then, using the optimizer, we found that every piece of Iridium code would need to be tuned (i.e., execution times would need to be scaled down) by about 2.4x if a cyclic executive were used. If static-priority preemptive scheduling were used, a tuning factor of only 1.4x would be needed to meet all the deadlines.

Iridium used the PowerPC version of pSOS, which had awful performance. It was the first major real-time kernel for the PowerPC, and it was the "portable" version, which was very inefficient; the pSOS preemption time was about 300 microseconds. After finishing Mimosa, I was commissioned to write a static-priority non-preemptive kernel to run as a pSOS task. The kernel had a scheduler, and several schedulers (Iridium subtasks) could run in the system. In this way, the first 10 tasks could be run non-preemptively, and then a lower-priority scheduler could run the remaining 30 tasks. Preemption and rescheduling inside a scheduler took 1 microsecond instead of 300. Using Mimosa I showed that with just 2 schedulers and properly chosen periods, the system would still need to be tuned by only 1.4x - i.e., there would be very little scheduling loss, about 0.4%, even though the system was non-preemptive. My task over the next 12 months was to write, test, and integrate, from scratch, a non-preemptive real-time kernel that was 100x faster than pSOS, and to give it the most advanced real-time capabilities of any kernel that had ever been written. This scheduler supports native imprecise computation, sporadic servers, and deadline > period pipelined tasks to improve responsiveness. No other scheduler since that time has all these features, and it is still being used within Motorola.

Testing Mimosa

Mimosa was tested with task systems chosen from the RTSS conference; most of these task systems appear on my reading list. Extra task systems were taken from the real-time course I had developed at UBC. In total, about 15 task sets were developed, and Mimosa was debugged to the point where all 15 were properly analyzed. In the end, Mimosa could solve all the homework problems from the UBC course, and thus Mimosa has made students unnecessary (this was important, since I had left UBC for greener pastures. ;) )

In using Mimosa, though, we had several problems. Perhaps the biggest is that Mimosa has no visualization engine; there is no way to "check your work". If you enter an incorrect number, or give the wrong preemption or phasing constraints to Mimosa, you will get weird results. In several cases, we proceeded with incorrect results for weeks or months until a problem was discovered, by accident. Although the analysis itself is correct, the results are of little or no value if the inputs are wrong. This is a substantial problem with a powerful modeling tool like Mimosa.

References

(C) 1999, Donald W. Gillies. All Rights Reserved.
