Real Time Operating Systems Computer Science Essay

Real Time Operating Systems are Operating Systems designed for systems that must operate in real time. These Operating Systems have functions dedicated to detecting and responding to tasks from the real world within their deadlines. Depending on whether the real time system is hard or soft, the RTOS is designed to the needs of each system. The RTOS differs from a General Purpose Operating System in function and design, and these differences are discussed in detail in this paper. Among the many functions of the RTOS kernel, task management is one of the most critical. Managing tasks entails assigning tasks dynamically or statically and scheduling them accordingly. Given the stringent requirements of the real time world, several factors need to be considered when selecting the right scheduling mechanism. Several Real Time Operating Systems exist commercially that can be used to reduce initial engineering and design costs and time. This paper discusses the differences between the RTOS and General Purpose Operating Systems, the desired functions of the RTOS, the RTOS kernel and task management mechanisms. It concludes with a discussion and analysis of commercially available Real Time Operating Systems.

Real Time Systems are systems that are subject to real time limitations and are expected to deliver services within strict time boundaries [1]. The correctness of such systems depends not only on computational output, but also on the instant at which the system produces the output.

Real Time Systems can be broadly categorized into soft and hard real time systems. When a deadline is missed in a hard real time system, i.e. a service is not provided at the instant at which it is expected, the result can be catastrophic. Hence, hard real time systems are generally safety or mission critical. Examples of hard real time systems are Air Traffic Control Systems, pacemakers for heart patients and car engine control systems. In a soft real time system, failure to meet a deadline, or latency in meeting it, only causes degradation of system performance, and the system can continue functioning despite the failure.

Real Time Operating Systems are Operating Systems designed to serve real time applications. Features of the RTOS are designed and selected such that output and feedback are provided by the system in a timely fashion. These operating systems are designed specifically to meet real time needs and constraints, and hence are specific to the application in which they are used. Typical features of a Real Time Operating System include multi-threading, preemptibility, scheduling and fault tolerance.

This paper is organized as follows. Section II discusses the differences between the Real Time Operating System and General Purpose Operating System. Section III introduces briefly the general desired features of the RTOS. Section IV details the duties of the kernel in the RTOS. Section V details various aspects of Task Management, one of the critical functions of the Operating System. The final section of the paper concludes with the discussion and analysis of the various commercial Real Time Operating Systems available in the market.

Real Time Operating Systems versus General Purpose Operating Systems

Systems that function in real time situations have different and more challenging requirements than other systems. An example would be an Air Traffic Control System, where the correctness of the system lies not only in the right output but in timely output; an untimely response could cost human lives. This is a case of a hard real time system, where delay in output could be catastrophic. The Operating System needs to manage several external events and produce output in a timely manner. An RTOS is designed with features that enable it to handle such demanding real time situations.

We need to note that not all real time systems require an operating system. Examples would be microwave ovens and washing machines, where there is minimal user interaction and only a combination of a few tasks. Such a system could be hardwired, as implementing an Operating System would be too expensive for a simple system.

An RTOS, in comparison with a General Purpose Operating System, is designed to share resources effectively among many processes and to produce output quickly; a scheduling algorithm is implemented for this purpose. A General Purpose Operating System, on the other hand, uses a “fairness” policy whereby every application and task is given an equal share of time and resources [2]. Hence, there is no differentiation between urgent tasks and other tasks. Moreover, the time at which output is generated is not predictable in a GPOS. These qualities make a GPOS unsuitable for real time conditions.

Table 1: GPOS vs. RTOS

The table above is an extract from [3]. It highlights and compares the features of a GPOS and an RTOS. While a GPOS is designed to optimize overall performance and thus reduce costs, an RTOS is designed to give the fastest response to a triggered task depending on its criticality. RTOS tasks have deadlines and hence are time bound, whereas GPOS tasks have no time bound. An RTOS is designed to be more deterministic than a GPOS through its schedulers. The table also cites common examples of each.

Basic Features of the RTOS

Though Real Time Operating Systems differ depending on the application for which they are designed, there are a few basic features expected of all of them. These features are discussed briefly below.

Timeliness

Timeliness is a crucial factor when designing an RTOS, especially for hard real time systems, which are safety or mission critical. In order to achieve timeliness, the RTOS has to be able to handle multiple tasks simultaneously; multithreading is used to achieve this.

Multithreading is the efficient management of multiple threads, where each thread is assigned a task. All threads share the resources of a single core. A scheduler assigns priorities to threads and ensures that higher priority threads can preempt lower priority threads to access the CPU when they need it.
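As an illustration, the sketch below (a minimal example assuming a POSIX system with sufficient privileges; the worker function and the priority value 50 are hypothetical) creates a thread with a fixed real-time priority under the SCHED_FIFO policy, so that it can preempt lower priority threads as described above.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *urgent_worker(void *arg)
{
    /* respond here to the external event assigned to this thread */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param param = { .sched_priority = 50 };          /* hypothetical priority level */

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);  /* use our attributes, not the parent's */
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);               /* fixed-priority, preemptive policy */
    pthread_attr_setschedparam(&attr, &param);

    if (pthread_create(&tid, &attr, urgent_worker, NULL) != 0)
        perror("pthread_create");                                 /* typically needs root/CAP_SYS_NICE */
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}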

Determinism

In order for the Real Time Operating System to function in a predictable manner, latencies in operation need to be predicted and delays need to be bounded. The common latencies are as follows. Task switching latency is the time between saving the state of the current task and the start of the next task. Interrupt latency is the time from the occurrence of an interrupt to the start of its service routine. Interrupt dispatch latency is the time from the completion of the interrupt service routine to the start of the next task. A static scheduler predefines the duration taken for each task to execute so that the Real Time System has a more deterministic and controlled behavior.


Data Integrity

Resources are shared between threads, and hence the same data may be modified by two or more threads at the same instant. To ensure data integrity, mutexes are used. A mutex allows exclusive access to a resource and can be in either a locked or an unlocked state. When a task is working with a resource, it holds the associated mutex in the locked state. When the task is done, it unlocks the mutex, and only then can another task modify that resource.
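A minimal sketch of this locked/unlocked discipline using a POSIX mutex is shown below (the shared variable and function name are hypothetical):

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_sensor_value;          /* resource shared by several threads */

void update_sensor_value(int new_value)
{
    pthread_mutex_lock(&lock);           /* mutex now in the locked state */
    shared_sensor_value = new_value;     /* only this thread may modify the data */
    pthread_mutex_unlock(&lock);         /* unlocked: other tasks may now access it */
}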

POSIX Compliance

POSIX stands for Portable Operating System Interface for Computer Environments. POSIX 1003.1b provides the standard with which a Real Time Operating System can comply. Compliance with such standards makes applications written for the RTOS easily portable across platforms.
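For example, two of the POSIX 1003.1b services, memory locking and real-time scheduling, can be requested as sketched below (a minimal sketch assuming a POSIX-compliant system with sufficient privileges; the priority value is hypothetical):

#include <stdio.h>
#include <sched.h>
#include <sys/mman.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 10 };   /* hypothetical priority */

    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)        /* POSIX.1b: keep pages resident, avoid paging delays */
        perror("mlockall");
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)    /* POSIX.1b: fixed-priority real-time scheduling */
        perror("sched_setscheduler");

    /* ... time-critical work ... */
    return 0;
}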

The RTOS Kernel

The kernel, as in other Operating Systems, is the most important part of the RTOS. It acts as an interface between the top level application layer, which can be accessed by users, and the hardware layer, consisting of the data processing hardware components. This is represented in Figure 1. The BSP (board support package) is the component that enables the RTOS to be target specific.

The responsibilities of the RTOS kernel are illustrated in Figure 2. The central duty of the kernel is to manage tasks. In an RTOS, a task is the basic unit of execution. Task management involves assigning these tasks or threads priority levels. A task that needs to be completed within a tighter deadline, and that has a larger impact on the output, is normally given a higher priority. Apart from assigning priorities to threads, the kernel needs to manage priority inversion situations efficiently and ensure that higher priority threads get resources as soon as they need them.

Figure 1 – The Kernel

The next most important duty of the kernel is inter-task communication and synchronization. Given the multitude of events an RTOS needs to deal with within a short span of time, this feature of the kernel is necessary to ensure that corrupted information is not passed between tasks. Common mechanisms used for inter-task communication and synchronization are semaphores, condition variables, event flags and message queues.
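As an example of one such mechanism, the sketch below passes a fixed-size message from one task to another through a POSIX message queue (the queue name "/sensor_q" and the message layout are hypothetical; on Linux this links with -lrt):

#include <fcntl.h>
#include <mqueue.h>

/* hypothetical message carrying a sensor reading from a producer task to a consumer task */
struct reading { int sensor_id; int value; };

int send_reading(int id, int value)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = sizeof(struct reading) };
    mqd_t q = mq_open("/sensor_q", O_CREAT | O_WRONLY, 0600, &attr);
    if (q == (mqd_t)-1)
        return -1;                                       /* queue could not be opened */

    struct reading r = { id, value };
    int rc = mq_send(q, (const char *)&r, sizeof r, 1);  /* last argument: message priority */
    mq_close(q);
    return rc;
}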

Figure 2 – RTOS Kernel Service Groups

Another critical function of the kernel is timers. Timers assist the schedulers in determining whether a task has met or missed its deadline. They can also be used for watchdog functions and to abort a task that has timed out. Timers can be absolute, where they track time and date according to the calendar, or relative, where time is calculated as a number of ticks.
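A relative, one-shot watchdog built on the POSIX timer API might look like the sketch below (the 50 ms budget and the handler are hypothetical; on Linux this links with -lrt):

#include <signal.h>
#include <stdio.h>
#include <time.h>

static void watchdog_expired(union sigval sv)
{
    (void)sv;
    fprintf(stderr, "task overran its deadline\n");   /* e.g. abort or restart the task here */
}

int main(void)
{
    timer_t wd;
    struct sigevent sev;
    sev.sigev_notify = SIGEV_THREAD;                  /* run the handler in a new thread on expiry */
    sev.sigev_notify_function = watchdog_expired;
    sev.sigev_notify_attributes = NULL;
    sev.sigev_value.sival_ptr = NULL;

    timer_create(CLOCK_MONOTONIC, &sev, &wd);

    /* relative timeout: fire once, 50 ms from now (hypothetical budget) */
    struct itimerspec budget = { .it_value = { .tv_sec = 0, .tv_nsec = 50 * 1000000L } };
    timer_settime(wd, 0, &budget, NULL);

    /* ... run the monitored task; delete the timer if it finishes in time ... */
    timer_delete(wd);
    return 0;
}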

Memory management is another important task of the kernel. With many threads using memory, the system will run out of memory if threads that have completed do not return their memory to the system. The memory in an RTOS is usually small and used only for the user application, and hence needs to be managed efficiently. In a few Real Time Operating Systems, temporary memory, divided into fixed-size blocks, is allocated to tasks. Once a task has executed, it must return its memory block to the memory pool. Lower priority tasks are allocated smaller memory blocks while higher priority tasks are allocated larger memory blocks. When a lower priority task requires more memory, it has to wait until a memory block is returned to the pool before continuing execution.
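A minimal sketch of such a fixed-size block pool is given below (the block size and count are hypothetical, and no locking is shown; a real RTOS would add mutual exclusion and separate pools per priority class):

#include <stddef.h>

#define BLOCK_SIZE  64          /* hypothetical fixed block size in bytes */
#define BLOCK_COUNT 16          /* hypothetical number of blocks in the pool */

static unsigned char pool_storage[BLOCK_COUNT][BLOCK_SIZE];
static void *free_list[BLOCK_COUNT];
static int free_top = -1;       /* index of the last free block on the stack */

void pool_init(void)
{
    for (int i = 0; i < BLOCK_COUNT; i++)
        free_list[++free_top] = pool_storage[i];
}

void *pool_alloc(void)
{
    return (free_top >= 0) ? free_list[free_top--] : NULL;  /* caller must wait if NULL */
}

void pool_free(void *block)
{
    free_list[++free_top] = block;          /* task returns its block to the pool */
}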

Though the functionalities above are general, the features of the kernel differ depending on the usage of the Real Time Operating System. A few common categories of kernels are: small proprietary kernels, real time extensions to kernels of general purpose operating systems, and component-based kernels.

Small, proprietary kernels

This category of kernels is used where the real time system has tight deadlines to meet. These kernels have very few functionalities and very well defined tasks to handle. Hence, the response times of these kernels are predictable, making them suitable for use in hard real time systems. These kernels are highly customized and can be designed in-house or bought commercially.

A few of the special features of these kernels are short context switching times, a limited feature set and hence low overhead, short durations during which interrupts are disabled, and priority based preemptive scheduling. These features enable the RTOS to respond quickly to events in a predictable manner.

Real Time extensions

Kernels belonging to this category are extensions to the kernels of existing commercial operating systems. An example would be Real Time Linux, which extends the Linux operating system. Real time features are added to the kernel of these operating systems. Compared to the small proprietary kernels, these kernels have many more features and are hence much slower. Given that the commercial kernels are popular, there are more comprehensive software development environments and many more developers in the market with the required technical skills. Hence, the time to design and market these products is shorter. However, these kernels generally cannot be used for real time operating systems supporting hard real time environments.

Component Based Kernel

This category of kernels has a microkernel architecture. The functionalities of the kernel are broken into components, and there is a nucleus whose sole duty is to switch between components. Such a component based design allows flexibility in deciding the functionality of the kernel, which can be tailor made to suit the RTOS and the real time environment. This category of kernels is not suitable for hard real time systems, as adding components to the system can increase overhead and delay.


Task Management

The Real Time Task

A task instance in a real time operating system is triggered each time an associated event occurs. The task then carries out the response to the event. Important aspects of tasks are computation time, resource requirements, criticality, relationship with other tasks and deadlines.

Tasks can be periodic, whereby they occur at fixed intervals of time. Such tasks can be triggered by an internal clock: the clock generates an interrupt at the precise instant of time, which then invokes the task. These tasks are termed clock driven tasks. A typical example of a real time system that needs periodic tasks is a monitoring device: pressure, temperature and similar variables need to be measured at fixed time instants, and a periodic task can handle this requirement.
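A clock driven periodic task can be sketched as below using an absolute-time sleep, so that the period does not drift (the 10 ms period and the sampling routine are hypothetical; a POSIX system is assumed):

#include <time.h>

#define PERIOD_NS (10 * 1000000L)          /* hypothetical 10 ms sampling period */

static void sample_pressure(void)
{
    /* read the sensor here (hypothetical measurement routine) */
}

void periodic_task(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        sample_pressure();                  /* the work done every period */

        /* advance the absolute release time by one period */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        /* sleep until the next release instant (absolute, so no drift accumulates) */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}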

Tasks that do not occur at fixed instants of time are aperiodic. An aperiodic task can be characterized by its worst case execution time, the minimum time between two instances of the task, and its deadline. Given limited resources, the time between two instances of the task has to be greater than the worst case execution time.

Task Scheduling

An RTOS, unlike a General Purpose Operating System, does not divide resources and time equally among tasks. The scheduler dictates when and for how long a task can execute, depending on its priority and the scheduling algorithm. The scheduling algorithm is what allows tasks to obtain the required resources and CPU time quickly enough to execute within their deadlines.

Figure 3 – Real Time Scheduling Algorithms

Categories of scheduling algorithms found in commercial Real Time Operating Systems are represented in the figure above. While there is a multitude of scheduling algorithms in the market, the sections below analyze a few prominent and basic algorithms.

Static scheduling algorithms are used when the priorities of tasks do not change during execution. The schedule of the tasks is worked out before execution begins. The schedule is computed with worst case timings so that the tasks will meet their assigned deadlines in all situations.

An example of a static, table driven algorithm is the List Scheduling algorithm (Figure 4). Depending on the policy (Earliest Deadline First or Shortest Period First), a table of tasks with their corresponding periods or CPU times is generated, and the task with the smallest period or CPU time is selected to go first.

Figure 4 – An example table for List Scheduling

The List Scheduling algorithm is useful when the system carries out very few tasks, and in safety critical systems, since it ensures predictability. However, in systems with many tasks that are interdependent, share resources, and are a mixture of periodic and aperiodic tasks, such a schedule might not be feasible. Therefore, static scheduling could be used for the most critical operations with hard deadlines, and a more suitable algorithm could be used for the other tasks.
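A sketch of how such a static table could be built offline, ordering tasks by period (shortest period first) before execution begins, is shown below (the task set is hypothetical):

#include <stdio.h>
#include <stdlib.h>

struct task_entry {
    const char *name;
    unsigned period_ms;     /* period (or CPU time, for a shortest-job-first table) */
};

static int by_period(const void *a, const void *b)
{
    const struct task_entry *x = a, *y = b;
    return (int)x->period_ms - (int)y->period_ms;
}

int main(void)
{
    /* hypothetical task set; in practice taken from the system specification */
    struct task_entry table[] = {
        { "log",     100 },
        { "control",  10 },
        { "sample",   20 },
    };
    size_t n = sizeof table / sizeof table[0];

    qsort(table, n, sizeof table[0], by_period);   /* build the static execution order */
    for (size_t i = 0; i < n; i++)
        printf("%zu: %s (period %u ms)\n", i, table[i].name, table[i].period_ms);
    return 0;
}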

The Priority Driven Preemptive Approach schedules tasks based on their priorities. The algorithm proposed by Liu and Layland, Rate Monotonic Analysis (RMA), is one of the earliest and most common priority driven preemptive scheduling algorithms. In this algorithm, tasks are given priorities based on their execution periods: the task with the smallest period is given the highest priority, while that with the largest period is given the lowest priority. A task of a higher priority can preempt a task of a lower priority; once the higher priority task has finished executing, the CPU is handed back to the lower priority task to complete its execution. CPU utilization, a key quantity in RMA, is defined as the fraction of time spent executing the tasks. For m tasks with computation times Ci and periods Ti, the utilization is

U = C1/T1 + C2/T2 + … + Cm/Tm

where U refers to utilization and m to the number of tasks. A system is said to be fully utilized when any further increase in a task's computation time renders the schedule infeasible; a schedule is feasible when all tasks are executed within their deadlines. Liu and Layland showed that RMA is guaranteed to be feasible when

U ≤ m(2^(1/m) − 1)

The formula implies that RMA is feasible for a system where the computation times relative to the periods of the tasks take up less than this maximum CPU utilization, which approaches roughly 69% as m grows.
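This bound can be checked mechanically; a minimal sketch (the task parameters in main are hypothetical, and the test is the sufficient Liu-Layland condition, not an exact schedulability analysis) is shown below:

#include <math.h>
#include <stdio.h>

/* Returns 1 if the task set passes the Liu-Layland sufficient test U <= m(2^(1/m) - 1). */
int rma_schedulable(const double C[], const double T[], int m)
{
    double U = 0.0;
    for (int i = 0; i < m; i++)
        U += C[i] / T[i];                         /* total utilization */
    double bound = m * (pow(2.0, 1.0 / m) - 1.0); /* Liu-Layland bound for m tasks */
    return U <= bound;
}

int main(void)
{
    /* hypothetical task set: computation times and periods in milliseconds */
    double C[] = { 1.0, 2.0, 3.0 };
    double T[] = { 10.0, 20.0, 40.0 };
    printf("schedulable under RMA: %s\n", rma_schedulable(C, T, 3) ? "yes" : "no");
    return 0;
}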

RMA is based on the following assumptions [11]:

All tasks are periodic

Tasks do not interact with one another

The deadline of each task is the entire period of the task

Task execution time is always constant

All tasks are equal in terms of criticality. Priority is assigned purely based on the period of the task.

Any aperiodic tasks that the system might have are low in criticality and do not have hard deadlines.

In most real time systems, tasks are interdependent and share resources. Also, criticality does not always depend on the period alone, and might depend on several other factors. Several systems encounter unexpected safety critical events which need to be handled by aperiodic tasks with tight deadlines. Hence, the applicability of RMA is limited.

The two approaches discussed above are static approaches, whereby priorities are fixed and do not change dynamically. The advantages of such an approach are completely predictable behavior, increased reliability due to a reduced danger of missed deadlines, and reduced design complexity. However, static approaches are only suitable for periodic tasks. Systems with both periodic and aperiodic tasks can be scheduled using dynamic algorithms.

A dynamic priority driven scheduling algorithm is Earliest Deadline First (EDF). In dynamic algorithms, the priorities of tasks are re-assigned every time a new task arrives and at every context switch; under EDF, the ready task with the nearest absolute deadline runs first. EDF is feasible for a set of n tasks if

C1/T1 + C2/T2 + … + Cn/Tn ≤ 1


where Ci is the computation time of task i, Ti is its period, and n is the total number of tasks. The formula implies that if the computation demands of the tasks exceed the available processor time, i.e. the total utilization exceeds one, deadlines will be missed and EDF will not be feasible.
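At run time, an EDF scheduler simply dispatches the ready task with the nearest absolute deadline; a minimal selection routine (the task structure is hypothetical) might look like this:

struct rt_task {
    long abs_deadline_ticks;   /* absolute deadline of the current instance */
    int  ready;                /* non-zero if the task is ready to run */
};

/* Return the index of the ready task with the earliest deadline, or -1 if none is ready. */
int edf_pick_next(const struct rt_task tasks[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!tasks[i].ready)
            continue;
        if (best < 0 || tasks[i].abs_deadline_ticks < tasks[best].abs_deadline_ticks)
            best = i;
    }
    return best;
}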

Priority Inversion

The Priority Inversion Problem, illustrated in Figure 5 below, is a common problem in systems with prioritized tasks that share resources. With reference to Figure 5, a low priority task A is invoked; it uses the resource PRNT for its execution. While A is in progress, another task B of higher priority attempts to preempt it. Task B also requires the resource PRNT for its execution, and hence is not able to preempt Task A. Meanwhile, other tasks C and D, higher in priority than A but lower than B, preempt Task A and succeed in doing so because they do not require the PRNT resource. After tasks C and D execute, the CPU is handed back to Task A to complete. Only after Task A completes can Task B, which is of the highest priority, execute. This inversion of priorities could have detrimental consequences in a hard real time system.

Figure 5 – Priority Inversion Problem

A common solution to the Priority Inversion Problem is Priority Inheritance. This method makes use of the mutex, which allows mutually exclusive access to the resource: a task holding the mutex in the locked state cannot be forced off the resource by any other task until it unlocks the mutex. In the example illustrated in Figure 5, Task A locks the mutex when it begins using the resource. When Task B, which is higher in priority and requires the same resource, attempts to preempt A, Task A temporarily inherits Task B's priority. Tasks C and D are therefore unable to preempt Task A; Task A quickly finishes with the resource and unlocks the mutex, after which Task B takes the mutex and completes its execution.

Another common solution to the Priority Inversion Problem is the Priority Ceiling protocol. This method assigns every mutex a ceiling priority, which is the priority of the highest priority task that would use the mutex. Any task that locks the mutex is raised to the ceiling priority assigned to the mutex. Hence, this task cannot be preempted by the other tasks that share the resource until it releases the mutex.
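Both protocols are exposed through the POSIX mutex protocol attributes; the sketch below (the ceiling value 50 is hypothetical, and the system must implement the corresponding POSIX options) configures one mutex for priority inheritance and another for priority ceiling:

#include <pthread.h>

int make_rt_mutexes(pthread_mutex_t *inherit_mtx, pthread_mutex_t *ceiling_mtx)
{
    pthread_mutexattr_t attr;

    /* Priority Inheritance: the holder temporarily inherits the priority of the highest waiter. */
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(inherit_mtx, &attr);
    pthread_mutexattr_destroy(&attr);

    /* Priority Ceiling: any holder runs at the ceiling priority while the mutex is locked. */
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr, 50);   /* hypothetical: priority of the highest user */
    pthread_mutex_init(ceiling_mtx, &attr);
    pthread_mutexattr_destroy(&attr);

    return 0;
}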

Analysis of commercial RTOS Kernels

This section discusses and analyzes a few of the many commercial Real Time Operating Systems available in the market.

Real Time Linux

Real Time Linux (RT Linux) is a real time extension to the Linux operating system. It is self hosted and runs alongside Linux, sitting between the hardware and the standard Linux Operating System. This architecture is illustrated in Figure 6.

Figure 6 – RT Linux Architecture

In this dual kernel architecture, the real time kernel receives all interrupts. If an interrupt is a real time interrupt requiring one of the real time tasks to run, the necessary task is invoked; otherwise, the interrupt is handed over to the Linux kernel. Hence, Linux runs as if it were a low priority task of the real time kernel.

The advantage of Real Time Linux is the ease of development, since Linux is a popular commercial general purpose operating system. It is easier to find developers for RT Linux, and extra money need not be spent on training.

However, the downside is that RT Linux provides very few APIs, which makes it hard to program. Moreover, the APIs differ between RT kernels and Linux flavors. Duplicate coding needs to be done, since tasks in RT Linux cannot make use of Linux resources without risking preemption problems.

PSOS

PSOS is a Wind River Systems product. This RTOS has a host-target architecture wherein the host is typically a desktop machine with an editor, cross compiler, debugger and libraries, while the target is a board with ROM, RAM, a processor, etc. This architecture is illustrated below in Figure x.

The PNA in the ROM is a component that provides network services between the host computer and the target board; for instance, it provides a TCP/IP connection so that Ethernet can be used to connect the host and target. There are also debugging components, such as PROBE, that can be used once the code has been ported from the host to the hardware. When the code is ported initially, it is run from RAM and then transferred to ROM once it is satisfactory. PSOS uses priority based scheduling with 32 priority levels. Priority inheritance and priority ceiling methods are used to prevent priority inversion. Device drivers are external to the kernel and are loaded or removed at run time. Interrupts are handled by interrupt service routines that are pointed to by a vector table and are under the complete control of the user.

This RTOS is very suitable for small scale embedded applications, as the architecture is component based and hence only the relevant components need to be added. The disadvantage of PSOS is that it is only available for a limited number of processors.

QNX

QNX is a POSIX compliant Real Time Operating System that is used mainly for mission and safety critical real time applications. The RTOS provides several APIs and has a comprehensive Integrated Development Environment. Its fine grained microkernel architecture allows the RTOS to be compact, which is a cost saving advantage, especially in high volume devices. QNX also has a micro GUI, called Photon, that operates on a small footprint. QNX runs on several processor families such as Intel, MIPS and ARM. The obvious advantages of QNX are its small size and compatibility with many processors, making it suitable for many safety and mission critical embedded systems.

Conclusion

There are several commercial Real Time Operating Systems available today with comprehensive Integrated Development Environments and debuggers. Using a commercially available RTOS enables time and cost savings and reduces time to market. However, whether a commercial RTOS is used or an RTOS is developed in house, its features need to be carefully considered and selected with the nature of the real time system in mind.
