Embedded System


An embedded system is a special-purpose system in which the computer is completely encapsulated by the device it controls. Unlike a general-purpose computer, such as a personal computer, an embedded system performs pre-defined tasks, usually with very specific requirements. Since the system is dedicated to a specific task, design engineers can optimize it, reducing the size and cost of the product. Embedded systems are often mass-produced, so the cost savings may be multiplied by millions of items.

The core of any embedded system is formed by one or several microprocessors or microcontrollers, programmed to perform a small number of tasks. In contrast to a general-purpose computer, which can run any software application the user chooses, the software on an embedded system is semi-permanent, so it is often called "firmware".

Examples of embedded systems

  • automated teller machines (ATMs)
  • avionics, such as inertial guidance systems, flight control hardware/software and other integrated systems in aircraft and missiles
  • cellular telephones and telephone switches
  • computer network equipment, including routers, timeservers and firewalls
  • computer printers
  • copiers
  • disk drives (floppy disk drives and hard disk drives)
  • engine controllers and antilock brake controllers for automobiles
  • home automation products, like thermostats, air conditioners, sprinklers, and security monitoring systems
  • handheld calculators
  • household appliances, including microwave ovens, washing machines, television sets, DVD players/recorders
  • medical equipment
  • measurement equipment such as digital storage oscilloscopes, logic analyzers, and spectrum analyzers
  • multifunction wristwatches
  • multimedia appliances: Internet radio receivers, TV set top boxes, digital satellite receivers
  • multifunction printers (MFPs)
  • personal digital assistants (PDAs), that is, small handheld computers with PIMs and other applications
  • mobile phones with additional capabilities, for example mobile digital assistants combining cellphone, PDA, and Java (MIDP) functions
  • programmable logic controllers (PLCs) for industrial automation and monitoring
  • stationary videogame consoles and handheld game consoles
  • wearable computers

History

The Apollo Guidance Computer, the first recognizably modern embedded system.
source: The Computer History Museum

The first recognizably modern embedded system was the Apollo Guidance Computer, developed by Charles Stark Draper at the MIT Instrumentation Laboratory. Each flight to the Moon carried two of them, running the inertial guidance systems of both the command module and the LEM.

At the project's inception, the Apollo guidance computer was considered the riskiest item in the Apollo project. The use of the then-new monolithic integrated circuits to reduce size and weight increased this risk.

The first mass-produced embedded system was the Autonetics D-17 guidance computer for the Minuteman missile, released in 1961. It was built from discrete transistor logic and used a hard disk as its main memory. When the Minuteman II went into production in 1966, the D-17 was replaced with a new computer that was the first high-volume use of integrated circuits. This program alone reduced the price of quad NAND gate ICs from US$1000 each to US$3 each, permitting their use in commercial products.

The crucial design features of the Minuteman computer were that its guidance algorithm could be reprogrammed later in the project to make the missile more accurate, and that the computer could also test the missile, saving cable and connector weight.

Since these early applications in the 1960s, where cost was no object, embedded systems have come down in price. There has also been an enormous rise in processing power and functionality. This trend is consistent with Moore's Law.

The Intel 4004, the first microprocessor

The first microprocessor was the Intel 4004, which found its way into calculators and other small systems. However, it still required external memory chips and other external support logic. More powerful microprocessors, such as the Intel 8080, were developed for military projects, but were also sold for other uses.

By the end of the 1970s, 8-bit microprocessors were the norm, but usually needed external memory chips, and logic for decoding and input/output. However, prices rapidly fell and more applications adopted small embedded systems in place of (then more common) custom logic designs. Some of the more visible applications were in instrumentation and expensive devices.

A PIC microcontroller

By the mid-1980s, most of the previously external system components had been integrated into the same chip as the processor. The result was a dramatic reduction in the size and cost of embedded systems. Such integrated circuits were called microcontrollers rather than microprocessors, and widespread use of embedded systems became feasible.

As the cost of a microcontroller fell below an hour's wage for an engineer, there was an explosion in both the number of embedded systems, and in the number of parts supplied by different manufacturers for use in embedded systems. For example, many new special function ICs started to come with a serial programming interface rather than the more traditional parallel ones, for interfacing to a microcontroller with fewer interconnections. The I2C bus also appeared at this time.

As the cost of a microcontroller fell below $1, it became feasible to replace expensive analog components such as potentiometers and variable capacitors with digital electronics controlled by a small microcontroller.

By the end of the 1980s, embedded systems were the norm rather than the exception for almost all electronic devices, a trend which has continued since.

Characteristics

Embedded systems are computer systems in the widest sense. They include all computers other than those specifically intended as general-purpose computers. Examples of embedded systems range from portable music players to real-time controls for subsystems in the space shuttle.

Most commercial embedded systems are designed to do some task at low cost. Most, but not all, have real-time constraints that must be met. They may need to be very fast for some functions, while most other functions will probably not have strict timing requirements. These systems meet their real-time constraints with a combination of special-purpose hardware and software tailored to the system requirements.

It is difficult to characterize embedded systems by speed or cost, but for high volume systems, minimizing cost is usually the primary design consideration. Often embedded systems have low performance requirements. This allows the system hardware to be simplified to reduce costs. Engineers typically select hardware that is just “good enough” to implement the necessary functions.

For example, a digital set-top box for satellite television has to process tens of megabits of continuous-data per second, but most of the processing is done by custom integrated circuits that parse, direct, and decode the multi-channel digital video. The embedded CPU "sets up" this process, and displays menu graphics, etc. for the set-top's look and feel. As embedded processors become faster and cheaper, they can take over more of the high-speed data processing.

For low-volume embedded systems, personal computers can often be used, by limiting the programs or by replacing the operating system with a real-time operating system. In this case special purpose hardware may be replaced by one or more high performance CPUs. Still, some embedded systems may require high performance CPUs, special hardware, and large memories to accomplish a required task.

In high volume embedded systems such as portable music players or cell phones, reducing cost becomes a major concern. These systems will often have just a few integrated circuits, a highly integrated CPU that controls all other functions and a single memory chip. In these designs each component is selected and designed to minimize overall system cost.

The software written for many embedded systems, especially those without a disk drive is sometimes called firmware. Firmware is software that is embedded in hardware devices, e.g. in one or more ROM or Flash memory IC chips.

Programs on an embedded system often run with limited hardware resources: often there is no disk drive, operating system, keyboard or screen. The software may not have anything remotely like a file system, or if one is present, a flash drive with a journaling file system may replace rotating media. If a user interface is present, it may be a small keypad and liquid crystal display.

Embedded systems reside in machines that are expected to run continuously for years without errors. Therefore the software is usually developed and tested more carefully than software for personal computers. Many embedded systems avoid mechanical moving parts such as disk drives, switches or buttons because these are unreliable compared to solid-state parts such as flash memory.

In addition, the embedded system may be outside the reach of humans (down an oil well borehole, launched into outer space, etc.), so the embedded system must be able to restart itself even if catastrophic data corruption has taken place. This is usually accomplished with a standard electronic part called a watchdog timer that resets the computer unless the software periodically resets the timer.
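The watchdog mechanism described above can be sketched in plain C. This is a simulation rather than a real driver: `wdt_tick()` plays the role of the independent hardware clock, and all names and the timeout value are hypothetical.

```c
#include <stdbool.h>

#define WDT_TIMEOUT_TICKS 100

static int  wdt_counter = WDT_TIMEOUT_TICKS;
static bool system_was_reset = false;

/* The application calls this periodically to prove it is still alive. */
void wdt_kick(void) {
    wdt_counter = WDT_TIMEOUT_TICKS;
}

/* Simulates one tick of the independent watchdog clock. */
void wdt_tick(void) {
    if (--wdt_counter <= 0) {
        system_was_reset = true;          /* real hardware pulls the reset line */
        wdt_counter = WDT_TIMEOUT_TICKS;  /* ...and the countdown starts over   */
    }
}
```

The essential property is that only healthy software keeps calling `wdt_kick()`; a hung or corrupted program stops kicking, and the independent timer forces a restart.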

Design of embedded systems

The electronics usually uses either a microprocessor or a microcontroller. Some large or old systems use general-purpose mainframe computers or minicomputers.

User interfaces

User interfaces for embedded systems vary widely, and thus deserve some special comment.

Interface designers at PARC, Apple Computer, Boeing and HP discovered the principle that one should minimize the number of types of user actions. In embedded systems this principle is often combined with a drive to lower costs.

One standard interface, widely used in embedded systems, uses two buttons to control a menu system, with one button allowing the user to scroll through items on the menu and the other to select an item.
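A minimal sketch of this two-button scheme in C; the item count and function names are invented for illustration.

```c
/* Hypothetical two-button menu: one button scrolls, the other selects. */
#define MENU_ITEMS 4

static int cursor;          /* the currently highlighted item       */
static int selected = -1;   /* -1 means nothing has been chosen yet */

/* Called when the "scroll" button is pressed; wraps past the last item. */
void button_scroll(void) { cursor = (cursor + 1) % MENU_ITEMS; }

/* Called when the "select" button is pressed. */
void button_select(void) { selected = cursor; }
```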

Menus are broadly popular because they document themselves, and can be selected with very simple user actions.

Another basic trick is to minimize and simplify the type of output. Designs sometimes use a status light for each interface plug, or failure condition, to tell what failed. A cheap variation is to have two light bars with a printed matrix of errors that they select; the user can glue on the labels for the language that he speaks. For example, most small computer printers use lights labelled with stick-on labels that can be printed in any language. In some markets, these are delivered with several sets of labels, so customers can pick the most comfortable language.

Another common trick is that modes are made absolutely clear on the user's display. If an interface has modes, they are almost always reversible in an obvious way, or reverse themselves automatically.

For example, Boeing's standard test interface is a button and some lights. When you press the button, all the lights turn on. When you release the button, the lights with failures stay on. The labels are in Basic English.

Designers use colors. Red means "danger" or that some error has occurred, causing the entire system to fail. Yellow means something might be wrong. Green means the status is OK or good. This is intentionally like a stop-light, because most people understand those.

Most designs arrange for a display to change immediately after a user action. If the machine is going to do anything, it usually starts within 7 seconds, or gives progress reports.

If a design needs a screen, many designers use plain text. It's preferred because users have been reading signs for years. A GUI is pretty and can do anything, but typically adds a year of design, approval, and translation delays, and one or two programmers, to a project's cost, without adding any value. Often, an overly clever GUI actually confuses users, because it can use unfamiliar symbols.

If a design needs to point to parts of the machine (as in copiers), these are often labelled with numbers on the actual machine, that are visible with the doors closed.

A network interface is just a remote screen. It behaves much like any other user interface.

One of the most successful small screen-based interfaces is the two menu buttons and a line of text in the user's native language. It's used in pagers, medium-priced printers, network switches, and other medium-priced situations that require complex behavior from users.

On larger screens, a touch-screen or screen-edge buttons also minimize the types of user actions, and easily control menus. The advantage of this system is that the meaning of the buttons can change with the screen, and selection can be very close to the natural behavior of pointing at what's desired.

When there's text, the designer chooses one or more languages. The default language is usually the one most widely understood by the targeted group of users. Most designers try to use native character sets of the target group to better meet their needs.

Text is usually translated by professional translators, even if native speakers are on staff. Marketing staff have to be able to tell foreign distributors that the translations are professional. A foreign manufacturer may ask the highest-volume distributor to review and correct translations in their native language, in order to help the product's acceptance by native sales people.

Most authorities consider a usability test more important than any number of opinions. Designers recommend testing the user interface for usability at the earliest possible instant. A commonly used quick-and-dirty test is to ask an executive secretary to use cardboard models drawn with magic markers, and manipulated by an engineer. The videotaped result is likely to be both humorous and very educational. In the tapes, every time the engineer talks, the interface has failed, because it would cause a service call.

In many organizations, one person approves the user interface. Often this is a customer, the major distributor or someone directly responsible for selling the system.

Platform

There are many different CPU architectures used in embedded designs such as ARM, MIPS, Coldfire/68k, PowerPC, X86, PIC, 8051, Atmel AVR, Renesas H8, SH, V850, FR-V, M32R etc.

This is in contrast to the desktop computer market, which as of this writing (2003) is limited to just a few competing architectures, mainly the Intel/AMD x86 and the Apple/Motorola/IBM PowerPC used in the Apple Macintosh. In desktop computers, as acceptance of Java grows, the software is becoming less dependent on a specific execution environment.

The PC/104 standard is a typical base for small, low-volume embedded and ruggedized system designs. These often use DOS, Linux, NetBSD, or an embedded real-time operating system such as QNX or Inferno.

A common configuration for very-high-volume embedded systems is the system on a chip, an application-specific integrated circuit, for which the CPU was purchased as intellectual property to add to the IC's design. A related common scheme is to use a field-programmable gate array, and program it with all the logic, including the CPU. Most modern FPGAs are designed for this purpose.

Tools

Like typical computer programmers, embedded system designers use compilers, assemblers, and debuggers to develop embedded system software. However, they also use a few tools that are unfamiliar to most programmers.

Software tools can come from several sources:
  • Software companies that specialize in the embedded market
  • Ported from the GNU software development tools (see cross compiler)
  • Sometimes, development tools for a personal computer can be used if the embedded processor is a close relative to a common PC processor

Embedded system designers also use a few software tools rarely used by typical computer programmers.

  • One common tool is an "in-circuit emulator" (ICE) or, in more modern designs, an embedded debugger. This debugging tool is the fundamental trick used to develop embedded code. It replaces or plugs into the microprocessor, and provides facilities to quickly load and debug experimental code in the system. A small pod usually provides the special electronics to plug into the system. Often a personal computer with special software attaches to the pod to provide the debugging interface.
  • The linker is usually quite exotic. In most business-programming, the linker is almost an afterthought, and the defaults are never varied. In contrast, it's common for an embedded linker to have a complete, often complex, command language. There are often multiple types of memory, with particular code and data located in each. Individual data structures may be placed at particular addresses to give the software convenient access to memory-mapped control registers. Embedded linkers often have quite exotic optimization facilities to reduce the code's size and execution times. For example, they may move subroutines so that calls to them can use smaller subroutine call and jump instructions. They often have features to manage data overlays, and bank switching, techniques often used to stretch the inexpensive CPUs in embedded software.
  • Another common tool is a utility program (often home-grown) to add a checksum or CRC to a program, so the embedded system can check its program data before executing it.
  • An embedded programmer that develops software for digital signal processing, often has a math workbench such as MathCad or Mathematica to simulate the mathematics.
  • Less common are utility programs to turn data files into code, so one can include any kind of data in a program.
  • A few projects use synchronous programming languages for extra reliability or digital signal processing.

Some programming languages offer specific support for embedded systems programming. For the C language, ISO/IEC TR 18037:2005 specifies named address spaces, named storage classes, and basic I/O hardware addressing.
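The checksum idea from the tool list above can be sketched in C. For brevity this uses a simple additive checksum rather than a true CRC; the image layout (payload followed by one checksum byte) is an assumption for illustration.

```c
#include <stdint.h>
#include <stddef.h>

/* Compute the trailing checksum byte for an image: the byte is chosen
   so that the sum of every byte, including it, is zero modulo 256. */
uint8_t checksum_byte(const uint8_t *image, size_t len) {
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += image[i];
    return (uint8_t)(0u - sum);   /* two's complement of the running sum */
}

/* Boot-time check: returns 1 if image (payload + checksum byte) sums to zero. */
int image_is_valid(const uint8_t *image, size_t len) {
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += image[i];
    return sum == 0;
}
```

A real system would typically use a CRC instead, since an additive checksum misses many multi-bit errors, but the boot-time flow is the same.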

Debugging

Debugging is usually performed with an in-circuit emulator, or some type of debugger that can interrupt the microcontroller's internal microcode.

The microcode interrupt lets the debugger operate in hardware in which only the CPU works. The CPU-based debugger can be used to test and debug the electronics of the computer from the viewpoint of the CPU. This feature was pioneered on the PDP-11.

Developers should insist on debugging which shows the high-level language, with breakpoints and single-stepping, because these features are widely available. Also, developers should write and use simple logging facilities to debug sequences of real-time events.

PC or mainframe programmers first encountering this sort of programming often become confused about design priorities and acceptable methods. Mentoring, code-reviews and egoless programming are recommended.

As the complexity of embedded systems grows, higher level tools and operating systems are migrating into machinery where it makes sense. For example, cellphones, personal digital assistants and other consumer computers often need significant software that is purchased or provided by a person other than the manufacturer of the electronics. In these systems, an open programming environment such as Linux, OSGi or Embedded Java is required so that the third-party software provider can sell to a large market.

Most such open environments have a reference design that runs on a personal computer. Much of the software for such systems can be developed on a conventional PC. However, the porting of the open environment to the specialized electronics, and the development of the device drivers for the electronics are usually still the responsibility of a classic embedded software engineer. In some cases, the engineer works for the integrated circuit manufacturer, but there is still such a person somewhere.

Operating system

Embedded systems often have no operating system, or a specialized embedded operating system (often a real-time operating system), or the programmer is assigned to port one of these to the new system.

Start-up

All embedded systems have start-up code. Usually it disables interrupts, sets up the electronics, tests the computer (RAM, CPU and software), and then starts the application code. Many embedded systems recover from short-term power failures by restarting without recent self-tests. Restart times under a tenth of a second are common.

Many designers have found a few LEDs useful for indicating errors (they help troubleshooting). A common scheme is to have the electronics turn on all of the LEDs at reset (proving that power and the LEDs work). Then the software changes the LEDs as the power-on self-test executes. After that, the software uses the LEDs to indicate normal operation or errors. This serves to reassure most technicians, engineers and some users. An interesting exception is that on electric power meters and other items on the street, blinking lights are known to attract attention and vandalism.

Built-In Self-Test

Most embedded systems have some degree of built-in self-test. There are several basic types:
  • Testing the computer: CPU, RAM, and program memory. These often run once at power-up. In safety-critical systems, they are also run periodically (within the safety interval), or over time.
  • Tests of peripherals: These simulate inputs and read-back or measure outputs. A surprising number of communication, analog and control systems can have these tests, often very cheaply.
  • Tests of power: These usually measure each rail of the power supply, and may check the input (battery or mains) as well. Power supplies are often highly stressed, with low margins, and testing them is therefore valuable.
  • Communication tests: These verify the receipt of a simple message from connected units. The Internet, for example, has the ICMP echo message used by "ping."
  • Cabling tests: These usually run a wire in a serpentine arrangement through representative pins of the cables that have to be attached. Synchronous communications systems, like telephone media, often use "sync" tests for this purpose. Cable tests are cheap, and extremely useful when the unit has plugs.
  • Rigging tests: Often a system has to be adjusted when it is installed. Rigging tests provide indicators to the person that installs the system.
  • Consumables tests: These measure what a system uses up, and warn when the quantities are low. The most common example is the fuel gauge of a car. The most complex examples may be the automated medical analysis systems that maintain inventories of chemical reagents.
  • Operational tests: These measure things that a user would care about to operate the system. Notably, these have to run when the system is operating. This includes navigational instruments on aircraft, a car's speedometer, and disk-drive lights.
  • Safety tests: These run within a 'safety interval', and assure that the system is still reliable. The safety interval is usually a time less than the minimum time that can cause harm.
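The computer test in the first category might include a memory check like the following sketch: a destructive walking-ones RAM test. A production test would also exercise address lines and run before the region holds live data; everything here is illustrative.

```c
#include <stdint.h>
#include <stddef.h>

/* Walk a single 1 bit through every cell; returns 1 if all cells pass,
   0 if a stuck bit is found. Destroys and restores cell contents. */
int ram_test(volatile uint8_t *base, size_t len) {
    for (size_t i = 0; i < len; i++) {
        uint8_t saved = base[i];
        for (unsigned bit = 0; bit < 8; bit++) {
            uint8_t pattern = (uint8_t)(1u << bit);
            base[i] = pattern;
            if (base[i] != pattern)
                return 0;            /* stuck-at fault found */
        }
        base[i] = saved;             /* restore original contents */
    }
    return 1;
}
```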

Reliability regimes

Reliability has different definitions depending on why people want it. Interestingly, there are relatively few types of reliability, and systems with similar types employ similar built-in self-tests and recovery methods:
  • The system is too unsafe or too inaccessible to repair. (Space systems, undersea cables, navigational beacons, bore-hole systems, and oddly, automobiles and mass-produced products) Generally, the embedded system tests subsystems, and switches redundant spares on line, or incorporates "limp modes" that provide partial function. Often mass-produced equipment for consumers (such as cars, PCs or printers) falls in this category because repairs are expensive and repairmen far away, when compared to the initial cost of the unit.
  • The system cannot be safely shut down. (Aircraft navigation, reactor control systems, safety-critical chemical factory controls, train signals, engines on single-engine aircraft) Like the above, but "limp modes" are less tolerable. Often the backups are selected by an operator.
  • The system will lose large amounts of money when shut down. (Telephone switches, factory controls, bridge and elevator controls, funds transfer and market making, automated sales and service) These usually have a few go/no-go tests, with on-line spares or limp-modes using alternative equipment and manual procedures.
  • The system cannot be operated when it is unsafe. Similarly, perhaps a system cannot be operated when it would lose too much money. (Medical equipment, aircraft equipment with hot spares, such as engines, chemical factory controls, automated stock exchanges, gaming systems) The testing can be quite exotic, but the only action is to shut down the whole unit and indicate a failure.

Types of embedded software architectures

There are several fundamentally different types of software architectures in common use.

The control loop

In this design, the software simply has a loop. The loop calls subroutines. Each subroutine manages a part of the hardware or software. Interrupts generally set flags, or update counters that are read by the rest of the software.
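The loop described above might look like the following C sketch. The device names and flags are hypothetical; on real hardware, interrupt service routines would set the `volatile` flags that the loop polls.

```c
#include <stdbool.h>

volatile bool adc_ready;      /* set by an (imagined) ADC interrupt      */
volatile bool button_pressed; /* set by an (imagined) pin-change interrupt */

static int samples_handled;   /* counters stand in for real work */
static int presses_handled;

void handle_adc(void)    { samples_handled++; }
void handle_button(void) { presses_handled++; }

/* One pass of the control loop: poll each flag, clear it, call its handler. */
void control_loop_once(void) {
    if (adc_ready)      { adc_ready = false;      handle_adc(); }
    if (button_pressed) { button_pressed = false; handle_button(); }
}

/* On the target this would simply be:  for (;;) control_loop_once();  */
```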

A simple API disables and enables interrupts. Done right, it handles nested calls in nested subroutines, and restores the preceding interrupt state in the outermost enable. This is one of the simplest methods of creating an exokernel.
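One way to sketch such a nesting-safe API in C is with a depth counter. The `cpu_irq_*` functions stand in for the real port-specific instructions (e.g. CLI/SEI on some processors) and are simulated here; a fuller version would save and restore the exact prior interrupt state rather than assume interrupts start enabled.

```c
#include <stdbool.h>

static bool irq_enabled = true;   /* simulated CPU interrupt-enable flag */
static int  irq_nesting;          /* current depth of disable calls      */

void cpu_irq_disable(void) { irq_enabled = false; }
void cpu_irq_enable(void)  { irq_enabled = true;  }

/* Enter a critical section; safe to call from nested subroutines. */
void critical_enter(void) {
    cpu_irq_disable();
    irq_nesting++;
}

/* Leave a critical section; only the outermost exit re-enables interrupts. */
void critical_exit(void) {
    if (--irq_nesting == 0)
        cpu_irq_enable();
}
```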

Typically, there's some sort of subroutine in the loop to manage a list of software timers, using a periodic real time interrupt. When a timer expires, an associated subroutine is run, or flag is set.
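A software-timer list of this kind might look like the sketch below, using the flag-setting variant. The fixed table size and names are illustrative; `timer_tick()` would be called from the periodic real-time interrupt.

```c
#include <stddef.h>

#define MAX_TIMERS 8

struct sw_timer {
    unsigned ticks_left;         /* 0 = slot free                    */
    volatile int *expired_flag;  /* set to 1 when the countdown ends */
};

static struct sw_timer timers[MAX_TIMERS];

/* Arm a timer; returns 0 on success, -1 if every slot is in use. */
int timer_start(unsigned ticks, volatile int *flag) {
    for (size_t i = 0; i < MAX_TIMERS; i++) {
        if (timers[i].ticks_left == 0) {
            timers[i].ticks_left  = ticks;
            timers[i].expired_flag = flag;
            return 0;
        }
    }
    return -1;
}

/* Called once per periodic interrupt: count down, flag expirations. */
void timer_tick(void) {
    for (size_t i = 0; i < MAX_TIMERS; i++) {
        if (timers[i].ticks_left && --timers[i].ticks_left == 0)
            *timers[i].expired_flag = 1;
    }
}
```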

Any expected hardware event should be backed up with a software timer. Hardware events fail about once in a trillion times. That's about once a year with modern hardware. With a million mass-produced devices, leaving out a software timer is a business disaster.

Sometimes a set of software-based safety timers may be run by test software that periodically resets a software watchdog implemented in hardware. If the software misses an event, the safety-timer software catches it. If the safety-timer software fails, the watchdog hardware resets the unit.

State machines may be implemented with a function-pointer per state-machine (in C++, C or assembly, anyway). A change of state stores a different function into the pointer. The function pointer is executed every time the loop runs.
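In C, the function-pointer scheme might be sketched as follows; the states and the input flag are invented for illustration.

```c
/* Each state is a function; a state change stores a new function
   into the pointer, which the main loop executes every pass. */
static void state_idle(void);
static void state_run(void);

static void (*current_state)(void) = state_idle;

static int start_requested;   /* stands in for a real hardware input */
static int work_done;         /* stands in for the machine's real output */

static void state_idle(void) {
    if (start_requested)
        current_state = state_run;   /* state change = pointer change */
}

static void state_run(void) {
    work_done++;
}

/* Executed once per pass of the control loop. */
void machine_step(void) {
    current_state();
}
```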

Many designers recommend reading each IO device once per loop, and storing the result so the logic acts on consistent values.

Many designers prefer to design their state machines to check only one or two things per state. Usually these are a hardware event and a software timer.

Designers recommend that hierarchical state machines should run the lower-level state machines before the higher, so the higher run with accurate information.

Complex functions like internal combustion controls are often handled with multi-dimensional tables. Instead of complex calculations, the code looks up the values. The software can interpolate between entries, to keep the tables small and cheap.
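The table-plus-interpolation idea can be sketched as a one-dimensional lookup in integer C. The table values and scaling here are made up; a real engine map would be multi-dimensional and calibrated.

```c
#include <stdint.h>

/* Hypothetical output values at inputs 0, 64, 128, 192, 256. */
static const int16_t out_table[5] = { 0, 100, 180, 230, 250 };
#define TABLE_STEP 64   /* input units between adjacent table entries */

/* Look up x, linearly interpolating between entries and clamping at the top. */
int16_t table_lookup(uint16_t x) {
    uint16_t idx  = x / TABLE_STEP;
    uint16_t frac = x % TABLE_STEP;
    if (idx >= 4)
        return out_table[4];             /* clamp past the last entry */
    int32_t a = out_table[idx];
    int32_t b = out_table[idx + 1];
    return (int16_t)(a + (b - a) * frac / TABLE_STEP);
}
```

Integer interpolation like this avoids both floating point and large tables, which is exactly the trade-off the text describes.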

In the smallest microcontrollers (especially the 8051, which has a 128-byte stack) a control loop permits a good linker to use statically allocated data overlays for local variables. In this scheme, variables nearer to the leaves of a subroutine call tree get higher memory addresses. When a new branch starts, its variables can be reallocated in the space deserted by the previous branch.
One major weakness of a simple control loop is that it does not guarantee a time to respond to any particular hardware event. Careful coding can easily assure that nothing disables interrupts for long, so interrupt code can run at very precise timings.

Another major weakness of a control loop is that it can become complex to add new features. Algorithms that take a long time to run must be carefully broken down so only a little piece gets done each time through the main loop.

This system's strength is its simplicity, and on small pieces of software the loop is usually so fast that nobody cares that it is not predictable. Another advantage is that this system guarantees that the software will run. There is no mysterious operating system to blame for bad behavior.

Nonpreemptive multitasking

A nonpreemptive multitasking system is very similar to the above, except that the loop is hidden in an API. One defines a series of tasks, and each task gets its own subroutine stack. Then, when a task is idle, it calls an idle routine (usually called "pause", "wait", or "yield").

An architecture with similar properties is to have an event queue, and a loop that removes events and calls subroutines based on a field in the queue entry.

The advantages and disadvantages are very similar to the control loop, except that adding new software is easier. One simply writes a new task, or adds to the queue interpreter.
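The event-queue variant mentioned above might be sketched like this: a ring buffer of event codes filled by interrupts and drained by a dispatch loop. The event names and handlers are invented for illustration.

```c
#include <stddef.h>

#define QUEUE_SIZE 16
enum { EV_TICK = 1, EV_KEY = 2 };

static int    queue[QUEUE_SIZE];
static size_t q_head, q_tail;

static int ticks_seen, keys_seen;   /* counters stand in for real handlers */

/* Post an event (typically from an interrupt); -1 if the queue is full. */
int event_post(int ev) {
    size_t next = (q_tail + 1) % QUEUE_SIZE;
    if (next == q_head)
        return -1;
    queue[q_tail] = ev;
    q_tail = next;
    return 0;
}

/* One pass of the hidden loop: remove one event and dispatch on its code. */
void event_dispatch_once(void) {
    if (q_head == q_tail)
        return;                      /* nothing pending */
    int ev = queue[q_head];
    q_head = (q_head + 1) % QUEUE_SIZE;
    switch (ev) {
    case EV_TICK: ticks_seen++; break;
    case EV_KEY:  keys_seen++;  break;
    }
}
```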

Preemptive timers

Take any of the above systems, but add a timer system that runs subroutines from a timer interrupt. This adds completely new capabilities to the system: for the first time, the timer routines can occur at a guaranteed time.

Also, for the first time, the code can step on its own data structures at unexpected times. The timer routines must be treated with the same care as interrupt routines.

Preemptive tasks

Take the above nonpreemptive task system, and run it from a preemptive timer or other interrupts.

Suddenly the system is quite different. Any piece of task code can damage the data of another task; tasks must be precisely separated. Access to shared data must be controlled by some synchronization strategy, such as message queues, semaphores or a non-blocking synchronization scheme.

Often, at this stage, the developing organization buys a real-time operating system. This can be a wise decision if the organization lacks people with the skills to write one, or if the port of the operating system to the hardware will be used in several products. It usually adds six to eight weeks to the schedule, and forever after programmers can blame delays on it.

Microkernels and Exokernels

These try to organize the system in a way that's more configurable than a big kernel, while providing similar features.
A microkernel is a logical step up from a real-time OS. The usual arrangement is that the operating system kernel allocates memory and switches the CPU to different threads of execution. User mode processes implement major functions such as file systems, network interfaces, etc.

Microkernels were tried early in the history of operating systems, and abandoned in favor of monolithic (MULTICS- and UNIX-style) kernels because the computers of the day switched tasks and transmitted data between tasks too slowly. In general, microkernels succeed when the task switching and intertask communication is fast, and fail when they are slow.

Exokernels communicate efficiently by normal subroutine calls. The hardware, and all the software in the system are available to, and extensible by application programmers. A resource kernel (which may be part of the library) allocates or multiplexes access to CPU time, memory and other resources. Big-kernel features such as multitasking, networking and file systems are provided by a library of code. The library may be dynamically linked, extensible, and shared. Different applications can even use different libraries, but all resources come from the resource kernel.

Virtual machines

Some avionic systems use several commercial off-the-shelf computers, and each of these computers simulates several copies of itself. Critical programs run on several computers and vote on the results.
The advantage of the simulated environment is that if one computer fails, its instances of the software can be migrated to software partitions on working computers without changing the number of votes.
Generally the virtualization software runs programs in the computer's user mode, trapping and simulating hardware accesses and any instructions that cannot execute in user mode.
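Trap-and-simulate can be shown in miniature with an interpreter: the "guest" instruction stream runs until it hits a privileged instruction, which faults to the monitor; the monitor emulates it and resumes the guest. The two-operation ISA below is invented purely for illustration.

```c
#include <assert.h>

/* Miniature trap-and-simulate monitor. The ISA is invented:
   OP_ADD runs "natively"; OP_OUT is privileged and must be emulated. */
enum { OP_ADD = 0, OP_OUT = 1, OP_HALT = 2 };

struct guest { int pc; int acc; int io_port; };

/* Monitor's emulation of the privileged I/O instruction: writes to a
   simulated device register instead of real hardware. */
static void emulate_out(struct guest *g, int value)
{
    g->io_port = value;
}

int run_guest(struct guest *g, const int prog[][2])
{
    for (;;) {
        int op = prog[g->pc][0], arg = prog[g->pc][1];
        g->pc++;
        switch (op) {
        case OP_ADD:  g->acc += arg;       break;  /* runs unprivileged */
        case OP_OUT:  emulate_out(g, g->acc); break; /* trapped, emulated */
        case OP_HALT: return g->acc;
        }
    }
}
```

The guest never touches the real device; every privileged effect passes through the monitor, which is what lets failed instances be migrated without the guest noticing.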

Checkpointed calculations

Another high-availability scheme has two computers execute for a while, then exchange checkpoints of their calculations up to that point. If one computer's calculations prove to be wrong, it is shut down.
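The scheme can be sketched as two redundant units running the same calculation in steps, comparing state after each step. The fault-injection flag below is invented so the sketch can be exercised; note that with only two units a disagreement alone cannot say which one is wrong, so real systems add a third vote or per-unit self-checks.

```c
#include <assert.h>

/* Checkpoint-and-compare sketch with two redundant units.
   The `faulty` flag is an illustrative fault injector. */
struct unit { long state; int faulty; };

static void step(struct unit *u, int input)
{
    u->state = u->state * 31 + input;  /* the real calculation */
    if (u->faulty)
        u->state ^= 1;                 /* injected computational fault */
}

/* Returns 0 if every checkpoint agrees, nonzero on the first mismatch
   (signalling that a unit must be shut down). */
int run_checkpointed(struct unit *a, struct unit *b,
                     const int *inputs, int n)
{
    for (int i = 0; i < n; i++) {
        step(a, inputs[i]);
        step(b, inputs[i]);
        if (a->state != b->state)
            return 1;                  /* checkpoints disagree */
    }
    return 0;
}
```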

Office-style (big-kernel) operating systems

These are popular for embedded projects that have no systems budget. In the opinion of at least one author of this article, they are usually a mistake. Here's the logic:
  • Operating systems are specially-packaged libraries of reusable code. If the code does something useful, the designer saves time and money. If not, it's worthless.
  • Operating systems for business systems lack interfaces to embedded hardware. For example, if one uses Linux to write a motor controller or telephone switch, most of the real control operations end up as numbered operations behind ioctl calls, while the normal read, write, and fseek interface is purposeless. So the operating system actually interferes with development.
  • Most embedded systems perform no office work, so most code of office operating systems is wasted. For example, most embedded systems never use a file system or screen, so file system and GUI logic is wasted. Unused code is just a reliability liability.
  • Office style operating systems protect the hardware from user programs. That is, they profoundly interfere with embedded systems development.
  • Operating systems must invariably be ported to an embedded system. That is, the hardware driver code must always be written anyway. This is the most difficult part of the operating system, so little is saved by using one.
  • The genuinely useful, portable features of operating systems are small pieces of code. For example, a basic TCP/IP interface is about 3,000 lines of C code; a simple file system is about the same size. If a design needs these, they can be had for less than 10% of a typical embedded system's development budget, without royalty, just by writing them. And if the needed code is sufficiently generic, the back pages of embedded-systems magazines typically carry vendors selling royalty-free C implementations.
    Nevertheless, many engineers disagree. Embedded Linux is increasingly popular, especially on more powerful embedded devices such as wireless routers and GPS navigation systems. Here are some of the reasons:
  • Ports to common embedded chip sets are available.
  • They permit re-use of publicly available code for device drivers, web servers, firewalls, and other components.
  • Development systems can start out with broad feature-sets, and then the distribution can be configured to exclude unneeded functionality, and save the expense of the memory that it would consume.
  • Many engineers believe that running application code in user mode is more reliable, easier to debug and that therefore the development process is easier and the code more portable.
  • Many embedded systems lack the tight real time requirements of a control system. A system such as Embedded Linux has fast enough response for many applications.
  • Features requiring faster response than can be guaranteed can often be placed in hardware.
  • Many RTOSes carry a per-unit cost. When used on a product that is or will become a commodity, that cost is significant.

Exotic custom operating systems

Some systems require safe, timely, reliable or efficient behavior unobtainable with the above architectures. There are well-known tricks to construct these systems:
  • Hire a real system programmer. They cost a little more, but can save years of debugging, and the associated loss of revenue.
  • RMA (rate-monotonic analysis) can be used to determine whether a set of tasks can meet its deadlines on a given hardware system. In its simplest form, the designer assures that the tasks with the shortest periods have the highest priorities, and that on average the CPU has at least about 30% of its time free.
  • Harmonic tasks optimize CPU efficiency. Basically, designers assure that everything runs from a heartbeat timer. It's hard to do this with a real-time operating system, because these usually switch tasks when they wait for an I/O device.
  • Systems with exactly two levels of priority (usually running and interrupts-disabled) cannot have priority inversion problems, in which a higher-priority task waits for a lower-priority task to release a semaphore or other resource.
  • Systems with monitors can't have deadlocks. A monitor locks a region of code from interrupts or other preemption. If the monitor is only applied to small, fast pieces of code, this can work well. If the monitor API can be proven to run to completion in all cases, (say, if it merely disables interrupts) then no hangs are possible.
This means that systems that use dual priority and monitors are safe and reliable because they lack both deadlocks and priority inversion. If the monitors run to completion, they will never hang. If they use harmonic tasks, they can even be fairly efficient. However, RMA can't characterize these systems, and levels of priority had better not exist anywhere, including in the operating system and hardware.
Last modified at : Thursday, December 11th 2008 13:42:59.
Administrator srugN - Electronic Engineering Polytechnic Institute of Surabaya