Previous system-level power management techniques focused on optimizing CPU power through task scheduling with dynamic voltage scaling (DVS) and on reducing peripheral power through dynamic power management with deterministic or stochastic models. Effective as they are for simple systems such as a laptop computer with only a few power-manageable resources, they are quickly becoming inadequate for increasingly complex embedded systems that integrate many subsystems. Boundaries are breaking down between the digital and analog domains, between computation and communication, between layers of bus and network protocols, and between the operating system and applications. Many tradeoffs are possible, and many are required.
When considering power optimization at the system level, we need to examine carefully the techniques developed at the component level. The transition overhead of power mode changes has mostly been neglected in task scheduling, where the overhead for the CPU is small. At the system level, it usually cannot be ignored, because in many cases the overhead, in terms of both time and energy, is comparable to that of the operational modes. When components communicate and interact with each other, deciding when and how to change the power mode of one component may require considering the states of other components, because of inter-component interactions and dependencies.
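The significance of transition overhead can be captured by the classical break-even analysis: a component should only enter a low-power mode if the expected idle interval is long enough for the transition to pay for itself. The following sketch computes that break-even point; the device parameters are hypothetical, chosen only for illustration.

```python
# Break-even analysis for a power mode transition: sleeping saves energy
# only when the idle interval exceeds the break-even time. All parameter
# values below are hypothetical.

def break_even_time(p_active, p_sleep, e_transition, t_transition):
    """Shortest idle interval (s) for which entering sleep saves energy.

    p_active, p_sleep : power draw in active and sleep modes (W)
    e_transition      : total energy of the sleep + wake transitions (J)
    t_transition      : total time spent in the transitions (s)
    """
    # Staying active for an idle interval t costs  p_active * t.
    # Sleeping costs  e_transition + p_sleep * (t - t_transition).
    # Equating the two and solving for t gives the break-even point;
    # it can never be shorter than the transition time itself.
    t_be = (e_transition - p_sleep * t_transition) / (p_active - p_sleep)
    return max(t_be, t_transition)

# Example with a hypothetical disk-like peripheral.
t = break_even_time(p_active=1.0, p_sleep=0.1, e_transition=3.0,
                    t_transition=1.5)
```

At the component level the overhead term is often negligible, making the break-even time close to zero; at the system level it dominates, which is precisely why the overhead must enter the scheduling decision.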
System-level power modeling involves modeling the power properties of individual resources (power-manageable subsystems) as well as the power mode dependencies among the resources. An optimization algorithm can then be developed to generate a power mode schedule that has minimal energy consumption while satisfying real-time and power constraints. The algorithm is made both precise and efficient by a pruning process that takes advantage of mode dependency graphs (MDGs), which model the complex interactions among resources.
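The exact MDG formalism is not reproduced here, but the pruning idea can be sketched as follows, under the assumption that a dependency edge states "whenever resource r1 is in mode m1, resource r2 must be in one of a set of allowed modes"; all resource and mode names are illustrative.

```python
# Minimal sketch of pruning candidate mode schedules with a mode
# dependency graph (MDG). An edge ((r1, m1), (r2, allowed)) means that
# whenever r1 is in mode m1, r2 must be in one of the allowed modes.

from itertools import product

def feasible(assignment, deps):
    """Check a global mode assignment {resource: mode} against the MDG."""
    for (r1, m1), (r2, allowed) in deps:
        if assignment.get(r1) == m1 and assignment.get(r2) not in allowed:
            return False
    return True

def enumerate_feasible(modes, deps):
    """Enumerate global assignments, pruning those violating the MDG."""
    names = list(modes)
    for combo in product(*(modes[r] for r in names)):
        candidate = dict(zip(names, combo))
        if feasible(candidate, deps):
            yield candidate

# Example dependency: the bus cannot be off while the CPU is active.
modes = {"cpu": ["active", "sleep"], "bus": ["on", "off"]}
deps = [(("cpu", "active"), ("bus", {"on"}))]
schedules = list(enumerate_feasible(modes, deps))
```

Pruning infeasible combinations early keeps the search space tractable even as the number of interdependent resources grows.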
Purely OS-based power management has become unable to handle embedded systems with multiple resources running multiple applications. Polling device drivers to check whether devices are idle, or sending commands to these drivers, may keep the power manager so busy that it cannot make useful high-level decisions that could exploit application domain knowledge. Furthermore, the frequent and detailed communication between the power manager and the hardware devices would occupy many of the time slots on the buses, generating unwanted traffic that keeps the buses burning power.
A solution is the decoupled power management architecture (DPMA), a software architecture that separates high-level decision making from low-level device monitoring and control. The power manager can then focus on high-level decisions and make the best use of available application-domain knowledge for global power optimization. The low-level tasks are assigned to a component manager residing in the OS, which performs routine tasks without requiring application-level knowledge or awareness of the global power budget and constraints.
DPMA is implemented as a middleware layer between the applications and the OS. It translates application-level requirements into macro commands for global power management; the macro commands are then decomposed into individual commands that control each resource. Implementing DPMA in the middleware layer gives the user the high modularity, scalability, and upgradability of the model, enabling retargetable and even decentralized deployment.
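The separation of concerns described above can be sketched in a few lines; the macro commands, resource names, and class interfaces here are hypothetical stand-ins, not the actual DPMA command set.

```python
# Sketch of DPMA-style decoupling: the high-level power manager issues
# one macro command per application-level state; a lookup table
# (hypothetical) decomposes it into per-resource commands that the
# low-level component manager carries out.

MACRO_COMMANDS = {
    # macro command -> individual commands per resource (illustrative)
    "standby": {"cpu": "sleep", "radio": "off", "display": "off"},
    "capture": {"cpu": "active", "radio": "off", "display": "dim"},
}

class ComponentManager:
    """Low-level side: applies device commands, no application knowledge."""
    def __init__(self):
        self.state = {}
    def apply(self, resource, command):
        self.state[resource] = command  # stand-in for a driver call

class PowerManager:
    """High-level side: maps application requirements to macro commands."""
    def __init__(self, cm):
        self.cm = cm
    def issue(self, macro):
        for resource, command in MACRO_COMMANDS[macro].items():
            self.cm.apply(resource, command)

cm = ComponentManager()
PowerManager(cm).issue("standby")
```

Because the power manager deals only in macro commands, it stays free to reason about global budgets and constraints, while the per-device traffic is confined to the component manager.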
Batteries have very limited and often unsatisfactory capacity for deeply embedded applications such as wireless sensor networks. Renewable energy is a good alternative and complementary power source. We take solar energy as an example and study how to apply it to embedded systems for maximum energy utility.
For systems whose workload can be adjusted according to the availability of renewable energy, we tune the power knobs in the system so that its power consumption best matches the available power from the renewable source. This simple technique can achieve an order of magnitude higher performance than a battery-powered system. Compared with hardware approaches, which take more space and suffer efficiency loss in energy conversion, the proposed technique makes the best use of the available renewable energy to power the system's load.
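The load-matching idea can be sketched as a simple selection loop: among a set of knob settings, pick the highest-performance one whose power draw the current solar supply can sustain. The duty cycles and power figures below are hypothetical.

```python
# Sketch of matching load power to available solar power via power knobs.
# Settings are (duty cycle, power draw in mW), ordered by increasing
# performance; all numbers are hypothetical.

KNOB_SETTINGS = [
    (0.1, 5.0),
    (0.25, 12.0),
    (0.5, 25.0),
    (1.0, 50.0),
]

def match_load(available_mw):
    """Return the most aggressive setting the solar supply can sustain.

    Falls back to the lowest setting when even that exceeds the supply.
    """
    best = KNOB_SETTINGS[0]
    for duty, draw in KNOB_SETTINGS:
        if draw <= available_mw:
            best = (duty, draw)
    return best

# With 30 mW of harvested power, the 50% duty cycle is the best match.
duty, draw = match_load(available_mw=30.0)
```

In practice the available power would be re-sampled periodically, so the system tracks the diurnal solar profile instead of draining or wasting the harvested energy.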
Applications on wireless handheld devices involve both computation and communication. One example is taking pictures with an integrated camera and sending them to a remote host; the energy usage must be balanced between image processing and image transfer. We take the parameters of an image compression algorithm as power knobs to trade off image quality, transfer bandwidth, and power. Furthermore, multiple algorithms themselves are treated as power knobs as well, allowing a much wider dynamic range for the tradeoff. This approach yields significant energy savings on handheld platforms, while the cost overhead of storing multiple algorithm executables in external memory and the energy overhead of switching between algorithms are negligible, given the data-intensive nature of the application.
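Treating both the algorithms and their parameters as knobs amounts to searching a small candidate table for the setting that minimizes total energy (compute plus radio transfer) subject to a quality floor. The algorithms, quality scores, and energy figures below are hypothetical, used only to show the shape of the tradeoff.

```python
# Sketch of selecting an (algorithm, parameter) pair as a power knob:
# minimize compute energy + transmit energy, subject to a quality floor.
# All candidates and numbers are hypothetical.

CANDIDATES = [
    # (name, quality score, compute energy mJ, compressed size kB)
    ("jpeg_q50",   0.70,  8.0, 40.0),
    ("jpeg_q80",   0.85, 10.0, 90.0),
    ("wavelet_lo", 0.75, 20.0, 30.0),
    ("wavelet_hi", 0.90, 30.0, 60.0),
]

RADIO_MJ_PER_KB = 0.5  # hypothetical per-kilobyte transmit energy

def pick_knob(min_quality):
    """Pick the candidate with minimal total energy meeting the floor."""
    ok = [c for c in CANDIDATES if c[1] >= min_quality]
    # Total energy = compute energy + transmit energy for the output size.
    return min(ok, key=lambda c: c[2] + c[3] * RADIO_MJ_PER_KB)

choice = pick_knob(min_quality=0.75)
```

Note how the winner changes with the radio cost: a costly radio favors the heavier compressor with the smaller output, which is exactly the computation-versus-communication balance described above.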
This ongoing research explores the possibilities and advantages of mapping a power-aware RTOS onto multiple small and simple microcontrollers. An RTOS originally designed for a 32-bit microprocessor is mapped onto four 8-bit microcontrollers. The envisioned benefits include higher energy efficiency, greater retargetability, applicability of RTOSs to reconfigurable architectures, and decentralized power management.