# Introduction

Real time applications are usually composed of a set of tasks that interact with each other by exchanging messages. These tasks and their corresponding messages are often invoked repeatedly and are required to complete their services by respective deadlines. Examples of such applications include process control in automated manufacturing systems and delivery of audio/video frames in multimedia [1]. In a process control automated manufacturing system, finishing beyond the deadline can have a catastrophic effect, whereas in multimedia applications it may be annoying but acceptable without much loss. An application in which a miss is catastrophic is defined as hard real time, whereas an application that merely suffers degraded performance is soft real time in nature. Besides these hard and soft classes, a multimedia application such as video conferencing is referred to as weakly hard real time, where some deadline misses are tolerable; for example, video may be transmitted at 30 frames/sec of which at least 24 frames/sec are needed to visualize smooth movement of the image [17]. When transmitting such frames, if sufficient processing power and network bandwidth are available then a high quality video (30 frames/sec received at the destination) can be projected; if at least 24 frames/sec reach the destination within their deadlines then a degraded but still acceptable quality of image is received.

For weakly hard real time systems the assurance of a minimum acceptable quality is attained either by the imprecise computation concept [17,18] or by the (m, k) model [11]. In the imprecise concept every frame has to be received at the destination (either fully or in part), while in the (m, k) model a partially received frame is considered a dropped frame. That is, all frames are required to be received for imprecise computation, whereas certain frames may be dropped entirely while still maintaining the minimum quality under (m, k) constraints. To ensure a deterministic quality of service (QoS) for such systems, Hamdaoui and Ramanathan [1] used the (m, k) model in which, out of any k consecutive instances of a task, at least m instances must meet their respective deadlines. The (m, k) model scatters the effect of the permissible k − m deadline misses over a window of k instances, which differs from simply accepting a low miss rate: under a burst load a low average miss rate may still allow a series of consecutive frames to be lost, leading to intolerable behaviour.

Besides guaranteeing QoS in terms of (m, k), the designer of a real time system has to take care of minimizing energy, especially for portable devices. Energy-aware computing has been recognized as one of the key areas of research in real time systems [20]. Energy-driven scheduling algorithms have been developed to reduce a system's energy consumption while satisfying the timing constraints [2,3,4,5,6,19,20,21,22,25]; they are applicable to frequency dependent components (whose speed varies with the operating frequency) and are able to reduce energy for such components only. Besides frequency dependent components, many systems have frequency independent components, such as memory, for which these voltage scheduling algorithms are inadequate. For a system consisting only of frequency dependent components, energy consumption decreases as the frequency is reduced; this need not hold for the system as a whole, because on reducing the frequency the frequency independent components may be forced to remain active for a longer duration, leading to more energy consumption.
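As a concrete illustration, the (m, k) requirement can be checked over a recorded trace of deadline outcomes. The following minimal sketch is not taken from [1]; the function and the example trace are illustrative assumptions.

```python
# Minimal sketch: verify an (m, k)-firm constraint over a trace of
# deadline outcomes. `met` is a hypothetical list of booleans, one per
# consecutive job instance (True = deadline met).

def satisfies_mk(met, m, k):
    """True if every window of k consecutive instances has >= m deadline hits.
    Traces shorter than k have no full window and are vacuously satisfied."""
    return all(sum(met[i:i + k]) >= m for i in range(len(met) - k + 1))

# Example: 24-out-of-30 frames per second, as in the video illustration above.
trace = [True] * 26 + [False] * 4   # 26 of the last 30 frames arrived on time
print(satisfies_mk(trace, m=24, k=30))   # True
```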
Authors [7,8,9,10] revealed that the frequency dependent component (the processor core) consumes around 30% of the total energy, while the frequency independent components (memory and peripheral devices) account for the remaining 70%. Thus, the energy consumption of the frequency independent components plays a crucial role in the overall energy consumption of a system. A group of researchers [6,28,29,32] has therefore focused on minimization of system energy (the energy required by both frequency dependent and independent components) rather than minimization of processor energy alone. We use the term frequency dependent component to refer to the processor, and frequency independent component for memory or peripheral devices. The three common techniques used for minimization of system energy are dynamic voltage scaling (DVS), dynamic power down (DPD) and preemption control (PC), which are discussed in the following subsections.

Dynamic Voltage Scaling (DVS) is based on adjusting the processor voltage and frequency on-the-fly [12,13], since the energy requirement depends on the operating frequency as well as the voltage. DVS attempts to reduce the processor speed as far as possible to obtain a reduction in energy consumption; the speed of a frequency dependent component is said to be reduced if it operates at a lower voltage or frequency. The task execution time increases with the reduction in processor speed, which has the following consequences:

* a release may miss its deadline even though it is feasible at a higher speed;
* the longer execution time decreases the energy consumption of the processor, whereas the system energy may increase;
* the frequency independent components remain active for a longer time and consume more energy;
* a longer execution time implies more energy loss due to leakage current [44].

However, task execution times do not always scale linearly with the processor speed [13,14,15,16,23,26], because the system may have components (memory and peripheral devices) whose speed does not scale with the operating frequency. Thus, DVS may not be efficient when the system energy is considered: beyond a point, further reduction in speed increases the energy consumption (illustrated numerically below). To solve this problem, authors [27,29,30,31] suggested a lower bound, the critical speed, which balances the energy consumption between the processor and the peripheral devices and minimizes the negative impact of DVS. Niu and Quan [11] used a combined static/dynamic partitioning strategy for the (m, k) model to reduce the processor energy, but it is not efficient for system energy.
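The interaction between the two kinds of components can be seen in a small numeric sketch: with a cubic processor power model and a constant device power, the system energy of a single job is not monotone in the processor speed. All constants below are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch (assumed constants): system energy of one job as a
# function of normalized processor speed s, with a frequency dependent part
# (cubic power model) and a frequency independent part that stays active
# for the whole execution time.

def system_energy(s, cycles=1.0, y=0.2, C=1.0, p_dev=0.3):
    exec_time = cycles / s + y           # execution time grows as speed drops
    p_proc = C * s ** 3                  # processor power ~ s^3
    return exec_time * (p_proc + p_dev)  # devices draw p_dev for the whole run

for s in (1.0, 0.8, 0.6, 0.4, 0.2):
    print(f"s={s:.1f}  E={system_energy(s):.3f}")
# The energy first falls and then rises again as s decreases, so applying
# DVS below a certain speed (the critical speed) increases system energy.
```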
Besides the DVS approach to energy minimization, authors [35,36] suggested switching the system off (power down) rather than scaling down its speed. Dynamic Power Down (DPD) switches unused components into sleep mode (the least power mode), since the workload is not constant at all times. Leaving a component (frequency dependent or independent) in the idle/active state consumes power, but switching to sleep mode too often may also be counterproductive due to heavy switching overheads. Thus, the DPD technique strives to balance the active and the sleeping time of the components. Authors [32,34,35] used DPD to switch the processor and the peripheral devices into sleep mode, based on a threshold value (the minimum time for which the component must sleep to obtain a positive energy saving), for both hard and soft real time systems. Niu and Quan [36] proposed a DPD based scheduling method to reduce the system energy consumption of weakly hard real-time systems with (m, k) constraints. The reduction in energy consumption achieved by DPD increases with the length of the idle slots; the idle slots can be enlarged by the preemption control technique discussed next.

Preemption Control (PC) allows a lower priority job to continue execution even when a higher priority job is ready, provided that neither misses its deadline. When a job starts execution on the processor, the associated devices are switched to the active state, in which they remain until the job completes. Thus, if a lower priority job is preempted by a higher priority job, the associated components remain active and consume energy for the whole time the job is preempted. This extra energy can be reduced by delaying the higher priority job, within its laxity, and completing the lower priority job in the meanwhile. Moreover, each time a job is preempted its context must be saved and later restored when it resumes; this context saving and retrieval incurs an overhead both in time and in energy. Reducing the number of preemptions therefore reduces the response time of a job and the energy dissipated through the preemption overhead and the longer response time. Agrawal et al. [29] proposed a preemption control technique in which the lower priority job is forced to execute at a higher speed level so that it completes before the arrival of a higher priority one; the authors themselves note that such a policy does not always save energy.

If only DPD is applied to a system then, based on the thresholds, components are allowed to switch into the sleep state and some energy reduction is gained; although lengthening or accumulating the idle slots would further reduce energy, the DPD technique by itself does not suggest any method to do so. DVS, on the other hand, lowers the speed assigned to each job and increases its execution time, which in turn increases its response time. An increase in the response time of a job increases the energy consumed by the associated components, which remain active for longer, and may also cause additional preemptions. Finally, preemption control at the assigned speed alone may not be able to reduce the response time and/or the number of preemptions. To address the shortcoming of each technique (DVS, DPD and PC) and to enhance the overall reduction in system energy consumption, we suggest a judicious combination of all three. The length of an idle slot can be increased by selecting a better speed level for DVS (third phase), by reducing the response time through PC (second phase), or by delaying the execution of a job (third phase). Priorities are assigned by the earliest deadline first (EDF) policy, in which the job with the lower absolute deadline has the higher priority. The number of preemptions suffered by different jobs of the same task may vary, since EDF scheduling is dynamic at the task level and the arrival of mandatory jobs depends on the partitioning strategy; a job level view of DVS therefore increases its efficiency (third phase).
On the other hand, increasing the speed of a few jobs (selected by the greedy technique of phase-2) can reduce the energy consumed by a lower priority job with a long execution, as well as the preemption overhead. Recently, two groups of researchers, Agrawal et al. [29] and Niu and Quan [37], used a two phase approach for system energy minimization of weakly hard real time systems with (m, k) constraints. These authors suggested a combination of the DVS, DPD and PC techniques; however, they neither take the preemption overhead into account nor balance the effects of the three techniques.

In this paper we aim to minimize the system energy consumption of a weakly hard real time system modeled with (m, k) constraints using a fine balance of DVS, DPD and PC. The reduction in energy consumption is achieved at both the task and the job level, for which we adopt a three phase approach. The first phase takes a task level view of the system: the feasibility of each task in the set is ensured while taking the preemption overhead into account. The second and third phases adopt a job level view: a greedy based preemption control technique is proposed at the job level in the second phase, and it is further refined in the third phase by adjusting the speed assigned to a job and by accumulating idle slots through delayed starts, so as to effectively balance the three approaches.

The rest of the paper is organized as follows: the next section provides the system model, section III presents our new approach along with the algorithms, section IV reports the simulation results, and section V concludes the work.

# II. System Model

This paper aims to minimize the system energy consumption of a system with an independent periodic task set T = {τ_1, τ_2, τ_3, ..., τ_n} that assures a minimum QoS defined by (m_i, k_i) for each task. The priority of a job is assigned by the earliest deadline first policy. The system consists of two types of components, namely frequency dependent (the processor) and frequency independent (memory and peripheral devices). The following considerations are made:

1. The frequency independent components are represented by the set D = {D_1, D_2, D_3, ..., D_q}, where D_j represents a memory or peripheral device. The power management policies reported in [29,37,39] use only two states (active and sleeping) for a frequency independent component and assume there are no resource conflicts; the same considerations are taken in this work.
2. The frequency dependent component (a DVS processor) can operate at L + 1 discrete voltage levels, i.e. speed levels S = {0, s_1, s_2, ..., s_L}.

The symbols used in this paper are summarized in Table 1 (placed with the other tables at the end of the paper), while the main terms are discussed in the next subsection.

a) Terms Used

MK_hyperperiod (H): the point after which all tasks in the set are again in phase and the (m_i, k_i) pattern of each task restarts, i.e. the situation at time t = 0 is restored. Mathematically, H = LCM(k_i · p_i) for i = 1, 2, ..., n, where LCM stands for least common multiple.

Response time (R_i^j) of a job J_i^j: the sum of the time requirement of the job itself and of the higher priority jobs preempting it. Mathematically, R_i^j(s) = c_i(s) + Σ_{J_h^b ∈ HP(i,j)} (c_h(s_h) + κ), where HP(i,j) is the set of mandatory jobs preempting J_i^j (those released within [rel_i^j, rel_i^j + R_i^j,n−1(s))) and κ is the preemption overhead. The equation is iterative and is solved over iterations n = 0, 1, 2, ..., starting from R_i^j,0(s) = c_i(s); the iteration terminates when either two consecutive iterations give the same value, R_i^j,n−1(s) = R_i^j,n(s), or the value exceeds the relative deadline, R_i^j,n(s) > d_i.

DPD threshold (th): In the DPD policy a component is switched to the sleep state when an idle slot occurs, in order to save energy. For such a switch the system has to save the state of the component at the beginning of the sleep interval and restore it at the end (when switching back from sleep to active); these two activities incur an overhead called the DPD overhead. To have a positive energy saving, the component should not be switched to the sleep state for a duration t shorter than the DPD threshold th, which can be estimated as follows. The energy consumed by a component that remains idle during an idle slot of length t is

E_idle(t) = P_idle · t   (1)

while the energy consumed if the component is put to sleep for the same slot (including the shut-down and wake-up transitions) is

E_sleep(t) = P_sdown · t_sdown + P_wup · t_wup + P_sleep · (t − t_sdown − t_wup)   (2)

To attain a positive energy gain, the energy consumed by switching to the sleep state, equation (2), must be less than that consumed in the idle mode, equation (1), i.e. (2) < (1):

P_sdown · t_sdown + P_wup · t_wup + P_sleep · (t − t_sdown − t_wup) < P_idle · t

In the worst case, when no energy gain is obtained (equation (1) = equation (2)), the threshold th can be estimated as

th = [P_sdown · t_sdown + P_wup · t_wup − P_sleep · (t_sdown + t_wup)] / (P_idle − P_sleep)   (3)

The threshold of each component can be estimated from equation (3). Substituting the value of P_sdown · t_sdown + P_wup · t_wup in terms of th into equation (2) gives

E_sleep(t) = th · (P_idle − P_sleep) + P_sleep · (t_sdown + t_wup) + P_sleep · (t − t_sdown − t_wup)   (3a)

The energy saved by switching to the sleep state is the difference between equations (1) and (3a):

E_gain = P_idle · t − [th · (P_idle − P_sleep) + P_sleep · t] = (t − th) · (P_idle − P_sleep)
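A minimal sketch of the break-even computation of equation (3), under the two-state power model above; the numeric values are illustrative assumptions.

```python
# Sketch of the DPD break-even threshold of equation (3). All numbers below
# are illustrative assumptions, not measured component parameters.

def dpd_threshold(p_idle, p_sleep, p_sdown, p_wup, t_sdown, t_wup):
    """Minimum idle-slot length for which sleeping saves energy (eq. 3)."""
    switch_cost = p_sdown * t_sdown + p_wup * t_wup - p_sleep * (t_sdown + t_wup)
    return switch_cost / (p_idle - p_sleep)

th = dpd_threshold(p_idle=2.0, p_sleep=0.1, p_sdown=3.0, p_wup=4.0,
                   t_sdown=0.5, t_wup=0.5)
print(th)   # idle slots longer than this are worth a sleep transition
```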
If t > th the energy gain E_gain is positive, so switching to the sleep state for the t units of the idle slot reduces the energy consumption and is advisable. For t = th the energy consumed in the idle and the sleep state is the same, and when t < th it is recommended to remain in the idle state, since switching to the sleep state would consume more energy. Thus, the energy consumed by a component for an idle slot of length t is

E(t) = P_idle · t, for 0 ≤ t ≤ th; E(t) = P_idle · th + P_sleep · (t − th), for t > th   (4)

Critical speed of a task (s_cri): The DVS technique advocates that reducing the speed of the frequency dependent component reduces the energy consumption. This may not be true when the system has both frequency dependent and independent components, because a lower speed leads to a longer execution time during which the frequency independent components remain active and consume energy. That is, as the speed is reduced, the system energy consumption first decreases and then starts increasing if the speed is reduced further. The speed at which the system energy requirement of a task is least is called the critical speed of that task. Each task has its own critical speed, because its computation demand and its set of associated components may differ. It can be determined as follows. Consider a task τ_i with computation time c_i(s) = x_i/s + y_i, where x_i and y_i are the computation requirements of the frequency dependent component (at speed s) and of the frequency independent components respectively. The energy consumed by task τ_i at speed s is

E_i(s) = c_i(s) · (P_{s_f} + Σ_{D_j ∈ δ_i} P_active,j)

where f is the speed index of s (i.e. s_f = s), P_{s_f} is the rate of energy consumption of the processor at speed s, and Σ_{D_j ∈ δ_i} P_active,j is the total energy consumption rate of all the frequency independent devices associated with task τ_i. In [11,13] the authors use an energy model in which the power consumed by the processor is directly proportional to the cube of the operating speed, i.e. P_s ∝ s³, hence P_s = C · s³, where C is the proportionality constant. Since the task energy consumption E_i(s) is a strictly convex function of the speed s, it has a single speed at which the energy consumption is minimum; this speed can be estimated by setting the first derivative to zero (with the second derivative positive). Taking the first derivative of E_i(s) with respect to s,

dE_i(s)/ds = (−x_i/s²) · (C · s³ + Σ_{D_j ∈ δ_i} P_active,j) + (x_i/s + y_i) · (3 · C · s²) = 0

which simplifies to

3 · C · y_i · s⁴ + 2 · C · x_i · s³ − x_i · Σ_{D_j ∈ δ_i} P_active,j = 0   (5)

By Descartes' rule of signs [43] this equation has only one positive root, since the sign between consecutive terms changes only once. This root is the critical speed of task τ_i, denoted s_cri. In the following subsection we discuss the various methods for partitioning the jobs of a task into mandatory and optional jobs.
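The critical speed can be obtained numerically as the unique positive root of equation (5). The sketch below assumes the cubic power model P_s = C · s³ and illustrative constants; it is not the paper's implementation.

```python
# Sketch: solve equation (5) for the critical speed of a task by bisection.
# x is the frequency dependent demand, y the frequency independent time,
# q the summed active power of the task's devices; all values are illustrative.

def critical_speed(x, y, C, q, s_max=10.0, eps=1e-9):
    """Unique positive root of 3*C*y*s**4 + 2*C*x*s**3 - x*q = 0."""
    g = lambda s: 3 * C * y * s ** 4 + 2 * C * x * s ** 3 - x * q
    lo, hi = eps, s_max
    while g(hi) < 0:            # expand the bracket until the root is enclosed
        hi *= 2
    while hi - lo > eps:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

print(critical_speed(x=1.0, y=0.2, C=1.0, q=0.3))   # ~0.51 for these constants
```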
The partitioning problem is NP-hard [40]; hence various heuristic patterns (Red_Pattern, Even_Pattern, Rev_Pattern, Hyd_Pattern, Mix_Pattern) are used, which are discussed below. In each pattern π_i^j = 1 marks the release J_i^j as mandatory and π_i^j = 0 marks it as optional.

Deeply Red Pattern (Red_Pattern): This pattern was proposed by Koren and Shasha [41]. Mathematically,

π_i^j = 1 if 0 ≤ j mod k_i < m_i, and 0 otherwise, for j = 0, 1, ..., k_i − 1.

We refer to this pattern as Red_Pattern. The advantage of applying it to a task set for energy minimization is that it aligns the optional jobs together, so that a component has a better opportunity to switch into the sleep state and save energy. For a task whose critical speed is higher than or equal to the highest available speed (s_L), the operating speed should never be scaled down, and assigning Red_Pattern to such a task helps to extend the idle interval available for switching to the sleep state. However, for a task whose critical speed is lower than s_L, Red_Pattern overloads the system, leading to large busy intervals and requiring more energy to remain feasible.

Evenly Distributed Pattern (Even_Pattern): Ramanathan [42] used an evenly distributed pattern in which the first release is always mandatory and the mandatory and optional releases alternate as evenly as possible. Mathematically,

π_i^j = 1 if j = ⌊⌈j · m_i / k_i⌉ · k_i / m_i⌋, and 0 otherwise, for j = 0, 1, ..., k_i − 1.

Reverse Evenly Distributed Pattern (Rev_Pattern): This pattern is the reverse of Even_Pattern, so the first release is always optional and the distribution of mandatory and optional releases is again even. Mathematically,

π_i^j = 0 if j = ⌊⌈j · (k_i − m_i) / k_i⌉ · k_i / (k_i − m_i)⌋, and 1 otherwise, for j = 0, 1, ..., k_i − 1.

This pattern was first proposed by Niu and Quan [11]; we refer to it as Rev_Pattern.

Hybrid Pattern (Hyd_Pattern): This pattern was proposed in [11]; instead of assigning the same pattern to every task in the set, a different pattern (Red_Pattern or Even_Pattern) may be assigned to each task. For example, task τ_1 may be partitioned according to Red_Pattern while τ_2 and τ_3 could be assigned either Red_Pattern or Even_Pattern, yielding 2^n possible pattern assignments, where n is the number of tasks in the set.

Mixed Pattern (Mix_Pattern): The hybrid pattern allows a task to be scheduled by either Red_Pattern or Even_Pattern. In both cases at least the first release of each task is mandatory (possibly more; e.g. for (m, k) = {(3, 5), (4, 7)} the first two releases of both tasks are mandatory with Hyd_Pattern), and these mandatory releases are in phase, which overloads the system and forces it to remain feasible only with a high energy requirement. Therefore, to improve the performance of Hyd_Pattern, the authors of [29] suggested a mixed pattern (Mix_Pattern) which combines Hyd_Pattern with Rev_Pattern, yielding 3^n possible pattern assignments. By including Rev_Pattern, Mix_Pattern gives a task a fairer chance to execute at a lower speed assignment (in the above example the second release of each task is mandatory while the first may or may not be; since the second release of a task is usually out of phase with the other releases, it does not overload the system the way the hybrid pattern does). Thus, Mix_Pattern is a superset of all the patterns suggested above.
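The three basic patterns can be generated directly from their defining formulas; the sketch below is a straightforward transcription (integer arithmetic replaces the real-valued ceiling/floor expressions to avoid rounding artefacts).

```python
# Sketch of the three basic mandatory/optional partitioning patterns for a
# task with an (m, k) constraint; 1 marks a mandatory job, 0 an optional one,
# and the job index j runs over one window 0..k-1.

def red_pattern(m, k):
    return [1 if j % k < m else 0 for j in range(k)]

def even_pattern(m, k):
    # j is mandatory iff j == floor(ceil(j*m/k) * k/m)
    return [1 if j == (-(-j * m // k)) * k // m else 0 for j in range(k)]

def rev_pattern(m, k):
    if m == k:                      # every job is mandatory, nothing to skip
        return [1] * k
    # j is optional iff j == floor(ceil(j*(k-m)/k) * k/(k-m))
    return [0 if j == (-(-j * (k - m) // k)) * k // (k - m) else 1
            for j in range(k)]

print(red_pattern(3, 5))    # [1, 1, 1, 0, 0]
print(even_pattern(3, 5))   # [1, 1, 0, 1, 0]
print(rev_pattern(3, 5))    # [0, 1, 0, 1, 1]
```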
In this paper we use Mix_Pattern. In the following section we propose the energy minimization technique for the weakly hard real time system modeled in this section.

# III. Three-Phase Energy Minimization Technique

This work is a refinement of the two phase approach suggested by Agrawal et al. [29]. In their first phase the authors estimate the critical speed of each task and use a static partitioning strategy called Mix_Pattern; based on the critical speed and the mandatory job distribution, each task is assigned a speed such that the task set is feasible. In their second phase the authors suggest a preemption control strategy: the speed of the lower priority job is increased so that it can complete before being preempted. However, the reduction in energy due to preemption control may be less than the extra energy consumed to fit the lower priority job into the slack of the higher priority job, i.e. the technique may be counterproductive. In such cases they fall back to executing at the assigned speed, as was done by Niu and Quan [37].

In this paper we suggest a three phase technique for system energy minimization. In the first phase we generate a feasible schedule which assigns, to every task partitioned by Mix_Pattern, the speed closest to its critical speed. In the second phase we refine the preemption control techniques suggested by Agrawal et al. [29] and Niu and Quan [37] after locating their pitfalls. In the third phase we measure the idle slots available on either side of a job's execution window and, based on them, adjust the speed of the job or delay its start so as to combine the two slots. The three phases are described in the following subsections.

# Phase-1: Task Level Feasibility and Speed Assignment

In this phase we first estimate the critical speed of each task according to equation (5). The jobs of each task are then marked mandatory/optional according to Mix_Pattern, and each task is assigned the speed closest to its critical speed at which the task set remains feasible. The speed_fitting algorithm suggested by [29] (Algorithm speed_fitting, listed with the tables at the end of the paper) proceeds greedily: every task starts at the available speed level closest to its critical speed and, while the mandatory workload is infeasible, the task with the smallest energy increment Δ_i, the additional energy incurred by moving task τ_i from its current level s_{f_i} to the next level s_{f_i+1}, is promoted one level, until the set becomes feasible or all tasks already run at the highest level.
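A hedged sketch of a phase-1 style greedy speed fitting is given below. It is not the exact algorithm of [29]: the feasibility test is a crude utilization bound over the mandatory jobs only (standing in for the paper's EDF test with preemption overhead), and the Task fields, the energy model and the constants are assumptions.

```python
# Hedged sketch of greedy speed fitting: each task starts at the lowest
# discrete level at or above its critical speed; while the set is infeasible,
# the task whose promotion to the next level costs the least extra energy is
# sped up one level.
from collections import namedtuple

Task = namedtuple("Task", "x y m k period q s_cr")   # q: device power, s_cr: critical speed

def exec_time(t, s):
    return t.x / s + t.y

def energy_at(t, s, C=1.0):
    return exec_time(t, s) * (C * s ** 3 + t.q)

def feasible(tasks, assign):
    # placeholder test: mandatory-job demand must fit the processor
    return sum(exec_time(t, assign[t]) * t.m / t.k / t.period for t in tasks) <= 1.0

def speed_fitting(tasks, levels):
    def at_or_above(x):
        cands = [s for s in levels if s >= x]
        return min(cands) if cands else max(levels)
    assign = {t: at_or_above(t.s_cr) for t in tasks}
    while not feasible(tasks, assign):
        movable = [t for t in tasks if assign[t] < max(levels)]
        if not movable:
            return None                              # infeasible even at top speed
        nxt = {t: min(s for s in levels if s > assign[t]) for t in movable}
        best = min(movable, key=lambda t: energy_at(t, nxt[t]) - energy_at(t, assign[t]))
        assign[best] = nxt[best]
    return assign

levels = [0.2, 0.4, 0.6, 0.8, 1.0]
tasks = [Task(x=2.0, y=0.2, m=3, k=5, period=10.0, q=0.3, s_cr=0.5),
         Task(x=3.0, y=0.1, m=4, k=7, period=12.0, q=0.5, s_cr=0.6)]
print(speed_fitting(tasks, levels))
```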
# Phase-2: Modified Preemption Control Technique

The feasible schedule generated by speed_fitting for the task set T in the first phase need not be optimal in terms of energy consumption. To further reduce the energy consumption we suggest, in this phase, a greedy based preemption control, followed by speed adjustment and delayed start in the third phase. When a job is scheduled on the processor, the associated devices are switched to the active state, in which they remain until the job completes. Thus, if a lower priority job is preempted by a higher priority job, the associated devices remain active and consume energy for the whole time the job is preempted. This extra energy can be reduced by delaying the higher priority job, if possible, and completing the execution of the lower priority job in the meanwhile. The higher priority preempting job J_h^b can be delayed up to its available laxity without missing its deadline, where the laxity at the current time t_curr is estimated as

laxity_h^b = d_h^b − c_h^b(s_h) − t_curr

The time available for the lower priority job J_i^j to execute non-preemptively is the minimum, over all higher priority mandatory jobs that would preempt it, of the time until the latest instant at which each of them must start in order to meet its deadline; when no higher priority job is pending, this available time is unbounded (∞). If the time available is sufficient to complete the job J_i^j non-preemptively, as suggested by [37], then we do so. However, when more than one higher priority job preempts a single lower priority job, the approach suggested in [37] may fail to finish the lower priority job earlier. This is because, once one higher priority job finishes, any other higher priority job waiting in the ready queue is scheduled next, since it has a higher priority than the incomplete preempted job. This can be observed in Example 1.

Example 1: Consider a task set T = {⟨c_i(s_cri), p_i, d_i⟩} = {⟨15, 25, 25⟩, ⟨25, 100, 100⟩}. When scheduled without preemption control, the response time of the lower priority job J_2^1, after being preempted by J_1^2 and J_1^3, is 70 (figure 1a). As illustrated in figure 1b (obtained using the preemption control concept of [37]), the response time of J_2^1 remains 70, although the number of preemptions is reduced from 2 to 1: J_2^1 is unable to complete within the slack of J_1^2, which completes at time 50, after which the scheduler schedules the higher priority job J_1^3; since no job is being preempted at that instant, no preemption control is applied. We therefore refine the preemption control approach of [37], without varying the speed, into a modified preemption control at the assigned speed (MPCAS): a lower priority job may be allowed to resume even when a higher priority job is ready, provided the feasibility of the higher priority job is assured. The effectiveness of this rule is seen in figure 1(c), where the response time of J_2^1 is reduced from 70 to 55.

The MPCAS rule reduces the response time of the lower priority job (J_2^1 finishes at time 55 in the example), so the associated devices have a better opportunity to switch to the sleep state and save energy through DPD. However, when a component's DPD threshold is large, this reduction in response time may not be sufficient to let the associated components sleep and save energy. Agrawal et al. [29] increase the speed of the lower priority job, and hence reduce its execution time, so that it fits in the slack available before it would be preempted (in the example, the speed of J_2^1 would be increased so that it finishes by 35). The authors themselves state that this may be counterproductive: the increase in energy caused by executing the lower priority job at a higher speed can exceed the energy saved by switching some components to the sleep state earlier. To overcome this drawback we suggest a speed refinement for the preempted lower priority job as well as for the preempting higher priority jobs. This speed combination is found by a greedy based preemption control (GBPC) which utilizes the right and left idle slots (refer figure 2 and definitions 1, 2, 3 and 4) of the processor and the devices.
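The admission test behind MPCAS can be sketched as follows: a preempted lower priority job keeps (or regains) the processor only if it can finish within the minimum slack of every higher priority job that would otherwise preempt it. The function below is an illustrative assumption, not the paper's listing; the second call mirrors the situation of Example 1 at time 50, where MPCAS lets J_2^1 finish at 55.

```python
# Sketch of the preemption-control admission test at the assigned speed.
# Field names are assumptions, not the paper's notation.

def can_run_nonpreemptively(remaining, t_now, higher_jobs):
    """higher_jobs: iterable of (release, deadline, exec_time) tuples."""
    finish = t_now + remaining
    for release, deadline, wcet in higher_jobs:
        if release < finish:                      # this job would preempt us
            laxity = deadline - wcet - max(t_now, release)
            if remaining > laxity:                # it cannot wait that long
                return False
    return True

# Example 1: at t = 25, J_2^1 has 15 units left and J_1^2 (release 25,
# deadline 50, wcet 15) arrives; laxity = 50 - 15 - 25 = 10 < 15.
print(can_run_nonpreemptively(15, 25, [(25, 50, 15)]))   # False
# At t = 50, J_2^1 has 5 units left and J_1^3 (release 50, deadline 75,
# wcet 15) arrives; laxity = 10 >= 5, so MPCAS lets J_2^1 finish at 55.
print(can_run_nonpreemptively(5, 50, [(50, 75, 15)]))    # True
```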
Energy estimation for the processor during the idle slots: For a processor left idle slot Lid_pr^{ij} and right idle slot Rid_pr^{ij} of a job J_i^j, if the idle time is greater than the threshold of the processor th_p then the processor switches to the sleep state, otherwise it remains idle. Thus, following equation (4), the energy consumed by the processor for an idle slot of length t_idle is

E_pr(t_idle) = P_idle · t_idle, for 0 ≤ t_idle ≤ th_p; E_pr(t_idle) = P_idle · th_p + P_sleep · (t_idle − th_p), for t_idle > th_p   (7)

The analogous expression with a device's own threshold and power rates gives the device idle slot energy E_dv (equation (6)), and the energy consumed during the execution of the job itself is E_i^j(s_i^j) as estimated in equation (8). Thus, the energy consumption of the job J_i^j together with its left and right idle slots is

E_i^j = E_dv(Lid_dv^{ij}) + E_dv(Rid_dv^{ij}) + E_i^j(s_i^j) + E_pr(Lid_pr^{ij}) + E_pr(Rid_pr^{ij})   (9)

After estimating the energy we now discuss the technique for greedy based preemption control. If a lower priority job is preempted, preemption control at the assigned speed is first performed (using the PCAS rule above) and the energy of the preempting higher priority jobs and of the preempted lower priority job is estimated. If the lower priority job is still preempted by one or more higher priority jobs, its response time can be further reduced, either by increasing the speed of the higher priority bottleneck job J_h^b (the preempting job that determines the response time of the preempted job) or by increasing the speed of the preempted lower priority job itself. The choice between the two is made on the basis of the minimum energy increment, i.e. min{ΔE_h^b(J_h^b), ΔE_i^j(J_i^j)}, where ΔE_h^b(J_h^b) is the increase in energy when J_h^b is moved from its current speed level s_{f_h} to the next level s_{f_h+1} (ΔE_i^j(J_i^j) is estimated likewise). The speed of the chosen job is incremented and the energy re-estimated. This process of reducing the response time of the lower priority job and estimating the energy of the different speed combinations is repeated until either (a) the lower priority job is no longer preempted, or (b) all the jobs involved have been assigned the maximum available speed level. The speed combination requiring the minimum energy is then assigned and the schedule is updated. Further energy minimization is achieved by improving the schedule obtained in phase-2.
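A hedged sketch of the greedy choice inside GBPC: the candidate whose one-level speed promotion costs the least extra energy is bumped first. The dictionary-based job representation and the energy model are assumptions; the actual procedure also re-evaluates the idle-slot energies of equations (6), (7) and (9) after every bump.

```python
# Sketch of one GBPC step: bump either the preempted lower priority job or
# the higher priority bottleneck job, whichever costs the least extra energy.

def job_energy(cycles, y, s, dev_power, C=1.0):
    return (cycles / s + y) * (C * s ** 3 + dev_power)

def delta_energy(job, levels):
    """Extra energy of promoting `job` one discrete level, or None at the top."""
    i = levels.index(job["s"])
    if i + 1 == len(levels):
        return None
    e_now = job_energy(job["x"], job["y"], levels[i], job["dev"])
    e_next = job_energy(job["x"], job["y"], levels[i + 1], job["dev"])
    return e_next - e_now          # may even be negative below the critical speed

def greedy_bump(lower, bottleneck, levels):
    deltas = {name: delta_energy(j, levels)
              for name, j in (("lower", lower), ("bottleneck", bottleneck))}
    deltas = {name: d for name, d in deltas.items() if d is not None}
    if not deltas:
        return None                # both candidates already at the top level
    pick = min(deltas, key=deltas.get)
    job = lower if pick == "lower" else bottleneck
    job["s"] = levels[levels.index(job["s"]) + 1]
    return pick

levels = [0.4, 0.6, 0.8, 1.0]
lower = {"x": 2.5, "y": 0.3, "dev": 0.8, "s": 0.6}
bottleneck = {"x": 1.5, "y": 0.1, "dev": 0.2, "s": 0.6}
print(greedy_bump(lower, bottleneck, levels))   # whichever increment is cheaper
```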
# Phase-3: Speed Adjustment and Delay Start (SADS)

After speeds have been assigned to each task in phase-1 to ensure feasibility, and the energy consumption has been reduced by preemption control in the second phase, this phase adjusts the assigned speed (up or down) and accumulates the idle slots (delaying the start of a job where possible) to reduce the energy consumption further. A detailed analysis of the preemption controlled schedule is performed, in which the right and left idle slots (refer figure 2 and definitions 1, 2, 3 and 4) of the processor and the devices are re-estimated. Based on these estimates we propose two improvements, speed adjustment and delayed start, which are finally combined to provide the overall reduction in energy. The next subsection discusses the speed adjustment.

# Speed Adjustment

In phase-1 the feasibility of the task set was ensured by assigning speeds at the task level, while phase-2 increases the speed of some jobs to decrease the energy lost to preemption. In this phase the speed is adjusted by considering each job separately, based on its left and right idle slots. The philosophy is that speed fitting was done at the task level to make all the jobs feasible; executing a particular job at a higher speed may allow more components to switch to the sleep state (or to sleep for longer) in some cases, while executing it at a lower speed may favour the DVS idea. Thus, depending on the left and right idle slots, we estimate an optimal speed for each job, which may differ from that of its task.

Energy estimation of a job after adjusting the speed: At the assigned speed s_asg^{ij}, the energy of the job together with its idle slots is given by equation (9), with the device and processor idle slot energies taken from equations (6) and (7) and the execution energy from equation (8). Executing the job at a lower speed than s_asg^{ij} may save processor energy, but is subject to the availability of the right idle slot, since the reduced speed lengthens the response time; executing at a higher speed than s_asg^{ij} shortens the response time, which lengthens the right idle slot and may allow components that could not sleep to switch to the sleep state (or to sleep for longer). In general, the energy estimated at any speed s is

E_i^j(s) = E_dv(Lid_dv^{ij}) + E_pr(Lid_pr^{ij}) + E_i^j(s) + E_dv(Rid_dv^{ij} − Δc_i^j(s)) + E_pr(Rid_pr^{ij} − Δc_i^j(s))   (10)

where Δc_i^j(s) = c_i^j(s) − c_i^j(s_asg^{ij}) is the change in execution time relative to the assigned speed. In the next subsection we discuss the technique for accumulating the idle slots by delaying the execution window of a job.

# Delay Start Technique

In this part of the third phase we aim to assemble the idle slots fragmented on the two sides of a job by delaying its execution, if the schedule permits, i.e. by shifting the job's execution window towards its deadline. This may enable the associated components to sleep, or to sleep for a longer time, and thereby save energy. A job may delay its execution up to its deadline and remain feasible, but extending a job up to its deadline may force the upcoming jobs to miss their deadlines. A job is therefore allowed to consume only the processor's right idle slot, so that it does not push the subsequent jobs. Thus a job J_i^j may shift by up to min(sl_i^j, Rid_pr^{ij}), where sl_i^j is its own slack, without missing its own deadline or modifying the schedule of the subsequent jobs; at speed s the delay moves the execution window by

δ_i^j(s) = min(sl_i^j, Rid_pr^{ij}) − Δc_i^j(s)

Energy estimation of a job with delayed start: Delaying a job J_i^j shifts it to the right, which elongates the left idle slot of the components and hence provides them a better opportunity to switch to the sleep state: when the execution of a job is shifted, its left idle slot grows by δ_i^j(s) while its right idle slot shrinks by the same amount. If some components still cannot switch to the sleep state (depending on their thresholds), executing the job at a lower speed than s_asg^{ij} may save energy and shorten the right idle slot, and delaying the job then adds the remaining right idle slot to the left idle slot.
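The benefit of merging the fragmented idle slots can be illustrated with the piecewise idle-slot energy of equation (4). In the sketch below (all component parameters are assumed), delaying the job start moves a short right idle slot into the left one, which lets the device cross its sleep threshold.

```python
# Sketch of the job-level bookkeeping of phase 3: the energy of a job plus its
# adjacent idle slots, with each slot charged by the piecewise function of
# equation (4). Component parameters below are assumptions.

def idle_slot_energy(t, th, p_idle, p_sleep):
    """Equation (4): stay idle for short slots, sleep once t exceeds th."""
    if t <= th:
        return p_idle * t
    return p_idle * th + p_sleep * (t - th)

def job_plus_slots_energy(exec_time, left, right, proc, devices, p_exec):
    e = p_exec * exec_time                       # running the job itself
    for t in (left, right):                      # both idle slots
        e += idle_slot_energy(t, proc["th"], proc["idle"], proc["sleep"])
        for d in devices:
            e += idle_slot_energy(t, d["th"], d["idle"], d["sleep"])
    return e

proc = {"th": 4.0, "idle": 1.0, "sleep": 0.05}
devices = [{"th": 10.0, "idle": 0.4, "sleep": 0.02}]
# Delaying the job start by 3 units merges a 3-unit right slot into the left one:
before = job_plus_slots_energy(5.0, left=8.0, right=3.0,
                               proc=proc, devices=devices, p_exec=2.0)
after = job_plus_slots_energy(5.0, left=11.0, right=0.0,
                              proc=proc, devices=devices, p_exec=2.0)
print(before, after)   # the merged slot lets the device cross its threshold
```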
When some components are still unable to sleep in the left idle slot produced by the accumulation at the assigned speed, a combination of a higher speed and a shift reduces the response time and elongates the left idle slot further, providing room for switching into the sleep state. In general, the energy estimated at any speed s with a delayed start is

E_i^j(s) = E_dv(Lid_dv^{ij} + δ_i^j(s)) + E_pr(Lid_pr^{ij} + δ_i^j(s)) + E_i^j(s) + E_dv(Rid_dv^{ij} − δ_i^j(s)) + E_pr(Rid_pr^{ij} − δ_i^j(s))   (11)

Combining speed adjustment and delayed start: Finally, combining the two concepts, speed adjustment (equation (10)) and delayed start (equation (11)), the energy of a job at speed s with shift decision σ is

E_i^j(s, σ) = E_dv(Lid_dv^{ij} + σ·δ_i^j(s)) + E_pr(Lid_pr^{ij} + σ·δ_i^j(s)) + E_i^j(s) + E_dv(Rid_dv^{ij} − σ·δ_i^j(s) − σ̄·Δc_i^j(s)) + E_pr(Rid_pr^{ij} − σ·δ_i^j(s) − σ̄·Δc_i^j(s))   (12)

where s ∈ S is a speed level, σ is a binary variable with value 1 if a shift (delayed start) operation is made, and σ̄ is its complement. For minimum energy consumption a job J_i^j must be assigned the speed s and the delayed start decision σ that minimize E_i^j(s, σ); the SADS algorithm (listed with the tables at the end of the paper) evaluates equation (12) for every mandatory job, every speed level and both shift decisions, and repeats the pass until no further reduction in the total energy over an MK_hyperperiod is obtained. In the following section we present the results obtained by implementing the approach discussed in this section.

# IV. Simulation Results

This section compares the performance of our proposed three phase scheduling algorithm, referred to as GBSADS (greedy based preemption control with speed adjustment and delayed start), with the higher speed preemption control (HSPC) approach suggested by [29]. All simulation results are computed for a DVS processor with the operating speed levels S = {0, s_1, s_2, s_3, ..., s_10}, where each s_i is a uniform random number generated in the interval [10, 200]. We consider ten types of devices with multiple instances, forming a pool of devices; for each task, devices are randomly selected from this pool. The rate of energy consumption of a device is computed from the energy required by the processor at the maximum speed, i.e. P_active,j = Þ · P_{s_10}, where Þ is a uniform random number in the range [0.1, 20]. The task set T = {τ_1, τ_2, τ_3, ..., τ_n} is generated with an (m, k) utilization U (i.e. Σ (m_i · c_i)/(k_i · p_i), a uniform random number in the range (0, 1]). The preemption overhead and the energy required during a preemption are uniform random numbers in the ranges (0, 1] and (0, 100] respectively. Similar considerations were taken in [29]. The other parameters are summarized in Table 3. The key quantity measured in the simulations is the energy consumed during one MK_hyperperiod; every reported value is the average of the results obtained for one hundred task sets.
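A hedged sketch of the workload generation described above; the distributions of Table 3 are followed only approximately, the utilization-splitting step is simplified, and the proportionality constant C is an assumed value.

```python
# Sketch of random platform and task-set generation roughly following the
# simulation setup above. All constants are assumptions for illustration.
import math
import random

def make_platform(n_speeds=10, n_device_types=10, C=1e-8, phi=(0.1, 20.0)):
    speeds = sorted(random.uniform(10, 200) for _ in range(n_speeds))
    p_max = C * speeds[-1] ** 3                 # processor power at the top speed s_10
    devices = [random.uniform(*phi) * p_max for _ in range(n_device_types)]
    return [0.0] + speeds, devices

def make_task_set(n_tasks, target_util, uth=0.01):
    tasks, util_left = [], target_util
    for i in range(n_tasks):
        if i == n_tasks - 1 or util_left < uth:
            u = util_left
        else:
            u = random.uniform(0, util_left)
        c = random.uniform(1e-3, 100)           # worst-case execution time in (0, 100]
        p = random.uniform(c, 1000)             # period in (0, 1000], at least c
        d = random.uniform(c, p)                # deadline in [c, p]
        k = random.randint(1, 10)
        m = max(1, min(k, math.ceil(u * k * p / c)))   # so that (m/k)*(c/p) is close to u
        tasks.append({"c": c, "p": p, "d": d, "m": m, "k": k, "u": u})
        util_left -= u
    return tasks

levels, device_pool = make_platform()
print(len(levels), len(device_pool), make_task_set(3, target_util=0.6)[0])
```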
The following subsections examine the variation of the energy consumption with the component thresholds, the task set utilization and the device to processor energy proportionality constant Þ.

Effect of component threshold on energy consumption: The threshold of a component is the idle slot length for which the component consumes the same energy whether it stays active or is put to sleep. As the threshold increases, longer idle slots are needed; in their absence the energy consumption increases. However, an increment in the threshold affects the energy requirement only up to a certain value (the length of the longest idle slot), beyond which no component switches to the sleep state, so any further increase in the threshold does not change the energy consumption of the system. The effect of increasing the threshold of the frequency independent and the frequency dependent components can be seen in figures 3 and 4.

The effect of varying the device threshold is shown in figure 3. For lower device thresholds (0-80) the energy consumption of the GBSADS approach is almost 23% lower than that of the HSPC approach, while the reduction is more prominent (approximately 32%) for higher threshold values (90-140) and becomes constant beyond about 130. At low threshold values both GBSADS and HSPC control preemption around the assigned speed; as the threshold grows, the shorter idle slots become inadequate to switch a device into the sleep state, and the greedy based preemption control of the second phase together with the delayed start of the third phase of GBSADS assembles these idle slots efficiently, providing a better opportunity to switch the device into the sleep state. Similar trends are seen when the processor threshold is varied (figure 4), where an overall gain of approximately 30% is obtained.

Effect of the processor to device energy ratio (Þ) on energy consumption: The rate of energy consumption of a frequency independent component is a constant. This constant may be smaller than the rate of energy consumption of the processor (a processor dominant system, Þ < 1), while for device dominant systems it is greater than one. The variation of this ratio (0.1-10), covering both processor and device dominant systems, is shown in figure 5 for task sets with utilization in [0.5, 0.6]. For lower values of the ratio (0.1-1), i.e. processor dominated systems, the GBSADS approach saves approximately 20% of the energy, and this saving increases up to 26% for device dominant systems. A sudden rise in the energy consumption is observed around Þ = 2, which marks the point where the device energy consumption becomes dominant; as more devices are added to such a system this rise is even more prominent. At the lower ratios the processor energy consumption dominates and DVS is the more effective lever; the HSPC approach applies DVS and high speed preemption control, which is inadequate there, whereas GBSADS applies the concept of DVS at three levels (speed assignment, greedy based preemption control and speed adjustment), giving a gain of about 20%. At the higher ratios the device energy consumption dominates and DVS is less effective than DPD; GBSADS accumulates the idle slots efficiently, since it combines delayed start with speed adjustment while controlling preemption greedily, whereas HSPC only controls preemption.

Effect of system utilization on energy consumption: The energy consumption is measured as the system utilization increases, for different values of Þ. The value of Þ indicates how strongly the device energy consumption dominates the overall energy consumption of the system (the higher its value, the more device dominated the system). It can be observed from figures 6, 7 and 8 that when the utilization is high (0.8-1) the reduction in energy consumption is substantial: at such utilizations the system is overloaded, so the speed assignment needed for feasibility in the first phase uses the higher speed levels.
The HSPC approach never lowers a speed once assigned, while the GBSADS approach may substantially reduce the speed assigned to the out of phase jobs, leading to a reduction in the energy consumption. Besides adjusting the speed, GBSADS also delays job starts and controls the preemption of lower priority jobs, accumulating the idle slots and favouring the sleeping of the components.

Effect of processor energy consumption only (no devices attached, Þ = 0): When no frequency independent components are associated with the system, the effect of the utilization on the system energy consumption can be seen in figure 6. For lower utilizations (0.1-0.3) the GBSADS approach consumes around 18% less energy; this improves to around 24% for medium utilizations (0.4-0.7) and further to approximately 30% for higher utilizations. For lower utilizations the speed assigned by both approaches in the first phase is close to the critical speed, so the energy saving of GBSADS comes only from the delayed start of the third phase, which accumulates the fragmented idle slots and lets the processor switch into the sleep mode (or sleep for longer). For task sets with higher utilization, the speeds assigned in the first phase are generally higher than the critical speeds, because both approaches overload the system. To reduce the energy consumption the HSPC approach executes the preempted jobs either at a higher speed or at the same assigned speed; executing preempted jobs at a higher speed in a system with no devices attached is counterproductive, while execution at the assigned speed yields no reduction in energy. GBSADS, on the other hand, adjusts the speed (possibly reducing the assigned speed) taking the thresholds and the idle slots into account in the second and third phases, so as to balance the impact of the DVS, DPD and PC techniques.

When the device and processor rates of energy consumption are comparable (Þ = 1): The effect of the utilization on the overall energy consumption can be seen in figure 7. The trend is similar to that observed in figure 6, but for higher utilizations the reduction in energy consumption is smaller (approximately 26%, compared to 30% in figure 6). This is because, when higher speeds are assigned in phase-1, reducing the speed in GBSADS increases the response time of a job, which forces the devices to remain active for a longer period and consume energy; hence a lower gain is observed compared to a system comprising frequency dependent components only. For a device dominated system (higher Þ), the effect of utilization can be observed in figure 8, which shows a trend similar to figures 6 and 7: at higher utilizations the GBSADS approach again performs better than the HSPC approach.

# V. Conclusion

In this paper we presented a three phase scheduling algorithm which minimizes the system energy consumption of a weakly hard real-time system while maintaining the (m, k) guarantee. The system consists of a DVS processor (capable of operating at various frequencies) and frequency independent peripheral devices. In the first phase a mixed pattern based partitioning is used to determine the mandatory and optional jobs of each task, and speed levels are assigned to ensure the feasibility of the task set. The major contribution of the work, however, lies in the second and third phases, which analyse and refine the first phase schedule at the job level.
In the second phase we formulated a greedy based preemption control technique which adjusts the speeds of the preempted and preempting jobs, based on the available laxity, to further reduce the energy consumption. The third phase focuses on accumulating the idle slots through the concepts of delayed start and speed adjustment: speed adjustment assigns an optimal speed to each individual job based on the idle slots available on either side of its execution window and on the thresholds of the components, while the delayed start technique postpones the execution of a job by up to its available slack so as to assimilate the idle slots fragmented on either side of its execution window. The effectiveness of the proposed algorithm has been demonstrated through examples and extensive simulation results. The proposed three phase scheduling algorithm is compared with [29], where the authors adopted a similar scenario. The simulation results indicate that the three phase scheduling algorithm consumes approximately 30% less energy for task sets at higher utilizations (0.8-1), and is 24% better for lower utilization systems (0.1-0.7). The reduction in energy consumption is 30% for higher values of the component thresholds, with a smaller improvement of approximately 23% for lower threshold values. The proposed algorithm is targeted at device dominant systems, for which it performs 26% better; however, the simulations indicate that the approach is valid for processor dominant systems as well, for which an improvement of about 20% is obtained. Thus, the proposed algorithm performs better in energy constrained systems when the system is overloaded (the utilization is high) or the thresholds of the components are high.

Table 1: Symbols used in this paper.

| Symbol | Meaning |
|---|---|
| x_i | Computation required by the frequency dependent component of task τ_i |
| y_i | Computation required by the frequency independent components of task τ_i |
| rel_i^j | Release time of job J_i^j (= j · p_i) |
| d_i^j | Absolute deadline of job J_i^j (= j · p_i + d_i) |
| f_i^j | Finish time of job J_i^j |
| s_cri | Critical speed of the processor for task τ_i |
| s_i | Speed of the processor assigned to task τ_i |
| s_i^j | Speed of the processor assigned to job J_i^j |
| δ_i | Set of frequency independent components associated with task τ_i |
| P_active,j | Energy consumed per unit time by device D_j in the active state |
| P_sleep,j | Energy consumed per unit time by device D_j in the sleep state |
| P_s | Energy consumed per unit time by the processor running at speed s (P_s = C · s³, C a constant) |
| th_p | DPD threshold of the processor |
| H | MK_hyperperiod |
| κ | Preemption overhead: the context switching time required when a higher priority job preempts a lower priority one |
| E_κ | Energy consumed during each preemption |
Algorithm speed_fitting(task set T), the greedy based speed fitting of phase-1 [29]:
1. While (the task set is not feasible) Do
   a. For all tasks τ_i ∈ T: if (f_i < L) compute Δ_i, else set Δ_i = ∞.
   b. Repeat
      i. Select the task τ_i with the smallest Δ_i.
      ii. If (f_i < L) then set f_i = f_i + 1 and s_i = s_{f_i}, and return to the feasibility test of step 1; else select the next smallest Δ_i.
   End Repeat
End While
End
Algorithm SADS(task set T), the speed adjustment and delay start procedure of phase-3 (input: the feasible schedule generated after speed fitting and GBPC in phase-2):
1. Repeat until no further reduction in energy is obtained:
   a. For each job J_i^j ∈ M, where M is the set of mandatory jobs arriving during an MK_hyperperiod H, Do
      i. Estimate the left and right idle slots of the devices (Lid_dv^{ij}, Rid_dv^{ij}) and of the processor (Lid_pr^{ij}, Rid_pr^{ij}) according to definitions 1, 2, 3 and 4.
      ii. Initialize the speed of job J_i^j as s_i^j = s_i and the shift decision as σ_i^j = 0.
      iii. Initialize min = E_i^j(s_i^j, σ_i^j) according to equation (12).
      iv. For every speed s ∈ S Do
         1. Estimate E_i^j(s, 0) according to equation (12); if min > E_i^j(s, 0), update min and set s_i^j = s, σ_i^j = 0.
         2. Estimate E_i^j(s, 1) according to equation (12); if min > E_i^j(s, 1), update min and set s_i^j = s, σ_i^j = 1.
      End For
   End For
   b. Estimate the total energy for an MK_hyperperiod H.
End

Table 2: Assigned speeds, response time of the lower priority job and energy consumption over one MK_hyperperiod for an illustrative two-task example with parameters ⟨10, 54687, 0.0⟩ and ⟨30, 262285, 0.0⟩, preemption overhead κ = 5, preemption energy E_κ = 8568937, critical speeds 25 and 30 (estimated from equation (5)) and MK_hyperperiod 100.

| s_1^1 | s_1^2 | s_1^3 | s_1^4 | s_2^1 | Response time | Energy | Remark |
|---|---|---|---|---|---|---|---|
| 25 | 25 | 25 | 25 | 30 | 100 | 56393661 | Uncontrolled preemption with DVS and DPD |
| 25 | 25 | 25 | 25 | 30 | 75 | 33473217 | Preemption control as suggested by [37]; energy reduced by 40.6% |
| 25 | 25 | 25 | 25 | 105 | 35 | 36900380 | Preemption control by increasing the speed of the lower priority job, as suggested by [29] |
| Phase-2: GBPC | | | | | | | |
| 25 | 25 | 25 | 25 | 30 | 60 | 29538942 | Preemption control at the assigned speed (ASPC): incapable of preventing the preemption but reduces the response time; energy reduced by 11.7% from [37] and 47.6% from uncontrolled preemption |
| 25 | 30 | 25 | 25 | 30 | 58.3 | 29126514 | ΔE_1^2(s_1^2) = 31757.1, ΔE_2^1(s_2^1) = 91715; the speed of J_1^2 is increased; preemption is not prevented but the energy consumption decreases |
| 25 | 30 | 25 | 25 | 35 | 57.3 | 29219229 | ΔE_1^2(s_1^2) = 94063.1, ΔE_2^1(s_2^1) = 91715; the speed of J_2^1 is increased |
| 25 | 30 | 25 | 25 | 37 | 57 | 29312320 | ΔE_1^2(s_1^2) = 94063.1, ΔE_2^1(s_2^1) = 92790; the speed of J_2^1 is increased |
| 25 | 35 | 25 | 25 | 37 | 55.8 | 29092841 | ΔE_1^2(s_1^2) = 94063.1, ΔE_2^1(s_2^1) = 185809; the speed of J_1^2 is increased |
| 25 | 37 | 25 | 25 | 37 | 55.5 | 29076167 | ΔE_1^2(s_1^2) = 62511, ΔE_2^1(s_2^1) = 185809; the speed of J_1^2 is increased |
| 25 | 40 | 25 | 25 | 37 | 38.7 | 16205293 | ΔE_1^2(s_1^2) = 98151, ΔE_2^1(s_2^1) = 185809; the speed of J_1^2 is increased and the preemption is avoided; energy reduced by 51.59% from [29,37] and 71.3% from uncontrolled preemption |
| Phase-3: SADS | | | | | | | |
| | | | | | | 15648423 | Delaying job J_1^4 by 10 units; reduction of 53.3% from [29,37] and 72.3% from uncontrolled preemption |
Table 3: Parameters used in the simulations.

| Parameter | Condition | Range |
|---|---|---|
| UTh (utilization threshold) | Assigned | 0.01 |
| u_i (utilization of task τ_i) | If U − Σ_{l<i} u_l ≥ UTh, select a uniform random number; if U − Σ_{l<i} u_l < UTh, assign u_i = U − Σ_{l<i} u_l | (0, U − Σ_{l<i} u_l] |
| c_i (worst case execution time) | Uniform random number | (0, 100] |
| p_i (period) | Uniform random number | (0, 1000] |
| d_i (deadline) | Uniform random number | [c_i, p_i] |
| k_i | Random integer selected uniformly | [1, 10] |
| m_i (number of mandatory jobs in k_i) | Assigned the value ⌈u_i · k_i · p_i / c_i⌉ | |
| th_p (processor threshold) | Uniform random number | [0, 200] |

* M. Hamdaoui and P. Ramanathan, "A dynamic priority assignment technique for streams with (m, k)-firm deadlines," IEEE Transactions on Computers, 44(12), Dec. 1995.
* D. Mossé, H. Aydin, B. Childers and R. Melhem, "Compiler-assisted dynamic power-aware scheduling for real-time applications," Workshop on Compilers and Operating Systems for Low Power, 2000.
* Q. Qiu, Q. Wu and M. Pedram, "Dynamic power management in a mobile multimedia system with guaranteed quality-of-service," ACM/IEEE Design Automation Conference, 2001.
* G. Qu and M. Potkonjak, "Power minimization using system-level partitioning of applications with quality of service requirements," IEEE/ACM International Conference on Computer-Aided Design, 1999.
* G. Quan and X. Hu, "Energy efficient fixed-priority scheduling for real-time systems on variable voltage processors," 38th IEEE/ACM Design Automation Conference, 2001.
* S. Hua and G. Qu, "Energy-efficient dual-voltage soft real-time system with (m, k)-firm deadline guarantee," CASES'04, Washington, DC, USA, September 22-25, 2004.
* L. Doherty, B. Warneke, B. Boser and K. Pister, "Energy and performance considerations for smart dust," International Journal of Parallel Distributed Systems and Networks, 2001.
* F. Douglis, P. Krishnan and B. Marsh, "Thwarting the power-hungry disk," Proceedings of the Winter USENIX Conference, 1994.
* M. A. Viredaz and D. A. Wallach, "Power evaluation of a handheld computer," IEEE Micro, 2003.
* J. Zedlewski, S. Sobti, N. Garg, F. Zheng, A. Krishnamurthy and R. Wang, "Modeling hard-disk power consumption," 2003.
* L. Niu and G. Quan, "Energy minimization for real time systems with (m, k)-guarantee," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 14, July 2006.
# References

* M. Hamdaoui and P. Ramanathan, "A dynamic priority assignment technique for streams with (m, k)-firm deadlines," IEEE Transactions on Computers, vol. 44, no. 12, Dec. 1995.
* D. Mossé, H. Aydin, B. Childers, and R. Melhem, "Compiler-Assisted Dynamic Power-Aware Scheduling for Real-Time Applications," Workshop on Compiler and OS for Low Power, 2000.
* Q. Qiu, Q. Wu, and M. Pedram, "Dynamic Power Management in a Mobile Multimedia System with Guaranteed Quality-of-Service," ACM/IEEE Design Automation Conference, 2001.
* G. Qu and M. Potkonjak, "Power Minimization Using System-Level Partitioning of Applications with Quality of Service Requirements," IEEE/ACM International Conference on Computer-Aided Design, 1999.
* G. Quan and X. Hu, "Energy Efficient Fixed-Priority Scheduling for Real-Time Systems on Variable Voltage Processors," 38th IEEE/ACM Design Automation Conference, 2001.
* S. Hua and G. Qu, "Energy-Efficient Dual-Voltage Soft Real-Time System with (m, k)-Firm Deadline Guarantee," CASES'04, Washington, DC, USA, September 22-25, 2004.
* L. Doherty, B. Warneke, B. Boser, and K. Pister, "Energy and performance considerations for smart dust," International Journal of Parallel and Distributed Systems and Networks, 2001.
* F. Douglis, P. Krishnan, and B. Marsh, "Thwarting the power-hungry disk," Proceedings of the Winter USENIX Conference, 1994.
* M. A. Viredaz and D. A. Wallach, "Power evaluation of a handheld computer," IEEE Micro, 2003.
* J. Zedlewski, S. Sobti, N. Garg, F. Zheng, A. Krishnamurthy, and R. Wang, "Modeling hard-disk power consumption," 2003.
* L. Niu and G. Quan, "Energy minimization for real time systems with (m, k)-guarantee," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 14, July 2006.
* M. Weiser, B. Welch, A. Demers, and S. Shenker, "Scheduling for reduced CPU energy," USENIX Symposium on Operating Systems Design and Implementation, 1994.
* H. Aydin, V. Devadas, and D. Zhu, "System-level Energy Management for Periodic Real-Time Tasks," Proceedings of the 27th IEEE International Real-Time Systems Symposium (RTSS'06).
* E. Bini, G. C. Buttazzo, and G. Lipari, "Speed Modulation in Energy-Aware Real-Time Systems," Proceedings of the 17th Euromicro Conference on Real-Time Systems (ECRTS), 2005.
* K. Choi, R. Soma, and M. Pedram, "Fine-grained dynamic voltage and frequency scaling for precise energy and performance trade-off based on the ratio of off-chip access to on-chip computation times," Proceedings of Design, Automation and Test in Europe, 2004.
* K. Seth, A. Anantaraman, F. Mueller, and E. Rotenberg, "FAST: Frequency-Aware Static Timing Analysis," Proceedings of the 24th IEEE Real-Time Systems Symposium, 2003.
* X. Huang and A. M. K. Cheng, "Applying Imprecise Algorithms to Real-Time Image and Video Transmission," Real-Time Technology and Applications Symposium, Chicago, USA, May 1995.
* X. Chen and A. M. K. Cheng, "An Imprecise Algorithm for Real-Time Compressed Image and Video Transmission," Sixth International Conference on Computer Communications and Networks, Las Vegas, NV, USA.
* F. Yao, A. Demers, and S. Shenker, "A scheduling model for reduced CPU energy," Proc. FOCS, 1995.
* L. Niu and G. Quan, "Energy-Aware Scheduling for Real-Time Systems With (m, k)-Guarantee," Tech. Rep. TR-2005-005, Dept. of Computer Science and Engineering, University of South Carolina, 2005.
* G. Bernat and A. Burns, "Combining (n, m)-hard deadlines and dual priority scheduling," Proc. RTSS, Dec. 1997.
* W. Kim, J. Kim, and S. L. Min, "A dynamic voltage scaling algorithm for dynamic-priority hard real-time systems using slack analysis," Proc. DATE, 2002, p. 788.
* S. Saewong and R. Rajkumar, "Practical Voltage-Scaling for Fixed-Priority Real-Time Systems," Proceedings of the IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS'03), 2003.
* Y. Zhang and K. Chakrabarty, "A unified approach for fault-tolerance and dynamic power management in fixed-priority real-time embedded systems," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 25, no. 1, January 2006.
* H. Aydin, R. Melhem, D. Mossé, and P. Mejia-Alvarez, "Dynamic and Aggressive Power-Aware Scheduling Techniques for Real-Time Systems," Proceedings of the 22nd IEEE Real-Time Systems Symposium (RTSS'01), 2001.
* X. Fan, C. Ellis, and A. Lebeck, "The Synergy between Power-Aware Memory Systems and Processor Voltage," Workshop on Power-Aware Computing Systems, December 2003.
* R. Jejurikar and R. Gupta, "Dynamic voltage scaling for system-wide energy minimization in real-time embedded systems," 2004.
* J. Zhuo and C. Chakrabarti, "System-level energy-efficient dynamic task scheduling," DAC, 2005.
* S. Agrawal, R. S. Yadav, and Ranvijay, "A Preemption Control Technique for System Energy Minimization of Weakly Hard Real-Time Systems," 2008.
* M. Kim and S. Ha, "Hybrid run-time power management technique for real-time embedded system with voltage scalable processor," OM'01, 2001.
* R. Jejurikar and R. Gupta, "Dynamic voltage scaling for system-wide energy minimization in real-time embedded systems," ISLPED, 2004.
* H. Cheng and S. Goddard, "Online energy-aware I/O device scheduling for hard real-time systems," DATE, 2006.
* P. Rong and M. Pedram, "Hierarchical power management with application to scheduling," ISLPED, 2005.
* V. Swaminathan and K. Chakrabarty, "Pruning-based, energy-optimal, deterministic I/O device scheduling for hard real-time systems," ACM Transactions on Embedded Computing Systems, vol. 4, no. 1, February 2005.
* L. Niu and G. Quan, "System-wide dynamic power management for multimedia portable devices," IEEE International Symposium on Multimedia (ISM'06), 2006.
* L. Niu and G. Quan, "Peripheral-Conscious Scheduling on Energy Minimization for Weakly Hard Real-Time Systems," DATE'07.
* L. Cucu and J. Goossens, "Feasibility Intervals for Multiprocessor Fixed-Priority Scheduling of Arbitrary-Deadline Periodic Systems," DATE'07.
* Y. H. Lu and G. De Micheli, "Comparing system-level power management," IEEE Design and Test of Computers, March-April 2001.
* G. Quan and X. Hu, "Enhanced fixed-priority scheduling with (m, k)-firm guarantee," RTSS, 2000.
* G. Koren and D. Shasha, "Skip-over: Algorithms and complexity for overloaded systems that allow skips," Proc. RTSS, 1995, p. 110.
* P. Ramanathan, "Overload management in real-time control applications using (m, k)-firm guarantee," IEEE Transactions on Parallel and Distributed Systems, vol. 10, no. 6, June 1999.
* R. Jejurikar, C. Pereira, and R. Gupta, "Leakage aware dynamic voltage scaling for real-time embedded systems," Proceedings of the Design Automation Conference, 2004.