Processor Scheduling
Chapter Summary

When a system has a choice of processes to execute, it must have a strategy—called a processor scheduling policy (or discipline)—for deciding which process to run at a given time. High-level scheduling—sometimes called job scheduling or long-term scheduling—determines which jobs the system allows to compete actively for system resources. The high-level scheduling policy dictates the degree of multiprogramming—the total number of processes in a system at a given time. After the high-level scheduling policy has admitted a job (which may contain one or more processes) to the system, the intermediate-level scheduling policy determines which processes shall be allowed to compete for a processor. This policy responds to short-term fluctuations in system load. A system's low-level scheduling policy determines which ready process the system will assign to a processor when one next becomes available. Low-level scheduling policies often assign a priority to each process, reflecting its importance—the more important a process, the more likely the scheduling policy is to select it to execute next.

A scheduling discipline can be preemptive or nonpreemptive. To prevent users from monopolizing the system (either maliciously or accidentally), preemptive schedulers set an interrupting clock or interval timer that periodically generates an interrupt. Priorities may be assigned statically or changed dynamically during execution.
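
As a user-level analogy (not a hardware interval timer), the following Java sketch arranges for a "running" thread to be interrupted after a fixed quantum, which is the effect a periodic clock interrupt gives a preemptive scheduler. The class name and the 100-millisecond quantum are illustrative assumptions.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // User-level analogy to an interval timer: after one quantum the "running"
    // thread is interrupted, so a dispatcher could then select another process.
    public class IntervalTimerSketch {
        public static void main(String[] args) throws InterruptedException {
            ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
            Thread running = new Thread(() -> {
                long iterations = 0;
                while (!Thread.currentThread().isInterrupted()) {
                    iterations++;                      // stand-in for useful work
                }
                System.out.println("Preempted after " + iterations + " iterations");
            });

            running.start();
            // Fire once after a 100 ms "quantum", as an interval timer would.
            timer.schedule(running::interrupt, 100, TimeUnit.MILLISECONDS);

            running.join();
            timer.shutdown();
        }
    }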

When developing a scheduling discipline, a system designer must consider a variety of objectives, such as the type of system and the users' needs. These objectives may include maximizing throughput, maximizing the number of interactive users receiving "acceptable" response times, maximizing resource utilization, avoiding indefinite postponement, enforcing priorities, minimizing overhead and ensuring predictability of response times. To accomplish these goals, a system can use techniques such as aging processes and favoring processes whose requests can be satisfied quickly. Many scheduling disciplines exhibit fairness, predictability and scalability.

Operating system designers can use system objectives to determine the criteria on which scheduling decisions are made. Perhaps the most important concern is how a process uses the processor (i.e., whether it is processor bound or I/O bound). A scheduling discipline might also consider whether a process is batch or interactive. In a system that employs priorities, the scheduler should favor processes with higher priorities.

Schedulers employ algorithms that make choices about preemptibility, priorities, running time and other process characteristics. FIFO (also called FCFS) is a nonpreemptive algorithm that dispatches processes according to their arrival time at the ready queue. In round-robin (RR) scheduling, processes are dispatched FIFO but are given a limited amount of processor time called a time slice or a quantum. A variant of round-robin called selfish round-robin (SRR) initially places a process in a holding queue until its priority reaches the level of processes in the active queue, at which point the process is placed in the active queue and scheduled round-robin with others in the queue. Determination of quantum size is critical to the effective operation of a computer system. The "optimal" quantum is large enough so that the vast majority of I/O-bound and interactive requests require less time than the duration of the quantum. The optimal quantum size varies from system to system and under different loads.
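
To make the mechanics concrete, the following sketch simulates round-robin dispatching in Java. The Process class, its remainingTime field and the fixed quantum are hypothetical; a real scheduler preempts the running process via an interval-timer interrupt rather than by simulating time.

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Minimal round-robin simulation (a sketch, not a real dispatcher).
    public class RoundRobinSketch {
        static class Process {
            final String name;
            int remainingTime;                       // estimated processor time still needed

            Process(String name, int remainingTime) {
                this.name = name;
                this.remainingTime = remainingTime;
            }
        }

        public static void main(String[] args) {
            final int quantum = 4;                   // hypothetical time slice
            Queue<Process> readyQueue = new ArrayDeque<>();
            readyQueue.add(new Process("P1", 10));
            readyQueue.add(new Process("P2", 3));
            readyQueue.add(new Process("P3", 7));

            // Dispatch FIFO from the ready queue, but run each process for
            // at most one quantum before preempting it.
            while (!readyQueue.isEmpty()) {
                Process p = readyQueue.poll();
                int slice = Math.min(quantum, p.remainingTime);
                p.remainingTime -= slice;
                System.out.println(p.name + " runs for " + slice + " time units");
                if (p.remainingTime > 0) {
                    readyQueue.add(p);               // unfinished: back to the tail of the queue
                }
            }
        }
    }

Returning an unfinished process to the tail of the ready queue is what prevents any single process from monopolizing the processor.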

Shortest-process-first (SPF) is a nonpreemptive scheduling discipline in which the scheduler selects the waiting process with the smallest estimated run-time-to-completion. SPF reduces average waiting time over FIFO but increases the variance in response times. Shortest-remaining-time (SRT) scheduling is the preemptive counterpart of SPF that selects the process with the smallest estimated remaining run-time-to-completion. The SRT algorithm offers minimum wait times in theory, but in certain situations, due to preemption overhead, SPF might actually perform better. Highest-response-ratio-next (HRRN) is a nonpreemptive scheduling discipline in which the priority of each process is a function not only of its service time but also of the amount of time it has been waiting for service.
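
These disciplines differ only in the key used to select among ready processes. The sketch below, built around a hypothetical Candidate record, applies the commonly cited HRRN response ratio, (waiting time + service time) / service time, alongside SPF's smallest-service-time rule; as a process waits, its ratio grows, which protects long processes from indefinite postponement.

    import java.util.Comparator;
    import java.util.List;

    // Selection rules for SPF and HRRN (a sketch; Candidate is a hypothetical type).
    public class SelectionSketch {
        record Candidate(String name, double serviceTime, double waitingTime) {
            // HRRN priority: processes gain priority the longer they wait.
            double responseRatio() {
                return (waitingTime + serviceTime) / serviceTime;
            }
        }

        // SPF: pick the waiting process with the smallest estimated run time.
        static Candidate spf(List<Candidate> ready) {
            return ready.stream()
                        .min(Comparator.comparingDouble(Candidate::serviceTime))
                        .orElseThrow();
        }

        // HRRN: pick the process with the highest response ratio.
        static Candidate hrrn(List<Candidate> ready) {
            return ready.stream()
                        .max(Comparator.comparingDouble(Candidate::responseRatio))
                        .orElseThrow();
        }

        public static void main(String[] args) {
            List<Candidate> ready = List.of(
                new Candidate("short", 2.0, 1.0),    // ratio (1 + 2) / 2  = 1.5
                new Candidate("long", 20.0, 50.0));  // ratio (50 + 20) / 20 = 3.5
            System.out.println("SPF picks:  " + spf(ready).name());   // short
            System.out.println("HRRN picks: " + hrrn(ready).name());  // long
        }
    }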

Multilevel feedback queues allow a scheduler to dynamically adjust to process behavior. The process that gets a processor next is the one that reaches the head of the highest nonempty queue in the multilevel feedback queuing network. Processor-bound processes are placed at the lowest-level queue, and I/O-bound processes tend to be located in the higher-level queues. A running process is preempted by a process arriving in a higher queue. Multilevel feedback queuing is a good example of an adaptive mechanism.
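
A minimal sketch of the queuing structure follows, using hypothetical class and method names: a process that consumes its full quantum is treated as processor bound and demoted one level, while a process that blocks for I/O re-enters at the top.

    import java.util.ArrayDeque;
    import java.util.List;
    import java.util.Queue;

    // Multilevel feedback queue sketch (hypothetical types and rules for illustration).
    public class MlfqSketch {
        static class Process {
            final String name;
            Process(String name) { this.name = name; }
        }

        // Queue 0 is the highest-priority level; lower levels hold processor-bound work.
        private final List<Queue<Process>> levels = List.of(
                new ArrayDeque<Process>(), new ArrayDeque<Process>(), new ArrayDeque<Process>());

        // New and I/O-bound processes enter at the highest level.
        void admit(Process p) { levels.get(0).add(p); }

        // Dispatch from the head of the highest nonempty queue.
        Process nextToRun() {
            for (Queue<Process> q : levels) {
                if (!q.isEmpty()) return q.poll();
            }
            return null;                              // no ready process
        }

        // A process that used its entire quantum is demoted one level.
        void usedFullQuantum(Process p, int currentLevel) {
            int next = Math.min(currentLevel + 1, levels.size() - 1);
            levels.get(next).add(p);
        }

        // A process that blocked for I/O returns to the highest level.
        void blockedForIo(Process p) { levels.get(0).add(p); }

        public static void main(String[] args) {
            MlfqSketch mlfq = new MlfqSketch();
            mlfq.admit(new Process("interactive"));
            mlfq.admit(new Process("compute"));
            Process first = mlfq.nextToRun();         // "interactive"
            mlfq.usedFullQuantum(first, 0);           // used its quantum: demote it
            System.out.println(mlfq.nextToRun().name + " runs next");  // "compute"
        }
    }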

Fair share scheduling (FSS) supports scheduling across related processes and threads. It enables a system to ensure fairness across groups of processes by restricting each group to a certain subset of system resources. In the UNIX environment, for example, FSS was developed specifically to "give a prespecified rate of system resources ... to a related set of users."
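
As a rough illustration of the idea (not the UNIX implementation), the sketch below tracks how much processor time each group has consumed and marks a group ineligible once it exceeds its assigned fraction; production fair share schedulers decay this usage over time. The group names and methods are hypothetical.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Fair share bookkeeping sketch (hypothetical; real FSS decays usage over time).
    public class FairShareSketch {
        // Fraction of processor time each group is entitled to.
        private final Map<String, Double> share = new LinkedHashMap<>();
        // Processor time each group has actually consumed.
        private final Map<String, Double> used = new LinkedHashMap<>();
        private double totalUsed = 0.0;

        void addGroup(String group, double fraction) {
            share.put(group, fraction);
            used.put(group, 0.0);
        }

        // Record that a process in 'group' ran for 'time' units.
        void charge(String group, double time) {
            used.merge(group, time, Double::sum);
            totalUsed += time;
        }

        // A group stays eligible while its consumption is within its entitled share.
        boolean eligible(String group) {
            if (totalUsed == 0.0) return true;
            return used.get(group) / totalUsed <= share.get(group);
        }

        public static void main(String[] args) {
            FairShareSketch fss = new FairShareSketch();
            fss.addGroup("research", 0.5);
            fss.addGroup("students", 0.5);
            fss.charge("research", 8.0);
            fss.charge("students", 2.0);
            System.out.println("research eligible: " + fss.eligible("research"));  // false
            System.out.println("students eligible: " + fss.eligible("students"));  // true
        }
    }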

In deadline scheduling, certain processes are scheduled to be completed by a specific time or deadline. These processes may have high value if delivered on time and be worthless otherwise. Deadline scheduling can be difficult to implement.

Real-time scheduling must repeatedly meet processes' deadlines as they execute. Soft real-time scheduling ensures that real-time processes are dispatched before other processes in the system. Hard real-time scheduling guarantees that a process's deadlines are met. Hard real-time processes must meet their deadlines; failing to do so could be catastrophic, resulting in invalid work, system failure or even harm to the system's users. The rate-monotonic (RM) algorithm is a static, priority-based round-robin algorithm in which priorities increase (monotonically) with the rate at which a process must be scheduled. Earliest-deadline-first (EDF) is a preemptive scheduling algorithm that favors the process with the earliest deadline. The minimum-laxity-first algorithm is similar to EDF, but bases priority on a process's laxity, which measures the difference between the time a process requires to complete and the time remaining until its deadline.
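
The two dynamic policies reduce to different selection keys, as the sketch below illustrates with a hypothetical Task record: EDF orders ready tasks by deadline, while minimum-laxity-first orders them by laxity, computed here as the deadline minus the current time minus the remaining execution time.

    import java.util.Comparator;
    import java.util.List;

    // Selection keys for EDF and minimum-laxity-first (a sketch; Task is hypothetical).
    public class RealTimeSelectionSketch {
        record Task(String name, double deadline, double remainingExecutionTime) {
            // Laxity: slack between now and the latest moment the task can
            // still start running and meet its deadline.
            double laxity(double now) {
                return deadline - now - remainingExecutionTime;
            }
        }

        // EDF: dispatch the ready task whose deadline is nearest.
        static Task edf(List<Task> ready) {
            return ready.stream()
                        .min(Comparator.comparingDouble(Task::deadline))
                        .orElseThrow();
        }

        // Minimum-laxity-first: dispatch the ready task with the least slack.
        static Task minimumLaxityFirst(List<Task> ready, double now) {
            return ready.stream()
                        .min(Comparator.comparingDouble((Task t) -> t.laxity(now)))
                        .orElseThrow();
        }

        public static void main(String[] args) {
            List<Task> ready = List.of(
                new Task("A", 50.0, 45.0),   // later deadline but almost no slack
                new Task("B", 40.0, 5.0));   // earlier deadline, plenty of slack
            System.out.println("EDF picks: " + edf(ready).name());                     // B
            System.out.println("MLF picks: " + minimumLaxityFirst(ready, 0.0).name()); // A
        }
    }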

The operating system may dispatch a process's threads individually, or it may schedule a multithreaded process as a unit, in which case user-level libraries must schedule that process's threads. System designers must determine how to allocate quanta to threads, in what order, and what priorities to assign in scheduling threads within a process. In Java, each thread is assigned a priority in the range 1-10. Depending on the platform, Java implements threads in either user or kernel space (see Section 4.6, Threading Models). When implementing threads in user space, the Java runtime relies on timeslicing to perform preemptive thread scheduling. The Java thread scheduler ensures that the highest-priority thread in the Java virtual machine is running at all times. If there are multiple threads at the same priority level, those threads execute using round-robin. A thread can call the yield method to give other threads of an equal priority a chance to run.
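
The calls used below are part of the standard Java API: setPriority accepts values from Thread.MIN_PRIORITY (1) to Thread.MAX_PRIORITY (10), and Thread.yield hints that other threads of equal priority may run. Whether the priorities actually influence dispatching depends on the platform's threading model, as noted above; the example is a small demonstration, not a guarantee of any particular interleaving.

    // Demonstrates Java's priority range and the yield hint.
    public class PriorityDemo {
        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 3; i++) {
                    System.out.println(Thread.currentThread().getName()
                            + " (priority " + Thread.currentThread().getPriority() + ")");
                    Thread.yield();   // offer the processor to equal-priority threads
                }
            };

            Thread low = new Thread(work, "low");
            Thread high = new Thread(work, "high");
            low.setPriority(Thread.MIN_PRIORITY);    // 1
            high.setPriority(Thread.MAX_PRIORITY);   // 10

            low.start();
            high.start();
            low.join();
            high.join();
        }
    }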


