Tuesday, 23 August 2011

os questions


--------------------------------------------------------------------------------
Definitions:
--------------------------------------------------------------------------------
Process - A program loaded into memory and executing is commonly
                  referred to as a process.  A process has one or more
                  threads of execution.

--------------------------------------------------------------------------------
Chapter 1
--------------------------------------------------------------------------------
1.1:  What are the two main purposes of an Operating system?

The two main purposes are (1) to provide an environment in which a user
        can execute programs in a convenient manner, and (2) to manage the
        computer hardware so that it is used efficiently.

1.3:  What is the main advantage of multiprogramming?

Multiprogramming increases CPU utilization by organizing jobs so that
        the CPU always has one to execute.

1.6:  Define the essential properties of the following types of systems.

a. Batch - jobs with similar needs are batched together and run through
           the computer as a group, without user interaction.

b. Interactive - reduce response time so that users get a quick response
           while interacting with the system.

c. Time Sharing - the CPU executes multiple jobs by switching
           among them, but the switches occur so frequently that the users
           can interact with each program while it is running.

d. Real Time - used when rigid time requirements are placed on the
           operation of a processor or the flow of data.

e. Network - this is simply a communication path between two or more
           systems. Sharing file systems or other resources across the net.

f. Parallel - this system gathers multiple CPUs together to perform
           computational work.  This is just a way of saying multiprocessor
           system.

g. Distributed - allow users to share resources across geographically
           dispersed locations via a network.

h. Clustered - two or more individual systems that are coupled together
           that make a resource or service redundant or highly-available.  This
           type of system also gathers multiple CPUs together to perform
           computational work.

i. Handheld - these devices are usually PDAs (such as Palm devices) or
           cell phones and have constraints on CPU speed and power consumption.

1.7:  We have stressed the need for operating systems to make efficient use of
      computing hardware.  When is it appropriate for operating systems to
      forsake this principle and to "waste" resources?  Why is such a system
      not really wasteful?

It is perfectly normal for a hard real-time system to violate this
        principle.  Because events are time-critical, any such event must be
        handled even at the expense of efficiency.  This is not wasteful
        because such behavior is the goal of a hard real-time system.

--------------------------------------------------------------------------------
Chapter 2
--------------------------------------------------------------------------------
2.2:  How does the distinction between monitor mode and user mode function as
      a rudimentary form of protection (security) system?

Monitor (privileged) mode is implemented in hardware as a mode bit that
        indicates whether the CPU is executing on behalf of the operating
        system.  In user mode the bit is set to 1 and privileged operations
        cannot be performed on certain hardware.  By putting the operating
        system in control of monitor mode we can validate a user's request to
        access hardware.  That is, the user must request hardware access
        through the operating system, since only the operating system can run
        in monitor mode.  Thus, in theory, a user cannot access hardware
        without first going through the operating system.

2.5:  Which of the following instructions should be privileged?

a. Set value of timer - privileged; otherwise a user program could
           play with it and the OS would never regain control
b. Read the Clock - not privileged, because a user program can't do
           anything harmful with it (unless crypto depends on it)
c. Clear Memory - privileged, because a user program shouldn't be able
           to clear arbitrary memory
d. Turn off interrupts - privileged, for the same reason as (a)
e. Switch from user to monitor mode - privileged; otherwise the kernel's
           protection is useless

2.6:  Some computer systems do not provide a privileged mode of operation in
      the hardware.  Is it possible to construct a secure operating system for
      these computers?  Give arguments both that it is and that it is not.

(YES): because you can regulate all actions through the operating
        system.  That is, all requests for resources can be made to go through
        the operating system (and usually are).  Whatever you can do in
        hardware you can do in software.
(NO):  there is always a way to exploit a complex operating system, and
        many have argued that there is no way to be 100% sure about the
        security of a system against attackers.  Indeed, our systems today have
        hardware protection, and it would still be foolish to think that they
        are secure.

Secure OS = the user program can't crash the OS.  Secure OS != the OS
        is never infected by a virus.

POSSIBLE: it is possible by building a virtual machine on top of this
        machine and exporting a pseudo instruction set to the user program.
        That is, we emulate a dual-mode CPU and check every pseudo instruction.
        Or we can require all programs to be written and compiled in a certain
        programming language (e.g. Java/MESA), and let the compiler and loader
        do all the checking of privileged access.

IMPOSSIBLE: the trade-off of the above methods is a performance penalty;
        checking every instruction can hurt performance dramatically.

*Note: a single-user OS still needs this protection to be secure,
         because a program can still crash the OS without it (as in DOS); you
         won't get more credit if you assume it can be done by limiting usage
         to one user at a time.

--------------------------------------------------------------------------------
Chapter 3
--------------------------------------------------------------------------------
3.1:  What are the 5 major activities of an operating system with regards to
      process management?

  - creating and deleting both user and system processes
  - suspending and resuming processes
  - providing mechanisms for process synchronization
  - providing mechanisms for process communication
  - providing mechanisms for deadlock handling

3.2:  What are the three major activities of an operating system with regards
      to memory management?

- keeping track of which parts of memory are currently being used
          and by whom
        - deciding which processes are to be loaded into memory when memory
          becomes available
        - allocating and deallocating memory space as needed

3.3:  What are the three major activities of an operating system in regards to
      secondary-storage management?

- free space management
- storage allocation
- disk scheduling

3.4:  What are the five major activities with regards to file management?

- creating and deleting files
- creating and deleting directories
- supporting primitives for manipulating files and directories
- mapping files onto secondary storage
- backing up files on stable (nonvolatile) storage media

3.6:  List five services provided by an operating system.  Explain how each
      provides convenience to the users.  In what cases would it be impossible
      for user-level programs to provide these services? Explain.

1. program execution - the operating system will schedule on behalf of
                               the user.  This service could not be handled by
                               the user because you need access to the hardware.
2. I/O operations - This makes it easy for users to access I/O streams:
                            the user does not need to know the physical
                            layout of data in the machine.  If no interface
                            were provided, the user could not do this on
                            their own.
3. file-system manipulation - This means the user does not need to
                                      worry about accessing and updating the
                                      file system table.  Such access is best
                                      handled by the operating system because
                                      of this complexity.
4. communications - in the case of memory mapping this is extremely
                           beneficial for the OS to handle access and control
                           to memory regions.  The user could not in this case
                           access such a system to share the map.
5. error detection - If there is an error at one of the lower levels,
                             the user is notified so that they can take action
                             (if there is no memory left on the heap, for
                             instance).  The user could not do this alone,
                             because detecting such errors requires information
                             held by the hardware and the kernel.

3.11:  What is the main advantage to the layered approach to system design?

The main advantage to the layered approach is modularity.

3.12:  What is the main advantage of the micro-kernel approach to system design?

Because the system is modular it is very easy to expand and extend the
        OS.  Security and reliability are also huge advantages since most
        services are running as user, rather than kernel, processes.

--------------------------------------------------------------------------------
Chapter 4
--------------------------------------------------------------------------------
4.1:  Palm OS provided no means of concurrent processing. Discuss three major
      complications that concurrent processing adds to an operating system.

A method of time sharing must be implemented to allow each of several
processes to have access to the system. This involves preempting
processes that do not voluntarily give up the CPU (by making a system
call, for instance) and making the kernel reentrant, so that more than
one process may be executing kernel code concurrently.

Processes and system resources must be protected from one another. Any
given process must be limited in the amount of memory it can use and
the operations it can perform on devices such as disks.

Care must be taken in the kernel to prevent deadlocks between
processes, so that processes are not left waiting forever for each
other's allocated resources.

4.5:  What are the benefits and detriments of each of the following?
      Consider both the systems and the programmers' levels.
      a. Symmetric and asymmetric communication
      b. Automatic and explicit buffering
      c. Send by copy and send by reference
      d. Fixed-sized and variable-sized messages

a) Symmetric direct communication is a pain since both sides need the
  name of the other process.  This makes it hard to build a server.
b) Automatic makes programming easier but is a harder system to build.
c) Send by copy is better for network generalization and
  synchronization issues.  Send by reference is more efficient for big
  data structures but harder to code because of the shared memory
  implications.
d) Variable sized makes programming easier but is a harder system to
  build.

--------------------------------------------------------------------------------
Chapter 5
--------------------------------------------------------------------------------
5.1:  Provide two programming examples of multi-threading that improve
      performance over a single-threaded solution.

Programs that have high parallelism - multi-threaded kernels on
        multiprocessors, parallel scientific computations, etc.

Programs that share lots of resources between different internal entities
        - web browsers, web servers, database access, on-line multi-user games.

Programs that are easier to program/structure/debug in a multi-threading
        model - network servers, GUI systems

5.2:  Provide two programming examples of multi-threading that do not improve
      performance over a single-threaded solution.

Programs that require sequential processing - Shell programs, printer
        driver.

Simple Programs - Hello world, embedded programs running on simple
        hardware/chips.

5.7:  Assume an operating system maps user-level threads to the kernel using
      the many-to-many model where the mapping is done through LWPs.
      Furthermore, the system allows the developers to create real-time threads.
      Is it necessary to bind a real-time thread to an LWP? Explain.

You shouldn't have a mix of general threads and a real-time
thread all bound to a single LWP.  The general threads may make a
blocking system call, causing the LWP to block and possibly miss a
timing guarantee.  This can be avoided with the many-to-many model,
since there can be many LWPs associated with the process.

--------------------------------------------------------------------------------
Chapter 6
--------------------------------------------------------------------------------
6.2:  Define the difference between preemptive and non-preemptive scheduling.

In non-preemptive scheduling, once a process has been allocated the
        CPU, the process keeps the CPU until it releases the CPU either by
        terminating or by switching to the waiting state.

In preemptive scheduling, CPU is switched from an active process to a
        process in the ready queue as a result of an interrupt or end of time
        quantum.

6.3:  Consider the following set of processes, with the length of the
      CPU-burst time given in milliseconds:

      Process          Burst Time              Priority
        P1                  10                    3
        P2                   1                    1
        P3                   2                    3
        P4                   1                    4
        P5                   5                    2

    The processes are assumed to have arrived in the order P1, P2, P3,
    P4, P5, all at time 0.

     a. Draw four Gantt charts illustrating the execution of these
         processes using FCFS, SJF, a non-preemptive priority (a
         smaller priority number implies a higher priority), and RR
         (quantum = 1) scheduling.

     b. What is the turnaround time of each process for each of the
         scheduling algorithms in part a?

     c. What is the waiting time of each process for each of the
         scheduling algorithms in part a?

     d. Which of the schedules in part a results in the minimal
         average waiting time (over all processes)?

    Answer:
     a. The four Gantt charts are

     |       1       |2|3 |4|       5       |  FCFS

     |1|2|3|4|5|1|3|5|1|5|1|5|1|5|     1    |  RR

     |2|4|3 |    5   |           1          |  SJF

     |2|    5    |           1         | 3|4|  Priority

     b. Turnaround time

          FCFS            RR          SJF            Priority
P1          10             19           19                16
P2          11              2            1                 1
P3          13              7            4                18
P4          14              4            2                19
P5          19             14            9                 6

     c. Waiting time (turnaround time minus burst time)


           FCFS           RR           SJF          Priority
P1            0             9            9               6
P2           10             1            0               0
P3           11             5            2              16
P4           13             3            1              18
P5           14             9            4               1

     d. Shortest Job First
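
These figures can be double-checked mechanically.  The following is a rough
Java sketch (class and variable names are my own, not from the text) that
replays the round-robin schedule with quantum = 1 and prints each process's
turnaround time; FCFS, SJF, and priority can be checked the same way by
running the processes to completion in the corresponding order.

import java.util.ArrayDeque;
import java.util.Queue;

public class RRCheck {
    public static void main(String[] args) {
        String[] name   = {"P1", "P2", "P3", "P4", "P5"};
        int[] remaining = {10, 1, 2, 1, 5};          // burst times, all arrive at t = 0
        int[] finish    = new int[5];

        Queue<Integer> ready = new ArrayDeque<>();
        for (int i = 0; i < 5; i++) ready.add(i);    // arrival order P1..P5

        int time = 0;
        while (!ready.isEmpty()) {
            int p = ready.remove();
            remaining[p]--;                          // run for one quantum
            time++;
            if (remaining[p] == 0) finish[p] = time; // turnaround = finish - arrival (0)
            else ready.add(p);                       // unfinished: back of the ready queue
        }
        for (int i = 0; i < 5; i++)
            System.out.println(name[i] + " turnaround = " + finish[i]);
    }
}

It prints 19, 2, 7, 4, and 14, matching the RR column in the table above.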

6.4:  Suppose that the following processes arrive for execution at the
      times indicated. Each process will run the listed amount of
      time. In answering the questions, use non-preemptive scheduling and
      base all decisions on the information you have at the time the
      decision must be made.

        Process            Arrival Time            Burst Time
          P1                   0.0                    8
          P2                   0.4                    4
          P3                   1.0                    1

      a. What is the average turnaround time for these processes with
         the FCFS scheduling algorithm?

      b. What is the average turnaround time for these processes with
         the SJF scheduling algorithm?

      c. The SJF algorithm is supposed to improve performance, but
         notice that we chose to run process P1 at time 0 because we
         did not know that two shorter processes would arrive
         soon. Compute what the average turnaround time will be if the
         CPU is left idle for the first 1 unit and then SJF scheduling
         is used. Remember that processes P1 and P2 are waiting during
         this idle time, so their waiting time may increase. This
         algorithm could be known as future-knowledge scheduling.

   Answer:
      a. 10.53
      b. 9.53
      c. 6.86

   Remember that turnaround time is finishing time minus arrival time,
   so you have to subtract the arrival times to compute the turnaround
   times. FCFS is 11 if you forget to subtract arrival time.
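
   The three averages can be reproduced with a small calculation.  Below is a
   rough Java sketch (my own illustration; the class and method names are
   invented for this note) that replays each schedule and prints the average
   turnaround time.

public class TurnaroundCheck {
    // arrival[] and burst[] hold the values from the problem statement;
    // order[] is the sequence in which the jobs run, start is when the CPU begins.
    static double avgTurnaround(double[] arrival, int[] burst, int[] order, double start) {
        double time = start, total = 0;
        for (int p : order) {
            if (time < arrival[p]) time = arrival[p];  // CPU idles until the job arrives
            time += burst[p];                          // non-preemptive: run to completion
            total += time - arrival[p];                // turnaround = finish - arrival
        }
        return total / order.length;
    }

    public static void main(String[] args) {
        double[] arrival = {0.0, 0.4, 1.0};
        int[] burst      = {8, 4, 1};
        System.out.printf("FCFS   : %.2f%n", avgTurnaround(arrival, burst, new int[]{0, 1, 2}, 0));
        System.out.printf("SJF    : %.2f%n", avgTurnaround(arrival, burst, new int[]{0, 2, 1}, 0));
        // "Future knowledge": leave the CPU idle until t = 1, then run shortest job first.
        System.out.printf("Future : %.2f%n", avgTurnaround(arrival, burst, new int[]{2, 1, 0}, 1));
    }
}

   It prints 10.53, 9.53, and 6.87 (the 6.86 above is the same value truncated
   rather than rounded).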

6.10:  Explain the differences in the degree to which the following
       scheduling algorithms discriminate in favor of short processes:
         a. FCFS
         b. RR
         c. Multilevel feedback queues

     Answer:
       a. FCFS--discriminates against short jobs since any short jobs
          arriving after long jobs will have a longer waiting time.
       b. RR--treats all jobs equally (giving them equal bursts of CPU
          time) so short jobs will be able to leave the system faster
          since they will finish first.
       c. Multilevel feedback queues--work similar to the RR
          algorithm--they discriminate favorably toward short jobs.

--------------------------------------------------------------------------------
Chapter 7
--------------------------------------------------------------------------------
7.1:  What is the meaning of the term busy waiting? What other kinds of
      waiting are there in an operating system? Can busy waiting be
      avoided altogether? Explain your answer.

Busy waiting is waiting without giving up the CPU.  A preferred
type of waiting is to wait on a non-running queue.  The techniques
in the book will still give some busy waiting on the critical
sections of a semaphore.  It might be possible to avoid busy
waiting by changing the critical section code of 7.2 and 7.3 so
that it blocks processes.
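
To make the contrast concrete, here is a minimal Java sketch (my own
illustration, not code from the book): the first lock busy-waits, spinning and
consuming CPU until the lock becomes free, while the second puts the caller to
sleep on the object's wait queue so the CPU can run something else.

import java.util.concurrent.atomic.AtomicBoolean;

class SpinLock {                         // busy waiting: the thread keeps the CPU
    private final AtomicBoolean held = new AtomicBoolean(false);
    void lock()   { while (!held.compareAndSet(false, true)) { /* spin */ } }
    void unlock() { held.set(false); }
}

class BlockingLock {                     // waiting on a queue: the thread gives up the CPU
    private boolean held = false;
    synchronized void lock() throws InterruptedException {
        while (held) wait();             // blocked, not burning cycles
        held = true;
    }
    synchronized void unlock() {
        held = false;
        notify();                        // wake one waiter
    }
}

public class BusyVsBlocking {
    public static void main(String[] args) throws InterruptedException {
        SpinLock s = new SpinLock();
        s.lock(); s.unlock();
        BlockingLock b = new BlockingLock();
        b.lock(); b.unlock();
        System.out.println("both locks acquired and released");
    }
}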

7.7:  The wait() statement in all Java program examples was part of a while
      loop. Explain why you would always need to use a while statement when
      using wait() and why you would never use an if statement.

You do this to guard against spurious wake-ups.  That is, the caller of
        wait() has no guarantee that, when it is woken up, the condition it
        went to sleep for actually holds.  A while loop makes sure the caller
        checks the condition again and decides whether it should go back onto
        the wait queue.  If this were just an if() statement, the awakened code
        would proceed even if the condition were not safe to do so.
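
A minimal Java sketch of the idiom (my own example; the Slot and WaitWhileDemo
names are invented here): both methods re-test their condition in a while loop
around wait(), so a spurious or stale wake-up simply sends the thread back to
sleep instead of letting it proceed on a condition that no longer holds.

class Slot {
    private Object item = null;

    synchronized void put(Object x) throws InterruptedException {
        while (item != null)             // re-check the condition after every wake-up;
            wait();                      // with a plain if, a spurious or stale wake-up
        item = x;                        // could let us overwrite an unconsumed item
        notifyAll();
    }

    synchronized Object take() throws InterruptedException {
        while (item == null)             // same guard on the consumer side
            wait();
        Object x = item;
        item = null;
        notifyAll();
        return x;
    }
}

public class WaitWhileDemo {
    public static void main(String[] args) throws InterruptedException {
        Slot slot = new Slot();
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("took: " + slot.take());
            } catch (InterruptedException ignored) { }
        });
        consumer.start();
        slot.put("hello");
        consumer.join();
    }
}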


--------------------------------------------------------------------------------
-                 ANSWERS TO PROBLEMS FROM CHAPTERS 8-14                       -
--------------------------------------------------------------------------------
8.2 Is it possible to have a deadlock involving only one process? Explain your
answer.

Answer:
    Recall that there are four necessary conditions for deadlock.
Number 4 is circular-wait. Deadlock with one process is not possible, because it
is not possible to have circular wait with only one process, thus failing a
necessary condition. There is no second process to form a circle with the first
one. For the same reason, it is not possible to have a deadlock involving only
one resource. This follows directly from the hold-and-wait condition.
--------------------------------------------------------------------------------
8.3 Consider the traffic deadlock depicted in the Figure 8.8.
 A. Show that the four necessary conditions for deadlock indeed hold in this
    example.
 B. State a simple rule that will avoid deadlocks in this system.

Answer:

    A. The four necessary conditions for deadlock hold in this example for the
following reasons
   (i) Mutual Exclusion : Each of the vehicles present in the streets holds a
non-sharable resource: the part of the road it occupies, which it cannot share
with the other vehicles.
   (ii) Hold and Wait :  Each of the vehicles holds the space resource it
occupies and is waiting for the space in front of it to be freed by other
waiting vehicles.
   (iii) No Preemption :  There is no possibility of preemption as none of the
vehicles can give up their resource.  In this situation preemption would have to
take the form of a vehicle pulling into a parking lot, or a crane reaching down
and lifting a vehicle off the road.
   (iv) Circular Wait : Circular wait is present because each vehicle is waiting
for the space in front of it, and some vehicles occupy spaces where two vehicles
wait on them. It is thus possible to trace a cycle of waiting cars.  This is the
weakest assertion in the set, though, and is clearly untrue out at the edges
somewhere, since some car can clearly move someplace in the city.  If you have
ever experienced grid-lock, though, you know that this is small comfort, and that
a rule to avoid even "local" deadlock is extremely desirable.
    B. The simple rule that could be used to avoid traffic deadlocks in such a
system is that intersections should always remain clear as lights change.  In
this way, the resource of space in the intersection is freed for use at
periodic intervals (light changes).
--------------------------------------------------------------------------------
8.10    Consider a computer system that runs 5,000 jobs per month with no
deadlock prevention or deadlock avoidance scheme. Deadlocks occur about twice
per month, and the operator must terminate and rerun about 10 jobs per deadlock.
Each job is worth about $2 (in CPU time), and the jobs terminated tend to be
about half done when they are aborted. A systems programmer has estimated that
a deadlock-avoidance algorithm (like the Banker's algorithm) could be
installed in the system with an increase in the average execution time per job
of about 10 percent. Since the machine currently has 30-percent idle time, all
5,000 jobs per month could still be run, although turnaround time would increase
by about 20 percent on average.
 A. What are the arguments for installing the deadlock-avoidance algorithm?
 B. What are the arguments against installing the deadlock-avoidance algorithm?

Answer:
    A. The arguments in favor of installing the deadlock-avoidance algorithm
are: it avoids deadlocks and the costs of reruns; it avoids the waste of
resources and time spent running the aborted processes all over again; and it
increases useful utilization of the resources, since the system's idle time is
reduced.
    B. The arguments against installing the deadlock-avoidance algorithm are:
it increases total execution time (and therefore total cost), since the
execution time of every process increases by about 10%; processes take longer
to complete, because turnaround time increases by about 20%; and it introduces
an overhead on the execution of every process.
    Deciding between the arguments in favor and against deadlock control
requires an examination of the costs involved. In other words, it depends on
the details of the situation. The problem states that the computing load
represents about $10K per month in computing time, and that about 20 jobs
having used about $1 each in CPU time are terminated per month (this
information is easily derived from the facts stated in the problem). That means
that not controlling deadlock costs about $20 per month. The problem also
states that controlling deadlock introduces an overhead costing about 10% on
every job, or about $1,000 per month. The proposal to control deadlock thus
proposes to spend about $1,000 (overhead) to save about $20 (deadlock loss).
--------------------------------------------------------------------------------
9.1 Name two differences between logical and physical addresses.

Answer:
    Logical address is an address seen by the CPU while a physical
address is seen by the memory.  A physical address is limited to the amount
of installed memory while a logical address is limited by the address size of
the processor.
--------------------------------------------------------------------------------
9.2 Explain the difference between internal and external fragmentation.

    Answer: Internal fragmentation is the area inside a region or a page that is
not used by the job occupying that region or page. This space is unavailable for
use by the system until that job is finished and the page or region is released.
External fragmentation is unused space outside any allocated region: enough
total free memory may exist to satisfy a request, but it is broken into pieces
that are too small (or not contiguous) to be usable.
--------------------------------------------------------------------------------
9.3 Describe the following allocation algorithms:
(A) First-fit: search the list of available memory and allocate the first block
    that is big enough.
(B) Best-fit: search the entire list of available memory and allocate the
    smallest block that is big enough.
(C) Worst-fit: search the entire list of available memory and allocate the
    largest block.
--------------------------------------------------------------------------------
9.5 Given memory partitions of 100K, 500K, 200K, 300K, and 600K (in
    order), how would each of the First-fit, Best-fit, and Worst-fit
    algorithms place processes of 212K, 417K, 112K, and 426K (in
    order)? Which algorithm makes the most efficient use of memory?

    Answer:
      a. First-fit:
         212K is put in 500K partition
         417K is put in 600K partition
         112K is put in 288K partition (new partition 288K = 500K - 212K)
         426K must wait
      b. Best-fit:
         212K is put in 300K partition
         417K is put in 500K partition
         112K is put in 200K partition
         426K is put in 600K partition
      c. Worst-fit:
         212K is put in 600K partition
         417K is put in 500K partition
         112K is put in 388K partition
         426K must wait
 In this example, Best-fit turns out to be the best.
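
A small sketch reproduces the first-fit placements above.  This is my own
Java illustration (names invented); best-fit and worst-fit differ only in
scanning the whole list for the smallest or largest hole that is big enough,
instead of stopping at the first one.

public class FirstFit {
    public static void main(String[] args) {
        int[] free = {100, 500, 200, 300, 600};  // remaining space in each partition (K)
        int[] jobs = {212, 417, 112, 426};

        for (int job : jobs) {
            int slot = -1;
            for (int i = 0; i < free.length; i++) {
                if (free[i] >= job) { slot = i; break; }   // first hole that is big enough
            }
            if (slot < 0) {
                System.out.println(job + "K must wait");
            } else {
                System.out.println(job + "K placed in the " + free[slot] + "K hole");
                free[slot] -= job;                         // the leftover becomes a smaller hole
            }
        }
    }
}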
--------------------------------------------------------------------------------
9.8 Consider a logical address space of eight pages of 1024 words each,
mapped onto a physical memory of 32 frames.
       A. How many bits are there in the logical address?
       B. How many bits are there in the physical address?

    Answer:
       A. Logical address: 13 bits (8 pages x 1024 words = 2^3 x 2^10 = 2^13 addressable words)
       B. Physical address: 15 bits (32 frames x 1024 words = 2^5 x 2^10 = 2^15 words)
--------------------------------------------------------------------------------
9.16 Consider the following segment table:
       Segment    Base    Length
             0     219       600
             1    2300        14
             2      90       100
             3    1327       580
             4    1952        96

  What are the physical addresses for the following logical addresses?
  a. 0,430 b. 1,10 c. 2,500 d. 3,400 e. 4,112

  Answer:

  A. 219 + 430 = 649
  B. 2300 + 10 = 2310
  C. illegal reference (offset 500 exceeds segment length 100), trap to operating system
  D. 1327 + 400 = 1727
  E. illegal reference (offset 112 exceeds segment length 96), trap to operating system
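
The translation rule (physical = base + offset, trap when the offset reaches
the segment length) can be written out directly.  A minimal Java sketch, my
own illustration using the table above:

public class SegmentTranslate {
    static final int[] BASE   = {219, 2300, 90, 1327, 1952};
    static final int[] LENGTH = {600, 14, 100, 580, 96};

    static String translate(int segment, int offset) {
        if (offset >= LENGTH[segment])                    // limit check done by the hardware
            return "trap: offset " + offset + " beyond segment " + segment;
        return String.valueOf(BASE[segment] + offset);
    }

    public static void main(String[] args) {
        int[][] refs = {{0, 430}, {1, 10}, {2, 500}, {3, 400}, {4, 112}};
        for (int[] r : refs)
            System.out.println("(" + r[0] + "," + r[1] + ") -> " + translate(r[0], r[1]));
    }
}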
--------------------------------------------------------------------------------
10.2  Assume that you have a page-reference string for a process with m
frames (initially all empty). The page-reference string has length p; n distinct
page numbers occur in it. Answer these questions for any page-replacement
algorithm:

 A. What is a lower bound on the number of page faults?
 B. What is an upper bound on the number of page faults?

Answer:
  A. Lower bound: n. The reference string references n distinct pages, and the
best that any page-replacement algorithm can do is to fault only on the first
reference to each distinct page.
  B. Upper bound: p. The worst possible situation is one where the number of
frames is so small, and the page-replacement algorithm so poor, that every
reference in the string causes a page fault.
--------------------------------------------------------------------------------
10.4 Which of the following programming techniques and structures are "good"
     for a demand-paged environment? Which are "not good"? Explain your answers.
       a. Stack
       b. Hashed symbol table
       c. Sequential search
       d. Binary search
       e. Pure code
       f. Vector operations
       g. Indirection

    Answer:
       a. Stack--good; references are localized to the top of the stack.
       b. Hashed symbol table--not good; hashing scatters references across the table.
       c. Sequential search--good; references move linearly through the data.
       d. Binary search--not good; references jump around the array.
       e. Pure code--good; read-only code exhibits locality and can be shared.
       f. Vector operations--good; elements are accessed sequentially.
       g. Indirection--not good; each access may touch an unrelated page.
--------------------------------------------------------------------------------
10.9 Consider a demand-paging system with the following time-measured
     utilizations:
       CPU utilization           20%
       Paging disk             97.7%
       Other I/O devices          5%
--------------------------------------------------------------------------------
11.5 What are the advantages and disadvantages of recording the name
     of the creating program with the file's attributes (as is done
     in the Macintosh Operating System)?

     Answer: By recording the name of the creating program, the
     operating system is able to implement features (such as automatic
     program invocation when the file is accessed) based on this
     information. It does add overhead in the operating system and
     require space in the file descriptor, however.
--------------------------------------------------------------------------------
     11.5  What are the advantages and disadvantages of recording the name of
the program that creates a file with the file's attributes (as is done in the
Macintosh operating system)?

Answer:
    Advantages: A file tagged with a program name indirectly points to the
default program used to open/execute the file. First, this makes the file's
dependency on an application program explicit. This visibility is useful to the
user (more so than extensions are in DOS and Windows) in determining the type
associated with the file. Furthermore, it is a lower-risk way of determining
which program should open a file: an unrelated program cannot accidentally
corrupt the internal format of a file in trying to open it (unless forced by
the user), since each file names the only program(s) that have access to it.
Thus this makes the system more secure.
    Disadvantages: This strategy gives rise to long file names. Systems that
deal with large numbers of files waste a considerable amount of space storing
these names. It also reduces the portability of the file: if a file is moved
from one environment to another, it becomes useless if that particular program
is unavailable to open it, unless the file name is modified. A subtle error
creeping into the file name may also render the file completely unusable.
Also, some files are used by multiple applications, so a one-to-one mapping
with an embedded program name is much less useful in reality.
--------------------------------------------------------------------------------
11.7 Explain the purpose of the open and close operations.

Answer:
    The open operation informs the system that the named file is about to
become active.  The close operation informs the system that the named file is
no longer in active use by the user who issued the close operation.
--------------------------------------------------------------------------------
11.9  Give an example of an application in
which data in a file should be accessed in the following order:

A.  Sequentially: A movie player that plays a movie file sequentially.
B.  Randomly: A movie player that allows user to start playing movie at random
              locations of the file.
--------------------------------------------------------------------------------
11.13 Researchers have suggested that, instead of having an access list
associated with each file (specifying which users can access the file, and how),
we should have a user control list associated with each user (specifying which
files a user can access, and how). Discuss the relative merits of these two
schemes.

Answer:
  - File control list. Since the access control information is concentrated in
    one single place, it is easier to change access control information and
    this requires less space.
  - User control list. This requires less overhead when opening a file.
--------------------------------------------------------------------------------
12.2 Consider a system where free space is kept in a free-space list.

A. Suppose that the pointer to the free-space list is lost. Can the
system reconstruct the free-space list? Explain your answer.
B. Suggest a scheme to ensure that the pointer is never lost as a
result of memory failure.

Answer:
    A. In order to reconstruct the free list, it would be necessary to
perform "garbage collection." This would entail searching the entire
directory structure to determine which pages are already allocated to
jobs. Those remaining unallocated pages could be relinked as the
free-space list.
    B. The free-space list pointer could be stored on the disk, perhaps in
several places.
--------------------------------------------------------------------------------
12.4 Why must the bit map for file allocation be kept on mass storage, rather
than in main memory?

Answer:
    In case of system crash (memory failure) the free-space list would not
be lost as it would be if the bit map had been stored in main memory.
--------------------------------------------------------------------------------
12.5 Consider a system that supports the strategies of contiguous, linked, and
indexed allocation. What criteria should be used in deciding which strategy is
best utilized for a particular file?

Answer:
    Contiguous -- if the file is usually accessed sequentially and is
                  relatively small.
    Linked     -- if the file is large and usually accessed sequentially.
    Indexed    -- if the file is large and usually accessed randomly.
--------------------------------------------------------------------------------
13.1 State three advantages of placing functionality in a device controller,
rather than in the kernel. State three disadvantages.

Answer:
    Three advantages: Bugs are less likely to cause an operating system crash.
Performance can be improved by utilizing dedicated hardware and hard-coded
algorithms. The kernel is simplified by moving algorithms out of it.
    Three disadvantages: Bugs are harder to fix - a new firmware version or new
hardware is needed. Improving the algorithms likewise requires a hardware
update, rather than just a kernel or device-driver update. Embedded algorithms
could conflict with the application's use of the device, causing decreased
performance.
--------------------------------------------------------------------------------
 13.2 Consider the following I/O scenarios on a single-user PC.

  A. A mouse used with a graphical user interface
  B. A tape drive on a multitasking operating system (assume no device
     preallocation is available)
  C. A disk drive containing user files
  D. A graphics card with direct bus connection, accessible through
     memory-mapped I/O

For each of these I/O scenarios, would you design the operating system
to use buffering, spooling, caching, or a combination? Would you use
polled I/O, or interrupt-driven I/O?  Give reasons for your choices.

Answer:

    A. A mouse used with a graphical user interface.  Buffering may be
needed to record mouse movement during times when higher-priority
operations are taking place. Spooling and caching are
inappropriate. Interrupt-driven I/O is most appropriate.

    B. A tape drive on a multitasking operating system (assume no device
preallocation is available).  Buffering may be needed to manage
throughput differences between the tape drive and the source or
destination of the I/O. Caching can be used to hold copies of data
that reside on the tape, for faster access. Spooling could be used to
stage data to the device when multiple users desire to read from or
write to it. Interrupt-driven I/O is likely to allow the best
performance.

    C. A disk drive containing user files.  Buffering can be used to hold
data while in transit from user space to the disk, and vice
versa. Caching can be used to hold disk-resident data for improved
performance. Spooling is not necessary because disks are
shared-access devices. Interrupt-driven I/O is best for devices such
as disks that transfer data at slow rates.

    D. A graphics card with direct bus connection, accessible through
memory-mapped I/O.  Buffering may be needed to control multiple access
and for performance (double-buffering can be used to hold the next
screen image while displaying the current one). Caching and spooling
are not necessary due to the fast and shared-access natures of the
device. Polling and interrupts are useful only for input and for I/O
completion detection, neither of which is needed for a memory-mapped
device.
--------------------------------------------------------------------------------
13.5 Why might a system use interrupt-driven I/O to manage a single
serial port, but polling I/O to manage a front-end processor, such as
a terminal concentrator?

Answer:
    Polling can be more efficient than interrupt-driven I/O. This
is the case when the I/O is frequent and of short duration. Even
though a single serial port will perform I/O relatively infrequently
and should thus use interrupts, a collection of serial ports such as
those in a terminal concentrator can produce a lot of short I/O
operations, and interrupting for each one could create a heavy load on
the system. A well-timed polling loop could alleviate that load
without wasting many resources through looping with no I/O needed.
--------------------------------------------------------------------------------
14.1 None of the disk-scheduling disciplines, except FCFS, is truly fair
     (starvation may occur).
     A. Explain why this assertion is true.
     B. Describe a way to modify algorithms such as SCAN to ensure fairness.
     C. Explain why fairness is an important goal in a time-sharing system.
     D. Give three or more examples of circumstances in which it is important
        that the operating system be unfair in serving I/O requests.

Answer:
   A. New requests for the track over which the head currently resides
      can theoretically arrive as quickly as these requests are being
      serviced.
   B. All requests older than some predetermined age could be "forced"
      to the top of the queue, and an associated bit for each could be
      set to indicate that no new request could be moved ahead of
      these requests. For SSTF, the rest of the queue would have to be
      reorganized with respect to the last of these "old" requests.
   C. To prevent unusually long response times.
   D. Paging and swapping should take priority over user requests. It
      may be desirable for other kernel-initiated I/O, such as the
      writing of file system metadata, to take precedence over user
      I/O. If the kernel supports real-time process priorities, the
      I/O requests of those processes should be favored.
--------------------------------------------------------------------------------
14.2 Suppose that a disk drive has 5000 cylinders, numbered 0 to
     4999. The drive is currently serving a request at cylinder 143,
     and the previous request was at cylinder 125. The queue of
     pending requests, in FIFO order, is

       86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130

    Starting from the current head position, what is the total
    distance (in cylinders) that the disk arm moves to satisfy all the
    pending requests, for each of the following disk-scheduling
    algorithms?
       A. FCFS
       B. SSTF
       C. SCAN
       D. LOOK
       E. C-SCAN
       F. C-LOOK

    Answer:
       A. The FCFS schedule is 143, 86, 1470, 913, 1774, 948, 1509,
          1022, 1750, 130. The total seek distance is 7081.
       B. The SSTF schedule is 143, 130, 86, 913, 948, 1022, 1470,
          1509, 1750, 1774. The total seek distance is 1745.
       C. The SCAN schedule is 143, 913, 948, 1022, 1470, 1509, 1750,
          1774, 4999, 130, 86. The total seek distance is 9769.
       D. The LOOK schedule is 143, 913, 948, 1022, 1470, 1509, 1750,
          1774, 130, 86. The total seek distance is 3319.
       E. The C-SCAN schedule is 143, 913, 948, 1022, 1470, 1509,
          1750, 1774, 4999, 0, 86, 130. The total seek distance is
          9985.
       F. The C-LOOK schedule is 143, 913, 948, 1022, 1470, 1509,
          1750, 1774, 86, 130.  The total seek distance is 3363.
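
    The totals above are easy to verify by summing the head movement over each
    visiting order.  A rough Java sketch (my own; class and method names are
    invented) that checks the FCFS and SSTF totals:

import java.util.ArrayList;
import java.util.List;

public class SeekDistance {
    static int totalMovement(int start, List<Integer> order) {
        int pos = start, total = 0;
        for (int cyl : order) {
            total += Math.abs(cyl - pos);                 // head movement for this request
            pos = cyl;
        }
        return total;
    }

    public static void main(String[] args) {
        int head = 143;
        List<Integer> fcfs = List.of(86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130);

        // SSTF: repeatedly pick the pending request closest to the current head position.
        List<Integer> pending = new ArrayList<>(fcfs);
        List<Integer> sstf = new ArrayList<>();
        int pos = head;
        while (!pending.isEmpty()) {
            int best = 0;
            for (int i = 1; i < pending.size(); i++)
                if (Math.abs(pending.get(i) - pos) < Math.abs(pending.get(best) - pos))
                    best = i;
            pos = pending.remove(best);
            sstf.add(pos);
        }

        System.out.println("FCFS total = " + totalMovement(head, fcfs));  // 7081
        System.out.println("SSTF total = " + totalMovement(head, sstf));  // 1745
    }
}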
--------------------------------------------------------------------------------
14.7 Compare the performance of C-SCAN and SCAN scheduling, assuming a uniform
distribution of requests. Consider the average response time (the time between
the arrival of a request and the completion of that request's service), the
variation in response time, and the effective bandwidth. How does performance
depend on the relative sizes of seek time and rotational latency?

Answer:
   There is no simple analytical argument to answer the first part of this
question. It would make a good small simulation experiment for the students.
The answer can be found in Figure 2 of Worthington et al. [1994]. (Worthington
et al. studied the LOOK algorithm, but similar results obtain for SCAN.)
Figure 2 in Worthington et al. shows that C-LOOK has an average response time
just a few percent higher than LOOK, but that C-LOOK has a significantly lower
variance in response time for medium and heavy workloads. The intuitive reason
for the difference in variance is that LOOK (and SCAN) tend to favor requests
near the middle cylinders, whereas the C-versions do not have this imbalance.
The intuitive reason for the slower response time of C-LOOK is the "circular"
seek from one end of the disk to the farthest request at the other end. This
seek satisfies no requests. It causes only a small performance degradation
because the square-root dependency of seek time on distance implies that a
long seek isn't terribly expensive by comparison with moderate-length seeks.
   For the second part of the question, we observe that these algorithms do
not schedule to improve rotational latency; therefore, as seek times decrease
relative to rotational latency, the performance differences between the
algorithms will decrease.
--------------------------------------------------------------------------------
14.9 Explain why SSTF scheduling tends to favor the middle cylinders of a disk
     over the innermost and outermost cylinders.

Answer:
    The SSTF algorithm is biased toward the middle cylinders in much the same
way the SCAN algorithm is.  Since, in the SSTF algorithm, the closest cylinder
is always chosen, all middle-cylinder references will be serviced on the way
to the end of the disk. I present the book's answer to this question, even
though I think it is more muddled... just FYI: The center of the disk is the
location having the smallest average distance to all other tracks. Thus the
disk head tends to move away from the edges of the disk. Here is another way
to think of it: the current location of the head divides the cylinders into two
groups. If the head is not in the center of the disk and a new request arrives,
the new request is more likely to be in the group that includes the center of
the disk; thus, the head is more likely to move in that direction.
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
NOTES FROM CHAPTER 8 Deadlock
--------------------------------------------------------------------------------
A set of processes is in a deadlock state when every process in the set is waiting for an event that can be caused only by another process in the set.

8.2.1 Necessary Conditions
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
1.   Mutual exclusion: At least one resource must be held in a nonsharable mode; that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
2.   Hold and wait:   A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
3.   No preemption: Resources cannot be preempted; that is, a resource can be released only voluntarily by the process holding it, after that process has completed its task.
4. Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, ..., Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

8.3   -   Methods for Handling Deadlocks
Principally, we can deal with the deadlock problem in one of three ways:
-  We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlock state.
-  We can allow the system to enter a deadlock state, detect it, and recover.
-  We can ignore the problem altogether and pretend that deadlocks never occur in the system.

To ensure that deadlocks never occur, the system can use either a deadlock-prevention or a deadlock-avoidance scheme. Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions (Section 8.2.1) cannot hold. These methods prevent deadlocks by constraining how requests for resources can be made.

Deadlock avoidance requires that the operating system be given in advance additional information concerning which resources a process will request and use during its lifetime. With this additional knowledge, we can decide for each request whether or not the process should wait. To decide whether the current request can be satisfied or must be delayed, the system must consider the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process.

If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then a deadlock situation may occur. In this environment, the system can provide an algorithm that examines the state of the system to determine whether a deadlock has occurred and an algorithm to recover from the deadlock (if a deadlock has indeed occurred).

Deadlock Prevention
As we noted in Section 8.2.1, for a deadlock to occur, each of the four necessary conditions must hold. By ensuring that at least one of these conditions cannot hold, we can prevent the occurrence of a deadlock. We elaborate on this approach by examining each of the four necessary conditions separately.

8.4.1 Mutual Exclusion
In general, however, we cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are intrinsically nonsharable.

8.4.2 Hold and Wait
To ensure that the hold-and-wait condition never occurs in the system, we must guarantee that, whenever a process requests a resource, it does not hold any other resources. One protocol that can be used requires each process to request and be allocated all its resources before it begins execution. We can implement this provision by requiring that system calls requesting resources for a process precede all other system calls.
An alternative protocol allows a process to request resources only when it has none. A process may request some resources and use them. Before it can request any additional resources, however, it must release all the resources that it is currently allocated.
Both these protocols have two main disadvantages. First, resource utilization may be low, since resources may be allocated but unused for a long period. In the example given, for instance, we can release the tape drive and disk file, and then again request the disk file and printer, only if we can be sure that our data will remain on the disk file. If we cannot be assured that they will, then we must request all resources at the beginning for both protocols. Second, starvation is possible. A process that needs several popular resources may have to wait indefinitely, because at least one of the resources that it needs is always allocated to some other process.

8.4.3  No Preemption
The third necessary condition is that there be no preemption of resources that have already been allocated. To ensure that this condition does not hold, we can use the following protocol. If a process is holding some resources and requests another resource that cannot be immediately allocated to it (that is, the process must wait), then all resources currently being held are preempted. In other words, these resources are implicitly released. The preempted resources are added to the list of resources for which the process is waiting. The process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.
Alternatively, if a process requests some resources, we first check whether they are available. If they are, we allocate them. If they are not, we check whether they are allocated to some other process that is waiting for additional resources. If so, we preempt the desired resources from the waiting process and allocate them to the requesting process. If the resources are neither available nor held by a waiting process, the requesting process must wait. While it is waiting, some of its resources may be preempted, but only if another process requests them. A process can be restarted only when it is allocated the new resources it is requesting and recovers any resources that were preempted while it was waiting.

8.4.4  Circular Wait
The fourth and final condition for deadlocks is the circular-wait condition. One way to ensure that this condition never holds is to impose a total ordering of all resource types and to require that each process requests resources in an increasing order of enumeration.
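
In code, this ordering rule usually appears as "always acquire locks in a fixed global order."  A minimal Java sketch, my own illustration rather than anything from the text:

import java.util.concurrent.locks.ReentrantLock;

public class OrderedLocking {
    static final ReentrantLock[] LOCKS = { new ReentrantLock(), new ReentrantLock() };

    // Every thread that needs both resources takes the lower-numbered lock first,
    // so a cycle of threads each holding one lock and waiting for the other cannot form.
    static void withBoth(int a, int b, Runnable critical) {
        int low = Math.min(a, b), high = Math.max(a, b);
        LOCKS[low].lock();
        try {
            LOCKS[high].lock();
            try {
                critical.run();
            } finally {
                LOCKS[high].unlock();
            }
        } finally {
            LOCKS[low].unlock();
        }
    }

    public static void main(String[] args) {
        withBoth(1, 0, () -> System.out.println("both resources held; no deadlock possible"));
    }
}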

8.5 Deadlock Avoidance                                              
Deadlock-prevention algorithms, as discussed in Section 8.4, prevent deadlocks by restraining how requests can be made. The restraints ensure that at least one of the necessary conditions for deadlock cannot occur and, hence, that deadlocks cannot occur. Possible side effects of preventing deadlocks by this method, however, are low device utilization and reduced system throughput.

An alternative method for avoiding deadlocks is to require additional information about how resources are to be requested. For example, in a system with one tape drive and one printer, we might be told that process P will request first the tape drive, and later the printer, before releasing both resources. Process Q, however, will request first the printer and then the tape drive. With this knowledge of the complete sequence of requests and releases for each process, we can decide for each request whether or not the process should wait in order to avoid a possible future deadlock. Each request requires that in making this decision the system consider the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process.
The various algorithms differ in the amount and type of information required. The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need. Given this a priori information, it is possible to construct an algorithm that ensures that the system will never enter a deadlocked state. Such an algorithm defines the deadlock-avoidance approach. A deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular-wait condition can never exist.
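
The safety check at the heart of such an avoidance algorithm (the Banker's algorithm) can be sketched roughly as follows.  This is my own simplified Java illustration, with a tiny made-up resource state, not the book's code.

public class SafetyCheck {
    // The state is safe if the processes can be ordered so that each one, in turn,
    // could be granted everything it may still need and then finish.
    static boolean isSafe(int[] available, int[][] allocation, int[][] need) {
        int n = allocation.length, m = available.length;
        int[] work = available.clone();
        boolean[] finished = new boolean[n];

        boolean progress = true;
        while (progress) {
            progress = false;
            for (int p = 0; p < n; p++) {
                if (finished[p]) continue;
                boolean canRun = true;
                for (int r = 0; r < m; r++)
                    if (need[p][r] > work[r]) { canRun = false; break; }
                if (canRun) {                         // assume p runs to completion and
                    for (int r = 0; r < m; r++)       // releases everything it holds
                        work[r] += allocation[p][r];
                    finished[p] = true;
                    progress = true;
                }
            }
        }
        for (boolean f : finished) if (!f) return false;
        return true;
    }

    public static void main(String[] args) {
        // One resource type with 2 instances free; P0 holds 1 and may need 1 more,
        // P1 holds 1 and may need 2 more.  (Numbers are made up for the example.)
        int[] available    = {2};
        int[][] allocation = {{1}, {1}};
        int[][] need       = {{1}, {2}};
        System.out.println(isSafe(available, allocation, need));   // prints true
    }
}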

8.6 Deadlock Detection
If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then a deadlock situation may occur. In this environment, the system must provide:
-  An algorithm that examines the state of the system to determine whether a deadlock has occurred                                              
-  An algorithm to recover from the deadlock

8.7  -   Recovery from Deadlock
When a detection algorithm determines that a deadlock exists, several alternatives are available. One possibility is to inform the operator that a deadlock has occurred and to let the operator deal with the deadlock manually. The other possibility is to let the system recover from the deadlock automatically. There are two options for breaking a deadlock. One is simply to abort one or more processes to break the circular wait. The other is to preempt some resources from one or more of the deadlocked processes.

8.7.1 Process Termination
To eliminate deadlocks by aborting a process, we use one of two methods. In both methods, the system reclaims all resources allocated to the terminated processes.

-  Abort all deadlocked processes: This method clearly will break the deadlock cycle, but at great expense; the deadlocked processes may have computed for a long time, and the results of these partial computations must be discarded and probably will have to be recomputed later.
-  Abort one process at a time until the deadlock cycle is eliminated: This method incurs considerable overhead, since, after each process is aborted, a deadlock-detection algorithm must be invoked to determine whether any processes are still deadlocked.

8.7.2 Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt some resources from processes and give these resources to other processes until the deadlock cycle is broken.

If preemption is required to deal with deadlocks, then three issues need to be addressed:                                                        
1.   Selecting a victim: Which resources and which processes are to be preempted? As in process termination, we must determine the order of preemption to minimize cost. Cost factors may include such parameters as the number of resources a deadlocked process is holding and the amount of time the process has thus far consumed during its execution.
2.   Rollback: If we preempt a resource from a process, what should be done with that process? Clearly, it cannot continue with its normal execution; it is missing some needed resource. We must roll back the process to some safe state and restart it from that state.
Since, in general, it is difficult to determine what a safe state is, the simplest solution is a total rollback: Abort the process and then restart it. Although it is more effective to roll back the process only as far as necessary to break the deadlock, this method requires the system to keep more information about the state of all running processes.
3. Starvation: How do we ensure that starvation will not occur? That is, how can we guarantee that resources will not always be preempted from the same process? In a system where victim selection is based primarily on cost factors, it may happen that the same process is always picked as a victim. As a result, this process never completes its designated task, a starvation situation that needs to be dealt with in any practical system. Clearly, we must ensure that a process can be picked as a victim only a (small) finite number of times. The most common solution is to include the number of rollbacks in the cost factor.

--------------------------------------------------------------------------------
NOTES FROM CHAPTER 9 Memory Management
--------------------------------------------------------------------------------
Classically, the binding of instructions and data to memory addresses can be done at any step along the way:
-   Compile time: If you know at compile time where the process will reside in memory, then absolute code can be generated. For example, if you know that a user process will reside starting at location R, then the generated compiler code will start at that location and extend up from there. If, at some later time, the starting location changes, then it will be necessary to recompile this code. The MS-DOS .COM-format programs are bound at compile time.
-  Load time: If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code. In this case, final binding is delayed until load time. If the starting address changes, we need only reload the user code to incorporate this changed value.
- Execution time: If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time. Special hardware must be available for this scheme to work, as will be discussed in Section 9.1.2. Most general-purpose operating systems use this method.
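
The load-time case above can be illustrated with a toy fixup loop: the compiler records which slots hold addresses relative to zero, and the loader patches them once the actual base address is known. All of the numbers here are made up:

    #include <stdio.h>

    int main(void)
    {
        /* "Relocatable code": operands that refer to offsets 0, 8, and 20
         * within the module, expressed relative to a base of zero. */
        unsigned code[]   = { 0, 8, 20 };
        unsigned fixups[] = { 0, 1, 2 };      /* which slots need relocation */
        unsigned nfix     = sizeof fixups / sizeof fixups[0];

        unsigned base = 0x4000;               /* where the loader placed the module */

        /* Load-time binding: patch each recorded slot once, at load time.
         * If the base ever changes, the module only needs to be reloaded,
         * not recompiled. */
        for (unsigned i = 0; i < nfix; i++)
            code[fixups[i]] += base;

        for (unsigned i = 0; i < 3; i++)
            printf("slot %u -> 0x%X\n", i, code[i]);
        return 0;
    }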

With dynamic linking, a stub is included in the image for each library-routine reference. The stub is a small piece of code that indicates how to locate the appropriate memory-resident library routine or how to load the library if the routine is not already present. When the stub is executed, it checks to see whether the needed routine is already in memory. If not, the program loads the routine into memory.
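
A user-space analogy for the stub can be written with the POSIX dlopen/dlsym interface, locating the routine only on first use (on older glibc, link with -ldl). This mimics the check-then-load behaviour but is not the dynamic linker's actual mechanism:

    #include <dlfcn.h>
    #include <stdio.h>

    /* Cached pointer: NULL until the routine has been located once. */
    static double (*cos_fn)(double);

    static double lazy_cos(double x)
    {
        if (!cos_fn) {                               /* "stub" path: first call only */
            void *lib = dlopen("libm.so.6", RTLD_LAZY);
            if (!lib) return 0.0;                    /* real code would report the error */
            cos_fn = (double (*)(double))dlsym(lib, "cos");
            if (!cos_fn) return 0.0;
        }
        return cos_fn(x);                            /* later calls go straight through */
    }

    int main(void)
    {
        printf("cos(0) = %f\n", lazy_cos(0.0));
        return 0;
    }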

9.1.5 Overlays
To enable a process to be larger than the amount of memory allocated to it, we can use overlays. The idea of overlays is to keep in memory only those instructions and data that are needed at any given time. When other instructions are needed, they are loaded into space occupied previously by instructions that are no longer needed.
The programmer, however, must design and program the overlay structure properly.  This task can be a major undertaking, requiring complete knowledge of the structure of the program, its code, and its data structures. Because the program is, by definition, large (small programs do not need to be overlaid), obtaining a sufficient understanding of the program may be difficult. For these reasons, the use of overlays is currently limited to microcomputer and other systems that have limited amounts of physical memory and that lack hardware support for more advanced techniques.

9.2   -   Swapping
A process must be in memory to be executed. A process, however, can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. For example, assume a multiprogramming environment with a round-robin CPU-scheduling algorithm. When a quantum expires, the memory manager will start to swap out the process that just finished and to swap another process into the memory space that has been freed (Figure 9.4). In the meantime, the CPU scheduler will allocate a time slice to some other process in memory.
A variant of this swapping policy is used for priority-based scheduling algorithms. If a higher-priority process arrives and wants service, the memory manager can swap out the lower-priority process and then load and execute the higher-priority process. When the higher-priority process finishes, the lower-priority process can be swapped back in and continued. This variant of swapping is sometimes called roll out, roll in.
Normally, a process that is swapped out will be swapped back into the same memory space that it occupied previously. This restriction is dictated by the method of address binding. If binding is done at assembly or load time, then the process cannot be easily moved to a different location. If execution-time binding is being used, however, then a process can be swapped into a different memory space, because the physical addresses are computed during execution time.
Whenever the CPU scheduler decides to execute a process, it calls the dispatcher. The dispatcher checks to see whether the next process in the queue is in memory. If it is not, and if there is no free memory region, the dispatcher swaps out a process currently in memory and swaps in the desired process. It then reloads registers and transfers control to the selected process.
The context-switch time in such a swapping system is fairly high. To get an idea of the context-switch time, let us assume that the user process is of size 1 MB and the backing store is a standard hard disk with a transfer rate of 5 MB per second. The actual transfer of the 1-MB process to or from main memory takes
1000 KB / 5000 KB per second = 1/5 second = 200 milliseconds.
Assuming that no seeks are necessary and an average latency of 8 milliseconds, the swap time is 208 milliseconds. Since we must both swap out and swap in, the total swap time is then about 416 milliseconds.
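
The same arithmetic written out; the 1-MB process size, 5-MB/s transfer rate, and 8-ms latency are the figures from the example above:

    #include <stdio.h>

    int main(void)
    {
        double size_kb    = 1000.0;   /* process size: 1 MB             */
        double rate_kb_s  = 5000.0;   /* transfer rate: 5 MB per second */
        double latency_ms = 8.0;      /* average rotational latency     */

        double transfer_ms = size_kb / rate_kb_s * 1000.0;   /* 200 ms        */
        double one_way_ms  = transfer_ms + latency_ms;       /* 208 ms        */
        double total_ms    = 2.0 * one_way_ms;               /* swap out + in */

        printf("one-way swap: %.0f ms, total: %.0f ms\n", one_way_ms, total_ms);
        return 0;
    }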

9.3   -   Contiguous-Memory Allocation
The main memory must accommodate both the operating system and the various user processes. We therefore need to allocate the parts of the main memory in the most efficient way possible. This section explains one common method, contiguous memory allocation.
We can place the operating system in either low memory or high memory; the major factor affecting this decision is the location of the interrupt vector. Since the interrupt vector is often in low memory, programmers usually place the operating system in low memory as well. Thus, in this text, we discuss only the situation where the operating system resides in low memory. The development of the other situation is similar.
We usually want several user processes to reside in memory at the same time. We therefore need to consider how to allocate available memory to the processes that are in the input queue waiting to be brought into memory. In contiguous-memory allocation, each process is contained in a single contiguous section of memory.

9.3.1 Memory Protection
Before discussing memory allocation further, we must discuss the issue of memory protection: protecting the operating system from user processes and protecting user processes from one another. We can provide this protection by using a relocation register, as discussed in Section 9.1.2, with a limit register, as discussed in Section 2.5.3. The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses (for example, relocation = 100040 and limit = 74600). With relocation and limit registers, each logical address must be less than the limit register; the MMU maps the logical address dynamically by adding the value in the relocation register. This mapped address is sent to memory (Figure 9.5).
When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers with the correct values as part of the context switch. Because every address generated by the CPU is checked against these registers, we can protect both the operating system and the other users' programs and data from being modified by this running process.
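
A sketch of that check-and-map step, using the relocation and limit values quoted above; the trap that real hardware would raise on an out-of-range address is modelled here as a simple return code:

    #include <stdbool.h>
    #include <stdio.h>

    #define RELOCATION 100040u   /* smallest physical address of the process */
    #define LIMIT       74600u   /* size of the logical address range        */

    /* Map a logical address to a physical one; false models the trap. */
    static bool mmu_map(unsigned logical, unsigned *physical)
    {
        if (logical >= LIMIT)
            return false;                  /* addressing error: trap to the OS */
        *physical = logical + RELOCATION;  /* dynamic relocation               */
        return true;
    }

    int main(void)
    {
        unsigned phys;
        if (mmu_map(12345, &phys))
            printf("logical 12345 -> physical %u\n", phys);
        if (!mmu_map(80000, &phys))
            printf("logical 80000 -> trap (beyond limit)\n");
        return 0;
    }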
For example, the operating system contains code and buffer space for device drivers. If a device driver (or other operating-system service) is not commonly used, we do not want to keep the code and data in memory, as we might be able to use that space for other purposes. Such code is sometimes called transient operating-system code; it comes and goes as needed. Thus, using this code changes the size of the operating system during program execution.

9.3.2 Memory Allocation                                  
Now we are ready to turn to memory allocation. One of the simplest methods for memory allocation is to divide memory into several fixed-sized partitions. Each partition may contain exactly one process. Thus, the degree of multiprogramming is bound by the number of partitions. In this multiple-partition method, when a partition is free, a process is selected from the input queue and is loaded into the free partition.
In the variable-partition scheme, the operating system keeps a table indicating which parts of memory are available and which are occupied. Initially, all memory is available for user processes and is considered as one large block of available memory, a hole. When a process arrives and needs memory, we search for a hole large enough for this process. If we find one, we allocate only as much memory as is needed, keeping the rest available to satisfy future requests.
This procedure is a particular instance of the general dynamic storage-allocation problem, which concerns how to satisfy a request of size n from a list of free holes. There are many solutions to this problem. The first-fit, best-fit, and worst-fit strategies are the ones most commonly used to select a free hole from the set of available holes.
-   First fit: Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or where the previous first-fit search ended. We can stop searching as soon as we find a free hole that is large enough.
-   Best fit: Allocate the smallest hole that is big enough. We must search the entire list, unless the list is ordered by size. This strategy produces the smallest leftover hole.
-   Worst fit: Allocate the largest hole. Again, we must search the entire list, unless it is sorted by size. This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.
Simulations have shown that both first fit and best fit are better than worst fit in terms of decreasing time and storage utilization. Neither first fit nor best fit is clearly better than the other in terms of storage utilization, but first fit is generally faster.
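
First fit and best fit can be sketched over a plain array of hole sizes; the hole list and the 212K request below are invented, and splitting the chosen hole into an allocated part and a leftover hole is left out:

    #include <stdio.h>

    #define NHOLES 5

    /* Return the index of the chosen hole, or -1 if no hole is big enough. */
    static int first_fit(const int holes[], int n, int request)
    {
        for (int i = 0; i < n; i++)
            if (holes[i] >= request)
                return i;                          /* stop at the first match */
        return -1;
    }

    static int best_fit(const int holes[], int n, int request)
    {
        int best = -1;
        for (int i = 0; i < n; i++)                /* must scan the whole list */
            if (holes[i] >= request && (best < 0 || holes[i] < holes[best]))
                best = i;
        return best;
    }

    int main(void)
    {
        int holes[NHOLES] = { 100, 500, 200, 300, 600 };  /* sizes in KB */
        printf("first fit for 212K: hole %d\n", first_fit(holes, NHOLES, 212));
        printf("best  fit for 212K: hole %d\n", best_fit(holes, NHOLES, 212));
        return 0;
    }

First fit takes the 500K hole (index 1) because it is the first one large enough; best fit keeps scanning and settles on the 300K hole (index 3), leaving a smaller leftover.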

9.4   -   Paging
Paging is a memory-management scheme that permits the physical-address space of a process to be noncontiguous. Paging avoids the considerable problem of fitting memory chunks of varying sizes onto the backing store; most memory-management schemes used before the introduction of paging suffered from this problem. The problem arises because, when some code fragments or data residing in main memory need to be swapped out, space must be found on the backing store. The backing store also has the fragmentation problems discussed in connection with main memory, except that access is much slower, so compaction is impossible. Because of its advantages over earlier methods, paging in its various forms is commonly used in most operating systems.

9.4.1 Basic Method                                              
The basic method for implementing paging involves breaking physical memory into fixed-sized blocks called frames and breaking logical memory into blocks of the same size called pages. When a process is to be executed, its pages are loaded into any available memory frames from the backing store. The backing store is divided into fixed-sized blocks that are of the same size as the memory frames.
The hardware support for paging is illustrated in Figure 9.6. Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into a page table. The page table contains the base address of each page in physical memory. This base address is combined with the page offset to define the physical memory address that is sent to the memory unit. The paging model of memory is shown in Figure 9.7.
The page size (like the frame size) is defined by the hardware. The size of a page is typically a power of 2, varying between 512 bytes and 16 MB per page, depending on the computer architecture. The selection of a power of 2 as a page size makes the translation of a logical address into a page number and page offset particularly easy.
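
Because the page size is a power of 2, the split into page number and offset is a shift and a mask. A sketch assuming a 4-KB page size and a small made-up page table:

    #include <stdio.h>

    #define PAGE_SIZE   4096u   /* 2^12 bytes per page */
    #define OFFSET_BITS 12u

    /* Made-up page table: page number -> frame number. */
    static unsigned page_table[] = { 5, 6, 1, 2 };

    static unsigned translate(unsigned logical)
    {
        unsigned p = logical >> OFFSET_BITS;      /* page number         */
        unsigned d = logical & (PAGE_SIZE - 1);   /* page offset         */
        return page_table[p] * PAGE_SIZE + d;     /* frame base + offset */
    }

    int main(void)
    {
        unsigned addr = 2 * PAGE_SIZE + 100;      /* byte 100 of page 2  */
        printf("logical %u -> physical %u\n", addr, translate(addr));
        return 0;
    }

Page 2 maps to frame 1 in this toy table, so logical 8292 comes out as physical 4196.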


     
