TDT4186 Operating Systems




  • Multiprocessor schedulers
  • Slides
  • Mutex, semaphores
  • IO handling
  • Address translation (paging, segmentation)
  • Buddy system
  • Unlock


These are my notes for TDT4186. They reflect my own understanding of the subject and may not be completely correct or accurate. They are also written in slightly below average Norwegian-English.

Why do I post them here?

There are a couple of reasons. The first is convenience. I like to write Markdown: it makes it easy to structure text, it does not take any focus away from the writing, and it is very readable both uncompiled and compiled. It is also practical to have the notes available on the web, since I like to read them on my iPad and they are faster to access here. The last reason is that someone else may find them useful.

Chapter 1 - Computer Overview

Instruction Execution

When an instruction is executed, the instruction cycle is as follows:

Start -> Fetch Next -> Execute -> Halt


An interrupt is a mechanism used to tell the CPU that the currently running task must be paused and put back in the queue so that another process can get CPU time.

Program: Fault in the program, such as division by zero or other exceptions.

Timer: Generated by a timer within the processor.

I/O: Generated by an I/O controller to signal completion of a task or an error.

Hardware failure: Generated by hardware failures, such as power failure or memory parity error.

In the instruction execution cycle, a check for interrupts is added after the Execute step, so interrupts are checked between every instruction.

The Memory Hierarchy

The most important parts of the memory hierarchy are the inboard memory and outboard storage.

Inboard memory: Registers, cache, main memory

Outboard storage: SSD, HDD, optical drive

Offline storage: Magnetic tape

The top of the hierarchy is the fastest and typically the most expensive per byte. The closer the memory is to the CPU, the faster it is; some of it, like registers and cache, is on the CPU itself.

Cache Memory

Cache is a memory storage inside or very close to the CPU. It is very fast, and usually operates at close to CPU speed. Typically the needed information is transferred from the main memory to the cache and then to the CPU/registers. If the system has multiple levels of cache, the data traverses the entire ladder from slowest to fastest before reaching the CPU.

The CPU, and sometimes software, implements algorithms that try to predict which information will be needed next.

Direct Memory Access

DMA is a technique where a DMA module on the system bus, rather than the CPU, moves data from one point to another. The module does not send any interrupt signal until the job finishes or fails.

The CPU tells the DMA module the following info:

  • Read or write
  • Address of the I/O device involved
  • Starting location in memory to read/write
  • The number of words to be read or written

Programmed I/O

With programmed I/O, the I/O module does not interrupt the processor when an I/O instruction is executed. It sets the I/O status register bits, but does not talk to the processor; the processor must poll the status register to check for completion.

Interrupt-driven I/O

The I/O module starts working on the request, and the process is put into the blocked queue while the CPU keeps doing something productive. The I/O module then interrupts the processor when the data is ready.

Interrupt-driven I/O is more efficient than programmed I/O, but it still requires the CPU to transfer data between memory and the I/O module.

Chapter 2 - OS Overview

Virtual Machines

Chapter 3 - Processes

What is a Process?

A process can be defined as:

  • A program in execution
  • An instance of a program running on a computer
  • The entity that can be executed on a processor
  • A unit of activity characterized by the execution of a sequence of instructions, a current state, and an associated set of system resources.

Two essential elements of a process are program code and a set of data.

While a process is executed, it includes the following:

Identifier: Unique identifier

State: The current state; if the process is executing, it is in the running state.

Priority: Priority level relative to other processes.

Program counter: The address of the next instruction.

Memory pointers: Pointers to the code, associated data and shared blocks.

Context data: Data present in the registers while executing.

I/O status information: Information about used devices and files, outstanding I/O requests.

Accounting information: Information about used time, clock time and time limits.

Process States

A process can be in several different states; the most common model uses five:

Running: The process is being executed.

Ready: A process that is ready to be executed.

Blocked: A process that cannot execute until some event occurs, e.g. completion of I/O.

New: A newly created process that has not yet been admitted to the pool of executable processes, and is probably not yet loaded into memory.

Exit: A process that has been released from the pool, either because it halted or because it was aborted.

Process Description

Process Control

Process control block

A data structure in the operating system kernel containing the information needed to manage a particular process.

Usually contains:

  • Process identification data
  • Processor state data
  • Process control data

From the earlier list of information regarding processes, everything except Identifier and State is control data.

Chapter 4 - Threads

Single thread process vs Multi-thread process

Process vs Threads

Benefits of a thread:

  • Faster creation
  • Faster termination
  • Faster switching between two threads
  • Faster communication between two threads (no need to involve the kernel)


Processes:

  • Correspond to more isolated activities
  • Have more context attached to them
  • Can be split into threads


Threads:

  • Correspond to tasks that are more cohesive
  • Have less context attached to them
  • Multiple threads exist within one process
  • Useful for natural parallelism within a program
  • Useful for multiple parallel inputs within a program

Types of Threads

There are two main categories of thread implementation: user-level threads and kernel-level threads.

User-level threads

All work related to the thread, including its management and its very existence, is unknown to the kernel. The management is done purely by the application running the thread; user threads are usually handled via libraries.


Advantages:

  • Thread switching does not require kernel mode
  • Scheduling can be application specific
  • ULTs can run on any OS


Disadvantages:

  • OS system calls are typically blocking, so one blocked thread blocks the entire process
  • Pure ULTs cannot take advantage of multiprocessing; one process runs on one core


Jacketing

A technique used to convert a blocking system call into a nonblocking system call. A thread uses an application-level I/O jacket routine to check whether the I/O device is busy.

Kernel-level threads

All the thread management is done by the kernel. The main disadvantage of using a KLT is that a mode switch to give the kernel control is needed.

Chapter 5 - Concurrency: Mutex and sync

Principles of Concurrency

Mutual Exclusion: Hardware

On a uniprocessor system, the simplest solution to mutual exclusion is to disable interrupts in the CPU while a process's critical section is running. This way, no other process can take over the CPU and modify the data. This solution, however, is not optimal, as jobs with long critical sections will choke the system. Also, if the process fails during its critical section, the CPU will not be given to another process, halting the whole system.


Semaphores

A semaphore can be used to control access to a unit of a particular resource. Semaphores that allow an arbitrary resource count are called counting semaphores; semaphores that only allow locked/unlocked (0 and 1) are called binary semaphores.
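As a minimal sketch (the worker function, the concurrency limit of 2, and all names are my own illustration), Python's `threading.Semaphore` can act as a counting semaphore that bounds how many threads use a resource at once:

```python
import threading
import time

# A counting semaphore initialized to 2: at most two threads may be in
# the guarded section at the same time.
pool = threading.Semaphore(2)
lock = threading.Lock()
current = 0
max_concurrent = 0

def worker():
    global current, max_concurrent
    with pool:                  # P/wait on entry, V/signal on exit
        with lock:
            current += 1
            max_concurrent = max(max_concurrent, current)
        time.sleep(0.01)        # hold the "resource" briefly
        with lock:
            current -= 1

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The semaphore guarantees no more than 2 workers at once.
assert 1 <= max_concurrent <= 2
```

With a binary semaphore (count 1) the same pattern degenerates into a mutex.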


Monitors

A monitor (synchronized in Java) is a construct that allows threads to have both mutual exclusion and the ability to wait/block for a certain condition to become true.

Monitors also have mechanism for signaling other threads that their condition has been met.

Monitors provide a mechanism for threads to temporarily give up exclusive access in order to wait for some condition to be met.
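A condition-variable sketch of this idea (the producer/consumer roles and all names are my own illustration): `wait()` temporarily gives up exclusive access until another thread signals that the condition may hold.

```python
import threading

# Minimal monitor-style sketch: the condition's lock gives mutual
# exclusion, and wait() releases it while blocking.
cond = threading.Condition()
items = []

def producer():
    with cond:
        items.append("data")
        cond.notify()          # signal waiting threads: condition may hold

def consumer(out):
    with cond:
        while not items:       # re-check the condition after each wakeup
            cond.wait()        # temporarily give up exclusive access
        out.append(items.pop())

result = []
c = threading.Thread(target=consumer, args=(result,))
p = threading.Thread(target=producer)
c.start()
p.start()
c.join()
p.join()
assert result == ["data"]
```

The `while` loop around `wait()` is the standard monitor discipline: a wakeup only means the condition may have become true, so it must be re-checked.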

Message Passing

Message passing is basically communication between processes. It can be used to build distributed systems, or inside a multiprogrammed system.

It is important to consider what to do in two cases when handling message passing between processes.

When a message is sent, should the sending process block until the message is received, or should it continue executing?

The same applies when receiving a message: should the receive call block until something arrives, or should it attempt to receive and then continue executing?

Blocking send, blocking receive: Tight synchronization between processes.

Nonblocking send, blocking receive: Probably most useful. An example is a process that exists to provide information to other processes.

Nonblocking send, nonblocking receive: No waiting.


Addressing

When sending a message, one can use direct addressing, where the message is sent directly to one process.

One can also use indirect addressing, where the message is put into a shared data structure consisting of queues. These queues are generally referred to as mailboxes.

Indirect addressing can be used to achieve one-to-one, one-to-many, and many-to-many communication.

Chapter 6 - Concurrency: Deadlock and Starvation


Deadlock can be defined as the permanent blocking of a set of processes that either compete for system resources or communicate with each other.

Typically, processes are deadlocked when each of them is blocked, waiting for one of the other deadlocked processes to trigger an event or release a resource. The deadlock is permanent, as nothing will ever be released: everyone is waiting for the next one.


Conditions for Deadlock

For deadlock to be possible, the first three conditions below must be met; if the fourth is also met, the deadlock exists.

Mutual exclusion: Only one process may use a resource at a time.

Hold and wait: A process may hold the allocated resource while awaiting assignment of other resources.

No preemption: No resource can be forcibly removed from a process holding it.

Circular wait: A closed chain of processes exists, such that each process holds at least one resource needed by the next process in the chain.


Deadlock Prevention

Deadlock prevention is based on the idea of designing the system in a way that prevents one of the deadlock conditions from becoming true. There are two approaches: prevent one of the first three conditions, or prevent circular wait.

Mutual exclusion: Cannot typically be disallowed.

Hold and wait: Can be prevented by requiring a process to request all resources at one time, and blocking the process until all requests have been granted simultaneously. This method is inefficient in two ways: first, waiting for all resources can block the process for a long time; second, the process may not know all its required resources at the start of execution.

No preemption: Can be prevented in several ways. First, if a process holding a resource is denied a further request, that process must release its original resources and, if necessary, request them again later. Alternatively, if a process requests a resource that is currently held by another process, the OS may preempt the second process and require it to release its resources. This can only prevent deadlock if no two processes have the same priority.

Circular wait: Can be prevented by defining a linear ordering of resource types. If a process has been allocated resources of type R, then it may subsequently request only resources of types following R in the ordering.
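A sketch of the linear-ordering idea (the resource names and their order are my own illustration): every thread acquires locks in the same global order, so no circular chain of waits can form.

```python
import threading

# Impose a global order on lock types; always acquire in that order.
lock_order = {"disk": 1, "printer": 2, "network": 3}
locks = {name: threading.Lock() for name in lock_order}

def acquire_in_order(names):
    """Acquire the requested locks in the global ordering; return the order used."""
    ordered = sorted(names, key=lambda n: lock_order[n])
    for name in ordered:
        locks[name].acquire()
    return ordered

# Even if the caller asks for ("printer", "disk"), locking happens
# disk-first, so two threads can never hold each other's next lock.
held = acquire_in_order(["printer", "disk"])
for name in held:
    locks[name].release()
assert held == ["disk", "printer"]
```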


Deadlock Avoidance

Deadlock avoidance is an approach that allows the three necessary conditions, but makes choices to ensure that the deadlock point is never reached.

Avoidance therefore allows more concurrency than prevention. The decision is made dynamically: will the current resource request, if granted, potentially lead to a deadlock?

Two approaches to deadlock avoidance :

  • Do not start a process if its demands might lead to deadlock.
  • Do not grant an incremental resource request to a process if this allocation might lead to deadlock.

Banker's algorithm

Banker's algorithm is a deadlock avoidance and resource allocation algorithm.

For the algorithm to work, we need to know three things:

  • How much of each resource each process could possibly request [MAX]
  • How much of each resource each process is currently holding [ALLOCATED]
  • How much of each resource the system currently has available [AVAILABLE]

Resources may be allocated to a process only if it satisfies the following conditions:

  1. request <= max, otherwise set an error condition, as the process has exceeded its maximum claim.
  2. request <= available, otherwise the process waits until resources are available.
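A sketch of the safety check behind the Banker's algorithm (the example matrices are my own, in the style of common textbook examples): a state is safe if some ordering lets every process obtain its maximum claim, finish, and release its allocation.

```python
def is_safe(available, max_claim, allocated):
    """Return True if some ordering lets every process run to completion."""
    n = len(max_claim)
    work = list(available)
    finished = [False] * n
    # need = max - allocated, per process and resource type
    need = [[m - a for m, a in zip(max_claim[i], allocated[i])] for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Process i can finish and release everything it holds.
                for j in range(len(work)):
                    work[j] += allocated[i][j]
                finished[i] = True
                progressed = True
    return all(finished)

# Example state with 3 resource types and 5 processes (my own numbers):
safe = is_safe(available=[3, 3, 2],
               max_claim=[[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
               allocated=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]])
assert safe
```

A request is then granted only if the state that would result from granting it still passes this safety check.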


Deadlock Detection

While prevention and avoidance strategies are very conservative, they do solve the problem.

On the opposite extreme, deadlock detection strategies do not limit resource access. Instead, all requests are granted whenever possible. Periodically, the OS runs an algorithm that detects the circular wait condition.

Recovery from Deadlock

Possible approaches, listed in order of increasing sophistication:

  1. Abort all deadlocked processes. The most commonly implemented solution.
  2. Back up each deadlocked process to defined checkpoint and restart. This requires the checkpoint and restart functionality to be implemented.
  3. Successively abort deadlocked processes until deadlock no longer exists. The order should be based on least cost. After each abort, run detection again to see if the deadlock still exists.
  4. Successively preempt resources until deadlock no longer exists. The order should be based on least cost. A process that has been preempted must be rolled back to a point prior to its acquisition of the resource.

For approaches 3 and 4, the selection criteria could be one of the following:

  • Least amount of processor time consumed so far
  • Least amount of output produced so far
  • Most estimated time remaining
  • Least total resources allocated so far
  • Lowest priority

Chapter 7 - Memory Management

The memory management system has the following requirements:

Relocation: The ability to relocate the memory used by an application, either within main memory or to the hard drive.

Protection: Processes must not be allowed to access (read or write) memory that is not assigned to them. Usually implemented in hardware.

Sharing: The protection mechanism must be flexible enough to allow sharing between processes. E.g., multiple instances of the same program should not occupy memory for each instance; they should use the already existing data.

Logical organization: Segmentation.

Physical organization: The flow between the main and secondary storage. The essence of memory management.

Things done in software:

  • Fetching - when? (on access)
  • Write-back - when? (on modification)
  • Placement - where? (if memory is not full)
  • Replacement - which? (if memory is full)
  • Amount - how much for each process? (response-time focused)
  • Number - how many in total? (throughput focused)

Things done in hardware:

  • Address mapping (translating virtual to physical addresses)
  • Address checking (controlling that addresses are valid)
  • Reacting with interrupts (on missing pages)
  • Helping via caching (of important translation data)

Memory Partitioning

Fixed Partitioning

Main memory is divided into a number of static partitions at system generation time. A process may be loaded into a partition of equal or greater size. Simple to implement, but memory is used inefficiently, and the maximum number of active processes is fixed.

Dynamic Partitioning

Partitions are created dynamically, so that each process is loaded into a perfectly sized partition. More effective than fixed partitioning, but the processor is used inefficiently due to the need for compaction to counter external fragmentation.


Best-fit

Chooses the block that is closest in size to the request. Best at preventing fragmentation, but has the worst runtime.


First-fit

Chooses the first block from the start of memory that is large enough for the request. Usually the simplest, fastest, and best.


Next-fit

Chooses the next block after the last allocation that is large enough for the request. Usually slightly slower than first-fit.
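A sketch of first-fit and best-fit over a list of free block sizes (the block sizes and the request are my own example):

```python
def first_fit(free_blocks, request):
    """Return the index of the first block large enough, or None."""
    for i, size in enumerate(free_blocks):
        if size >= request:
            return i
    return None

def best_fit(free_blocks, request):
    """Return the index of the smallest block that still fits, or None."""
    candidates = [(size, i) for i, size in enumerate(free_blocks) if size >= request]
    return min(candidates)[1] if candidates else None

blocks = [100, 500, 200, 300, 600]
assert first_fit(blocks, 212) == 1   # 500 is the first block >= 212
assert best_fit(blocks, 212) == 3    # 300 is the tightest fit
```

Note the trade-off mentioned above: best-fit scans every block, while first-fit stops at the first match.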


Paging

Paging can be looked at as splitting memory into small fixed-size frames, each the same size as a chunk of a process, known as a page. The operating system usually maintains a page table for each process; the page table gives the frame location of each page of the process.

Simple Paging

Main memory is divided into a number of equal-size frames. Each process is divided into a number of equal-size pages of the same length as the frames. A process is loaded by loading all of its pages into available, not necessarily contiguous, frames. No external fragmentation; a small amount of internal fragmentation.
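A sketch of how a page table maps a virtual address to a physical one (the page size and table contents are my own example): the page number indexes the table, and the offset within the page is kept unchanged.

```python
# 1 KiB pages: virtual address = page number * PAGE_SIZE + offset.
PAGE_SIZE = 1024

def translate(virtual_addr, page_table):
    """Map a virtual address to a physical one via the page table."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[page]   # missing entry would be a page fault in a real OS
    return frame * PAGE_SIZE + offset

# Page table mapping page number -> frame number (my own values):
page_table = {0: 5, 1: 2, 2: 7}

# Virtual address 1050 = page 1, offset 26 -> frame 2, offset 26.
assert translate(1050, page_table) == 2 * 1024 + 26
```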

Virtual Memory Paging

The same as simple paging, but it does not load all the pages of a process. Nonresident pages are brought in automatically when needed. No external fragmentation, a higher degree of multiprogramming, and a large virtual address space, at the cost of complex memory management overhead.


Segmentation

A process and its data can be subdivided into segments. The segments do not all need to be the same length, but there is an upper bound. Segmentation resembles dynamic partitioning, but it does not require the whole process to fit inside one partition, and the different segments do not need to be contiguous.

Segmentation is also visible to the programmer and is provided as a convenience for organizing programs and data.

Simple Segmentation

Each process is divided into a number of segments. A process is loaded by loading all of its segments into dynamic partitions that need not be contiguous. No internal fragmentation, improved memory utilization and reduced overhead compared to dynamic partitioning. Has a problem with external fragmentation.

Virtual Memory Segmentation

As with simple segmentation, except that it is not necessary to load all of the segments of a process. Nonresident segments are brought in automatically when needed. No internal fragmentation, a higher degree of multiprogramming, a large virtual address space, and protection and sharing support. Like virtual memory paging, it has the overhead of complex memory management.

Chapter 8 - Virtual Memory

A storage allocation scheme in which secondary memory can be addressed as though it were part of main memory.

Virtual Address: The address assigned to a location in virtual memory to allow that location to be accessed as though it were part of main memory.

Virtual Address Space: The virtual storage assigned to a process.

Address Space: The range of memory addresses available to a process.

Real Address: The address of a storage location in main memory.

Operating System Software

Fetch Policy

Demand paging

A page is brought into main memory only when a reference is made to a location on that page. When a process is first started, there will be a flurry of page faults; as more and more pages are brought in, the principle of locality suggests that most future references will be to pages already present.


Prepaging

Prepaging brings in additional pages beyond the one demanded. It exploits the fact that secondary storage often stores data sequentially, and therefore fetches the pages stored around the demanded page. This is inefficient if the extra pages are never used.

Placement policy

See chapter 7 for best-fit, next-fit and first-fit.

Replacement Policy

Frame Locking

A frame in memory may be locked. This typically applies to frames that contain the OS kernel and other key control structures.

Basic Algorithms: Optimal

Selects the frame for which the time to the next reference is the longest. This policy is impossible to implement, because it would require the OS to have perfect knowledge of future events.

Basic Algorithms: Least recently used (LRU)

Selects the frame in memory that has not been referenced for the longest time. This should be the page least likely to be referenced in the near future.

It performs almost as well as the optimal policy.
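A sketch of LRU counting page faults over a reference string (the reference string and frame count are my own example, in the style of common textbook traces):

```python
from collections import OrderedDict

def lru_faults(references, num_frames):
    """Count page faults under LRU replacement."""
    frames = OrderedDict()            # insertion order tracks recency
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)  # referenced: now most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used
            frames[page] = True
    return faults

# Includes the 3 compulsory faults for filling the empty frames.
assert lru_faults([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], 3) == 7
```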

Basic Algorithms: FIFO

Selects the frame that has been in memory the longest. Simple to implement. This reasoning will often be wrong, because there are often regions of a program or data that are heavily used throughout the life of the program.

Basic Algorithms: Clock

A family of algorithms that tries to approximate LRU with less overhead. Clock can also be seen as a more efficient version of FIFO.

The Clock keeps a circular list of pages in memory.
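A sketch of the clock (second-chance) policy (the reference string and frame count in the test are my own example): each frame has a use bit, and the hand clears set bits until it finds an unset one to evict.

```python
def clock_faults(references, num_frames):
    """Count page faults under the clock (second-chance) policy."""
    frames = [None] * num_frames      # page held in each frame
    use = [0] * num_frames            # use (reference) bits
    hand = 0
    faults = 0
    for page in references:
        if page in frames:
            use[frames.index(page)] = 1   # hit: mark as recently used
            continue
        faults += 1
        while use[hand] == 1:         # give used pages a second chance
            use[hand] = 0
            hand = (hand + 1) % num_frames
        frames[hand] = page           # victim found: replace it
        use[hand] = 1
        hand = (hand + 1) % num_frames
    return faults

assert clock_faults([1, 2, 3, 1, 4], 3) == 4
```

In the example, the reference to page 1 sets its use bit, so when page 4 arrives the hand sweeps once around before evicting.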

Chapter 9 - Uniprocessor Scheduling

Types of Processor Scheduling

Scheduling and Process State Transitions

Long-term scheduling: The decision to add to the pool of processes to be executed.

Medium-term scheduling: The decision to add to the number of processes that are partially or fully in main memory.

Short-term scheduling: The decision as to which available process will be executed by the processor.

I/O scheduling: The decision as to which process's pending I/O request shall be handled by an available I/O device.

Scheduling Algorithms

|                 | Result focus        | System focus            |
|-----------------|---------------------|-------------------------|
| Performance     | Low response time   | High throughput         |
|                 | Low cycle time      | High system utilisation |
|                 | Meet deadlines      | Low overhead            |
| Non-performance | Avoid starvation    | Fairness                |
|                 | High predictability | Ensure priority         |
|                 |                     | Resource balance        |

FCFS - First-Come, First-Served

A FIFO queue where the job that arrives first is served first. Can be very bad when short jobs arrive after long jobs.

Round Robin

Every process takes a turn for a time quantum. It is a fair algorithm, but not necessarily the best. Performs poorly when all jobs are the same length.
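A sketch of round robin computing finishing times (the burst times and quantum are my own example):

```python
from collections import deque

def round_robin_finish_times(bursts, q):
    """Return the finishing time of each process under round robin."""
    ready = deque(enumerate(bursts))   # (pid, remaining time)
    finish = [0] * len(bursts)
    t = 0
    while ready:
        pid, rem = ready.popleft()
        run = min(q, rem)              # run at most one quantum
        t += run
        if rem - run > 0:
            ready.append((pid, rem - run))  # preempted: back of the queue
        else:
            finish[pid] = t
    return finish

# Three processes with bursts 3, 5, 2 and quantum 2:
assert round_robin_finish_times([3, 5, 2], 2) == [7, 10, 6]
```

The short job (burst 2) finishes early even though it arrived last in the queue, which is the fairness property described above.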

SJF - Shortest-Job first

The CPU tries to run the short jobs first. Big effect on short jobs, a smaller effect on long ones. Better average response time.

One of the problems is predicting which jobs are short. Starvation is also possible if the system keeps receiving jobs shorter than a waiting long job.


SRT - Shortest Remaining Time

If a new job arrives with less remaining time than the current job, swap them. Also called Shortest-Remaining-Time-First (SRTF). Very good average response time. Optimal.


Non-preemptive: Once a job starts executing, it runs to completion.

HRRN - Highest Response Ratio Next

$R = \frac{w+s}{s}$


R = response ratio

w = time spent waiting for the processor

s = expected service time (time spent on the clock)

This policy favors short jobs, as they will typically have a high ratio. At the same time, long jobs do not starve: their waiting time keeps growing their ratio, so they eventually get through.
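A sketch of picking the next job by response ratio (the job names, waits, and service times are my own example):

```python
def hrrn_pick(jobs):
    """jobs: list of (name, waiting_time, expected_service_time).
    Return the name of the job with the highest response ratio R = (w + s) / s."""
    return max(jobs, key=lambda j: (j[1] + j[2]) / j[2])[0]

# With equal waits, the short job wins: ratios are 1.1 vs 6.0.
jobs = [("long", 10, 100), ("short", 10, 2)]
assert hrrn_pick(jobs) == "short"
```

If the long job's wait grows to, say, 1000, its ratio becomes 11.0 and it is picked, which is the anti-starvation property described above.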

Like SJF/SRT and SPN, HRRN needs to estimate the service time of a process, giving it a higher overhead.


Feedback

If we have no indication of how long a job will take, SJF/SRT, SPN and HRRN cannot be used.

Another way to favor shorter jobs is to penalize jobs that have been running longer. This is done on a preemptive (time quantum) basis. When a process is first preempted, it is placed in RQ1; if it does not finish during its second run and is preempted again, it is placed in RQ2, and so on. The queue with the lowest number is emptied first, and each queue itself uses FIFO.
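A sketch of this demotion mechanism (the burst times, quantum, and number of levels are my own example):

```python
from collections import deque

def feedback_order(bursts, q, levels=3):
    """Return the order in which processes get the CPU under multilevel
    feedback: a preempted process drops one queue level, and the scheduler
    always serves the lowest-numbered non-empty queue (each queue is FIFO)."""
    queues = [deque() for _ in range(levels)]
    for pid, burst in enumerate(bursts):
        queues[0].append((pid, burst))      # new processes enter RQ0
    order = []
    while any(queues):
        level = next(i for i, rq in enumerate(queues) if rq)
        pid, rem = queues[level].popleft()
        order.append(pid)
        if rem > q:                         # preempted after one quantum:
            queues[min(level + 1, levels - 1)].append((pid, rem - q))  # demote
    return order

# Bursts 3 and 1 with quantum 2: P0 is preempted and demoted,
# so the short job P1 finishes before P0's second turn.
assert feedback_order([3, 1], 2) == [0, 1, 0]
```

The longer a process has run, the deeper it sinks, which implements the penalty described above without any service-time estimate.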

Starvation of processes is possible if new processes keep entering the system frequently.

Chapter 10 - Multiprocessor and RT Scheduling

Multiprocessor Scheduling

Real-Time Scheduling

Real-time operating systems will prioritize processes that are triggered from events that are coming from outside the system.

Unique requirements areas:

Determinism: Measure of the maximum delay from the arrival of a high-priority device interrupt to when servicing begins. Within a real-time system, the upper bound of this delay can be anywhere from a few microseconds to a millisecond.

Responsiveness: The amount of time the system takes to initially handle the interrupt and begin execution. It should not require a process switch unless necessary.

User control: The system should allow fine-grained user prioritization. The user should be able to distinguish between hard and soft tasks. The user can sometimes also control what kind of algorithms are used. E.g. the disk transfer algorithms.

Reliability: Much more important than in non-realtime system. Loss of computing power will result in delayed responses.

Fail-soft operation: When errors occur, the system should not fail, but instead try to fix them and keep going. It is important that as many resources as possible remain available for incoming requests.

To meet these requirements, real-time systems typically include the following features:

  • Fast process or thread switch
  • Small size/minimal functionality
  • Ability to respond to external interrupts quickly
  • Multitasking with interprocess communication tools such as semaphores, signals, and events
  • Use of special sequential files that can accumulate data at a fast rate
  • Preemptive scheduling based on priority
  • Minimization of intervals during which interrupts are disabled
  • Primitives to delay tasks and to pause/resume tasks.
  • Special alarms and timeouts.


Rate-monotonic scheduling

  • No resource sharing (processes do not share resources, no semaphore blocking)
  • Deterministic deadlines are exactly equal to periods
  • Static priorities (the highest-priority task immediately preempts other tasks)
  • Static priorities assigned according to the rate monotonic conventions (tasks with shorter periods/deadlines are given higher priorities)
  • Context switch times and other thread operations are free

Chapter 11 - I/O Management and Disk Scheduling



OS Design Issues

Disk Scheduling


Elevator (SCAN and LOOK)

The drive head services requests while sweeping in one direction at a time, then turns around. Obsolete.



Possible starvation. Obsolete.

Anticipatory scheduler

Bad for high-performance disks.

Native Command Queuing

Deadline Scheduler

Completely Fair Queuing


Disk Cache

Chapter 12 - File Management

File Organization and Access


File Directories

File Sharing

Record Blocking

Secondary Storage Management

File System Security

Appendix A - Topics in Concurrency

Mutual Exclusion: Software Approaches

Dekker's Algorithm

Peterson's Algorithm

Questions and answers from previous exams (originally in Norwegian)


Exercise 1: Synchronization of processes/threads

a) Briefly state the differences between threads and processes


Processes:

  • Correspond to more isolated activities
  • Have more context attached to them
  • Can be divided into threads


Threads:

  • Correspond to more cohesive activities
  • Have less context attached to them
  • Can be grouped into processes

b) Describe and discuss some concrete cases where threads are more applicable than processes

  • With natural parallelism within a program/application, e.g. matrix operations
  • With several parallel input devices, e.g. keyboard and mouse

c) Briefly state what message passing is and how message passing is used for synchronization (process/thread synchronization)

Message passing is a form of communication. It can be used both in centralized and distributed systems. The basic concepts are a data object, the message, with two associated data operations, send and receive.

For synchronization, a given message is used as a token. This message is sent back and forth between processes/threads. A given process/thread is allowed to perform certain activities only when it holds the token message.

d) Describe and discuss concretely how message passing can be implemented with semaphores


Initialization:

var Slot: Semaphore = N
    Buffer: Semaphore = 1
    Port: Array[1..m] of Semaphore = m*0
    BufferQ: List of Item
    PortQ: Array[1..m] of List of Item

for I = 1 to N
    <Link Item into BufferQ>

Data operations:

Send(Message, Destination):
    Wait(Slot)                           // reserve a free buffer item
    Wait(Buffer)                         // mutex on the free list
    <Unlink Item from BufferQ>
    Signal(Buffer)
    <Copy Message into Item>
    <Link Item into PortQ[Destination]>
    Signal(Port[Destination])            // a message is now available

Receive(Message, Source):
    Wait(Port[this])                     // wait for a message at this port
    <Unlink Item from PortQ[this]>
    <Copy Message from Item>
    Wait(Buffer)
    <Link Item into BufferQ>
    Signal(Buffer)
    Signal(Slot)                         // a buffer item is free again
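The same scheme can be expressed as a runnable sketch with Python's threading primitives (N, the number of ports, and the demo messages are my own choices; the port queues rely on the demo being simple, a full implementation would guard them too):

```python
import threading
from collections import deque

# Slot counts free buffer items, buffer_lock guards the free list,
# port_sem[d] counts messages waiting at port d.
N, PORTS = 4, 2
slot = threading.Semaphore(N)
buffer_lock = threading.Semaphore(1)
port_sem = [threading.Semaphore(0) for _ in range(PORTS)]
buffer_q = deque(range(N))          # free "items" (just ids here)
port_q = [deque() for _ in range(PORTS)]

def send(message, destination):
    slot.acquire()                  # wait for a free item
    buffer_lock.acquire()
    item = buffer_q.popleft()
    buffer_lock.release()
    port_q[destination].append((item, message))
    port_sem[destination].release() # signal: message available at this port

def receive(port):
    port_sem[port].acquire()        # wait for a message at this port
    item, message = port_q[port].popleft()
    buffer_lock.acquire()
    buffer_q.append(item)           # return the item to the free list
    buffer_lock.release()
    slot.release()
    return message

send("hello", 0)
send("world", 1)
assert receive(0) == "hello"
assert receive(1) == "world"
```

A `receive` on an empty port blocks on `port_sem[port]` until someone sends, which is exactly the token-style synchronization described in c).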

Exercise 2: Use of memory

a) Briefly state which aspects of virtual memory can be handled in software and which aspects must be handled in hardware.


In software:

  • Fetching - when? (on access)
  • Write-back - when? (on modification)
  • Placement - where? (if memory is not full)
  • Replacement - which? (if memory is full)
  • Amount - how much for each process? (response-time focused)
  • Number - how many in total? (throughput focused)


In hardware:

  • Mapping: translating addresses
  • Checking: controlling addresses
  • Reacting with interrupts: on missing "info"
  • Helping via caching: of important "info"

b) Illustrate and describe how address mapping happens with segmentation. Answer: see segmentation

c) Illustrate and describe how address mapping happens with paging. Answer: see paging

d) Briefly discuss the use of caching of various data/info in this context. Answer: see cache

Exercise 3: Scheduling of processes

a) Briefly state relevant objectives for scheduling. Answer: see scheduling

b) Describe specifically why different scheduling algorithms are needed for multiprocessors than for single processors

With two or more processors:

  • Preventing a single process/thread from monopolizing a processor becomes less important
  • Making given threads/processes run simultaneously on several processors becomes more important

c) Describe similarities and differences between some such multiprocessor algorithms