COMS W4118 Operating Systems I

Interrupts, Spin Locks, and Preemption

Interrupts

Review

Recall discussion of interrupts in L09-syscalls:

Kernel Execution Context

Kernel execution occurs in one of two modes:

Process context: the kernel is running on behalf of a task (e.g., executing a system call); it may sleep.

Interrupt context: the kernel is running in response to an interrupt; it is not associated with any task and must not sleep.

Interrupt Handler

Only time-critical work should be done in the handler so that we can return to the interrupted task ASAP; push the remainder of the work to the “bottom half”.

For example, handling network packet arrival splits into two halves: the top half copies the packet off the device and acknowledges the interrupt, while the bottom half does the slower protocol processing and delivery later.

A single interrupt will not nest – the same interrupt line is masked while its handler runs – so the handler need not be reentrant

Spin Locks

Mutual Exclusion

We’ve seen two synchronization primitives so far that allow us to enforce mutual exclusion:

semaphore: L05-ipc
pthread_mutex: L06-thread

For example, L06-thread’s bank demo showed that concurrent increments/decrements on a shared integer needed synchronization to prevent data corruption due to the interleaving of non-atomic operations:

pthread_mutex_lock(&balance_lock);
++balance;
pthread_mutex_unlock(&balance_lock);

Semaphore and pthread_mutex are examples of sleeping locks. The calling task is put to sleep while it waits for the critical section to become available.

Mutual exclusion can also be achieved using a spinning lock. Instead of sleeping until the critical section is free, a spin lock repeatedly tests (polls) the lock until it is free.

Spin Lock Implementation

High-level idea: lock() polls until flag == 0, then sets flag = 1. unlock() sets flag = 0.

int flag = 0;

lock() {
    while (flag == 1)
        ;

    // This gap between testing and setting the variable
    // creates a race condition!

    flag = 1;
}

unlock() {
    flag = 0;
}

Race condition: what if task 1 is about to set flag = 1 but before it could, task 2 sees flag == 0?

Correct implementation using atomic test_and_set hardware instruction:

int flag = 0;

lock() {
    while (test_and_set(&flag))
        ;
}

unlock() {
    flag = 0;
}

In C pseudocode, the test_and_set hardware instruction looks like this. The key property is that the hardware executes the read and the write as one indivisible step:

int test_and_set(int *lock) {
    int old = *lock;    // read the old value...
    *lock = 1;          // ...and set it to 1, atomically as one step
    return old;
}

Linux Kernel Spin Locks

Spinning vs. Sleeping Lock

Sleeping lock incurs the cost of context switches: one to put the caller to sleep, and another to wake it back up once the critical section is free. Worth it when the wait is expected to be long.

Spinning lock consumes CPU time by polling: wasteful unless the wait is short, i.e., shorter than the context switches a sleeping lock would cost.

Interrupt context can only use spin locks – an interrupt handler must not sleep

Can’t sleep while holding a spin lock – other tasks spinning on the lock would burn CPU for as long as the holder sleeps

Preemption

Recall from L10-run-wait-queues: there exists a state transition from running task to runnable task because of preemption (e.g. by timer interrupt).

The kernel cannot always rely on tasks to willingly yield the CPU – otherwise a buggy or malicious program could run indefinitely.

Instead, the kernel tracks a per-task TIF_NEED_RESCHED flag. If it is set, preemption occurs by calling schedule() in the following cases:

  1. Returning to user space:

    • from a system call

    • from an interrupt handler

  2. Returning to kernel from an interrupt handler, only if preempt_count is zero

  3. preempt_count just became zero – right after spin_unlock(), for example

  4. Task running in kernel mode calls schedule() itself – blocking syscall, for example


Last updated: 2022-02-28