Memory Management


Transcript of Memory Management

Page 1: Memory Management


Memory Management

Chapter 4

4.1 Basic memory management
4.2 Swapping
4.3 Virtual memory
4.4 Page replacement algorithms
4.5 Modeling page replacement algorithms
4.6 Design issues for paging systems
4.7 Implementation issues
4.8 Segmentation

Page 2: Memory Management

Page Replacement Algorithms

• Page fault forces a choice
  – which page must be removed to make room for the incoming page
• A modified page must first be saved
  – an unmodified one is just overwritten
• Better not to choose an often used page
  – it will probably need to be brought back in soon
• The "page replacement" problem also occurs in:
  – memory caches
  – caches of web pages

Page 3: Memory Management

Optimal Page Replacement Algorithm

• When a page fault occurs, some set of pages is in memory
  – replace the page that will not be needed until the farthest point in the future (see the sketch below)
• Optimal but unrealizable (at fault time the OS cannot know when each page will next be referenced)
• Estimate by:
  – logging page use on previous runs of the process
  – using the results on subsequent runs
  – although this is impractical: it covers one program with one specific set of inputs
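
A minimal Python sketch of how the optimal (Belady) policy can be simulated offline, assuming the whole reference string is known in advance; the function name and the reference string are illustrative, not from the slides:

    def optimal_faults(refs, num_frames):
        """Simulate the optimal policy on a fully known reference string."""
        frames = []                      # pages currently resident
        faults = 0
        for i, page in enumerate(refs):
            if page in frames:
                continue                 # hit: nothing to do
            faults += 1
            if len(frames) < num_frames:
                frames.append(page)
                continue
            # Evict the resident page whose next use lies farthest in the
            # future; pages never used again are evicted first.
            future = refs[i + 1:]
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else len(future) + 1)
            frames[frames.index(victim)] = page
        return faults

    print(optimal_faults([0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4], 3))   # 7 faults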

Page 4: Memory Management

Not Recently Used (NRU) Page Replacement Algorithm

• Each page has a Referenced (R) bit and a Modified (M) bit
  – the bits are set by the hardware when the page is referenced or modified; if the hardware lacks them, they can be simulated in software
• Pages are classified at each clock interrupt, when the R bits are cleared:
  – Class 0: not referenced, not modified
  – Class 1: not referenced, modified (is this possible? yes – a clock interrupt clears the R bit of a class 3 page)
  – Class 2: referenced, not modified
  – Class 3: referenced, modified
• NRU removes a page at random from the lowest-numbered non-empty class (see the sketch below)
  – easy to understand and implement, and reasonably efficient
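
A small sketch of the NRU victim-selection step in Python; the Page class and the way the R/M bits reach the OS are illustrative assumptions, since in reality the MMU sets them in the page table:

    import random

    class Page:
        def __init__(self, number, referenced=False, modified=False):
            self.number = number
            self.referenced = referenced   # R bit, cleared periodically by the clock interrupt
            self.modified = modified       # M bit, cleared only when the page is written back

    def nru_victim(pages):
        """Return a random page from the lowest-numbered non-empty class
        (class = 2*R + M, so class 0 = not referenced, not modified)."""
        classes = {0: [], 1: [], 2: [], 3: []}
        for p in pages:
            classes[2 * int(p.referenced) + int(p.modified)].append(p)
        for c in range(4):
            if classes[c]:
                return random.choice(classes[c])
        return None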

Page 5: Memory Management

FIFO Page Replacement Algorithm

• Maintain a linked list of all pages
  – in the order they came into memory: the oldest page at the head of the list, the most recent at the tail
• The page at the head of the list (the oldest) is replaced
• Disadvantage
  – the page that has been in memory the longest may still be heavily used
• Modification: inspect the R bit of the oldest page (see the sketch below)
  – if R = 0, the page is old and unused => replace it
  – if R = 1, the page is old but used => set R to 0 and place the page at the end of the list
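
A sketch of FIFO with the R-bit modification above (i.e. second chance); the queue holds [page, R] pairs, and the function name and reference string are illustrative:

    from collections import deque

    def second_chance_faults(refs, num_frames):
        """FIFO replacement that gives a referenced page a second chance."""
        queue = deque()                    # head = oldest page; entries are [page, R]
        faults = 0
        for page in refs:
            entry = next((e for e in queue if e[0] == page), None)
            if entry is not None:
                entry[1] = 1               # hit: hardware would set the R bit
                continue
            faults += 1
            if len(queue) < num_frames:
                queue.append([page, 0])
                continue
            while True:
                oldest = queue.popleft()
                if oldest[1] == 0:         # old and unused: replace it
                    queue.append([page, 0])
                    break
                oldest[1] = 0              # old but used: clear R, move to tail
                queue.append(oldest)
        return faults

    print(second_chance_faults([0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4], 3))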

Page 6: Memory Management

Second Chance Page Replacement Algorithm

• Operation of second chance
  – pages are kept sorted in FIFO order
  – the figure shows the page list when a fault occurs at time 20 and page A has its R bit set (the numbers above the pages are their loading times)

Page 7: Memory Management

The Clock Page Replacement Algorithm

• Second chance is unnecessarily inefficient: it constantly moves pages around on its list
• The clock algorithm is the same policy with a different implementation: the page frames are kept on a circular list and a "hand" points at the oldest page (see the sketch below)
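
A sketch of one eviction decision of the clock algorithm, assuming the caller keeps the resident pages, their R bits, and the hand position (the names are illustrative):

    def clock_select(r_bits, hand):
        """Advance the clock hand until a frame with R == 0 is found.
        Returns (victim_frame_index, new_hand_position)."""
        n = len(r_bits)
        while True:
            if r_bits[hand] == 0:          # not referenced since the last sweep: evict
                return hand, (hand + 1) % n
            r_bits[hand] = 0               # referenced: give it a second chance
            hand = (hand + 1) % n

    # Example: 4 frames, only frame 2 has not been referenced recently.
    r_bits = [1, 1, 0, 1]
    victim, hand = clock_select(r_bits, hand=0)
    print(victim, hand)                    # 2 3

Unlike second chance, nothing is moved around a list; only the hand advances.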

Page 8: Memory Management

Least Recently Used (LRU)

• Assume pages used recently will be used again soon
  – throw out the page that has been unused for the longest time
  – realizable, but not cheap
• Must keep a linked list of pages, sorted by usage
  – most recently used at the front, least recently used at the rear
  – the list must be updated on every memory reference: finding the page in the list, deleting it, and moving it to the front
• Hardware approach 1: keep a counter in each page table entry (see the sketch below)
  – equip the hardware with a 64-bit counter, incremented after each instruction
  – after each memory reference the value of the counter is copied into the entry of the referenced page
  – at a page fault, choose the page whose entry has the lowest counter value
  – periodically zero the counter
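
A software illustration of hardware approach 1, where the index of each reference plays the role of the 64-bit counter (the function name and reference string are made up):

    def lru_counter_faults(refs, num_frames):
        """LRU via a per-page timestamp copied from a global counter."""
        last_use = {}                      # page -> counter value at its last reference
        faults = 0
        for counter, page in enumerate(refs):
            if page not in last_use:
                faults += 1
                if len(last_use) == num_frames:
                    victim = min(last_use, key=last_use.get)   # lowest counter value
                    del last_use[victim]
            last_use[page] = counter       # "hardware" copies the counter on every reference
        return faults

    print(lru_counter_faults([0, 1, 2, 3, 2, 1, 0, 3, 2, 3], 3))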

Page 9: Memory Management

Least Recently Used (LRU)

• Hardware approach #2: with n page frames, keep an n x n matrix of bits, initially all 0s
  – when page k is referenced, set row k to all 1s, then set column k to all 0s
  – at any instant, the row with the lowest binary value belongs to the least recently used page
• Example: LRU using the matrix, with pages referenced in the order 0, 1, 2, 3, 2, 1, 0, 3, 2, 3 (see the sketch below)
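
A direct transcription of hardware approach #2 into Python (illustrative only; real hardware would keep the n x n bit array itself):

    def lru_matrix(refs, n):
        """n x n bit matrix: on a reference to page k, set row k to 1s,
        then clear column k. The row with the smallest value is LRU."""
        matrix = [[0] * n for _ in range(n)]
        for k in refs:
            matrix[k] = [1] * n            # set row k to all 1s
            for row in matrix:
                row[k] = 0                 # then set column k to all 0s
        return matrix

    for row in lru_matrix([0, 1, 2, 3, 2, 1, 0, 3, 2, 3], 4):
        print(row)                         # smallest row value = least recently used page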

Page 10: Memory Management

Simulating LRU in Software

• Few machines have this special hardware, so LRU must be approximated in software

• NFU (Not Frequently Used):
  – a counter is associated with each page, initially 0
  – at each clock interrupt, the R bit is added to the counter
  – at a page fault, the page with the lowest counter is chosen
  – problem: NFU never forgets anything (e.g. pages heavily used during an early pass of a compilation keep high counts forever)

• Aging: a modification of NFU (see the sketch below)
  – before adding the R bit, shift each counter right by 1 bit (i.e. divide by 2)
  – the R bit is added as the leftmost, not the rightmost, bit
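
A sketch of one clock tick of the aging algorithm with 8-bit counters; the dictionaries standing in for per-page hardware state are illustrative:

    def aging_tick(counters, r_bits, num_bits=8):
        """Shift every counter right one bit, then add the page's R bit
        as the new leftmost bit; clear the R bits for the next interval."""
        for page in counters:
            counters[page] >>= 1
            if r_bits.get(page, 0):
                counters[page] |= 1 << (num_bits - 1)
            r_bits[page] = 0
        return counters

    # At a page fault the page with the lowest counter value is evicted.
    print(aging_tick({0: 0b10000000, 1: 0, 2: 0}, {0: 0, 1: 1, 2: 0}))
    # {0: 64, 1: 128, 2: 0}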

Page 11: Memory Management

Simulating LRU in Software

• The aging algorithm simulates LRU in software
• Note: 6 pages for 5 clock ticks, (a) – (e)

Page 12: Memory Management

Simulating LRU in Software

• Aging differs from LRU in two ways:
  – Counters are updated at clock ticks, not at every memory reference: we lose the ability to distinguish references early in a clock interval from those occurring later (e.g. pages 3 and 5 at tick (e)).
  – Counters have a finite number of bits (8 in this example). If two counters are 0 we pick one at random, yet one page may have last been referenced 9 ticks ago and the other 1000 ticks ago.

Page 13: Memory Management

The Working Set Page Replacement Algorithm

• Demand paging: pages are loaded only as they are needed.
• Locality of reference: during any phase of execution a process references only a small fraction of its pages.
• Working set: the set of pages a process is currently using.
• If the working set is not entirely in memory => thrashing.
• Keeping track of each process's working set and loading it before letting the process run => prepaging.
• w(k,t): the set of pages, at instant t, used by the k most recent page references; for fixed t, w(k,t) is a monotonically non-decreasing function of k (see the sketch below).
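
A tiny sketch of w(k, t) as a function, treating t as an index into the reference string (the names and the example string are illustrative):

    def w(refs, k, t):
        """The set of distinct pages among the k most recent references
        made up to instant t (1-based, like the slide's 'instant t')."""
        return set(refs[max(0, t - k):t])

    refs = [0, 1, 2, 1, 1, 2, 3, 4, 4, 4]
    print(w(refs, k=4, t=8))               # {1, 2, 3, 4}
    print(len(w(refs, k=4, t=8)) <= len(w(refs, k=6, t=8)))   # True: non-decreasing in k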

Page 14: Memory Management

The Working Set Page Replacement Algorithm

• The working set is the set of pages used by the k most recent memory references
• w(k,t) is the size of the working set at time t

Page 15: Memory Management


The Working Set Page Replacement Algorithm

The working set algorithm

Page 16: Memory Management


Review of Page Replacement Algorithms

Page 17: Memory Management

Belady's Anomaly

• FIFO with 3 page frames
• FIFO with 4 page frames
• The P's mark the references that cause page faults
• With FIFO, adding page frames can increase the number of faults (see the sketch below)
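
A quick way to reproduce the anomaly in Python with plain FIFO and the classic reference string 0 1 2 3 0 1 4 0 1 2 3 4 (the usual textbook string, not necessarily the one in the figure):

    from collections import deque

    def fifo_faults(refs, num_frames):
        """Plain FIFO replacement; returns the number of page faults."""
        frames, faults = deque(), 0
        for page in refs:
            if page in frames:
                continue
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()           # evict the oldest page
            frames.append(page)
        return faults

    refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
    print(fifo_faults(refs, 3))            # 9 faults
    print(fifo_faults(refs, 4))            # 10 faults: more frames, yet more faults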

Page 18: Memory Management

Modeling Page Replacement Algorithms

Stack Algorithms
• Belady's anomaly led to a theory of paging algorithms
• A paging system can be characterized by three items:
  – the reference string: the sequence of memory references the process makes as it runs
  – the page replacement algorithm
  – the number of page frames, m

Page 19: Memory Management

Design Issues for Paging Systems
Local versus Global Allocation Policies

• Original configuration
• Local page replacement – a fixed number of frames is allocated to each process
• Global page replacement – frames are allocated dynamically among processes

Page 20: Memory Management

Design Issues for Paging Systems
Local versus Global Allocation Policies

• In general, global algorithms work better
• Continuously decide how many page frames to allocate to each process
  – decide based on the working set
• Algorithms for allocating page frames to processes:
  – equal share
  – proportional to process size
  – PFF (page fault frequency) algorithm – the fault rate is measured as a running mean and used to adjust the allocation (see the sketch below)
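
A heavily simplified sketch of a PFF-style decision; the thresholds and the idea of returning a new allocation are assumptions for illustration, not the exact algorithm on the slide:

    def pff_adjust(frames_allocated, fault_rate, upper=2.0, lower=0.5):
        """Adjust a process's frame allocation from its measured fault rate
        (a running mean of faults per second; thresholds are illustrative)."""
        if fault_rate > upper:             # faulting too often: give it another frame
            return frames_allocated + 1
        if fault_rate < lower and frames_allocated > 1:
            return frames_allocated - 1    # plenty of memory: reclaim a frame
        return frames_allocated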

Page 21: Memory Management

Design Issues for Paging Systems
Local versus Global Allocation Policies

Page fault rate as a function of the number of page frames assigned

Page 22: Memory Management

Design Issues for Paging Systems
Page Size

Small page size

• Advantages
  – less internal fragmentation (on average, half of the last page is wasted)
  – better fit for various data structures and code sections
  – less unused program in memory
• Disadvantages
  – programs need many pages, hence larger page tables
  – more transfers, and more time spent loading the page table

Page 23: Memory Management

Design Issues for Paging Systems
Page Size

• Overhead due to page table and internal fragmentation

• Where
  – s = average process size in bytes
  – p = page size in bytes
  – e = size of a page table entry in bytes

overhead = s·e/p + p/2

where the first term (s·e/p) is the space taken by the page table and the second term (p/2) is the internal fragmentation loss. The overhead is minimized when p = √(2·s·e).
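
As an illustration (numbers chosen for convenience, not from the slides): with s = 1 MB = 1,048,576 bytes and e = 8 bytes per page table entry, p = √(2 · 1,048,576 · 8) = √16,777,216 = 4096 bytes, so a 4 KB page minimizes the combined overhead.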

Page 24: Memory Management

Implementation Issues
Operating System Involvement with Paging

Four times when the OS is involved with paging:

1. Process creation
   – determine the program size; create and initialize the page table
   – allocate space in the swap area so pages can be swapped in and out
   – store the information about the page table and swap area

2. Process execution
   – reset the MMU for the new process and flush the TLB
   – make the new process's page table current
   – optionally prepage some pages

Page 25: Memory Management

Implementation Issues
Operating System Involvement with Paging

Four times when the OS is involved with paging (continued):

3. Page fault time
   – read the hardware registers to determine which virtual address caused the fault
   – swap the target page out and the needed page in
   – back up the program counter and re-execute the faulting instruction

4. Process termination time
   – release the page table, the pages, and the swap area

Page 26: Memory Management

Implementation Issues
Page Fault Handling

1. Hardware traps to kernel, saving PC

2. General registers saved by an assembly-language routine, which then calls the OS

3. OS determines which virtual page is needed

4. OS checks the validity and protection of the address. If OK, it finds a page frame (a free one, or it selects a victim to replace). Otherwise it sends a signal to the process or kills it

5. If selected frame is dirty, writes it to disk and suspends the process

Page 27: Memory Management

Implementation Issues
Page Fault Handling

6. OS brings new page in from disk

7. Page tables updated – marked as valid

8. Faulting instruction backed up to the state it had when it began

9. Faulting process scheduled; the OS returns to the assembly-language routine

10. Registers restored and program continues

Page 28: Memory Management

Implementation Issues
Instruction Backup

An instruction causing a page fault

• The instruction causing the fault is stopped partway through

• After the page is fetched, the instruction must be restarted

• The OS must determine where the first byte of the instruction is

Page 29: Memory Management

Implementation Issues
Instruction Backup

• The OS has difficulty determining where the instruction started

• Autoincrement and autodecrement registers may already have been changed as a side effect of executing the instruction, and must be undone

• Some CPUs: The PC is copied into a register before the instruction is executed

• If not available, the OS must jump through hoops

Page 30: Memory Management

Implementation Issues
Locking Pages in Memory

• Virtual memory and I/O occasionally interact
• A process issues a system call to read from a device into a buffer
  – while it waits for the I/O, another process starts up
  – the other process gets a page fault
  – the buffer of the first process may be chosen to be paged out
• Need a way to mark certain pages as locked
  – locked (pinned) pages are exempted from being chosen as victims
• Alternatively, do all I/O to kernel buffers

Page 31: Memory Management

Implementation Issues
Backing Store

• A special swap area is reserved on disk; as new processes are started, swap space is allocated for them.

• The process table keeps the disk address of each process's swap area, so calculating the disk address of a page is easy (a fixed offset from the start of the area).

• Processes can grow during execution.

• Alternatively, allocate nothing in advance: allocate disk space for a page when it is swapped out and deallocate it when it is swapped back in. A map must then record where each page that is not in memory is kept on disk.

Page 32: Memory Management

Implementation Issues
Backing Store

(a) Paging to a static swap area
(b) Backing up pages dynamically

Page 33: Memory Management

Segmentation

• For many problems, two or more separate virtual address spaces may be better than a single one
• In a one-dimensional address space with growing tables, one table may bump into another

Page 34: Memory Management


Segmentation

• Provide the machine with many completely independent address spaces, called segments

• Different segments have different lengths

• To specify an address in this two-dimensional memory, the program supplies a segment number and an address within the segment

• Advantages:
  – linking
  – sharing
  – protection

Page 35: Memory Management


Segmentation

Comparison of paging and segmentation