OS Chapter 9: Virtual Memory
  Background    
     Only part of the program needs to be in memory for execution
Logical address space can therefore be much larger than physical address space
Allows address spaces to be shared by several processes
More programs running concurrently  
     Consider ability to execute partially-loaded program    
     Program no longer constrained by limits of physical memory  
     Virtual memory can be implemented via    
     Demand paging 
Demand segmentation  
     Demand Paging    
     Could bring entire process into memory at load time  
     Or bring a page into memory only when it is needed    
     Less I/O needed, no unnecessary I/O  
     Less memory needed  
     Faster response  
     Lazy swapper – never swaps a page into memory unless page will be needed    
     Swapper that deals with pages is a pager  
     Valid-Invalid Bit    
     With each page-table entry a valid–invalid bit is associated (v => in memory, i.e. memory resident; i => not in memory)
     Initially valid–invalid bit is set to i on all entries  
     During address translation, if valid–invalid bit in page table entry is i => page fault  
     page fault    
     concept    
     If there is a reference to a page, the first reference to that page will trap to the operating system: a page fault
     pure    
     This scheme is pure demand paging: never bring a page into memory until it is required.  
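     A minimal sketch of this check in C (hypothetical structures; in reality the MMU performs the lookup in hardware and the OS supplies the fault handler):

        #include <stdio.h>

        #define PAGES 8

        /* Hypothetical page-table entry: frame number plus valid-invalid bit. */
        struct pte {
            unsigned frame;
            unsigned valid;    /* 1 = v (memory resident), 0 = i (not in memory) */
        };

        struct pte page_table[PAGES];   /* globals are zeroed: all entries start invalid */

        /* Look up a virtual page; an invalid entry traps to the OS (page fault). */
        unsigned translate(unsigned page)
        {
            if (!page_table[page].valid) {
                printf("page fault on page %u: trap to OS, pager brings it in\n", page);
                page_table[page].frame = page;   /* stand-in for the pager's work */
                page_table[page].valid = 1;      /* mark v: now memory resident */
            }
            return page_table[page].frame;
        }

        int main(void)
        {
            translate(3);    /* first reference: page fault (pure demand paging) */
            translate(3);    /* second reference: hits the now-valid entry */
            return 0;
        }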
     Performance of Demand Paging    
     Page Fault Rate 0 ≤ p ≤ 1
     if p = 0, no page faults
     if p = 1, every reference is a fault
     Effective Access Time (EAT)
     EAT = (1 − p) × memory access time + p × (page fault overhead + swap page out + swap page in + restart overhead)
     Collapsing the fault terms into a single service time: EAT = (1 − p) × memory access time + p × page fault time
     example    
     Memory access time = 200 nanoseconds
Average page-fault service time = 8 milliseconds  
     EAT = (1 − p) × 200 + p × (8 milliseconds)
         = (1 − p) × 200 + p × 8,000,000
         = 200 + p × 7,999,800  (in nanoseconds)
     If one access out of 1,000 causes a page fault (p = 0.001), then
         EAT = 8.2 microseconds – a slowdown by a factor of about 40
     If we want performance degradation < 10 percent:
     220 > 200 + 7,999,800 × p
     20 > 7,999,800 × p
     p < 0.0000025
     i.e., less than one page fault in every 400,000 memory accesses
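     The arithmetic above can be checked with a few lines of C (constants taken from the example; nothing here is new data):

        #include <stdio.h>

        int main(void)
        {
            double mem_ns   = 200.0;        /* memory access time */
            double fault_ns = 8000000.0;    /* 8 ms page-fault service time */
            double p        = 1.0 / 1000;   /* one fault per 1,000 accesses */

            /* EAT = (1 - p) * memory access + p * page fault time */
            double eat = (1 - p) * mem_ns + p * fault_ns;
            printf("EAT = %.1f ns (~%.1f us)\n", eat, eat / 1000);

            /* Largest p that keeps degradation under 10 percent (EAT < 220 ns) */
            double p_max = (220 - mem_ns) / (fault_ns - mem_ns);
            printf("p must stay below %.7f (one fault per %.0f accesses)\n",
                   p_max, 1 / p_max);
            return 0;
        }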
     Page Replacement    
     concept    
     Find the location of the desired page on disk
     Find a free frame:
        -  If there is a free frame, use it
        -  If there is no free frame, use a page-replacement algorithm to select a victim frame
        -  Write the victim frame to disk if it is dirty
     Bring the desired page into the (newly) free frame; update the page and frame tables
     Continue the process by restarting the instruction that caused the trap
     Note that there are now potentially two page transfers per page fault – increasing the EAT (see the sketch below)
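     A toy simulation of these steps (all names hypothetical; victim selection is stubbed so any algorithm from the next section can be plugged in):

        #include <stdio.h>

        #define NFRAMES 4
        #define NPAGES  16

        int frame_table[NFRAMES] = { -1, -1, -1, -1 };  /* -1 = free frame */
        int page_to_frame[NPAGES];                      /* -1 = not resident */
        int dirty[NFRAMES];                             /* set on writes */

        int pick_victim(void) { return 0; }  /* stub: FIFO/LRU/clock go here */

        void service_fault(int page)
        {
            int f = -1;
            for (int i = 0; i < NFRAMES; i++)       /* 1. look for a free frame */
                if (frame_table[i] == -1) { f = i; break; }
            if (f == -1) {                          /* 2. none free: evict a victim */
                f = pick_victim();
                if (dirty[f])
                    printf("write page %d back to disk\n", frame_table[f]);
                page_to_frame[frame_table[f]] = -1;
            }
            printf("read page %d from disk into frame %d\n", page, f);
            frame_table[f] = page;                  /* 3. update page and frame tables */
            page_to_frame[page] = f;
            dirty[f] = 0;                           /* 4. caller restarts the instruction */
        }

        int main(void)
        {
            for (int i = 0; i < NPAGES; i++) page_to_frame[i] = -1;
            int refs[] = { 1, 2, 3, 4, 5 };         /* fifth reference forces an eviction */
            for (int i = 0; i < 5; i++)
                if (page_to_frame[refs[i]] == -1) service_fault(refs[i]);
            return 0;
        }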
     Algorithms    
     Frame-allocation algorithm determines:
        -  How many frames to give each process
        -  Which frames to replace
     Page-replacement algorithm:
        -  Want the lowest page-fault rate on both first access and re-access
     First-In-First-Out (FIFO) Algorithm    
     Belady’s Anomaly    
     Adding more frames can cause more page faults!    
     consider 1,2,3,4,1,2,5,1,2,3,4,5  
     With more frames, page faults increase: 3 frames → 9 faults, 4 frames → 10 faults (the simulation below reproduces these counts)
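     A small FIFO simulator that reproduces those counts on the reference string above (a sketch; the frame count is a parameter):

        #include <stdio.h>

        /* Count page faults for FIFO replacement with `nframes` frames. */
        int fifo_faults(const int *refs, int n, int nframes)
        {
            int frames[16], next = 0, faults = 0;
            for (int i = 0; i < nframes; i++) frames[i] = -1;

            for (int r = 0; r < n; r++) {
                int hit = 0;
                for (int i = 0; i < nframes; i++)
                    if (frames[i] == refs[r]) { hit = 1; break; }
                if (!hit) {
                    frames[next] = refs[r];          /* evict the oldest page (FIFO) */
                    next = (next + 1) % nframes;
                    faults++;
                }
            }
            return faults;
        }

        int main(void)
        {
            int refs[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
            int n = sizeof refs / sizeof refs[0];
            printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));  /* 9  */
            printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));  /* 10 */
            return 0;
        }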
     Optimal Algorithm    
     Replace the page that will not be used for the longest period of time – not implementable (it requires future knowledge), but useful as a benchmark
     Least Recently Used (LRU) Algorithm    
     Use past knowledge rather than future:
        -  Replace the page that has not been used for the longest time
        -  Associate a time of last use with each page
     12 faults (on the textbook's example reference string) – better than FIFO but worse than OPT
     Generally a good algorithm and frequently used
     But how to implement it?
     implementation    
     Counter implementation    
     Every page entry has a counter; every time page is referenced through this entry, copy the clock into the counter  
     When a page needs to be replaced, look at the counters to find the smallest value – this requires a search through the table (see the sketch below)
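     A sketch of the counter variant, assuming a global logical clock that ticks on every reference (real hardware support would be needed to do this on every access):

        #include <stdio.h>

        #define NPAGES 8

        unsigned long clock_now = 0;        /* logical clock, ticks per reference */
        unsigned long last_used[NPAGES];    /* per-page "time-of-use" counter */
        int resident[NPAGES];

        void reference(int page)
        {
            last_used[page] = ++clock_now;  /* copy the clock into the counter */
            resident[page] = 1;
        }

        /* Victim = resident page with the smallest counter (full scan needed). */
        int lru_victim(void)
        {
            int victim = -1;
            for (int p = 0; p < NPAGES; p++)
                if (resident[p] && (victim == -1 || last_used[p] < last_used[victim]))
                    victim = p;
            return victim;
        }

        int main(void)
        {
            reference(3); reference(1); reference(3); reference(5);
            printf("LRU victim: page %d\n", lru_victim());  /* page 1 */
            return 0;
        }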
     Stack implementation    
     Keep a stack of page numbers in a doubly linked form
     Page referenced:
        -  move it to the top of the stack
        -  each update is more expensive
        -  but no search is needed for replacement (the victim is at the bottom)
     LRU and OPT are cases of stack algorithms that don’t have Belady’s Anomaly  
     LRU Approximation Algorithms    
     LRU needs special hardware and is still slow
     Reference bit    
     With each page associate a reference bit, initially = 0
     When the page is referenced, the bit is set to 1
     We do not know the order of use, however
     Second-chance algorithm    
     Generally FIFO, plus a hardware-provided reference bit ("clock" replacement)
     If the page to be replaced has:
        -  reference bit = 0 → replace it
        -  reference bit = 1 → set the reference bit to 0, leave the page in memory, and move on to the next page, subject to the same rules
     If every resident page has its reference bit set, second chance degenerates into plain FIFO, so Belady's anomaly can still occur (see the sketch below)
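     A sketch of the clock formulation (circular scan with a hand; the initial bits are made-up example state):

        #include <stdio.h>

        #define NFRAMES 4

        int page_in[NFRAMES] = { 1, 2, 3, 4 };  /* pages currently resident */
        int ref_bit[NFRAMES] = { 1, 0, 1, 1 };  /* hardware-set on each reference */
        int hand = 0;                           /* the clock hand */

        /* Advance the hand, giving ref-bit-1 pages a second chance. */
        int clock_victim(void)
        {
            for (;;) {
                if (ref_bit[hand] == 0) {       /* bit 0 -> replace this one */
                    int victim = hand;
                    hand = (hand + 1) % NFRAMES;
                    return victim;
                }
                ref_bit[hand] = 0;              /* bit 1 -> clear it, move on */
                hand = (hand + 1) % NFRAMES;
            }
        }

        int main(void)
        {
            int f = clock_victim();
            printf("evict page %d from frame %d\n", page_in[f], f);  /* page 2 */
            return 0;
        }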
     Counting Algorithms    
     keep a counter of the number of references that have been made to each page  
     LFU (Least Frequently Used) Algorithm    
     replaces page with smallest count  
     MFU (Most Frequently Used)  Algorithm    
     based on the argument that the page with the smallest count was probably just brought in and has yet to be used  
     Allocation of Frames    
     Each process needs a minimum number of frames
     Two major allocation schemes    
     fixed allocation    
     equal allocation    
     For example, if there are 100 frames (after allocating frames for the OS) and 5 processes, give each process 20 frames  
     proportional allocation    
     Allocate according to the size of the process (see the formula below)
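     The standard proportional-allocation formula, with s_i = size of process p_i, S = Σ s_i, and m = total number of frames:

        a_i = (s_i / S) × m

     For example, with m = 62 frames and two processes of sizes 10 and 127 pages: a_1 = (10/137) × 62 ≈ 4 frames and a_2 = (127/137) × 62 ≈ 57 frames.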
     priority allocation    
     Use a proportional allocation scheme using priorities rather than size  
     If process Pi generates a page fault    
     select for replacement one of its frames  
     select for replacement a frame from a process with lower priority number  
     Global vs. Local Allocation    
     Global replacement     
     process selects a replacement frame from the set of all frames; one process can take a frame from another  
     But then process execution time can vary greatly  
     But greater throughput so more common  
     Local replacement     
     each process selects from only its own set of allocated frames  
     More consistent per-process performance  
     But possibly underutilized memory  
     Thrashing    
     concept    
     a process is busy swapping pages in and out  
     If a process does not have “enough” pages, the page-fault rate is very high    
     Page fault to get page  
     Replace existing frame  
     But quickly need replaced frame back    
     Low CPU utilization
     → the operating system thinks it needs to increase the degree of multiprogramming
     → another process is added to the system, making things worse
     solution    
     working set model     
     Δ = working-set window = a fixed number of page references (example: 10,000 instructions)
     WSSi (working-set size of process Pi) = total number of distinct pages referenced in the most recent Δ (varies in time)
        -  if Δ is too small, it will not encompass the entire locality
        -  if Δ is too large, it will encompass several localities
        -  if Δ = ∞, it will encompass the entire program
     D = Σ WSSi = total demand for frames
     If D > m (the number of available frames) => thrashing
     Policy: if D > m, suspend or swap out one of the processes
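     A sketch that measures WSSi by sliding a window of Δ references over a trace (the trace is hypothetical; real kernels approximate this with reference bits sampled on timer interrupts):

        #include <stdio.h>

        #define NPAGES 16

        /* Distinct pages among the last `delta` references ending at index `end`. */
        int wss(const int *refs, int end, int delta)
        {
            int seen[NPAGES] = { 0 }, count = 0;
            int start = end - delta + 1 < 0 ? 0 : end - delta + 1;
            for (int i = start; i <= end; i++)
                if (!seen[refs[i]]) { seen[refs[i]] = 1; count++; }
            return count;
        }

        int main(void)
        {
            int refs[] = { 1, 2, 1, 3, 2, 1, 4, 4, 4, 5 };
            int n = sizeof refs / sizeof refs[0];
            printf("WSS over the last 4 refs: %d\n", wss(refs, n - 1, 4));  /* {4,5} -> 2 */
            return 0;
        }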
  
     page fault frequency    
     More direct approach than WSS  
     Establish “acceptable” page-fault frequency rate and use local replacement policy    
     If the actual rate is too low, the process loses a frame
     If the actual rate is too high, the process gains a frame (see the snippet below)
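     The control rule as a snippet (the thresholds are assumed values, not from the source):

        #include <stdio.h>

        /* Page-fault-frequency control: keep each process's fault rate in a band. */
        void pff_adjust(double fault_rate, int *frames)
        {
            const double LOW = 0.01, HIGH = 0.10;   /* assumed thresholds */
            if (fault_rate < LOW && *frames > 1)
                (*frames)--;                        /* rate too low: lose a frame */
            else if (fault_rate > HIGH)
                (*frames)++;                        /* rate too high: gain a frame */
        }

        int main(void)
        {
            int frames = 8;
            pff_adjust(0.25, &frames);              /* too many faults */
            printf("frames = %d\n", frames);        /* 9 */
            return 0;
        }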
     Memory-Mapped Files    
     Consider a sequential read of a file on disk using the standard system calls open(), read(), and write()
     Each file access requires a system call and a disk access
     Alternatively, we can use the virtual-memory techniques discussed so far to treat file I/O as routine memory accesses
     This approach, known as memory-mapping a file, allows a part of the virtual address space to be logically associated with the file
     As we shall see, this can lead to significant performance increases when performing I/O
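     A minimal POSIX sketch of reading a file through a memory mapping (error handling kept short; the file name is a placeholder):

        #include <stdio.h>
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("data.txt", O_RDONLY);    /* placeholder file */
            if (fd < 0) return 1;

            struct stat st;
            fstat(fd, &st);

            /* Map the file; page faults pull its pages in on demand. */
            char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            if (p == MAP_FAILED) return 1;

            long newlines = 0;                      /* ordinary memory reads */
            for (off_t i = 0; i < st.st_size; i++)
                if (p[i] == '\n') newlines++;
            printf("%ld lines\n", newlines);

            munmap(p, st.st_size);
            close(fd);
            return 0;
        }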
     Allocating Kernel Memory    
     Treated differently from user memory  
     Often allocated from a free-memory pool    
     Kernel requests memory for structures of varying sizes  
     Some kernel memory needs to be contiguous  
     Buddy System    
     Allocates memory from fixed-size segment consisting of physically-contiguous pages  
     Memory allocated using a power-of-2 allocator
        -  Satisfies requests in units sized as a power of 2 (e.g., 4KB, 8KB, 16KB)
        -  Request rounded up to the next highest power of 2 (e.g., 11KB → 16KB)
     For example, assume a 256KB chunk is available and the kernel requests 21KB:
        -  Split into A_L and A_R of 128KB each
        -  One (A_L) further divided into B_L and B_R of 64KB each
        -  One (B_L) further divided into C_L and C_R of 32KB each – one of these is used to satisfy the request
     Advantage – unused chunks can quickly be coalesced into larger chunks
     Disadvantage – fragmentation (the 21KB request above consumes a 32KB chunk); the sketch below traces the splits
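     A sketch of the split logic only (tracing the example above; free lists and buddy coalescing are omitted):

        #include <stdio.h>

        /* Round a request up to the next power of 2 (sizes in KB). */
        unsigned round_up_pow2(unsigned kb)
        {
            unsigned size = 1;
            while (size < kb) size <<= 1;
            return size;
        }

        int main(void)
        {
            unsigned chunk = 256, request = 21;       /* the example above */
            unsigned need = round_up_pow2(request);   /* 21KB -> 32KB */

            printf("request %uKB -> allocate %uKB\n", request, need);
            while (chunk > need) {                    /* split down to fit */
                printf("split %uKB into two %uKB buddies\n", chunk, chunk / 2);
                chunk /= 2;
            }
            printf("use one %uKB buddy to satisfy the request\n", chunk);
            return 0;
        }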
     Slab Allocation
        -  A slab is one or more physically contiguous pages; a cache consists of one or more slabs, with a separate cache for each unique kernel data structure
        -  Objects are allocated from pre-initialized slots within a slab, so allocation is fast and there is no fragmentation
     Other Considerations    
     prepaging    
     To reduce the large number of page faults that occur at process startup
     Prepage all or some of the pages a process will need, before they are referenced  
     Assume s pages are prepaged and a fraction α of them is actually used
     Is the cost of the s × α saved page faults greater or less than the cost of prepaging the s × (1 − α) unnecessary pages?
     If α is near zero, prepaging loses (see the snippet below)
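     The comparison as a snippet (the cost figures and α are illustrative placeholders):

        #include <stdio.h>

        int main(void)
        {
            double s = 100, alpha = 0.6;    /* assumed values */
            double fault_cost   = 8000000;  /* ns, from the EAT example */
            double prepage_cost = 8000000;  /* ns per prepaged page, assumed */

            double saved  = s * alpha * fault_cost;          /* faults avoided */
            double wasted = s * (1 - alpha) * prepage_cost;  /* useless prepages */
            printf("prepaging %s\n", saved > wasted ? "wins" : "loses");
            return 0;
        }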
     Page Size    
     Page size selection must take into consideration:
        -  Fragmentation
        -  Page table size
        -  Resolution
        -  I/O overhead
        -  Number of page faults
        -  Locality
        -  TLB size and effectiveness
     Always a power of 2, usually in the range 2^12 (4,096 bytes) to 2^22 (4,194,304 bytes)
     TLB Reach     
     TLB Reach – the amount of memory accessible from the TLB
     TLB Reach = (TLB Size) × (Page Size)
     Ideally, the working set of each process is stored in the TLB; otherwise there is a high degree of page faults
     To increase TLB reach:
        -  Increase the page size – this may lead to an increase in fragmentation, as not all applications require a large page size
        -  Provide multiple page sizes – this allows applications that require larger page sizes to use them without an increase in fragmentation
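     A quick worked example of the formula: a 64-entry TLB with 4 KB pages reaches 64 × 4 KB = 256 KB, while the same TLB with 2 MB pages reaches 64 × 2 MB = 128 MB (the entry count is chosen for illustration).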
     program structure    
     Assume an int array data[128][128] stored row-major, with each row on its own page, and a process given fewer than 128 frames

     Program 1 walks the array column by column: 128 × 128 = 16,384 page faults
         for (j = 0; j < 128; j++)
             for (i = 0; i < 128; i++)
                 data[i][j] = 0;

     Program 2 walks it row by row: 128 page faults
         for (i = 0; i < 128; i++)
             for (j = 0; j < 128; j++)
                 data[i][j] = 0;
    
 
 
 
 