Lecture 39
CS501
Midterm & Final Term Short Notes
The Cache
The cache is a small, fast memory that stores frequently accessed data and instructions from the main memory of a computer system. It acts as a buffer between the processor and the main memory, allowing the processor to quickly access data and instructions without waiting for the slower main memory.
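The buffering idea can be pictured with a toy read routine that always asks the cache first and falls back to main memory only on a miss. This is just a sketch: the 8-slot cache, the 256-word memory array, and the function names are made up for illustration, not taken from the lecture.

```c
/* Minimal sketch of the cache-as-buffer idea (sizes are illustrative). */
#include <stdio.h>
#include <stdbool.h>

#define CACHE_SLOTS 8

typedef struct {
    bool valid;          /* does this slot hold real data?          */
    unsigned int addr;   /* which memory address is cached here     */
    int data;            /* the cached value                        */
} CacheSlot;

static CacheSlot cache[CACHE_SLOTS];
static int main_memory[256];             /* stand-in for slow main memory */

/* The processor always asks the cache first; only on a miss does the
 * (slow) main memory get accessed and the slot refilled with a copy. */
int read_word(unsigned int addr)
{
    CacheSlot *slot = &cache[addr % CACHE_SLOTS];
    if (slot->valid && slot->addr == addr)
        return slot->data;               /* cache hit: fast path */

    slot->valid = true;                  /* cache miss: fetch and keep a copy */
    slot->addr = addr;
    slot->data = main_memory[addr];
    return slot->data;
}

int main(void)
{
    main_memory[42] = 7;
    printf("%d\n", read_word(42));       /* miss, fetched from memory */
    printf("%d\n", read_word(42));       /* hit, served from the cache */
    return 0;
}
```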
Important MCQs
What is the purpose of a cache in a computer system?
A) To store infrequently accessed data
B) To provide additional storage for the main memory
C) To act as a buffer between the processor and main memory
D) To speed up the processing of instructions
Answer: C
What principle does the cache operate on?
A) Temporal and spatial locality
B) Random access
C) Sequential access
D) LRU (Least Recently Used) replacement
Answer: A
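Temporal locality means recently used data is likely to be used again soon; spatial locality means addresses near a recent access are likely to be used next. A small C sketch of spatial locality: both loops compute the same sum, only the access order differs, and the row-major walk touches memory in the order it is laid out (the array size is arbitrary).

```c
/* Row-major vs column-major traversal of a 2D array (illustrative sizes). */
#include <stdio.h>

#define N 1024
static int a[N][N];

int main(void)
{
    long sum = 0;

    /* Row-major walk: consecutive accesses touch adjacent addresses,
     * so most of them hit in a cache line that is already loaded. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];

    /* Column-major walk: each access jumps N*sizeof(int) bytes ahead,
     * so spatial locality is poor and far more cache misses occur. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];

    printf("%ld\n", sum);
    return 0;
}
```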
Which of the following is a characteristic of a good cache design?
A) Large capacity
B) High access time
C) High hit rate
D) Low associativity
Answer: C
What does a cache hit allow the processor to do?
A) To retrieve data from the main memory
B) To store data in the main memory
C) To retrieve data from the cache
D) To store data in the cache
Answer: C
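A hit is decided by splitting the address into tag, index, and offset bits and comparing the stored tag for that set. The sketch below assumes a direct-mapped cache with 32-byte lines and 64 sets; these numbers and the helper names are illustrative only.

```c
/* How a direct-mapped cache decides hit or miss (illustrative geometry). */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define LINE_BYTES 32    /* 5 offset bits */
#define NUM_SETS   64    /* 6 index bits  */

typedef struct {
    bool     valid;
    uint32_t tag;
} Line;

static Line cache[NUM_SETS];

/* Returns true on a cache hit, false on a miss (and installs the tag). */
bool access_cache(uint32_t addr)
{
    uint32_t index = (addr / LINE_BYTES) % NUM_SETS;  /* which set        */
    uint32_t tag   = addr / (LINE_BYTES * NUM_SETS);  /* remaining bits   */

    if (cache[index].valid && cache[index].tag == tag)
        return true;                                  /* hit              */

    cache[index].valid = true;                        /* miss: refill line */
    cache[index].tag   = tag;
    return false;
}

int main(void)
{
    printf("%s\n", access_cache(0x1000) ? "hit" : "miss");  /* miss            */
    printf("%s\n", access_cache(0x1004) ? "hit" : "miss");  /* hit: same line  */
    return 0;
}
```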
Which of the following is a disadvantage of a direct-mapped cache?
A) Low hit rate
B) High associativity
C) High complexity
D) Large size
Answer: A
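The low hit rate comes from conflict misses: two addresses that are exactly one cache-size apart map to the same line and keep evicting each other, even while the rest of the cache sits empty. A tiny sketch of that mapping, assuming a 2 KB direct-mapped cache with 32-byte lines (numbers made up for illustration):

```c
/* Two addresses one cache-size apart map to the same direct-mapped line. */
#include <stdio.h>

#define LINE_BYTES 32
#define NUM_LINES  64                        /* 64 * 32 B = 2 KB cache */

int main(void)
{
    unsigned a = 0x0000;                     /* e.g. an element of array A */
    unsigned b = a + LINE_BYTES * NUM_LINES; /* 2 KB further on: array B   */

    printf("A maps to line %u\n", (a / LINE_BYTES) % NUM_LINES);
    printf("B maps to line %u\n", (b / LINE_BYTES) % NUM_LINES);
    /* Both print line 0: an alternating A, B, A, B ... access pattern
     * misses every time, which is the classic direct-mapped weakness. */
    return 0;
}
```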
What is the difference between a write-through and write-back cache?
A) Write-through caches are slower than write-back caches
B) Write-back caches are slower than write-through caches
C) Write-through caches write data to both the cache and main memory, while write-back caches write only to the cache and update main memory when the block is evicted
D) Write-back caches write data to both the cache and main memory, while write-through caches write only to the cache and update main memory when the block is evicted
Answer: C
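A rough sketch of the two policies side by side, using a single cached word and a dirty bit; the sizes and helper names are invented for illustration and are not a real cache controller.

```c
/* Write-through vs write-back, reduced to one cached word and a dirty bit. */
#include <stdio.h>
#include <stdbool.h>

static int  main_memory[256];
static int  cached_value;
static int  cached_addr = -1;
static bool dirty       = false;      /* used only by write-back */

/* Write-through: every store updates the cache AND main memory. */
void write_through(int addr, int value)
{
    cached_addr  = addr;
    cached_value = value;
    main_memory[addr] = value;         /* memory is always up to date */
}

/* Write-back: stores update only the cache and set the dirty bit;
 * main memory is written when the (dirty) block is evicted. */
void write_back(int addr, int value)
{
    if (dirty && cached_addr != addr)
        main_memory[cached_addr] = cached_value;   /* flush old block */
    cached_addr  = addr;
    cached_value = value;
    dirty        = true;
}

int main(void)
{
    write_back(5, 99);
    printf("memory[5] before eviction: %d\n", main_memory[5]);  /* still 0 */
    write_back(6, 1);                  /* evicts addr 5, flushing it */
    printf("memory[5] after eviction:  %d\n", main_memory[5]);  /* now 99  */
    return 0;
}
```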
Which cache replacement algorithm evicts the least recently used cache line?
A) First-In-First-Out (FIFO)
B) Least Frequently Used (LFU)
C) Least Recently Used (LRU)
D) Random
Answer: C
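One way to picture LRU is to stamp each way of a set with the time of its last use and evict the oldest stamp on a miss. The sketch below does this for one 4-way set; real hardware uses cheaper approximations, and the counter here is only illustrative.

```c
/* LRU replacement within one 4-way set, tracked with a use counter. */
#include <stdio.h>
#include <stdbool.h>

#define WAYS 4

typedef struct {
    bool     valid;
    unsigned tag;
    unsigned last_used;   /* larger = used more recently */
} Way;

static Way      set[WAYS];
static unsigned now;

void access_set(unsigned tag)
{
    now++;

    /* Hit: just refresh the recency stamp of the matching way. */
    for (int i = 0; i < WAYS; i++) {
        if (set[i].valid && set[i].tag == tag) {
            set[i].last_used = now;
            printf("tag %u: hit in way %d\n", tag, i);
            return;
        }
    }

    /* Miss: use an empty way if there is one, otherwise evict the
     * way whose last use is the oldest (the LRU victim). */
    int victim = -1;
    for (int i = 0; i < WAYS; i++)
        if (!set[i].valid) { victim = i; break; }
    if (victim < 0) {
        victim = 0;
        for (int i = 1; i < WAYS; i++)
            if (set[i].last_used < set[victim].last_used)
                victim = i;
    }
    printf("tag %u: miss, placed in way %d\n", tag, victim);
    set[victim].valid     = true;
    set[victim].tag       = tag;
    set[victim].last_used = now;
}

int main(void)
{
    unsigned trace[] = {1, 2, 3, 4, 1, 5};   /* tag 5 evicts tag 2, the LRU line */
    for (int i = 0; i < 6; i++)
        access_set(trace[i]);
    return 0;
}
```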
What is cache coherence?
A) The process of updating the cache when the main memory is modified
B) The process of updating the main memory when the cache is modified
C) The process of ensuring that all caches have the same view of shared memory
D) The process of ensuring that all processors have the same view of shared memory
Answer: C
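A very small sketch of how one line's MESI state might change in one cache; it covers only a few common transitions (local write, remote read, remote write) and is not a complete coherence protocol implementation.

```c
/* A few MESI transitions for a single cache line (illustrative only). */
#include <stdio.h>

typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } MesiState;

static const char *name(MesiState s)
{
    static const char *n[] = { "Invalid", "Shared", "Exclusive", "Modified" };
    return n[s];
}

/* This CPU writes the line: it must gain exclusive ownership, so the line
 * becomes Modified and every other cache's copy is invalidated. */
MesiState local_write(MesiState s)  { (void)s; return MODIFIED; }

/* Another CPU reads the line: a Modified copy is written back and the
 * line is downgraded to Shared in this cache. */
MesiState remote_read(MesiState s)  { return (s == INVALID) ? INVALID : SHARED; }

/* Another CPU writes the line: this cache's copy becomes stale. */
MesiState remote_write(MesiState s) { (void)s; return INVALID; }

int main(void)
{
    MesiState s = EXCLUSIVE;             /* line loaded, no other sharers */
    s = local_write(s);   printf("after local write : %s\n", name(s));
    s = remote_read(s);   printf("after remote read : %s\n", name(s));
    s = remote_write(s);  printf("after remote write: %s\n", name(s));
    return 0;
}
```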
Which of the following is an example of a cache miss?
A) When data is successfully retrieved from the cache
B) When data is not found in the cache and must be retrieved from main memory
C) When data is overwritten in the cache
D) When data is stored in the cache
Answer: B
What is the difference between a fully associative and set-associative cache?
A) Fully associative caches have a higher hit rate than set-associative caches
B) Set-associative caches have a higher hit rate than fully associative caches
C) Fully associative caches are larger than set-associative caches
D) Set-associative caches are larger than fully associative caches
Answer: A
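For the same total size, a fully associative cache avoids the conflict misses that a set-associative cache can suffer, so its hit rate is at least as high. The sketch below replays one made-up address trace on a 4-line cache organised both ways (LRU in both); the trace and sizes are chosen only to make the conflict visible.

```c
/* Miss counts for one trace: fully associative vs 2-way set associative. */
#include <stdio.h>
#include <string.h>

#define LINES 4
#define SETS  2
#define WAYS  (LINES / SETS)

/* Look up 'block' among 'n' candidate lines, keeping them in MRU-to-LRU
 * order (LRU replacement); returns 1 on a miss, 0 on a hit. */
static int lookup(int *lines, int n, int block)
{
    for (int i = 0; i < n; i++) {
        if (lines[i] == block) {                      /* hit: move to front */
            for (int j = i; j > 0; j--) lines[j] = lines[j - 1];
            lines[0] = block;
            return 0;
        }
    }
    for (int j = n - 1; j > 0; j--) lines[j] = lines[j - 1];  /* evict LRU */
    lines[0] = block;
    return 1;
}

int main(void)
{
    int trace[] = {0, 2, 4, 0, 2, 4, 0, 2, 4};   /* blocks 0, 2, 4 all map to set 0 */
    int n = sizeof trace / sizeof trace[0];

    int full[LINES], sa[SETS][WAYS];
    memset(full, -1, sizeof full);               /* -1 marks an empty line */
    memset(sa,   -1, sizeof sa);

    int full_misses = 0, sa_misses = 0;
    for (int i = 0; i < n; i++) {
        full_misses += lookup(full, LINES, trace[i]);
        sa_misses   += lookup(sa[trace[i] % SETS], WAYS, trace[i]);
    }

    /* Fully associative: 3 compulsory misses, everything else hits.
     * 2-way set associative: blocks 0, 2, 4 all compete for set 0's two
     * ways, so every access is a conflict miss. */
    printf("fully associative misses: %d\n", full_misses);
    printf("2-way set assoc. misses : %d\n", sa_misses);
    return 0;
}
```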
Subjective Short Notes
What is a cache? How does it improve computer performance?
Answer: A cache is a small, high-speed memory that stores frequently accessed data to reduce the number of times the CPU has to access the slower main memory. It improves computer performance by providing faster access to data, reducing the average memory access time (a rough calculation of this effect is sketched after these notes).
What is the difference between a direct-mapped cache and an associative cache?
Answer: In a direct-mapped cache, each memory block can be stored in only one specific location in the cache. In an associative cache, a memory block can be stored in any cache location (fully associative) or in any way of one set (set-associative).
What is cache coherence? How is it maintained?
Answer: Cache coherence is the property that ensures that all copies of a memory location held in different caches have the same value. It is maintained through a protocol such as MESI (Modified-Exclusive-Shared-Invalid) that controls how cached copies are updated and invalidated.
What is a cache hit? What is a cache miss?
Answer: A cache hit occurs when the CPU requests data that is already stored in the cache. A cache miss occurs when the CPU requests data that is not in the cache and must be retrieved from main memory.
What is the principle of locality? How does it relate to the cache?
Answer: The principle of locality states that memory accesses tend to cluster around recently used locations (temporal locality) and nearby locations (spatial locality). This principle is what makes caching effective: it allows the cache to hold the data most likely to be needed next, reducing the number of cache misses.
What is a write-back cache? How does it differ from a write-through cache?
Answer: A write-back cache writes data to main memory only when the modified block is evicted from the cache. In contrast, a write-through cache writes data to main memory immediately on every store. Write-back caches can be more efficient because they reduce the number of main-memory writes.
What is a cache line? How is it related to cache performance?
Answer: A cache line is the smallest unit of data that can be transferred between the cache and main memory. Its size affects performance: larger lines exploit spatial locality and can reduce the number of misses, but they increase the time needed to fill a line on a miss and can waste space when locality is poor.
What is the difference between a level 1 (L1) cache and a level 2 (L2) cache?
Answer: An L1 cache is a small, very fast cache built into the CPU core. An L2 cache is larger and slower; traditionally it sat outside the CPU on the motherboard or in a separate chip, although modern processors integrate it on the same die.
What is cache bypassing? When is it useful?
Answer: Cache bypassing is the process of skipping the cache and accessing main memory directly. It can be useful when caching would not help, for example when streaming through large, contiguous blocks of memory that are used only once and would otherwise pollute the cache.
What is cache thrashing? How can it be prevented?
Answer: Cache thrashing occurs when the cache is repeatedly filled with data that is immediately evicted, causing a high number of cache misses. It can be reduced by increasing the size or associativity of the cache, or by restructuring the program so that its working set fits in the cache and unnecessary memory accesses are avoided.
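The effect of the hit rate on performance is often summarised as AMAT = hit time + miss rate x miss penalty. A back-of-the-envelope calculation with made-up but typical figures shows why even a small change in the miss rate matters:

```c
/* Average memory access time for a few miss rates (illustrative numbers). */
#include <stdio.h>

int main(void)
{
    double hit_time     = 1.0;    /* cycles to hit in the cache  */
    double miss_penalty = 100.0;  /* cycles to reach main memory */

    for (int mr = 1; mr <= 10; mr += 3) {          /* miss rate in percent */
        double amat = hit_time + (mr / 100.0) * miss_penalty;
        printf("miss rate %2d%% -> AMAT = %.1f cycles\n", mr, amat);
    }
    /* Going from a 1% to a 10% miss rate multiplies the effective memory
     * access time several times over, which is why a good cache design
     * aims for a high hit rate rather than just a large capacity. */
    return 0;
}
```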