Lecture 41

CS501

Midterm & Final Term Short Notes

Numerical Examples of DRAM and Cache

Numerical examples of DRAM and Cache involve the calculation of the hit rate, access time, and efficiency of a cache or DRAM. These examples involve determining the number of hits and misses in the cache, calculating the average access time for a memory reference, and measuring the speedup a cache provides over accessing DRAM alone.


Important MCQs
Midterm & Final Term Preparation
Past papers included

  1. What is the hit rate of a cache with 2000 cache lines, where 1500 references were made and 300 misses occurred?
     a. 85%  b. 80%  c. 75%  d. 70%
     Answer: b (hits = 1500 - 300 = 1200, so hit rate = 1200 / 1500 = 80%)

  2. What is the miss rate of a cache with 512 cache lines, where 1000 references were made and 50 misses occurred?
     a. 5%  b. 10%  c. 15%  d. 20%
     Answer: a (miss rate = 50 / 1000 = 5%)

  3. If a cache access takes 5 ns, a DRAM access takes 50 ns, and the hit rate of the cache is 90%, what is the average memory access time?
     a. 8.5 ns  b. 9.5 ns  c. 10.5 ns  d. 11.5 ns
     Answer: b (0.9 × 5 + 0.1 × 50 = 9.5 ns)

  4. A program has a total of 10,000 memory references, of which 1000 are cache misses. What is the hit rate of the cache?
     a. 90%  b. 85%  c. 80%  d. 75%
     Answer: a ((10,000 - 1000) / 10,000 = 90%)

  5. A cache has 512 lines, each of which can hold 32 bytes. How many bits are required to select a line in this cache?
     a. 7 bits  b. 8 bits  c. 9 bits  d. 10 bits
     Answer: c (2^9 = 512 lines)

  6. If a cache has a hit rate of 95%, what is the miss rate?
     a. 5%  b. 10%  c. 15%  d. 20%
     Answer: a (miss rate = 100% - 95% = 5%)

  7. If a cache has a hit rate of 80% and an access time of 5 ns, and a DRAM has an access time of 50 ns, what is the average memory access time?
     a. 12 ns  b. 13 ns  c. 14 ns  d. 15 ns
     Answer: c (0.8 × 5 + 0.2 × 50 = 14 ns)

  8. A two-level cache has an L1 with a hit rate of 95% and an access time of 2 ns, an L2 with a hit rate of 80% and an access time of 10 ns, and a main-memory access time of 100 ns. What is the effective access time?
     a. 2.9 ns  b. 3.3 ns  c. 3.7 ns  d. 4.1 ns
     Answer: b (0.95 × 2 + 0.05 × (0.8 × 10 + 0.2 × 100) = 3.3 ns)

  9. A cache has 256 lines, each of which can hold 64 bytes. What is the total capacity of the cache in bytes?
     a. 16384 bytes  b. 32768 bytes  c. 65536 bytes  d. 131072 bytes
     Answer: a (256 × 64 = 16,384 bytes)

  10. If a cache has a hit rate of 80% and an access time of 5 ns, and a DRAM has an access time of 50 ns, what is the speedup achieved by the cache?
      a. 4x  b. 5x  c. 6x  d. 7x
      Answer: a (average access time with cache = 14 ns, so speedup = 50 / 14 ≈ 3.6, roughly 4x)
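The arithmetic behind these questions reduces to two formulas: hit rate = hits / references, and average memory access time = hit rate × cache time + miss rate × DRAM time. A minimal Python sketch of both (the function names are illustrative, not from the lecture):

```python
def hit_rate(references, misses):
    """Fraction of references served by the cache."""
    return (references - misses) / references

def amat(hit_ratio, cache_ns, dram_ns):
    """Average memory access time as a weighted average of the two levels."""
    return hit_ratio * cache_ns + (1 - hit_ratio) * dram_ns

print(hit_rate(1500, 300))      # 0.8, i.e. an 80% hit rate
print(amat(0.9, 5, 50))         # ~9.5 ns for a 90% hit rate, 5 ns cache, 50 ns DRAM
print(50 / amat(0.8, 5, 50))    # ~3.6x speedup over DRAM-only access
```

The speedup on the last line compares the DRAM-only access time (50 ns) against the cached average; the same weighted-average convention is used throughout.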



Subjective Short Notes
Midterm & Final Term Preparation
Past papers included

  1. Explain the concept of cache hit and cache miss with an example.
     Answer: A cache hit occurs when the data requested by the processor is present in the cache memory; a cache miss occurs when it is not. For example, consider a cache that stores copies of main-memory contents. If the processor requests data that is present in the cache, it is a cache hit. If the requested data is not present, it is a cache miss, and the data must be fetched from main memory and stored in the cache for future use.

  2. How is the hit rate of a cache memory calculated?
     Answer: The hit rate is the number of cache hits divided by the total number of memory access requests. For example, if a cache receives 100 memory access requests and 80 of them result in cache hits, its hit rate is 80%.

  3. Explain the difference between DRAM and SRAM.
     Answer: DRAM (Dynamic Random Access Memory) stores each bit in a capacitor, which requires constant refreshing to maintain its contents, while SRAM (Static Random Access Memory) stores each bit in a flip-flop, which does not require refreshing. This makes SRAM faster and more expensive than DRAM. DRAM is typically used for main memory, while SRAM is used for cache memory.

  4. What is the concept of page replacement in virtual memory?
     Answer: Page replacement is a technique used in virtual memory to manage memory allocation. When the available physical memory becomes full, the operating system swaps some pages out to the hard disk to free up space. When a process needs a page that is not present in physical memory, the operating system replaces a page currently in memory with the requested page from the hard disk.

  5. How does the size of a cache affect its performance?
     Answer: A larger cache can hold more data, which increases the chance of a cache hit and reduces the number of cache misses. This, in turn, reduces the time spent accessing main memory, resulting in faster overall performance. However, a larger cache also requires more power and is more expensive than a smaller one.

  6. What is the concept of write-back and write-through in cache memory?
     Answer: Write-back and write-through are two techniques for updating main memory. With write-back, a write operation updates only the cache; the cached data is marked "dirty" and the update to main memory is deferred until later. With write-through, the cache and main memory are updated simultaneously on every write operation.

  7. Explain the concept of associative mapping in cache memory.
     Answer: In associative mapping, each block of data in the cache is stored together with a tag identifying the block's location in main memory. When the processor requests a block, the cache compares the requested tag against the tags of all blocks it holds; if a match is found, the corresponding block is returned. This allows the cache to place data flexibly, without a fixed address mapping.

  8. What is the concept of a TLB?
     Answer: A TLB (Translation Lookaside Buffer) is a small, fast cache that holds recently used virtual-to-physical address translations, so that most memory accesses can skip the slower page-table lookup in main memory.
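The hit/miss counting and replacement ideas discussed above can be illustrated with a tiny fully associative cache that uses LRU (least recently used) replacement. This is a hypothetical sketch; the class and names are not from the lecture:

```python
from collections import OrderedDict

class TinyCache:
    """Fully associative cache with LRU replacement, counting hits and misses."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()   # block address -> cached data (omitted here)
        self.hits = 0
        self.misses = 0

    def access(self, block):
        if block in self.lines:
            self.hits += 1
            self.lines.move_to_end(block)        # mark as most recently used
        else:
            self.misses += 1
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)   # evict the least recently used
            self.lines[block] = None

cache = TinyCache(capacity=2)
for block in [1, 2, 1, 3, 2]:
    cache.access(block)
print(cache.hits, cache.misses)   # 1 hit, 4 misses -> 20% hit rate
```

Accessing block 3 evicts block 1's older neighbour (block 2 at that moment), so the final reference to block 2 misses again; the same reference stream against a larger cache would hit more often, which is the size/performance trade-off described in question 5.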

Numerical examples of DRAM and cache are used to illustrate the performance improvement gained by using a memory hierarchy. The cache is a small but fast memory that stores recently accessed data. DRAM, on the other hand, is larger but slower than cache. When a processor needs to access memory, it first checks the cache. If the required data is present in cache, it is immediately returned to the processor. Otherwise, the processor retrieves the data from DRAM and stores it in cache for future use.

Consider the following example: a processor requires data from memory at address 0x200. The cache has a block size of 16 bytes, and the current block in cache starts at address 0x1F0. Assume that the DRAM access time is 100 nanoseconds (ns) and the cache access time is 10 ns. Since the data at address 0x200 is not present in cache, the processor needs to retrieve the entire 16-byte block from DRAM, which takes 100 ns. Once the block is in cache, subsequent accesses to any address within the block take only 10 ns.

Another example involves the concept of cache hit rate. Assume that a cache has a capacity of 256 bytes and a hit rate of 80%. The processor requires data from memory at address 0x400. The cache is organized as 16-byte blocks, and the current block in cache starts at address 0x3C0. If the required data is present in cache, the processor can access it in 10 ns; if not, the processor must retrieve the entire block from DRAM, taking 100 ns. The hit rate of the cache can be improved by increasing its size or by using a more sophisticated replacement policy.

In conclusion, the use of DRAM and cache in a memory hierarchy can significantly improve system performance. The improvement can be quantified through numerical examples such as those described above, which illustrate the impact of cache hit rates, block sizes, and access times on overall memory performance.
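The block arithmetic in the first example can be checked directly: a block's start address is the reference address with its low offset bits cleared. A short sketch under the same assumptions (16-byte blocks, 10 ns cache, 100 ns DRAM); the helper names are illustrative:

```python
BLOCK_SIZE = 16   # bytes, as in the example above

def block_base(addr):
    """Start address of the block containing addr (low offset bits cleared)."""
    return addr & ~(BLOCK_SIZE - 1)

# The block starting at 0x1F0 covers 0x1F0-0x1FF, so a reference to 0x200
# falls in the next block: a cache miss that costs the 100 ns DRAM access.
print(hex(block_base(0x1FF)))   # 0x1f0 -> still inside the cached block
print(hex(block_base(0x200)))   # 0x200 -> a different block, so a miss

def avg_access_time(hit_rate, cache_ns=10, dram_ns=100):
    """Weighted-average access time for the stated timings."""
    return hit_rate * cache_ns + (1 - hit_rate) * dram_ns

print(round(avg_access_time(0.8), 1))   # 28.0 ns at the 80% hit rate above
```

With an 80% hit rate the average access sits much closer to the 10 ns cache time than to the 100 ns DRAM time, which is the quantitative payoff of the memory hierarchy described in this section.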