Computer Science 61C on YouTube

Memory Caching
Mismatch between processor and memory speeds leads us to add a new level: a memory cache
Implemented with the same IC processing technology as the CPU: faster but more expensive than DRAM memory
Cache is a copy of a subset of main memory
Most processors have separate caches for instructions and data
The farther a level is from the processor, the longer its access time and the larger the size of the memory at that level
Inclusive – what is in the L1 cache is a subset of what is in the L2 cache, which is a subset of what is in main memory, which is a subset of what is in secondary memory

Typical Memory Hierarchy
The trick: present the processor with as much memory as is available in the cheapest technology, at the speed offered by the fastest technology

If a level is closer to the processor, it is:
Smaller
Faster
More expensive
Subset of lower levels (contains most recently used data)
Lowest level (usually disk) contains all available data
Memory Hierarchy presents the processor with the illusion of very large & fast memory

Cache contains copies of data in memory that are being used
Memory contains copies of data on Disk that are being used
Caches work on the principles of temporal and spatial locality
Temporal locality: if we use it now, chances are we’ll want to use it again soon
Spatial locality: if we use a piece of memory, chances are we’ll use the neighboring pieces soon

Temporal locality / locality in time
Spatial Locality / locality in space
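
As a rough illustration of both kinds of locality (not from the lecture; the array size and names are arbitrary), the C sketch below sums a 2D array row by row, which exploits spatial locality because C stores arrays in row-major order, and then column by column, which does not; reusing the variable sum on every iteration is an example of temporal locality.

```c
#include <stdio.h>

#define N 1024                 /* arbitrary size for illustration */

static int grid[N][N];

int main(void) {
    long sum = 0;

    /* Good spatial locality: row-major traversal touches consecutive
       addresses, so each cache block is fully used before eviction.
       'sum' is referenced every iteration -- temporal locality. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += grid[i][j];

    /* Poor spatial locality (for contrast): column-major traversal
       jumps N * sizeof(int) bytes between accesses, so consecutive
       accesses tend to land in different cache blocks. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += grid[i][j];

    printf("sum = %ld\n", sum);
    return 0;
}
```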

Direct-Mapped Cache
In a direct-mapped cache, each memory address is associated with one possible block within the cache
– Therefore, we only need to look in a single location in the cache for the data, if it exists there
– Block is the unit of transfer between cache and memory
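
A minimal sketch of how a direct-mapped cache maps an address to a single block, assuming a made-up 1 KiB cache with 16-byte blocks (the sizes and the example address are hypothetical, chosen only to show the tag/index/offset split):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical direct-mapped cache: 1 KiB of data, 16-byte blocks. */
#define BLOCK_SIZE   16       /* bytes per block          */
#define NUM_BLOCKS   64       /* 1 KiB / 16 B             */
#define OFFSET_BITS  4        /* log2(BLOCK_SIZE)         */
#define INDEX_BITS   6        /* log2(NUM_BLOCKS)         */

int main(void) {
    uint32_t addr = 0x12345678;   /* example address */

    /* Split the address into offset, index, and tag fields. */
    uint32_t offset = addr & (BLOCK_SIZE - 1);
    uint32_t index  = (addr >> OFFSET_BITS) & (NUM_BLOCKS - 1);
    uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);

    /* The index picks the one cache block this address can occupy;
       the tag stored with that block confirms a hit;
       the offset selects the byte within the block. */
    printf("addr 0x%08x -> tag 0x%x, index %u, offset %u\n",
           (unsigned)addr, (unsigned)tag, (unsigned)index, (unsigned)offset);
    return 0;
}
```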
