caching - Why does cache write-back happen the way it does?
When a system writes data to a cache, it must at some point write that data to the backing store as well.
Why? My course notes don't justify this. Is any component other than the processor interested in what's in the cache? Apparently the mirroring isn't urgent, since the writes can be postponed...
The timing of this write is controlled by what is known as the write policy.
There are two basic writing approaches:
...
Write-back (also called write-behind): initially, writing is done only to the cache. The write to the backing store is postponed until the cache blocks containing the data are about to be modified or replaced by new content.
So it's postponed until the block is about to be replaced? How does that make sense -- you're mirroring information you know is going to change! Why not write it back when the blocks are added?
Let's assume we are talking about CPU caches here; actually, the following rules hold for all types of write-back caches (e.g. databases and the like).
One goal of a cache is to exploit temporal locality: a small number of memory addresses are written to frequently ("hot" addresses), while the other addresses are "cold". An example of "hot" addresses is a program's stack, because each time the program enters a function its arguments are copied onto the stack, and each time the program exits a function they are removed from it. Thus, the same stack addresses are reused over and over. It would be slow to work directly with RAM in this case (RAM latency is ~200 CPU cycles, while L1 cache latency is ~4 cycles). That's why a write operation only modifies the cache entries, and the modified cache entry is marked "dirty". The cache entry is synchronized with RAM later, when one of the following events occurs:
- A write operation is performed on an address that is not present in the cache. Because all cache entries are already occupied, a "victim" cache entry has to be found. If the "victim" entry is dirty (a), it must first be copied back to main memory (a slow operation; the CPU is blocked while it runs). If it is not dirty (b), the entry is simply dropped (fast).
- The memory subsystem tries to reduce the probability of (a) in favor of (b). This is achieved by periodically copying dirty cache entries back to main memory in the background and clearing their dirty flags. A rough sketch of both cases follows below.
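To make this concrete, here is a minimal sketch in C of a write-back cache, assuming a tiny direct-mapped cache in front of a simulated RAM array; the names (Line, cache_write, flush_dirty) are illustrative, not any real API. A write only touches the cache and sets the dirty flag; the copy to the backing store happens either when a dirty victim is evicted (case (a)) or during a background flush, after which eviction hits the cheap case (b).

```c
/* Minimal write-back cache sketch: direct-mapped, over a simulated RAM.
 * All names here are illustrative, not a real hardware or library API. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RAM_SIZE   64
#define CACHE_SETS 4            /* direct-mapped: one line per set */

static uint8_t ram[RAM_SIZE];   /* the slow backing store */

typedef struct {
    bool     valid;
    bool     dirty;             /* set on write, cleared on write-back */
    uint32_t addr;              /* which RAM address this line mirrors */
    uint8_t  data;
} Line;

static Line cache[CACHE_SETS];

/* Copy a dirty line back to RAM (the slow step) and clear its flag. */
static void write_back(Line *line) {
    if (line->valid && line->dirty) {
        ram[line->addr] = line->data;
        line->dirty = false;
    }
}

/* A write goes to the cache only; the line is just marked dirty. */
static void cache_write(uint32_t addr, uint8_t value) {
    Line *line = &cache[addr % CACHE_SETS];

    if (!line->valid || line->addr != addr) {
        /* Miss: this line is the "victim". Case (a): dirty, so it must be
         * copied to RAM before reuse. Case (b): clean, just overwrite it. */
        write_back(line);
        line->valid = true;
        line->addr  = addr;
        line->data  = ram[addr];   /* fetch the block we are replacing */
    }
    line->data  = value;
    line->dirty = true;            /* RAM is now out of date */
}

/* Background flush: clean dirty lines so later evictions hit the fast
 * case (b) instead of the slow case (a). */
static void flush_dirty(void) {
    for (int i = 0; i < CACHE_SETS; i++)
        write_back(&cache[i]);
}

int main(void) {
    cache_write(3, 42);            /* only the cache is updated; line is dirty */
    printf("ram[3] before flush: %d\n", ram[3]);  /* still 0 */
    flush_dirty();
    printf("ram[3] after flush:  %d\n", ram[3]);  /* now 42 */
    return 0;
}
```

Running it shows the point behind the question: ram[3] still holds the old value after cache_write and only catches up after the flush. That lag is exactly the postponement that write-back trades for fewer slow RAM writes on "hot" addresses.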