In other words, do L1, L2, L3, etc. caches always reflect the endianness of their CPU?
Or does it make more sense to always store data in caches in some specific endianness?
Is there a general design decision?
Most modern caches do not store data as a sequential chunk of bytes, but rather use banking and interleaving techniques due to floorplan or timing considerations. In addition, most caches employ error correction, so additional check bits may be interleaved with the data.
As a result, there's no real sense in discussing the endianness of a cache, since the internal order is usually mangled by design considerations. On top of that, in most cases caches provide data at full cache-line granularity, so there's also no point in asking at what offset you start reading.
Finally, endianness is a matter of architecture: it describes how the bytes you get from the CPU are to be interpreted as multi-byte values. Caches are microarchitectural, so by definition the CPU's functional behavior should be oblivious to them, and they're free to implement whatever internal structure they want. The question may still be meaningful if you have some means to peek inside the cache and want to translate what you see into a value, in which case the above considerations apply, and each processor may differ.
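As a concrete illustration of that architectural view (not part of the original answer), here is a minimal C sketch showing what endianness actually determines: how the same bytes in memory map to a multi-byte value. It says nothing about how a cache stores those bytes internally.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint32_t value = 0x01020304;
    uint8_t bytes[4];
    memcpy(bytes, &value, sizeof value);   /* capture the raw in-memory byte order */

    /* A little-endian CPU stores the least significant byte first (04 03 02 01),
     * a big-endian CPU stores the most significant byte first (01 02 03 04). */
    printf("in-memory bytes: %02x %02x %02x %02x\n",
           bytes[0], bytes[1], bytes[2], bytes[3]);

    if (bytes[0] == 0x04)
        puts("this CPU is little-endian");
    else if (bytes[0] == 0x01)
        puts("this CPU is big-endian");

    return 0;
}
```

Whatever banking, interleaving, or ECC layout the caches use internally, this program prints the same result on a given architecture, because the caches are invisible to the program's functional behavior.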