Address translation with multiple pagesize-specific TLBs
For Intel 64 and IA-32 processors, for both data and code independently, there may be both a 4KB TLB and a large-page (2MB, 1GB) TLB (LTLB). How does address translation work in this case?

  1. Would the hardware simply be able to access both in parallel, knowing that a double-hit can't occur?
  2. In the LTLBs, how would the entries be organized? I suppose, when the entry is originally filled from a page-structure entry, the LTLB entry could include information about how a hit on this entry would proceed?

Does anyone have a reference to a current microarchitecture?

Platto asked 15/4, 2018 at 13:35 Comment(0)

There are many possible designs for a TLB that supports multiple page sizes and the trade-offs are significant. However, I'll only briefly discuss those designs used in commercial processors (see this and this for more).

One immediate issue is how to know the page size before accessing a set-associative TLB. A given virtual address to be mapped to a physical address has to be partitioned as follows:

-----------------------------------------
|       page number       | page offset |
-----------------------------------------
|     tag     |   index   | page offset |
-----------------------------------------

The index is used to determine which set of the TLB to look up and the tag is used to determine whether there is a matching entry in that set. But given only a virtual address, the page size cannot be known without accessing the page table entry. And if the page size is not known, the size of the page offset cannot be determined. This means that the locations of the bits that constitute the index and the tag are not known.
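
To make this concrete, here is a minimal sketch in C (the 16-set geometry and the function names are made up for illustration) showing that the same virtual address produces different (index, tag) pairs depending on the assumed page size:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 16-set TLB: 4 index bits (the geometry is made up). */
    #define INDEX_BITS 4

    /* Split a virtual address into (tag, index) for a given page offset
     * width; the split depends on the page size. */
    static void split(uint64_t va, unsigned offset_bits,
                      uint64_t *tag, uint64_t *index)
    {
        *index = (va >> offset_bits) & ((1u << INDEX_BITS) - 1);
        *tag   = va >> (offset_bits + INDEX_BITS);
    }

    int main(void)
    {
        uint64_t va = 0x7f3a12345678, tag, idx;

        split(va, 12, &tag, &idx);   /* if it is a 4 KB page */
        printf("4KB: index=%llu tag=%#llx\n",
               (unsigned long long)idx, (unsigned long long)tag);

        split(va, 21, &tag, &idx);   /* if it is a 2 MB page */
        printf("2MB: index=%llu tag=%#llx\n",
               (unsigned long long)idx, (unsigned long long)tag);

        /* Same address, different (index, tag) pairs: the hardware cannot
         * pick the right set before knowing the page size. */
        return 0;
    }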

Most commercial processors use one of two designs (or both) to deal with this issue. The first uses a parallel TLB structure where each TLB is designated for page entries of a particular size only (this is not quite precise; see below). All TLBs are looked up in parallel. There is normally either a single hit or a miss in all of them, but there are also situations where multiple hits can occur; in such cases the processor may choose one of the cached entries.
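
As a rough software model of this design (a sketch only; the toy direct-mapped geometry and the structure names are made up, and the sequential probing stands in for what the hardware does with parallel comparators):

    #include <stdbool.h>
    #include <stdint.h>

    /* Toy direct-mapped TLB, one instance per page size. */
    #define SETS 16

    struct tlb {
        unsigned offset_bits;      /* 12 for 4 KB, 21 for 2 MB, 30 for 1 GB */
        bool     valid[SETS];
        uint64_t tag[SETS];
        uint64_t frame[SETS];      /* physical frame number */
    };

    static bool probe(const struct tlb *t, uint64_t va, uint64_t *pa)
    {
        uint64_t vpn = va >> t->offset_bits;      /* virtual page number */
        uint64_t idx = vpn % SETS;
        if (t->valid[idx] && t->tag[idx] == vpn) {
            uint64_t off = va & ((1ull << t->offset_bits) - 1);
            *pa = (t->frame[idx] << t->offset_bits) | off;
            return true;
        }
        return false;
    }

    /* Hardware probes all size-specific TLBs in the same cycle; software
     * can only model that sequentially. Because each structure uses its
     * own page size to split the address, no single split is needed. */
    bool lookup(struct tlb *tlbs[], int n, uint64_t va, uint64_t *pa)
    {
        for (int i = 0; i < n; i++)
            if (probe(tlbs[i], va, pa))
                return true;       /* first hit wins in this model */
        return false;              /* all miss: walk the page tables */
    }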

The second is by using a fully-associative TLB, which is designed as follows. Let POmin denote the size of the page offset for the smallest page size supported by the architecture. Let VA denote the size of a virtual address. In a fully-associative cache, an address is partitioned into a page offset and a tag; there is no index. Let Tmin denote VA - POmin. The TLB is designed so that each entry holds a tag of size Tmin, irrespective of the page size of the page table entry cached in that TLB entry.

The Tmin most significant bits of the virtual address are supplied to the comparator at each entry of the fully-associative TLB to compare the tags (if the entry is valid). The comparison is performed as follows.

                     |    M   |
                     |11| 0000|               the mask of the cached entry
--------------------------------------------
|        T(x)        |  M(x)  |               bits of the offset that need to be masked out
--------------------------------------------
|        T(x)        |        PO(x)        |  partitioning according to the actual page size
--------------------------------------------
|           T(min)            |   PO(min)  |  partitioning before tag comparison
--------------------------------------------

Each entry in the TLB contains a field called the tag mask. Let Tmax denote the size of the tag for the largest page size supported by the architecture. Then the size of the tag mask, M, is Tmin - Tmax. When a page table entry gets cached in the TLB, the mask is set so that when it is bitwise-AND'ed with the corresponding least significant bits of a given tag (of size Tmin), any bits that belong to the page offset field become all zeros. In addition, the tag stored in the entry is padded with a sufficient number of zeros so that its size is Tmin. So some bits of the mask are zeros while others are ones, as shown in the figure above.
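
The fill and match logic can be sketched as follows, assuming 4 KB as the smallest supported page size (names and field layout are illustrative, not an actual hardware description):

    #include <stdbool.h>
    #include <stdint.h>

    #define PO_MIN 12   /* page offset width of the smallest page (4 KB) */

    /* One entry of the fully-associative TLB (illustrative layout). */
    struct fa_entry {
        bool     valid;
        uint64_t tag;    /* T(min)-sized tag, zero-padded for larger pages */
        uint64_t mask;   /* the tag mask: 1s over tag bits, 0s over the
                            bits that are page offset for this entry    */
        uint64_t frame;  /* physical frame number of the cached page    */
    };

    /* Cache a translation for a page whose offset width is po_bits
     * (12 for 4 KB, 21 for 2 MB, 30 for 1 GB). */
    static void fill(struct fa_entry *e, uint64_t va, uint64_t pa,
                     unsigned po_bits)
    {
        e->valid = true;
        e->mask  = ~((1ull << (po_bits - PO_MIN)) - 1); /* zeros where this
                                                           page's offset
                                                           overlaps T(min) */
        e->tag   = (va >> PO_MIN) & e->mask;            /* zero-padded tag */
        e->frame = pa >> po_bits;
    }

    /* What each entry's comparator does (in hardware, for all entries in
     * parallel): mask out the offset bits of the incoming T(min) tag,
     * then compare with the stored tag. */
    static bool match(const struct fa_entry *e, uint64_t va)
    {
        return e->valid && (((va >> PO_MIN) & e->mask) == e->tag);
    }

With this scheme a single array of entries serves all page sizes, at the cost of a wider comparator and the extra mask bits per entry.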

Now I'll discuss a couple of examples. For simplicity, I'll assume there is no hyperthreading (possible design options include sharing, static partitioning, and dynamic partitioning). Intel Skylake uses the parallel TLB design for both the L1 D/I TLB and the L2 TLB. In Intel Haswell, 1 GB pages are not supported by the L2 TLB. Note that 4 MB pages use two TLB entries (with replicated tags). I think that 4 MB page table entries can only be cached in the 2 MB page entry TLB. The AMD 10h and 12h processors use a fully-associative L1 DTLB, a parallel L2 DTLB, a fully-associative parallel L1 ITLB, and an L2 ITLB that supports only 4 KB pages. The Sparc T4 processor uses a fully-associative L1 ITLB and a fully-associative L1 DTLB. There is no L2 TLB in Sparc T4.

Inearth answered 17/4, 2018 at 1:52 Comment(4)
Great and thorough answer! Also, great references! Thank you!Platto
About the description of the first design, which says "There are also situations where multiple hits can occur.": do we have an example of such a situation? I feel like multiple hits can't happen, because the smaller page must have an address/tag distinguishable from the larger page's.Sphygmograph
@Sphygmograph For example, the translation for a 4KB page may be modified by software such that it becomes part of a larger 2MB page without flushing the corresponding translation that may exist in the TLBs. A subsequent access outside the 4KB range but inside the new 2MB page causes a new translation to be cached. At this point, an access to the 4KB range may hit in two different TLB entries. The behavior is undefined if the cached physical address or page attributes are different.Inearth
I was not aware of such tricks inside the architecture. I thought the page size was a tunable parameter that belongs to the OS space and that the architecture only has to "follow" the OS definition. Thanks for this enlightenment.Zug
