Why 4-level paging can only cover 64 TiB of physical address space

These words are from linux/Documentation/x86/x86_64/5level-paging.rst:

Original x86-64 was limited by 4-level paging to 256 TiB of virtual address space and 64 TiB of physical address space.

I know the virtual address limit is 256 TiB because 2^48 = 256 TiB, but I don't understand why the physical limit is only 64 TiB.

Suppose we set the page size to 4 KiB. A linear address then has 12 bits of offset and 9 bits of index at each of the four levels, which means 512 entries per level. So a linear address can cover 512^4 pages, and 512^4 * 4 KiB = 256 TiB of space.
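
For what it's worth, here is that arithmetic as a tiny standalone C check, nothing x86-specific, just redoing the numbers above:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    // 4 levels of 512 entries each, times a 4 KiB page:
    uint64_t pages = 512ULL * 512 * 512 * 512;   // 512^4 = 2^36 pages
    uint64_t bytes = pages * 4096;               // * 2^12 = 2^48 bytes
    printf("%llu TiB\n", (unsigned long long)(bytes >> 40));   // prints 256
}
```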

This is my understanding of how the space limit is calculated. I'm wondering what's wrong with it.

Volition answered 28/5, 2022 at 16:19 Comment(0)

The x86-64 ISA's physical address space limit is unchanged by PML5, remaining at 52-bit. Real CPUs implement some narrower number of physical address bits, saving bits in cache tags and TLB entries, among other places.
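
If you want to see what a given CPU actually implements, CPUID leaf 0x80000008 reports the physical and linear address widths. A minimal sketch using GCC/Clang's <cpuid.h>, assuming an x86-64 host:

```c
#include <stdio.h>
#include <cpuid.h>

int main(void) {
    unsigned eax, ebx, ecx, edx;
    // Leaf 0x80000008: EAX[7:0] = physical address bits, EAX[15:8] = linear address bits
    if (__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
        printf("physical address bits: %u\n", eax & 0xFF);         // e.g. 39, 46, or 52
        printf("linear   address bits: %u\n", (eax >> 8) & 0xFF);  // 48 (PML4) or 57 (PML5)
    }
    return 0;
}
```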

The 64 TiB limit is not imposed by x86-64 itself, but by the way Linux requires more virtual address space than physical for its own convenience and efficiency. See x86_64/mm.txt for the actual layout of Linux's virtual address space on x86-64 with PML4 paging, and note the 64 TB "direct mapping of all physical memory (page_offset_base)".


x86-64 Linux doesn't do HIGHMEM / LOWMEM

Linux can't actually use more physical memory than 1/4 of the virtual address space without nasty HIGHMEM / LOWMEM stuff like in the bad old days of 32-bit kernels on machines with more than 1 GiB of RAM (vm/highmem.html). (Those used a 3:1 user:kernel split of address space, letting user-space have 3 GiB, but with the kernel having to map pages in/out of its own space if it wasn't accessing them via the current process's user-space addresses.)
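
A back-of-the-envelope version of that 1/4 relationship, with the region sizes taken from the mm.txt layout mentioned above rather than queried from a live kernel:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t virt_total  = 1ULL << 48;       // 256 TiB: total PML4 virtual space
    uint64_t kernel_half = virt_total / 2;   // 128 TiB: the kernel's half of it
    uint64_t direct_map  = kernel_half / 2;  //  64 TiB: direct map of all physical RAM
    printf("direct map = %llu TiB\n", (unsigned long long)(direct_map >> 40));  // 64
}
```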

Linus's rant about 32-bit PAE expands on why it's nice for an OS to have enough virtual address space to keep everything mapped, with the usual assertion that people who don't agree with him are morons. :P I tend to agree with him on this: there are obvious efficiency advantages, and PAE is a huge PITA for the kernel. Probably even more so on an SMP system.

If anyone had proposed a patch to add highmem support for x86-64 to allow using more than 64 TiB of physical memory with the existing PML4 format, I'd expect Linus would tell them 1995 called and wants its bad idea back. He wouldn't consider merging such a patch unless that much RAM became common for servers while hardware vendors still hadn't provided an extension for wider virtual addresses.

Fortunately that didn't happen: probably no CPU has supported physical addresses wider than 46 bits without also supporting PML5. Vendors know that supporting more RAM than mainline Linux can use wouldn't be a selling point. But as the doc said, commercial systems were getting up to a max capacity of 64 TiB.


x86-64's page-table format has room for 52-bit physical addresses

The x86-64 page-table format itself has always had that much room: Why in x86-64 the virtual address are 4 bits shorter than physical (48 bits vs. 52 long)? has diagrams from AMD's manuals. Of course early CPUs had narrower physical addresses, so you couldn't, for example, have a PCIe device put its device memory way up high in physical address space.

Your calculation has nothing to do with the physical address limit, which is set by the number of bits in each page-table entry that can be used to hold a physical address.

In x86-64 (and PAE), the page-table format reserves bits up to bit #51 for use as physical-address bits, so OSes must keep the ones their CPU doesn't implement zeroed, for forward compatibility with future CPUs. The low 12 bits are used for other things, but the physical address is formed by zeroing out everything other than the phys-address bits in the PTE, so those low 12 bits become the low zero bits of an aligned physical-page address.
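
Concretely, that masking looks something like the following. `pte_to_phys` is a made-up helper name, not the kernel's; the mask keeps bits 51:12 and drops the flag bits at both ends:

```c
#include <stdint.h>
#include <stdio.h>

#define PTE_PHYS_MASK 0x000FFFFFFFFFF000ULL   // bits 51:12 of a PTE

// Hypothetical helper (not the kernel's): extract the physical page address from a
// raw x86-64 PTE by dropping the low flag bits and the high NX/ignored bits.
static inline uint64_t pte_to_phys(uint64_t pte) {
    return pte & PTE_PHYS_MASK;
}

int main(void) {
    uint64_t pte = 0x8000000123456867ULL;   // made-up PTE: NX set, present/writable/etc. in low bits
    printf("phys page = 0x%llx\n", (unsigned long long)pte_to_phys(pte));  // 0x123456000
}
```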


x86 terminology note: logical addresses are seg:off, and segment_base + offset gives you a linear address. With paging enabled (as required in long mode), linear addresses are virtual, and are what's used as a search key for the page tables (effectively a radix tree cached by the TLB).

Your calculation is just correctly reiterating the 256 TiB size of virtual address space, based on 4-level page tables with 4k pages. That's how much memory can be simultaneously mapped with PML4.

A physical page has to be the same size as a virtual page, and in x86-64 yes that's 4 KiB. (Or 2M largepage or 1G hugepage).

Fun fact: the x86-64 page-table-entry format is the same as PAE, so modern CPUs can also access large amounts of memory in 32-bit mode, though of course they can't map it all at once. It's probably not a coincidence that AMD chose to reuse an existing well-designed format when designing AMD64, so their CPUs would only need two different modes for the hardware page-table walker: legacy x86 with 4-byte PTEs (10 bits per level) and PAE/AMD64 with 8-byte PTEs (9 bits per level).
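
To tie that together with the 9-bits-per-level point: with 8-byte PTEs, a 48-bit virtual address is sliced into a 12-bit page offset and four 9-bit indices, one per level. A quick illustrative sketch, not kernel code:

```c
#include <stdint.h>
#include <stdio.h>

static void split_va(uint64_t va) {
    unsigned pml4 = (va >> 39) & 0x1FF;   // bits 47:39
    unsigned pdpt = (va >> 30) & 0x1FF;   // bits 38:30
    unsigned pd   = (va >> 21) & 0x1FF;   // bits 29:21
    unsigned pt   = (va >> 12) & 0x1FF;   // bits 20:12
    unsigned off  =  va        & 0xFFF;   // bits 11:0
    printf("PML4=%u PDPT=%u PD=%u PT=%u offset=0x%x\n", pml4, pdpt, pd, pt, off);
}

int main(void) {
    split_va(0x00007f1234567890ULL);   // an arbitrary user-space-looking address
}
```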

Fillian answered 28/5, 2022 at 21:54 Comment(4)
Thanks a lot! I have read x86_64/mm.rst and now I understand. But while reading the memory map for 5-level page tables, I ran into another question: in x86_64, the architecture only supports 52-bit physical addresses, which is 4 PB, but the map uses a 32 PB region to map all physical memory. Is the extra 28 PB reserved for other architectures? – Volition
@Tommy.Zhou: Possibly reserved in case of future extensions to support more physical space on x86-64. Or yes, maybe to match some other ISA(s) that allow wider physical addresses; I didn't look at */mm.txt in the kernel source tree. Or just because all of the address space has to go somewhere, instead of calling it "unused" (although it is followed by an unused region). – Fillian
@Tommy.Zhou: One reason to choose your padding locations with care is to keep the page-table tree as sparse as possible, with the frequently used parts under the same PML5 entry if possible, to make page walks cheaper. Obviously Linux scatters a lot of things around because it statically chooses positions inside the huge address space based on using all of it, which is not great in that respect, but perhaps choosing where to put that extra 28 PB that's unused for now matters; maybe it's just there for future growth. IDK if they considered this optimization while laying it out. – Fillian
Semi-related: x86-64 canonical address? has the basics on x86-64 virtual addresses, and the "hole" created by having only 48 or 57 virtual address bits, making most of the 64-bit address space unusable. – Fillian
