GPUs and VPUs need contiguous memory.
CMA and static memory allocation are examples of contiguous memory allocation.
Why is contiguous memory required here?
Contiguous memory allocation (CMA) is needed for I/O devices that can only work with contiguous ranges of physical memory. Such devices are built that way in order to simplify their design.
On systems with an I/O memory management unit (IOMMU), this would not be an issue, because a buffer that is contiguous in the device address space can be mapped by the IOMMU to non-contiguous regions of physical memory. Also, some devices can do scatter/gather DMA (i.e., can read/write from/to multiple non-contiguous buffers). Ideally, all I/O devices would be designed to either work behind an IOMMU or be capable of scatter/gather DMA. Unfortunately, this is not the case, and there are devices that require physically contiguous buffers. There are two traditional ways for a device driver to obtain such a buffer:

1. Reserve the memory statically at boot time. The buffer is guaranteed to be contiguous, but the reserved memory is wasted whenever the device is not using it.
2. Allocate the buffer dynamically at run time from the page allocator (see the sketch after this list). No memory is wasted up front, but a large allocation may fail because physical memory has become fragmented.
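As a rough illustration of the run-time approach, here is a minimal sketch using the kernel's standard page allocator. The function and variable names are made up for this example; only alloc_pages(), page_address(), and __free_pages() are standard kernel APIs. The point is that the request is for a single block of 2^order physically contiguous pages, which is exactly the kind of allocation that can fail on a fragmented system:

```c
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static struct page *buf_pages;
static void *buf_vaddr;

/* Ask the buddy allocator for 2^order physically contiguous pages
 * (e.g. order 8 is 1 MiB with 4 KiB pages). On a long-running,
 * fragmented system a large order is likely to fail with -ENOMEM. */
static int example_alloc_contig(unsigned int order)
{
	buf_pages = alloc_pages(GFP_KERNEL, order);
	if (!buf_pages)
		return -ENOMEM;

	buf_vaddr = page_address(buf_pages);
	return 0;
}

static void example_free_contig(unsigned int order)
{
	if (buf_pages)
		__free_pages(buf_pages, order);
}
```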
CMA solves this exact problem by providing the advantages of both of these approaches with none of their downsides. The basic idea is that already-allocated physical pages can be migrated out of the way to create enough room for a contiguous buffer when one is requested. More information on how CMA works can be found here.
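For a sense of how a driver benefits from this, here is a hypothetical driver fragment (the example_* names are made up; dma_alloc_coherent() and dma_free_coherent() are the standard kernel DMA-mapping APIs). On a kernel built with CMA support and a device that sits in front of no IOMMU, a large coherent allocation like this is typically satisfied from the CMA area, so the driver gets a physically contiguous buffer without reserving memory at boot:

```c
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

static void *frame_buf;		/* CPU virtual address of the buffer */
static dma_addr_t frame_dma;	/* DMA/bus address handed to the device */

/* Allocate a coherent, physically contiguous buffer for the device.
 * The DMA layer decides where the pages come from; with CMA enabled,
 * large requests are drawn from the CMA region. */
static int example_alloc_frame(struct device *dev, size_t size)
{
	frame_buf = dma_alloc_coherent(dev, size, &frame_dma, GFP_KERNEL);
	if (!frame_buf)
		return -ENOMEM;
	return 0;
}

static void example_free_frame(struct device *dev, size_t size)
{
	if (frame_buf)
		dma_free_coherent(dev, size, frame_buf, frame_dma);
}
```

Note that the driver code is the same whether the buffer ends up coming from CMA, the buddy allocator, or an IOMMU-remapped region; that transparency is the main attraction of going through the DMA-mapping API rather than reserving memory by hand.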