How does Intel TBB's scalable_allocator work?

What does the tbb::scalable_allocator in Intel Threading Building Blocks actually do under the hood?

It can certainly be effective. I've just used it to take 25% off an app's execution time (and seen CPU utilization rise from ~200% to 350% on a 4-core system) by changing a single std::vector<T> to std::vector<T,tbb::scalable_allocator<T> >. On the other hand, in another app I've seen it double an already large memory consumption and send things to swap city.

Intel's own documentation doesn't give a lot away (e.g. a short section at the end of this FAQ). Can anyone tell me what tricks it uses before I go and dig into its code myself?

UPDATE: Just using TBB 3.0 for the first time, I've seen my best speedup from scalable_allocator yet. Changing a single vector<int> to a vector<int,scalable_allocator<int> > reduced the runtime of something from 85s to 35s (Debian Lenny, Core2, with TBB 3.0 from testing).
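
For concreteness, the kind of change described above is just swapping the allocator template argument; a minimal sketch, assuming TBB's headers are on the include path (the variable name is only illustrative):

    #include <vector>
    #include <tbb/scalable_allocator.h>

    // Before: std::vector<int> samples;
    // After: only the allocator argument changes; the surrounding code is untouched.
    std::vector<int, tbb::scalable_allocator<int>> samples;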

Gleason answered 18/3, 2009 at 10:58 Comment(0)

There is a good paper on the allocator: The Foundations for Scalable Multi-core Software in Intel Threading Building Blocks

My limited experience: I overloaded the global new/delete with tbb::scalable_allocator for my AI application, but there was little change in the time profile. I didn't compare the memory usage, though.
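
For reference, a minimal sketch of what overloading the global new/delete with the scalable allocator can look like, using TBB's C-level scalable_malloc/scalable_free from <tbb/scalable_allocator.h> (a production version would also replace the array and nothrow overloads):

    #include <tbb/scalable_allocator.h>  // scalable_malloc, scalable_free
    #include <new>
    #include <cstddef>

    void* operator new(std::size_t size) {
        // scalable_malloc(0) may return NULL, so request at least one byte.
        if (void* p = scalable_malloc(size ? size : 1))
            return p;
        throw std::bad_alloc();
    }

    void operator delete(void* p) noexcept {
        if (p)
            scalable_free(p);
    }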

Have answered 19/3, 2009 at 6:22 Comment(3)
Thanks! The article contains exactly the sort of information I was looking for. – Gleason
The original link is now defunct, but CiteSeer has the PDF: citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.71.8289 – Hutchings
To add a datapoint: in my particular app, allocator contention halted speedup at around 15 threads; past that it would kill all speedup, and by 40 threads it was much slower than single-threaded. With scalable_allocator used in the inner per-thread kernels the bottleneck disappeared and the expected scaling came back. (The machine has 40 physical cores.) – Eusebioeusebius

The solution you mentioned is optimized for Intel CPUs. It incorporates specific CPU mechanisms to improve performance.

Some time ago I found another very useful solution: Fast C++11 allocator for STL containers. It speeds up STL containers on VS2017 (~5x) as well as on GCC (~7x). It uses a memory pool for element allocation, which makes it very effective on all platforms.
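
To illustrate the memory-pool idea (this is not the linked library's code, just a minimal sketch with made-up names and an arbitrary chunk size): single-object allocations are served from a free list that is refilled in large chunks, so most requests never touch the general-purpose heap. The sketch is not thread-safe, which is exactly the gap that tbb::scalable_allocator's per-thread pools address.

    #include <cstddef>
    #include <new>
    #include <vector>
    #include <list>

    // Minimal fixed-size pool allocator (illustrative only, not thread-safe).
    template <typename T>
    class pool_allocator {
    public:
        using value_type = T;

        pool_allocator() noexcept = default;
        template <typename U>
        pool_allocator(const pool_allocator<U>&) noexcept {}

        T* allocate(std::size_t n) {
            if (n != 1)  // pools only pay off for node-sized allocations
                return static_cast<T*>(::operator new(n * sizeof(T)));
            std::vector<T*>& fl = free_list();
            if (fl.empty())
                refill(fl);
            T* p = fl.back();
            fl.pop_back();
            return p;
        }

        void deallocate(T* p, std::size_t n) {
            if (n != 1) { ::operator delete(p); return; }
            free_list().push_back(p);  // recycle the slot instead of freeing it
        }

    private:
        static std::vector<T*>& free_list() {
            static std::vector<T*> fl;  // shared by every instance for this T
            return fl;
        }
        static void refill(std::vector<T*>& fl) {
            const std::size_t slots = 4096;  // slots per chunk (arbitrary choice)
            T* chunk = static_cast<T*>(::operator new(slots * sizeof(T)));
            for (std::size_t i = 0; i < slots; ++i)
                fl.push_back(chunk + i);
            // Chunks are never returned to the OS in this sketch.
        }
    };

    template <typename T, typename U>
    bool operator==(const pool_allocator<T>&, const pool_allocator<U>&) noexcept { return true; }
    template <typename T, typename U>
    bool operator!=(const pool_allocator<T>&, const pool_allocator<U>&) noexcept { return false; }

    int main() {
        // Node allocations for the list now come from the pool.
        std::list<int, pool_allocator<int>> xs;
        for (int i = 0; i < 100000; ++i)
            xs.push_back(i);
    }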

Windup answered 5/11, 2017 at 15:3 Comment(0)
