In arbitrary-precision arithmetic, ball arithmetic is about twice as fast as interval arithmetic and uses half as much space. The reason is that only the center of a ball needs high precision (the radius only needs a few bits), whereas in interval arithmetic, both endpoints need high precision. Details depend on the implementation, of course. (In practice, Arb is faster than MPFI by much more than a factor of two, but this is largely due to implementation effort.)
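To make the space claim concrete, here is a schematic sketch of the two representations at p bits of working precision (the type names are made up for illustration, and this is not Arb's actual layout; Arb stores the midpoint as an arf_t and the radius as a mag_t, an unsigned floating-point type with a 30-bit mantissa):

```c
#include <mpfr.h>

/* Illustrative only, not real library definitions. */

typedef struct {
    mpfr_t left;    /* p-bit endpoint */
    mpfr_t right;   /* p-bit endpoint */
} interval_t;       /* roughly 2p bits of mantissa data per number */

typedef struct {
    mpfr_t mid;     /* p-bit center */
    double rad;     /* a few mantissa bits plus an exponent suffice */
} ball_t;           /* roughly p bits of mantissa data per number */
```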
In hardware arithmetic, balls don't really have a speed advantage over intervals, at least for scalar arithmetic. There is an obvious advantage once you look at more general forms of ball arithmetic: consider, for example, a ball matrix stored as a floating-point matrix plus a single floating-point number bounding the error of the whole matrix in some norm, instead of a matrix of individual intervals or balls.
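As a rough sketch of what that could look like (the matball type and its Frobenius-norm bookkeeping are my own illustration, not an existing API; a rigorous version would also have to round the radius operations upward):

```c
#include <float.h>
#include <math.h>

/* Hypothetical matrix ball: the true matrix A satisfies
   ||A - mid||_F <= rad in the Frobenius norm. */
typedef struct {
    int n;
    double *mid;   /* n x n entries, row-major */
    double rad;    /* one error bound for the whole matrix */
} matball;

static double frob(int n, const double *a)
{
    double s = 0.0;
    for (int i = 0; i < n * n; i++)
        s += a[i] * a[i];
    return sqrt(s);
}

/* c = a * b, where c->mid is preallocated and c must not alias a or b.
   Since ||AB - Am Bm||_F <= ra ||Bm||_F + ||Am||_F rb + ra rb, one radius
   update covers the whole product; a crude gamma_n-style term bounds the
   rounding error of the n^3 midpoint multiplication. */
static void matball_mul(matball *c, const matball *a, const matball *b)
{
    int n = a->n;
    c->n = n;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
        {
            double s = 0.0;
            for (int k = 0; k < n; k++)
                s += a->mid[i * n + k] * b->mid[k * n + j];
            c->mid[i * n + j] = s;
        }
    double na = frob(n, a->mid), nb = frob(n, b->mid);
    double rounding = (n + 2) * DBL_EPSILON * na * nb;  /* crude, not tight */
    c->rad = a->rad * nb + na * b->rad + a->rad * b->rad + rounding;
}
```

The point is that the error bookkeeping is O(n^2) (one norm computation) on top of an O(n^3) midpoint product that can use plain hardware floats, BLAS, etc., whereas a matrix of individual intervals or balls pays the bookkeeping cost inside the inner loop.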
Joris van der Hoeven's article on ball arithmetic is a good take on the differences between ball and interval arithmetic: http://www.texmacs.org/joris/ball/ball.html
An important quote is: "Roughly speaking, balls should be used for the reliable approximation of numbers, whereas intervals are mainly useful for certified algorithms which rely on the subdivision of space."
Ignoring performance concerns, balls and intervals are usually interchangeable, although intervals are better suited for subdivision algorithms. Conceptually, balls are nice for representing numbers because the center-radius form corresponds to how we usually think of approximations in mathematics: a value together with an error bound. The notion also extends naturally to more general normed vector spaces.
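For example, Arb prints a real ball directly in that midpoint-plus-radius form; a minimal program (the exact digits in the output depend on the precision and version, and linking details vary by system):

```c
#include <stdio.h>
#include "arb.h"

int main(void)
{
    arb_t x;
    arb_init(x);
    arb_const_pi(x, 53);   /* pi as a ball at 53-bit precision */
    arb_printn(x, 20, 0);  /* prints something like
                              [3.141592653589793116 +/- 3.09e-16] */
    printf("\n");
    arb_clear(x);
    return 0;
}
```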
Personally, I often think of ball arithmetic as floating-point arithmetic + error analysis, but with the error bound propagation done automatically by the computer rather than by hand. In this sense, it is a better way (for certain applications!) of doing floating-point arithmetic, not just a better way of doing interval arithmetic.
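Here is a toy version of that idea in plain C (my own sketch, not Arb's code; a rigorous implementation would round the radius arithmetic upward and handle underflow, which Arb does with a dedicated unsigned floating-point type for radii):

```c
#include <float.h>
#include <math.h>
#include <stdio.h>

typedef struct { double mid; double rad; } ball;

/* Bound on the rounding error of a round-to-nearest double result
   (ignoring underflow): |fl(x) - x| <= DBL_EPSILON * |fl(x)|. */
static double rnd_bound(double x) { return fabs(x) * DBL_EPSILON; }

/* z = x + y: input radii add, plus the midpoint's own rounding error. */
static ball ball_add(ball x, ball y)
{
    ball z;
    z.mid = x.mid + y.mid;
    z.rad = x.rad + y.rad + rnd_bound(z.mid);
    return z;
}

/* z = x * y: |xy - mx my| <= |mx| ry + |my| rx + rx ry. */
static ball ball_mul(ball x, ball y)
{
    ball z;
    z.mid = x.mid * y.mid;
    z.rad = fabs(x.mid) * y.rad + fabs(y.mid) * x.rad
            + x.rad * y.rad + rnd_bound(z.mid);
    return z;
}

int main(void)
{
    ball x = { 1.0 / 3.0, DBL_EPSILON };  /* radius bounds |fl(1/3) - 1/3| */
    ball y = ball_mul(ball_add(x, x), x); /* enclosure of 2/9 */
    printf("[%.17g +/- %.3g]\n", y.mid, y.rad);
    return 0;
}
```

The error analysis one would normally do on paper (radii adding up, a relative rounding error per operation) is exactly what the radius updates encode.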
For computations with single numbers, error over-estimation has more to do with the algorithms than with the representation. MPFI guarantees that all its atomic functions compute the tightest possible intervals, but this property is not preserved as soon as you start composing functions. With either ball or interval arithmetic, blow-up tends to happen in the same way as soon as you run calculations with many dependent steps (the classical dependency problem). To track error bounds resulting from large uncertainties in initial conditions, techniques such as Taylor models are often better than direct interval or ball arithmetic.
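A tiny self-contained demonstration of such blow-up (same toy ball type as above, radius rounding ignored for brevity): each step below computes 3x - 2x, which is exactly x mathematically, yet the radius grows by a factor of about 5 per iteration because the two occurrences of x are treated as independent. An interval version behaves identically.

```c
#include <math.h>
#include <stdio.h>

typedef struct { double mid; double rad; } ball;

static ball ball_sub(ball x, ball y)
{
    ball z = { x.mid - y.mid, x.rad + y.rad };
    return z;
}

static ball ball_scale(ball x, double c)  /* multiply by an exact constant */
{
    ball z = { c * x.mid, fabs(c) * x.rad };
    return z;
}

int main(void)
{
    ball x = { 1.0, 1e-10 };
    for (int i = 0; i < 20; i++)
        x = ball_sub(ball_scale(x, 3.0), ball_scale(x, 2.0));  /* = x */
    /* midpoint is still 1, but the radius is about 1e-10 * 5^20 ~ 1e4 */
    printf("[%g +/- %g]\n", x.mid, x.rad);
    return 0;
}
```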
True complex balls (complex center + single radius) are sometimes better than rectangular complex intervals for representing complex numbers because the wrapping effect for multiplications is smaller: a disk is invariant under rotation, while an axis-aligned rectangle is not. (However, Arb uses rectangular "balls" for complex numbers, so it does not have this particular advantage.)
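A sketch of why disks compose nicely under multiplication (illustrative, rounding ignored): if z lies in the disk D(a, ra) and w in D(b, rb), then zw - ab = (z-a)(w-b) + (z-a)b + a(w-b), so zw lies in D(ab, |a| rb + |b| ra + ra rb). In particular, multiplying by a unit complex number just rotates the center and keeps the radius, whereas the axis-aligned bounding box of a rotated rectangle can grow at every step.

```c
#include <complex.h>
#include <math.h>
#include <stdio.h>

/* Illustrative "true" complex ball: the disk |z - mid| <= rad. */
typedef struct { double complex mid; double rad; } cball;

static cball cball_mul(cball x, cball y)
{
    cball z;
    z.mid = x.mid * y.mid;
    z.rad = cabs(x.mid) * y.rad + cabs(y.mid) * x.rad + x.rad * y.rad;
    return z;
}

int main(void)
{
    cball z = { 1.0, 1e-10 };
    cball w = { cexp(0.5 * I), 0.0 };  /* a fixed unit rotation (treated
                                          as exact in this toy) */
    for (int i = 0; i < 1000; i++)
        z = cball_mul(z, w);
    /* the disk has only been rotated: the radius is still about 1e-10,
       while a rectangular enclosure would widen with every rotation */
    printf("rad = %g\n", z.rad);
    return 0;
}
```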