A "plain" Java program and a Hadoop-based, MapReduce-based implementation are very different beasts and are hard to compare. It's not like Hadoop parallelizes a little bit of your program; it is written in an entirely different form from top to bottom.
Hadoop has real overheads: starting a job, and starting workers like mappers and reducers. It also introduces a lot more time spent serializing and deserializing data, writing it to local disk, and transferring it to and from HDFS.
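To make the "entirely different form" point concrete, here is a minimal sketch of the classic word-count job written against the standard Hadoop MapReduce API (class name and paths are placeholders). Even though the logic is trivial, you still pay for job submission, worker startup, and Writable serialization through the shuffle:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountSketch {

      public static class TokenMapper
          extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
          // Every record is deserialized here and re-serialized on write.
          for (String token : value.toString().split("\\s+")) {
            word.set(token);
            context.write(word, ONE);
          }
        }
      }

      public static class SumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) {
            sum += v.get();
          }
          context.write(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count sketch");
        job.setJarByClass(WordCountSketch.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // waitForCompletion() alone can spend a while on scheduling,
        // container startup, and the shuffle before any output appears.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

The equivalent "plain" Java program is a loop over the input and a HashMap; everything else in the listing above is the cost of making it distributable.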
A Hadoop-based implementation will always consume more total resources, so it's something to avoid unless you have to use it. If the computation fits on one machine, the simplest practical advice is: don't distribute. Save yourself the trouble.
In the case of Mahout recommenders, I can tell you that, very crudely, a Hadoop job incurs 2-4x more computation than a non-distributed implementation on the same data. That obviously depends enormously on the algorithm and tuning choices. But to give you a number: I wouldn't bother with a Hadoop cluster of fewer than 4 machines.
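For contrast, here is roughly what the non-distributed side of that comparison looks like with Mahout's Taste API. This is a minimal sketch, not the exact setup behind that 2-4x figure; the file name, neighborhood size, and similarity choice are placeholders:

    import java.io.File;
    import java.util.List;

    import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
    import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
    import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
    import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
    import org.apache.mahout.cf.taste.model.DataModel;
    import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
    import org.apache.mahout.cf.taste.recommender.RecommendedItem;
    import org.apache.mahout.cf.taste.recommender.Recommender;
    import org.apache.mahout.cf.taste.similarity.UserSimilarity;

    public class NonDistributedRecommender {
      public static void main(String[] args) throws Exception {
        // "ratings.csv" (userID,itemID,preference lines) is a placeholder path.
        DataModel model = new FileDataModel(new File("ratings.csv"));
        UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
        UserNeighborhood neighborhood =
            new NearestNUserNeighborhood(10, similarity, model);
        Recommender recommender =
            new GenericUserBasedRecommender(model, neighborhood, similarity);

        // Recommend 5 items for user 1; everything runs in this one JVM,
        // with no job startup, shuffle, or HDFS I/O.
        List<RecommendedItem> items = recommender.recommend(1L, 5);
        for (RecommendedItem item : items) {
          System.out.println(item.getItemID() + " : " + item.getValue());
        }
      }
    }

If your data model fits in memory on one machine, this kind of setup is both simpler and cheaper than the Hadoop version of the same computation.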
Obviously, if your computation can't fit on one of your machines, you have no choice but to distribute. Then the tradeoff is what wall-clock time you can tolerate versus how much computing power you can devote. The reference to Amdahl's law is right, though it doesn't account for Hadoop's significant overhead: to parallelize N ways, you need at least N mappers/reducers and you incur N times the per-mapper/reducer overhead, plus some fixed job startup/shutdown time.
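A back-of-envelope model makes the tradeoff visible. The cost model and all the numbers below are purely illustrative assumptions, not measurements: wall-clock time shrinks as N grows, while the total computation consumed keeps rising because of the per-worker overhead and fixed startup cost.

    public class ScalingSketch {

      // Hypothetical costs, all in seconds.
      static final double SINGLE_MACHINE  = 600;  // non-distributed runtime
      static final double JOB_STARTUP     = 60;   // fixed job submit/teardown
      static final double PER_WORKER      = 20;   // per-mapper/reducer overhead
      static final double SERIAL_FRACTION = 0.05; // part that cannot parallelize

      static double wallClock(int n) {
        double serial   = SERIAL_FRACTION * SINGLE_MACHINE;
        double parallel = (1 - SERIAL_FRACTION) * SINGLE_MACHINE / n;
        // Workers run concurrently, so elapsed time pays the worker overhead once.
        return JOB_STARTUP + PER_WORKER + serial + parallel;
      }

      static double totalCompute(int n) {
        // Resource consumption still pays the per-worker overhead n times.
        return JOB_STARTUP + n * PER_WORKER + SINGLE_MACHINE;
      }

      public static void main(String[] args) {
        for (int n : new int[] {1, 2, 4, 8, 16}) {
          System.out.printf("N=%2d  wall-clock=%5.0fs  total-compute=%5.0fs%n",
              n, wallClock(n), totalCompute(n));
        }
      }
    }

With these made-up numbers, the distributed job only beats the single machine's 600s around N=2-4, while total compute has already grown well past it, which is the shape of the tradeoff above.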