For your first question, the intuition behind divide-and-conquer is that in many problems the amount of work to be done depends on some combinatorial property of the input (such as the number of pairs of elements) that grows faster than linearly.
For example, in the closest pair of points problem, the runtime of the brute-force solution is determined by the fact that you have to look at all O(n²) possible pairs of points.
If you take something that grows quadratically and cut it into two pieces, each half the size of the original, then each piece takes one quarter of the initial time to solve, so solving both halves takes roughly half the time of the brute-force solution. Cutting the input into four pieces cuts the work to one fourth, cutting it into eight pieces cuts it to one eighth, and so on.
The recursive version ends up being faster because at each step we avoid a large amount of pairwise work by ensuring that there aren't too many pairs we actually need to check when combining the halves. Most algorithms with a divide-and-conquer solution end up being faster for a similar reason.
For your second question, no, divide and conquer algorithms are not necessarily faster than brute-force algorithms. Consider the problem of finding the maximum value in an array. The brute-force algorithm takes O(n) time and uses O(1) space as it does a linear scan over the data. The divide-and-conquer algorithm is given here:
- If the array has just one element, that's the maximum.
- Otherwise:
  - Cut the array in half.
  - Find the maximum in each half.
  - Take the larger of these two values.
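The steps above can be sketched in a few lines of Python (the function name and the half-open index convention are my own choices):

```python
def find_max(arr, lo=0, hi=None):
    """Divide-and-conquer maximum over arr[lo:hi], following the steps above."""
    if hi is None:
        hi = len(arr)
    # Base case: a one-element range is its own maximum.
    if hi - lo == 1:
        return arr[lo]
    # Cut the range in half.
    mid = (lo + hi) // 2
    # Find the maximum in each half, then take the larger of the two.
    return max(find_max(arr, lo, mid), find_max(arr, mid, hi))
```

Each level of recursion pushes a new stack frame, and the recursion goes about log n levels deep, which is where the O(log n) space cost comes from.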
This takes O(n) time as well, but uses O(log n) space for the recursion stack. It's actually worse than the simple linear scan.
As another example, the maximum single-sell profit problem has a divide-and-conquer solution, but the optimized dynamic programming solution runs in O(n) time with O(1) space, beating it in both time and memory.
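For reference, here is a sketch of that dynamic programming solution (the function name is mine): it makes a single pass, tracking the lowest price seen so far and the best profit achievable by selling at the current price.

```python
def max_single_sell_profit(prices):
    """One-pass DP for maximum single-sell profit: O(n) time, O(1) space."""
    min_price = float('inf')  # Cheapest price seen so far.
    best_profit = 0           # Best buy-low/sell-high profit so far.
    for price in prices:
        min_price = min(min_price, price)
        best_profit = max(best_profit, price - min_price)
    return best_profit
```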
Hope this helps!