In general the effect on coverage is the same.
Source code instrumentation can give superior reporting results, simply because byte-code instrumentation cannot distinguish structure within a source line: code block granularity is recorded only in terms of source lines.
Imagine I have two nested if statements (or equivalently, if (a && b) ...) in a single line. A source code instrumenter can see these and provide coverage information for the multiple arms within the if, within the source line; it can report blocks based on lines and columns. A byte code instrumenter only sees one line wrapped around the conditions. Does it report the line as "covered" if condition a executes but is false?
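To make this concrete, here is a minimal sketch of what a source instrumenter might emit for that one-line condition. The probe layout and the `probe` helper are hypothetical, not any real tool's output; the point is that each sub-expression gets its own probe, so the tool can tell that `a` ran, `b` ran, but the then-branch never did:

```java
public class BranchProbes {
    // Hypothetical probe slots: 0 = condition a evaluated,
    // 1 = condition b evaluated, 2 = then-branch taken.
    static boolean[] probes = new boolean[3];

    // Records that probe i fired, then passes the value through.
    static boolean probe(int i, boolean v) {
        probes[i] = true;
        return v;
    }

    static int f(boolean a, boolean b) {
        // Original source, all on one line: if (a && b) return 1;
        // A source instrumenter can wrap each sub-expression separately:
        if (probe(0, a) && probe(1, b)) {
            probes[2] = true;
            return 1;
        }
        return 0;
    }

    public static void main(String[] args) {
        f(true, false); // a is evaluated and true, b is false
        // A line-based tool marks the whole line "covered" here, yet the
        // then-branch probe shows that arm never executed.
        System.out.println(probes[0] + " " + probes[1] + " " + probes[2]);
    }
}
```

A byte-code instrumenter that keys its report off the line-number table has no slot in which to record this distinction.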
You may argue this is a rare circumstance (and it probably is) and therefore not very useful. But the first time bogus coverage on such a line is followed by a field failure, you may change your mind about its utility.
There's a nice example and explanation of how byte-code coverage makes getting coverage of switch statements right extremely difficult.
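One way the switch problem shows up, as a sketch (this is my own illustration, not the example referred to above): javac compiles a switch to a single tableswitch/lookupswitch instruction, and several case labels can share one source line, so a line-based report cannot say which labels actually matched:

```java
public class SwitchCoverage {
    static String classify(int n) {
        switch (n) {
            case 0: case 1: return "small"; // two labels sharing one line
            case 2:         return "medium";
            default:        return "large";
        }
    }

    public static void main(String[] args) {
        // Only case 0 is ever exercised, but a line-based tool marks the
        // "case 0: case 1:" line covered, as if case 1 were tested too.
        System.out.println(classify(0));
    }
}
```

A source instrumenter that records column positions can attach a separate probe to each case label and report them individually.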
A source code instrumenter may also achieve faster test execution, because it has the compiler helping optimize the instrumented code. In particular, a probe inserted inside a loop by a binary instrumenter may stay inside the loop when the JIT compiles it, executing on every iteration. A good Java compiler will see that the instrumentation produces a loop-invariant result and lift it out of the loop. (A JIT compiler can arguably do this too; the question is whether it actually does.)
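The hoisting opportunity can be sketched as follows. A coverage probe is just a flag store, which is loop-invariant; the two methods below record exactly the same coverage fact, but the second pays the cost once rather than per iteration (probe names and layout are illustrative only):

```java
public class LoopProbe {
    // Hypothetical probe slots: 0 = naive per-iteration probe,
    // 1 = hoisted probe.
    static boolean[] covered = new boolean[2];

    // Probe placed inside the loop body, as a naive instrumenter inserts it:
    static int sumNaive(int[] xs) {
        int s = 0;
        for (int x : xs) {
            covered[0] = true; // redundant store on every iteration
            s += x;
        }
        return s;
    }

    // The store is loop-invariant, so an optimizing compiler (or a smart
    // instrumenter) can emit the equivalent hoisted form, guarded so that
    // a zero-iteration loop is still reported as uncovered:
    static int sumHoisted(int[] xs) {
        int s = 0;
        if (xs.length > 0) {
            covered[1] = true; // one store, same coverage information
        }
        for (int x : xs) {
            s += x;
        }
        return s;
    }
}
```

Both versions leave the probe array in the same state for any input, which is what makes the transformation safe.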