When you say "regex", you're referring to several different techniques, not just one underlying theory. Consider the question "does this string match my given regex?" For that question, the notion of "greedy" is merely an implementation detail: if you're using one of the common (but inefficient) backtracking implementations, it can affect performance but not output. Similarly, the question "does this string contain a match?" is not affected by greedy vs. non-greedy quantifiers. This first type of regex is concerned with the abstract notion of set membership: defining the language of matching strings.
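To make that concrete, here's a minimal sketch using Python's `re` module as one example of a backtracking implementation (the particular patterns and strings are just illustrative): as a pure membership test, the greedy and non-greedy versions accept exactly the same set of strings.

```python
import re

# Two patterns that differ only in greediness. As a membership test they
# accept exactly the same set of strings; greediness never changes the
# yes/no answer, only (potentially) how the engine arrives at it.
greedy     = re.compile(r"<.*>")    # greedy quantifier
non_greedy = re.compile(r"<.*?>")   # non-greedy quantifier

for s in ["<a><b>", "<>", "no angle brackets"]:
    print(s, bool(greedy.fullmatch(s)), bool(non_greedy.fullmatch(s)))
# <a><b>             True  True
# <>                 True  True
# no angle brackets  False False
```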
So why do non-greedy quantifiers even exist? Regexes are not used merely for matching; common implementations can also report where the match is and which parts of the regex matched which parts of the input. Once a user relies on that, they depend on the intricacies of the implementation, which are not trivial. This second type of regex is concerned with getting a few bits of text into a more practical representation within the context of an otherwise Turing-complete language.
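Here's a sketch of that second use, again with Python's `re` module standing in for any backtracking implementation: the yes/no answer is identical for both patterns, but the text you actually extract depends entirely on greediness.

```python
import re

html = "<b>bold</b> and <i>italic</i>"

# Same membership question, very different extracted text:
print(re.findall(r"<.*>", html))   # ['<b>bold</b> and <i>italic</i>']  (greedy)
print(re.findall(r"<.*?>", html))  # ['<b>', '</b>', '<i>', '</i>']     (non-greedy)
```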
Generally, when you talk about the strength of the regular-expression formalism, you're talking about the first world, in which the computer answers with a simple yes or no. It's easy to talk about because the specification is clear. When you talk about greedy vs. non-greedy quantifiers, you're talking about the second world: used a lot in practice, but with a specification that has mostly grown without much planning to solve real problems and is a standard by virtue of backwards compatibility. This second world is solving entirely different problems, and it's not clear to me what "expressive power" should even mean here. Certainly, non-greedy quantifiers can be practical, and that's kind of the point...
In short: non-greedy quantifiers don't add anything to the first kind of expressiveness, but they do add something to the second, even if it's not quite clear what "expressive power" means there anyhow.