The principle involved isn't unique to parallelization. If 25% of a program's time is spent doing some particular operation, making everything but that 25% happen instantly (without affecting the 25% itself) would leave a program that takes 25% of the original time, and is thus four times as fast.
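To make that concrete, here is a minimal sketch of the general rule (Amdahl's law) in Python; the function name `overall_speedup` is just illustrative:

```python
# Amdahl's law: if a fraction p of the runtime is sped up by a
# factor s, the overall speedup is 1 / ((1 - p) + p / s).

def overall_speedup(p, s):
    """Speedup of the whole program when a fraction p of its runtime
    is made s times faster (s = infinity means 'instant')."""
    return 1.0 / ((1.0 - p) + p / s)

# The example from the text: 25% of the time is in one operation,
# and everything else (the other 75%) becomes instant.
print(overall_speedup(0.75, float("inf")))  # -> 4.0
```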
In cases where an algorithm has clear phases that either are or are not parallelizable, applying the above formula would be simple: figure that N-way parallelization makes the parallelizable parts run N times as fast, while the non-parallelizable parts run at normal speed. In practice, I don't think most algorithms consist entirely of parts that are either 100% parallelizable or 100% sequential. In most interesting cases, algorithms can run largely in parallel but have various ordering constraints; in some cases, the precise ordering constraints may be data-dependent. As such, the "percentage of parallelization" can itself vary with the number of processors, among other things, so trying to plug it into a formula isn't very helpful, as the sketch below illustrates.
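Here is a minimal sketch of both situations, assuming perfectly separable phases; the decay function used in the second loop is purely hypothetical, chosen only to illustrate a parallel fraction that shrinks as processors are added:

```python
# N-way parallelization with a parallelizable fraction p: the
# parallel part runs n times as fast, the rest at normal speed.

def n_way_speedup(p, n):
    """Amdahl's-law speedup for n processors with parallel fraction p."""
    return 1.0 / ((1.0 - p) + p / n)

# With a fixed p, the formula is easy to apply...
for n in (2, 4, 16):
    print(n, n_way_speedup(0.9, n))

# ...but if ordering constraints make the effective fraction depend
# on n itself (say, hypothetically, p(n) = 0.9 - 0.02 * n), the
# "percentage of parallelization" is a moving target.
for n in (2, 4, 16):
    print(n, n_way_speedup(0.9 - 0.02 * n, n))
```

The point isn't the particular numbers; it's that once p depends on n, the formula merely describes a measurement after the fact rather than predicting anything.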