The accepted answer can be extended with the fractions module so that you no longer need to find the "tolerance", i.e., the minimal power of 10 that makes all array entries integers. This makes the solution work for arbitrary real-valued floats.
Here is the original example with the current accepted answer:
import numpy as np

b = np.array([1.0, 0.5, 0.25, 0.75, 0.5])
d = np.gcd.reduce((b * 100).astype(int))
print(100/d, 100/d*b)
Output: 4.0 [4. 2. 1. 3. 2.]
If the power of 10 is not adjusted accordingly, the following example does not work with the currently accepted answer:
b = np.array([1./5, 2./11, 3./15, 4./9, 5./21])
d = np.gcd.reduce((b * 100).astype(int))
print(100/d, 100/d*b)
Output: 100.0 [20. 18.18181818 20. 44.44444444 23.80952381]
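To see why limit_denominator() matters here: 2/11 has no terminating decimal expansion, so no power of 10 makes it an integer, and the float actually stored in the array is only a binary approximation of 2/11. Converting that float to a Fraction directly reproduces the binary approximation exactly; limit_denominator() snaps it back to the intended ratio:

import fractions

x = 2.0 / 11.0
# Fraction(x) reproduces the binary float exactly, with an enormous denominator:
print(fractions.Fraction(x))
# limit_denominator() returns the closest fraction whose denominator is at
# most 1,000,000 (the default), recovering the intended ratio:
print(fractions.Fraction(x).limit_denominator())  # 2/11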
But with this solution, which takes the least common multiple of the recovered denominators, you do not need to find the minimal power of 10 first:
import fractions

b = np.array([1.0, 0.5, 0.25, 0.75, 0.5])
d = np.lcm.reduce([fractions.Fraction(x).limit_denominator().denominator for x in b])
print(d, d*b)

b = np.array([1./5, 2./11, 3./15, 4./9, 5./21])
d = np.lcm.reduce([fractions.Fraction(x).limit_denominator().denominator for x in b])
print(d, d*b)
It still works with the original example, output: 4 [4. 2. 1. 3. 2.]
But it also works with the new example, output: 3465 [ 693. 630. 693. 1540. 825.]
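For reuse, the whole recipe fits in a small helper; the function name to_integer_array below is my own invention, not from the accepted answer:

import fractions
import numpy as np

def to_integer_array(b, max_denominator=1000000):
    # d is the least common multiple of all limited denominators, so every
    # entry of d*b is an integer, assuming each entry of b is close to a
    # rational with denominator at most max_denominator.
    d = np.lcm.reduce([
        fractions.Fraction(x).limit_denominator(max_denominator).denominator
        for x in b
    ])
    return d, d * b

d, scaled = to_integer_array(np.array([1./5, 2./11, 3./15, 4./9, 5./21]))
print(d, scaled)  # 3465 [ 693.  630.  693. 1540.  825.]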
As Mark Dickinson points out in a comment on the accepted answer, this may fail for fractions whose denominators exceed 1,000,000, the default max_denominator of limit_denominator().
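A quick sketch of that failure mode, and the workaround of passing a larger max_denominator (10**7 here is an arbitrary choice):

import fractions

x = 1.0 / 1234567  # true denominator exceeds the default limit of 1,000,000
print(fractions.Fraction(x).limit_denominator())       # denominator capped at 1,000,000: wrong ratio
print(fractions.Fraction(x).limit_denominator(10**7))  # Fraction(1, 1234567)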
A comment from Multifoliate adds another quick check: np.array([0.1, 0.9, 0.25, 0.75, 0.5]) * 20 gives array([2., 18., 5., 15., 10.]).