You can also convert the entire array to an object array of `Fraction` objects, by abusing the element-wise conversion of NumPy arrays under arithmetic operations. (Note: this requires the original array to be an integer array, since arithmetic between `float`s and `Fraction`s produces `float`s.)
>>> import numpy as np
>>> from fractions import Fraction
>>>
>>> A = np.array([[-1, 1], [-2, -1]])
>>> A
array([[-1,  1],
       [-2, -1]])
>>>
>>> A.dtype
dtype('int64')
>>>
>>> A = A + Fraction()
>>> A
array([[Fraction(-1, 1), Fraction(1, 1)],
       [Fraction(-2, 1), Fraction(-1, 1)]], dtype=object)
With the array in this format, any further arithmetic performed will be over elements of type `Fraction`.
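For instance (a quick sketch, assuming the array `A` was built as above), dividing by an integer stays exact instead of decaying to `float`:

```python
import numpy as np
from fractions import Fraction

# Build the object array of Fractions as shown above.
A = np.array([[-1, 1], [-2, -1]]) + Fraction()

# Division by an integer stays exact: every element is still a Fraction.
B = A / 3
print(repr(B[0, 0]))  # Fraction(-1, 3)
print(B.dtype)        # object
```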
Edit: disclaimers
As mentioned by @Hi-Angel in the comments, there are a number of NumPy/SciPy functions (e.g., `np.linalg.inv`) that expect input arrays to use a primitive dtype (e.g., `int32`, `float64`, etc.); these functions tend to be C/Cython-optimized routines that only work on C primitives. And because `fractions.Fraction` is a Python object, these functions will not work on arrays of `Fraction`s.
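A minimal sketch of that failure mode (the exact error message varies across NumPy versions, but the call is rejected):

```python
import numpy as np
from fractions import Fraction

A = np.array([[-1, 1], [-2, -1]]) + Fraction()

# np.linalg.inv is backed by routines compiled for primitive dtypes,
# so calling it on an object-dtype array raises an error instead of inverting.
try:
    np.linalg.inv(A)
except TypeError as exc:
    print("np.linalg.inv rejected the Fraction array:", exc)
```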
And as mentioned elsewhere, even the functions that do work on `Fraction` arrays will run notably slower on them, compared to running on NumPy arrays of primitive dtypes.
However, if you just need a custom numeric object for your application, like the arbitrary-precision rational type `Fraction` or the base-10 floating-point type `decimal.Decimal`, and want the convenience of e.g. element-wise operations on arrays, you CAN use NumPy arrays to achieve that using the method above or similar methods.
But it's not as fast or well-supported as using arrays of primitives, so personally if I don't NEED a custom number type I just use `float64`s or `int64`s.