This question asks for programming languages that accept numeric literals for assignment to arbitrary-precision variables, or for use in arbitrary-precision expressions, without first converting them to an IEEE floating-point representation. For example, consider the following pseudo-language assignment:
BigNum x = 0.1;
Many languages provide, or have access to, libraries which can construct such BigNum objects from a text string. I am looking for languages which can convert a numeric token like 0.1 directly into a BigNum, without requiring the programmer to write a string which must then be parsed and which may throw an exception or flag an error at run time. Instead, I am interested in languages where the compiler or tokenizer reports syntax errors for incorrectly formatted numbers or invalid expressions before the numeric literal is turned into an arbitrary-precision decimal or integer-ratio representation.
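For concreteness, here is a minimal sketch of the run-time string-parsing route I want to avoid, written in Haskell; decimalToRational is a hypothetical helper I made up for illustration, not a library function:

import Data.Ratio ((%))

-- Hypothetical run-time parser: the numeric text is only validated when this
-- function executes, so a typo such as "0..1" surfaces as Nothing (or as an
-- exception in less careful code) at run time, not as a compile-time error.
decimalToRational :: String -> Maybe Rational
decimalToRational s =
  case break (== '.') s of
    (whole, '.' : frac)
      | not (null whole), not (null frac)
      , all (`elem` digits) (whole ++ frac) ->
          Just (read (whole ++ frac) % (10 ^ length frac))
    (whole, "")
      | not (null whole), all (`elem` digits) whole ->
          Just (fromInteger (read whole))
    _ -> Nothing
  where
    digits = "0123456789"

Here decimalToRational "0.1" evaluates to Just (1 % 10), but only once the program is already running; the compiler never inspects the digits.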
From Literals/Floating point on the Rosetta Code web site, it looks like J, Maple, and Maxima provide a literal syntax for arbitrary-precision floating-point numbers. Do any other, more widely used, languages provide the same, or something similar to the pseudo-example above?
As a concrete example, Julia provides built-in support for rational numbers. These numbers have a literal representation which can be used directly in source code. For example:
x = 1//10
Now 1//10 and 0.1 are the same number mathematically -- in base 10. However, most programming languages convert a literal decimal number in source code into an IEEE floating-point number, and often that is exactly what is wanted. Still, more than a few people unfamiliar with IEEE floating-point representation -- or with the similar floating-point formats which have largely faded into history -- are surprised to learn that one tenth is no longer exactly one tenth once it has been converted into a binary fraction. Moreover, the surprise usually arrives after code which works "most of the time" produces an unexpected result because floating-point "errors" accumulate rather than average out or cancel. Of course, that is the nature of floating-point representation and arithmetic, which are, just the same, very useful in practice. Caveat emptor: What Every Computer Scientist Should Know About Floating-Point Arithmetic
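To see the surprise in code, here is a short Haskell sketch -- the same effect is easy to reproduce in Julia or almost any other language:

import Data.Ratio (denominator)

main :: IO ()
main = do
  let asDouble = 0.1 :: Double
  -- The Double nearest to one tenth is a binary fraction; its exact value
  -- has a power-of-two denominator, not 10.
  print (denominator (toRational asDouble))
  -- And the rounding accumulates: ten copies of 0.1 do not sum to 1.0.
  print (sum (replicate 10 asDouble) == 1.0)   -- False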
Still, I find there are times when integers are insufficient and floating-point numbers introduce unnecessary issues into otherwise exact calculations. For those cases, rational-number and arbitrary-precision libraries fit the bill. Great. However, I would still like to know whether any languages support direct representation of rational and arbitrary-precision literals in the language itself. After all, I do not want to settle for a language which only offers string literals that must then be parsed into numbers at run time.
So far, Julia is a good answer for rational numbers, though it is far from the only language with rational-number literals. However, it does not have arbitrary-precision literals. For that, J, Maple, and Maxima seem to have what I am seeking, and perhaps that is very nearly the complete list. Still, if anyone knows of another candidate or two, I would appreciate a pointer...
The Answer So Far...
The best answer to date is Haskell. It provides a rich set of numeric types and operations, along with numeric literal notation that includes rational-number expressions and that appears to treat decimal literals with a fractional part as rational numbers rather than floating-point values in all cases. At least, that is what I gather from a quick reading of the Haskell documentation and a blog post I came across, Overloading Haskell numbers, part 3, Fixed Precision, in which the author states:
...notice that what looks like a floating point literal is actually a rational number; one of the very clever decisions in the original Haskell design.
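A short sketch of that behaviour (my own example, not taken from the blog post): a fractional literal in Haskell desugars to fromRational applied to an exact ratio, so at type Rational the value 0.1 really is one tenth.

import Data.Ratio ((%), numerator, denominator)

-- The literal 0.1 desugars to fromRational (1 % 10); the numeric type at
-- which it is used decides what happens next, and at type Rational nothing
-- is ever rounded.
exactTenth :: Rational
exactTenth = 0.1

main :: IO ()
main = do
  print (exactTenth == 1 % 10)                          -- True
  print (numerator exactTenth, denominator exactTenth)  -- (1,10)
  -- Rounding only happens when the same literal is used at a binary
  -- floating-point type such as Double:
  print (toRational (0.1 :: Double) == 1 % 10)          -- False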
For many programmers, Julia will be more approachable while offering excellent support for a variety of mathematical types and operations as well as usually excellent performance. Python also has a very capable syntax, many natively compiled packages which match or exceed what is available to Julia today, and, unquestionably, far greater adoption in commercial, open-source, and academic projects -- still, my personal preference is Julia when I have a choice.
For myself, I will be spending more time researching Haskell and revisiting OCaml/F#, which may be viable intermediate choices between Julia/Python-like languages and a language like Haskell -- how those languages fall across some sort of spectrum is left as an exercise for the reader. If OCaml/F# offer expressive power comparable to Haskell's in the cases I care about, they may be a better choice simply on the basis of current and likely future adoption. But for now, Haskell seems to be the best answer to my original question.