I am working on implementing an arbitrary-precision number type for a pet project of mine. I already know about the popular, well-tested, robust libraries that do this; I want to build my own as a self-improvement/education exercise.
I am researching the area and trying to figure out whether there is some way to roughly predict, before actually doing the calculation, whether an operation will overflow. I am not too concerned about false positives.
I want to use the smallest representation that is appropriate for the calculation: if the result will stay within the native type's bounds, I keep it there.
For example, multiplying two 64-bit integers will overflow if each operand is large enough. I want to detect this and up-convert the operands to my number type only if the result may exceed 64 bits of resolution. I will be working with signed numbers in this experiment.
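To make that concrete, here is the kind of rough pre-check I have in mind (a C sketch assuming the GCC/Clang `__builtin_clzll` intrinsic; the function names are just placeholders of mine). It compares the operands' bit widths, so it can flag products that would actually have fit, but it should never miss a real overflow:

```c
#include <stdint.h>
#include <stdbool.h>

/* Number of significant bits in |x|, using the GCC/Clang count-leading-zeros
   builtin. The magnitude is computed in unsigned arithmetic so INT64_MIN is safe. */
static int bit_width(int64_t x)
{
    uint64_t mag = (x < 0) ? ~(uint64_t)x + 1 : (uint64_t)x;
    return (mag == 0) ? 0 : 64 - __builtin_clzll(mag);
}

/* Conservative prediction: |a| < 2^m and |b| < 2^n imply |a*b| < 2^(m+n),
   so if m + n <= 63 the signed product is guaranteed to fit in int64_t.
   Anything wider is treated as "might overflow" and would be promoted. */
static bool mul_might_overflow_i64(int64_t a, int64_t b)
{
    return bit_width(a) + bit_width(b) > 63;
}
```

This test would, for instance, flag `(1LL << 32) * (1LL << 31)` (a genuine overflow) but also `(1LL << 31) * (1LL << 31)` (which fits); the latter false positive is fine by me.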
What is the sanest, most efficient way to detect a potential overflow/underflow?
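For reference, the only exact check I currently know of is the GCC/Clang checked-arithmetic builtins, which perform the operation and report whether it wrapped, so they detect rather than predict:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t a = INT64_MAX / 3, b = 5, product;

    /* GCC/Clang builtin: performs the multiplication and returns true
       if the mathematically correct result did not fit in `product`. */
    if (__builtin_mul_overflow(a, b, &product)) {
        puts("overflow: promote to the arbitrary-precision type");
    } else {
        printf("fits natively: %lld\n", (long long)product);
    }
    return 0;
}
```

That works, but it only tells me about the overflow after the multiply has been issued, whereas I would like a cheap test that lets me decide up front whether to stay in `int64_t` or go straight to my own type.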