Does a computer take more time to multiply, divide, subtract, or add two big numbers than two smaller ones?
We (people) take more time to multiply, add, divide and subtract two large numbers than two small numbers.

Does a computer take more time to multiply 5 * 2 than, say, 51234 * 987654, or is the operation done in the same amount of time?

What about two numbers that are wider than the word size of the processor (e.g. two 128-bit numbers)?
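One quick way to observe the difference is with CPython, whose built-in ints are arbitrary-precision; absolute timings are machine-dependent, but the gap between word-sized and multi-word operands shows up clearly (a sketch, not a rigorous benchmark):

```python
import timeit

# Multiply two small numbers vs. two 2048-bit numbers, 100,000 times each.
# The operands are passed in via globals so the compiler cannot
# constant-fold the product away.
a, b = (1 << 2048) - 1, (1 << 2048) - 12345
t_small = timeit.timeit("x * y", globals={"x": 5, "y": 2}, number=100_000)
t_big = timeit.timeit("x * y", globals={"x": a, "y": b}, number=100_000)
print(f"small: {t_small:.4f}s  big: {t_big:.4f}s")
```

On a typical machine the 2048-bit multiplications take noticeably longer, because once operands exceed a machine word the interpreter falls back to a multi-limb software routine.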

I saw the article https://en.wikipedia.org/wiki/Multiplication_algorithm

Zymosis answered 8/3, 2016 at 13:26 Comment(2)
Depends on which computer, which operation, how big, maybe the phase of the moon. – Teagan
I think they have the same complexity, since how would the CPU know if the number is big or small? It still has to chew through every bit of it (regardless of whether the higher bits are 0). – Ala

Depends upon the input type. For primitives natively supported by the CPU, such as multiplication of 64-bit numbers on a 64-bit CPU: no, these are atomic operations that always take the same amount of time. For non-primitive data types, like Java's BigInteger or comparable library classes in other languages: yes, these are no longer atomic operations, so the time required differs depending on the size of the operands.

Multiplication of primitives always takes the same amount of time for a simple reason: the operation is built hard-wired without any conditional execution and always processes all 64 bits on a 64-bit CPU, no matter whether the input is only 5 bits long or takes up all 64 bits. The same applies to architectures with any other word size.

EDIT:
As @nwellnhof pointed out, some instructions do contain branching, for example floating-point arithmetic. These instructions are usually implemented in microcode and thus can't be counted as atomic instructions in the narrower sense.
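To illustrate the non-primitive case: libraries like BigInteger split each operand into word-sized limbs and combine partial products. A sketch of the 128-bit case in Python (Python's big ints stand in for the hardware's 64x64-to-128-bit multiply; `mul128` is an illustrative name, not any library's real API):

```python
MASK64 = (1 << 64) - 1  # low 64 bits

def mul128(a, b):
    """Schoolbook multiplication of two 128-bit numbers from 64-bit limbs.

    Each operand is split into a high and a low 64-bit half; the four
    partial products are then shifted into place and summed. This is the
    scheme a bignum library generalises to any number of limbs.
    """
    a_lo, a_hi = a & MASK64, a >> 64
    b_lo, b_hi = b & MASK64, b >> 64
    return (a_lo * b_lo
            + ((a_lo * b_hi + a_hi * b_lo) << 64)
            + ((a_hi * b_hi) << 128))
```

With n limbs per operand the schoolbook scheme needs n^2 partial products, which is why big-number multiplication slows down as operands grow (Karatsuba and related algorithms reduce the exponent, but the cost still depends on the operand size).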

Sextuplicate answered 8/3, 2016 at 14:40 Comment(2)
Machine instructions can very well contain conditional execution paths and take a different number of cycles depending on the actual input values. Many floating-point instructions on x86 CPUs do, for example. – Hyperbolism
@Hyperbolism well, I guess I've got to put that a bit more precisely. I've omitted microcode instructions for simplicity. – Sextuplicate

Assuming you're interested mainly in the x86 family: there was a time when the time multiplication took depended on the operands. That was in the previous century, on processors up to and including the 80486. From the Pentium on, imul has always taken a number of cycles independent of its input.

Division has always been, and still is, an operation whose cycle count does depend on the inputs; IIRC it depends mainly on the magnitude of the result.
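A software analogue shows why divider latency is naturally data-dependent: a shift-and-subtract divider does one iteration per bit of the dividend, so the work grows with the magnitude of the operands (a simplified sketch; real hardware retires several quotient bits per cycle and may exit early):

```python
def shift_subtract_divide(dividend, divisor):
    """Schoolbook shift-and-subtract (restoring) division.

    The loop runs once per bit of the dividend, mirroring how a simple
    hardware divider produces one quotient bit per step -- which is why
    its latency depends on the operand magnitudes.
    """
    assert dividend >= 0 and divisor > 0
    quotient, remainder = 0, 0
    for i in reversed(range(dividend.bit_length())):
        # Bring down the next dividend bit into the running remainder.
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        quotient <<= 1
        if remainder >= divisor:
            remainder -= divisor
            quotient |= 1
    return quotient, remainder
```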

Floating-point instructions can also take variable time if you consider "weird" inputs such as denormals.

Teagan answered 8/3, 2016 at 15:1 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.