It depends on the input type. For primitives that are natively supported by the CPU, such as the multiplication of 64-bit numbers on a 64-bit CPU: no, these are atomic operations that always take the same amount of time. For non-primitive data types, like Java's BigInteger or comparable library classes in other languages: yes, these aren't atomic operations anymore, and the time they take depends on the size of the operands.
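A rough way to see the BigInteger case in Java is the sketch below. It is a naive single-shot timing (no warm-up, no benchmark harness such as JMH), and the bit sizes are arbitrary, so treat the absolute numbers as illustrative only; the growth with operand size is the point.

```java
import java.math.BigInteger;
import java.util.Random;

public class BigIntegerTiming {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        // Arbitrary operand sizes in bits; larger operands need more limb-by-limb work.
        for (int bits : new int[]{1_000, 10_000, 100_000}) {
            BigInteger a = new BigInteger(bits, rnd);
            BigInteger b = new BigInteger(bits, rnd);
            long start = System.nanoTime();
            BigInteger product = a.multiply(b);
            long elapsed = System.nanoTime() - start;
            System.out.println(bits + "-bit operands: " + elapsed + " ns"
                    + " (result has " + product.bitLength() + " bits)");
        }
    }
}
```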
Multiplication of primitives always takes the same amount of time for a simple reason: the operation is hard-wired, without any conditional execution, and always processes all 64 bits on a 64-bit CPU, no matter whether the input only needs 5 bits or uses all 64. The same applies to architectures with any other word size.
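For contrast, a minimal sketch of the primitive case: multiplying a `long` by a small value versus a large one in a tight loop should take roughly the same time per iteration, because each multiplication maps to the same fixed-width hardware instruction. Again, this is a naive timing rather than a real benchmark; the loop count and operand values are made up for illustration, and JIT effects can distort small differences.

```java
public class LongMultiplyTiming {
    // Repeatedly multiply to accumulate many primitive multiplications;
    // the result is returned so the JIT cannot simply remove the loop.
    static long burn(long x, int iterations) {
        long acc = 1;
        for (int i = 0; i < iterations; i++) {
            acc *= x; // one fixed-width multiply, regardless of how "big" x is
        }
        return acc;
    }

    public static void main(String[] args) {
        int iterations = 100_000_000;
        for (long operand : new long[]{5L, Long.MAX_VALUE / 3}) {
            burn(operand, iterations); // warm-up pass
            long start = System.nanoTime();
            long result = burn(operand, iterations);
            long elapsed = System.nanoTime() - start;
            System.out.println("operand " + operand + ": "
                    + elapsed / 1_000_000 + " ms (result " + result + ")");
        }
    }
}
```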
EDIT:
As @nwellnhof pointed out: some instructions actually do contain branching, for example in floating-point arithmetic. These instructions are usually implemented in microcode and therefore can't be counted as atomic instructions in the narrower sense, though.