How does the system compute 2^56 modulo 7 if it's a 32-bit operating system, in cryptography for example?
And how is the result stored in memory?
A 32-bit operating system does not limit you to types of that size. Your application can take two 32-bit words and treat them as one 64-bit number. Most programming languages even have a "double-word" integral type to simplify matters.
You can further extend the concept to create an arbitrary-precision integral data type that is bounded only by the amount of available memory. Essentially you have an array of words, and you store your N-bit numbers in the bits of the words of this array.
The fact that it's a 32-bit operating system does not by itself limit the numeric computation that you can do. A Java long, for example, is a 64-bit integral type, regardless of where it's running. For arbitrary precision, java.math.BigInteger ups the ante and provides an "infinite word size" abstraction. And yes, this "feature" is available even on 32-bit operating systems (because that was never a limiting factor to begin with).
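As a concrete sketch of the same idea, Python's built-in integers are arbitrary-precision (they play the role of java.math.BigInteger here), so the full value of 2^56 can be formed directly, on a 32-bit OS or otherwise:

```python
# Arbitrary-precision integers at work, 32-bit OS or not
# (Python's int plays the role of java.math.BigInteger here).
n = 2 ** 56
print(n)                # 72057594037927936 -- needs more than 32 bits
print(n % 7)            # 4
print((2 ** 1000) % 7)  # works just as well for huge exponents -> 2
```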
Finding a modular multiplicative inverse or performing modular exponentiation is a common mathematical/algorithmic task in the field of cryptography.
One identity that you may want to use here is the following:
A * B (mod M) == (A (mod M)) * (B (mod M)) (mod M)
To find x = 2^56 (mod 7), you do NOT have to first compute and store 2^56. If you have y = 2^55 (mod 7) -- a number between 0 and 6 -- you can find x = y * 2 (mod 7).
But how do you find y = 2^55 (mod 7)? Well, one naive way is to apply the process linearly and first find z = 2^54 (mod 7), and so on. This is linear exponentiation, but you can do better by performing e.g. exponentiation by squaring.
That is, if you have, say, 2^8, you can square it to immediately get 2^16. You can then square that to immediately get 2^32.
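A minimal sketch of exponentiation by squaring with a modular reduction at every step (this is the same algorithm Python's built-in three-argument pow implements; the function name pow_mod is just for illustration):

```python
def pow_mod(base, exp, mod):
    """Compute base**exp % mod via square-and-multiply.

    Intermediate values never exceed mod**2, so huge powers
    like 2**56 are never materialized.
    """
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                   # lowest bit of the exponent set?
            result = result * base % mod
        base = base * base % mod      # square for the next bit
        exp >>= 1                     # move to the next bit
    return result

print(pow_mod(2, 56, 7))   # 4
print(pow(2, 56, 7))       # built-in equivalent -> 4
```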
There are many sophisticated mathematical algorithms applicable to cryptography, and whether or not it's implemented in a program running on a 32-bit or a 64-bit operating system is not directly relevant. As long as enough memory is available, the computer is more than capable of performing arbitrary-precision arithmetic.
Precisely because arbitrary-precision arithmetic is a useful abstraction, many high-performance libraries are available, so that you can build your application on top of a pre-existing framework instead of having to build from the ground up.
Some high-level languages even have arbitrary-precision arithmetic built in. Python, for example, provides arbitrary-precision integers at the language level (the int type, plus long in Python 2).
2**56 == 2**(28+28) == 2**28 * 2**28 == (2**28)**2
2**28 == 2**(14+14) == 2**14 * 2**14 == (2**14)**2
2**14 == 2**(7+7) == 2**7 * 2**7 == (2**7)**2
2**7 == 2**(3+3 +1) == 2**3 * 2**3 * 2 == (2**3)**2 * 2
2**3 == 2**(1+1 +1) == 2**1 * 2**1 * 2 == (2**1)**2 * 2
2**56 == (2**28)**2 == ((2**14)**2)**2 == (((2**7)**2)**2)**2
== (((2*(2**3)**2)**2)**2)**2 == (((2*(2*(2**1)**2)**2)**2)**2)**2
2**56 %7
== (((2*(2*(2**1)**2)**2)**2)**2)**2 %7
== (((2*(2*(2**1 %7)**2 %7)**2 %7)**2 %7)**2 %7)**2 %7
== (((2*(2*(2)**2 %7)**2 %7)**2 %7)**2 %7)**2 %7
== (((2*(2*4 %7)**2 %7)**2 %7)**2 %7)**2 %7
== (((2*(1)**2 %7)**2 %7)**2 %7)**2 %7
== (((2 %7)**2 %7)**2 %7)**2 %7
== ((2**2 %7)**2 %7)**2 %7
== ((4)**2 %7)**2 %7
== (16 %7)**2 %7
== (2)**2 %7
== 4 %7
== 4
so (2**56) % 7 == 4
You'll note that we never dealt with any large numbers: the largest intermediate value was 16, and the largest exponent that appeared was 56.
Furthermore:
2**224 %7 == (2**56)**4 %7 == (4*4*4*4) %7 ==
((16%7) * (16%7)) %7 == (2*2) %7 == 4 %7 == 4
And thus also 2**896 %7 == 4, etc. (since 896 = 4 * 224, where 224 = 4 * 56).
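The whole chain above can be checked mechanically with Python's three-argument pow, which performs the modular reduction at every squaring step:

```python
# Verify the hand derivation: reduce modulo 7 at every step
# instead of ever forming the full powers of two.
assert pow(2, 56, 7) == 4
assert pow(2, 224, 7) == pow(4, 4, 7) == 4   # 224 == 4 * 56
assert pow(2, 896, 7) == 4                   # 896 == 4 * 224
print("all checks pass")
```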
Modular exponentiation algorithms are used for this kind of operation. This Wikipedia article explains how it is done: http://en.wikipedia.org/wiki/Modular_exponentiation
Generally, if you know your numbers are going to get very big, you'll use a library like GMP (GNU Multiple Precision) to handle the math. It does what you'd do on paper if you had 2^32 fingers on your hands.
Which system? Which architecture?
Generally speaking, on a 32-bit architecture you would get overflow results if you relied on the native word size alone. Some languages have built-in, arbitrarily large numeric types which can handle these calculations. Examples of this are BigInteger in Java, and the built-in arbitrary-precision int in Python.
One uses the identity (a * b) mod c = ((a mod c) * (b mod c)) mod c. That means that you can reduce modulo c after every multiplication, so the intermediate results never grow large.
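A straightforward sketch of that identity in action: multiply by 2 fifty-six times, reducing modulo 7 after each step, so no intermediate value ever exceeds 12:

```python
# Apply (a * b) % c == ((a % c) * (b % c)) % c repeatedly:
# reduce after every multiplication, so values stay tiny.
acc = 1
for _ in range(56):
    acc = acc * 2 % 7
print(acc)  # 4
```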
I think your terminology is a bit confused.
A 32-bit operating system or 32-bit architecture is one in which machine addresses are limited to 32 bits. It is not at all unusual for a 32-bit architecture to have arithmetic instructions that operate on 64-bit integers and/or 64-bit floating point numbers.
So, it is quite likely that a machine with a 32-bit architecture (and running a 32-bit operating system) would use 64-bit arithmetic and store the result in memory as a 64-bit long or long long, using 2 consecutive 32-bit words.
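As a sketch of that storage layout (Python is used here just to illustrate the word arithmetic; in C the compiler does this for you when you use long long):

```python
# A 64-bit value stored as two consecutive 32-bit words,
# the way a 32-bit machine holds a C 'long long'.
n = 2 ** 56
lo = n & 0xFFFFFFFF          # low word:  bits 0..31
hi = (n >> 32) & 0xFFFFFFFF  # high word: bits 32..63
print(hex(hi), hex(lo))      # 0x1000000 0x0
assert (hi << 32) | lo == n  # the two words reassemble the value
```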
To add to the other answers, which give a good explanation of 32-bit integers, modular multiplicative inverses, and so on, I'll explain what a 32-bit CPU is.
"32-bit", as most people know the term for CPUs, relates to the size of the address bus, that is, the number of addresses available. For example, on an x86 processor (your common desktop CPU from AMD or Intel), this allows 2^32 bytes of address space, or 4GB. This is usually split between the addressable hardware and the RAM, so the reason for actually implementing 64-bit processors was that we were coming closer to the 4GB RAM limit.
As a side note, this had previously happened when CPUs were 16-bit.
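The 4GB figure follows directly from the 32-bit address width:

```python
# Address space reachable with a 32-bit address bus:
addresses = 2 ** 32
print(addresses)                 # 4294967296 bytes
print(addresses // (1024 ** 3))  # 4 (GiB)
```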
2^56/7 – do you mean "two raised to the 56th power, divided by 7", or "two raised to the 56th power, modulo 7"? You may want to check up on notational conventions if you want to work further in this field. – Trochlear