Yes, you're right, in general, though there is one small caveat worth mentioning.
In certain circumstances, such as reading from a disk or swapping pages into memory, it can make a difference - a big difference in some cases - whether reads are sequential within some sort of block, with an overhead incurred each time you cross into a different one. This can be very sensitive to the order of the loops when traversing a matrix of two or more dimensions.
By way of a specific example, take a case where data comes in 128-byte pages, we can only hold one page at a time (the most recent page is cached), and we want to process every element of a 128x128 array, in any order we like. Processing an element takes 1 s, and swapping in a page incurs a delay 60x that, so the delay is 60 s == 1 minute. See why I picked 60x? That's probably conservative if we're talking about, say, an original 1x CD-ROM drive, the internet, lots of things - even main memory on a "cache miss", though there it's not so dramatic.
My C is a bit rusty, and don't gripe about magic numbers instead of named constants, but you might typically do something along these lines:
unsigned char *buffer = malloc(128 * 128);   /* needs <stdlib.h> */
int x, y;

/* row-major: the inner loop walks consecutive bytes, so each
   128-byte page gets swapped in exactly once */
for (x = 0; x < 128; x++) {
    for (y = 0; y < 128; y++) {
        process_thingie(buffer[x * 128 + y]);
    }
}
so the processing takes 128*128 = 16,384 s ≈ 4½ hours, and you touch each page exactly once, so that's another 128 minutes of swapping - call it a total of 6½ hours.
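Spelled out, under the 1 s per element and 60 s per swap assumed above:

$$T_{\text{row-major}} = 128^2 \cdot 1\,\mathrm{s} + 128 \cdot 60\,\mathrm{s} = 16384\,\mathrm{s} + 7680\,\mathrm{s} \approx 6.7\ \text{hours}$$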
Instead, you could ignore the 2D aspect entirely - as in your question:
int i;
for (i = 0; i < 128 * 128; i++)
    process_thingie(buffer[i]);
which gives the same result - 16k elements processed, each page swapped in once - and there you go, no problem:
"Yes. You're right."
BUT. Let's say that, for whatever reason, you get the inner and outer loops reversed:
for (y = 0; y < 128; y++) {
    for (x = 0; x < 128; x++) {
        process_thingie(buffer[x * 128 + y]);   /* stride of 128 between accesses */
    }
}
(or just change x*128 + y to y*128 + x - same thing)
Well, that's very different. Each access is 128 bytes away from the previous one, so we're obliged to swap in a new page on every single iteration.
The processing still takes 128*128 = 16,384 s ≈ 4½ hours, but with a swap on every iteration, that's ALSO 16,384 minutes of swapping, for a total of about 11½ DAYS.
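Spelled out the same way:

$$T_{\text{column-major}} = 128^2 \cdot 1\,\mathrm{s} + 128^2 \cdot 60\,\mathrm{s} = 16384\,\mathrm{s} + 983040\,\mathrm{s} \approx 11.6\ \text{days}$$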
Technically it's still O(n²)... it's just that the number of page swaps went from n to n², so a second, much more expensive n² term is hiding in there. Either implementation will scale at O(n²) relative to itself.
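If you'd rather not take my arithmetic on faith, here's a toy simulation of the cost model. The PAGE/N constants, the one-page "cache", and the touch() helper are all my own inventions for illustration, not anything from the question:

#include <stdio.h>

/* toy cost model: one 128-byte page cached at a time; touching a
   byte on any other page counts as a swap */
#define PAGE 128
#define N    128

static long swaps = 0;
static int  cached_page = -1;

static void touch(int index) {
    int page = index / PAGE;
    if (page != cached_page) {   /* miss: swap the new page in */
        swaps++;
        cached_page = page;
    }
}

int main(void) {
    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++)
            touch(x * PAGE + y);          /* row-major */
    printf("row-major swaps:    %ld\n", swaps);

    swaps = 0;
    cached_page = -1;
    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++)
            touch(x * PAGE + y);          /* column-major */
    printf("column-major swaps: %ld\n", swaps);
    return 0;
}

Compiled and run, it should report 128 swaps for the row-major order and 16384 for the column-major one.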
This used to be a very real constraint: CD-ROM game developers, for example, had to concern themselves with the physical placement of files on the disc (not quite the same thing, but see this discussion of related issues in Myst).
Nowadays this is rarely such a concern: SSDs are crazy fast, and optimizing compilers (I think?) are usually smart enough to see what's up and interchange the two loops. But they certainly cannot catch everything - there may be some good reason you need to access the data in a particular order, or you might need optimum performance for a certain task.
This does still come up in real life, and programmers sometimes need to be aware of it - I believe it's one of the things game console developers still have to obsess over.
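To see the same effect on modern hardware - where it's CPU cache lines rather than my toy 128-byte pages doing the damage - you can time the two traversal orders yourself. A rough sketch (the 8192x8192 size is just an illustrative choice; the volatile accumulator keeps the compiler from optimizing the loops away or reordering them):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 8192   /* 64 MB matrix: far too big for any CPU cache */

int main(void) {
    unsigned char *m = calloc((size_t)N * N, 1);
    volatile unsigned long sum = 0;   /* volatile: keeps the loops honest */
    clock_t t;
    int x, y;

    if (!m) return 1;

    t = clock();
    for (x = 0; x < N; x++)           /* row-major: consecutive bytes */
        for (y = 0; y < N; y++)
            sum += m[(size_t)x * N + y];
    printf("row-major:    %.2f s\n", (double)(clock() - t) / CLOCKS_PER_SEC);

    t = clock();
    for (y = 0; y < N; y++)           /* column-major: stride of N bytes */
        for (x = 0; x < N; x++)
            sum += m[(size_t)x * N + y];
    printf("column-major: %.2f s\n", (double)(clock() - t) / CLOCKS_PER_SEC);

    free(m);
    return 0;
}

On a typical desktop the column-major pass should come out several times slower - nothing like 60x, but it's the same phenomenon.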
Comments:

n*n = n². Why the uncertainty? – Natiea

Doesn't "is given as an array of length n" suffice to describe what n is? – Imbrication

i = 1; while i < len(a): i *= 2 is O(log n). The conditions of the loop matter. – Graecoroman

If do_something is very fast, the overhead of the loop itself might matter, as loops in Python are relatively slow. However, some quick testing indicates that it does not make a difference: two nested loops of total length n² are as slow as one of length n². List comprehensions are way faster (again, if do_something is very fast). – Kenley

If the array's length (len(a)) is n, then len(a) * len(a) is by definition n², and that's how many iterations the loop has, so O(n²) is the complexity of the loop. The only way it can be thought of as O(n) is if you define n = len(a) * len(a), which would be non-standard and fairly confusing. Just because it looks similar to a loop that's O(n) doesn't make it O(n). – Drivel