I'm reading a book about Operating Systems and it gives some examples in C that I mostly understand. The example I'm looking at now shows two nearly identical pieces of code that will run on a fictitious system...
int i, j;
int data[128][128];

for (j = 0; j < 128; j++)
    for (i = 0; i < 128; i++)
        data[i][j] = 0;
And the second piece of code:

int i, j;
int data[128][128];

for (i = 0; i < 128; i++)
    for (j = 0; j < 128; j++)
        data[i][j] = 0;
On this particular system, the first piece of code would result in 16K (128 × 128 = 16,384) page faults, while the second would result in only 128.
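To make sure I follow the arithmetic, here is a small sketch of how I picture the fictitious machine counting faults. The 128-word page size, the row-major layout starting on a page boundary, and the idea that only one page of the array is resident at a time are assumptions I've taken from the problem statement, not from any real OS:

#include <stdio.h>

/* Sketch of the book's fictitious machine: pages hold 128 ints, the
 * 128x128 array is stored row-major starting on a page boundary, and a
 * fault is charged whenever a store lands on a different page than the
 * previous store (i.e. only one page of the array stays resident).
 * These are assumptions from the question, not real OS behaviour. */
#define N 128
#define WORDS_PER_PAGE 128

static long count_faults(int column_major)
{
    long faults = 0;
    long last_page = -1;
    for (int outer = 0; outer < N; outer++) {
        for (int inner = 0; inner < N; inner++) {
            int i = column_major ? inner : outer;
            int j = column_major ? outer : inner;
            /* row-major word offset of data[i][j], divided by the page size */
            long page = ((long)i * N + j) / WORDS_PER_PAGE;
            if (page != last_page) {
                faults++;
                last_page = page;
            }
        }
    }
    return faults;
}

int main(void)
{
    printf("column-major (first example): %ld faults\n", count_faults(1));
    printf("row-major (second example):   %ld faults\n", count_faults(0));
    return 0;
}

Under those assumptions the column-major order touches a different page on every one of the 128 × 128 stores (16,384 faults), while the row-major order changes page only 128 times, which matches the book's numbers.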
My apologies if this is a silly question, but in my experience with .NET I've always been largely unaware of memory. I just create a variable and it exists 'somewhere', but I don't know where and I don't care.
My question is: how would .NET compare to these C examples on this fictional system? (Pages are 128 words in size, so each row of the array fills one page. The first example sets one int on page 1, then one int on page 2, and so on, while the second example sets all the ints on page 1, then all the ints on page 2, and so on.)
Also, while I think I understand why the code produces different amounts of paging, is there anything meaningful I can do with that knowledge? Isn't the page size dependent on the operating system? Is the takeaway that, as a general rule of thumb, I should access memory in an array as contiguously as possible?
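If it helps frame that last question, here is a minimal sketch (my own, not from the book) that checks which loop order actually walks the array contiguously; it relies only on C's row-major layout, not on the fictitious page size:

#include <stdio.h>
#include <stddef.h>

/* C stores data[128][128] row-major, so data[i][j] and data[i][j+1] are
 * adjacent in memory, while data[i][j] and data[i+1][j] are a whole row
 * (128 ints) apart. This just prints the byte distance of each step. */
int main(void)
{
    static int data[128][128];

    ptrdiff_t step_j = (char *)&data[0][1] - (char *)&data[0][0];
    ptrdiff_t step_i = (char *)&data[1][0] - (char *)&data[0][0];

    printf("varying j moves %td bytes per step\n", step_j); /* sizeof(int), e.g. 4 */
    printf("varying i moves %td bytes per step\n", step_i); /* 128 * sizeof(int), e.g. 512 */
    return 0;
}

Varying j moves only sizeof(int) bytes per step, while varying i jumps a full row of 128 ints, which is why putting j in the inner loop stays on one page at a time.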