Can't understand Belady's anomaly

So Belady's anomaly states that when using a FIFO page replacement policy, adding more page space means we'll have more page faults.

My intuition says that we should have fewer, or at most the same number of, page faults as we add more page space.

If we think of a FIFO queue as a pipe, adding more page space is like making the pipe bigger:

 ____
O____O  size 4

 ________
O________O  size 8

So, why would you get more page faults? My intuition says that with a longer pipe, you'd take a bit longer to start having page faults (so, with an infinite pipe you'd have no page faults) and then you'd have just as many page faults and just as often as with a smaller pipe.

What is wrong with my reasoning?

Decennary answered 26/1, 2011 at 0:12 Comment(5)
Not sure exactly what you're looking for here -- the WP page has an actual example: en.wikipedia.org/wiki/Belady's_anomaly – Minute
Have you read the Wikipedia article? It is called an anomaly because it runs counter to most people's intuition. :) – Rabideau
In this particular case, having more page frames caused the algorithm to keep around, for longer, pages that end up being used less frequently later, and they don't drop out of the FIFO fast enough to free up space for pages that actually end up being needed. But I don't know that there's a general intuition you can take from this. That's just what can happen. – Minute
"In this particular case, having more page frames caused the algorithm to keep pages around longer that end up being used less frequently later" I fail to understand how that can make any kind of difference. Why would it better to not just have them in memory at all(what happens when you have a smaller pipe)Decennary
devoured: In this case, sure, it would have been better, but a FIFO can't predict the future. Did you work through the example on Wikipedia? – Minute

The reason that, when using FIFO, increasing the number of page frames can increase the fault rate for some access patterns is that with more frames, recently requested pages can remain near the bottom of the FIFO queue for longer: a hit does not refresh a page's position in the queue, so a page can end up being evicted right before it is needed again.

Consider the third time that "3" is requested in the Wikipedia example below:

Page faults are marked with an "f".

1 (three frames):

Page Requests   3    2    1    0    3    2    4    3    2    1    0    4
Newest Page     3f   2f   1f   0f   3f   2f   4f   4    4    1f   0f   0
                     3    2    1    0    3    2    2    2    4    1    1
Oldest Page               3    2    1    0    3    3    3    2    4    4

2 (four frames):

Page Requests   3    2    1    0    3    2    4    3    2    1    0    4
Newest Page     3f   2f   1f   0f   0    0    4f   3f   2f   1f   0f   4f
                     3    2    1    1    1    0    4    3    2    1    0
                          3    2    2    2    1    0    4    3    2    1
Oldest Page                    3    3    3    2    1    0    4    3    2

In the first example (with three frames), there are 9 page faults.

In the second example (with four frames), there are 10 page faults.

When using FIFO, increasing the size of the cache changes the order in which items are removed, which in some cases can increase the fault rate.

Bélády's anomaly does not state anything about the general trend of fault rates with respect to cache size. So your reasoning (viewing the cache as a pipe) is not wrong in the general case.

In summary: Bélády's anomaly points out that, because a hit never refreshes a page's position in the queue, a larger FIFO can end up re-enqueueing items later than a smaller one does, and a particular (and possibly rare) access pattern can exploit this to give the larger cache the higher fault rate.
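
A quick way to check this on any reference string is to simulate FIFO directly. Here is a minimal Python sketch (my own illustration; the function name fifo_faults is not from any library) that reproduces both traces above:

from collections import deque

def fifo_faults(requests, num_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()  # leftmost element is the oldest page
    faults = 0
    for page in requests:
        if page in frames:
            continue  # hit: FIFO leaves the queue untouched
        faults += 1
        if len(frames) == num_frames:
            frames.popleft()  # evict the oldest page
        frames.append(page)
    return faults

requests = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(requests, 3))  # 9 faults
print(fifo_faults(requests, 4))  # 10 faults

Note that on a hit the queue is left completely untouched; that refusal to refresh a page's position is exactly what the anomaly exploits.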

Paleo answered 26/1, 2011 at 1:1 Comment(5)
But when 3 was accessed the second time, in the second case it was in the cache and in the first it was not. – Fortaleza
Can you please explain the example? Those numbers in the example are not making any sense to me; I'm not able to see the pattern or get their meaning :( – Openwork
Well, got it... thanks to this lecture... YouTube FTW :) – Openwork
Can it occur in algorithms other than FIFO? – Subsequence
@Olhovsky, re "bottom": shouldn't they be brought to the top on access? – Shane

This statement is wrong because it is overgeneralized:

Belady's anomaly states that when using a FIFO page replacement policy, adding more page space means we'll have more page faults

This is a corrected version:

"Belady's Anomaly states that when using a FIFO page replacement policy, when adding more page space, some memory access patterns will actually result in more page faults."

In other words... whether the anomaly is observed depends on the actual memory access pattern.
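
To make "depends on the access pattern" concrete, you can sweep the frame count for the classic reference string with a small FIFO simulator. A minimal Python sketch (my own helper, written out in full so the snippet stands alone):

from collections import deque

def fifo_faults(requests, num_frames):
    frames, faults = deque(), 0
    for page in requests:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()  # evict the oldest page
            frames.append(page)
    return faults

requests = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print([fifo_faults(requests, n) for n in range(1, 6)])
# [12, 12, 9, 10, 5]: more frames usually helps, but 4 frames
# fault more than 3 do; the anomaly shows up only at that one step.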

Gaelic answered 25/3, 2015 at 17:12 Comment(4)
Does this only occur in the FIFO algorithm? – Subsequence
What about the Random Page Replacement algorithm? Does it suffer from Belady's anomaly? – Tetchy
@Tetchy I would say it does not. With Random Page Replacement, the behavior is random every time regardless of the memory access pattern. And how often it faults is also a random variable. Adding more space would not be expected to result in more page faults on average... – Gaelic
But if the RPR algorithm exhibits Belady's anomaly even once, the algorithm is said to suffer from the anomaly. FIFO doesn't always suffer from the anomaly either. Consider: what if the RPR algorithm happens to mimic the FIFO algorithm? – Tetchy

Belady's anomaly can occur in page replacement algorithms that do not follow the stack property: with a stack algorithm, the set of pages in memory when there are fewer frames is always a subset of the set of pages in memory when there are more frames, so any page that was resident before you added frames is still resident afterwards. The anomaly can occur in FIFO, and sometimes even in random page replacement, but not in LRU or the optimal algorithm, which are stack algorithms.
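
To see the stack (inclusion) property concretely: at every step, the pages resident with n frames should be a subset of those resident with n + 1 frames. A minimal Python sketch (my own helper names) showing that FIFO violates it on the classic reference string:

from collections import deque

def fifo_resident_sets(requests, num_frames):
    # Yield the set of resident pages after each request under FIFO.
    frames = deque()
    for page in requests:
        if page not in frames:
            if len(frames) == num_frames:
                frames.popleft()  # evict the oldest page
            frames.append(page)
        yield set(frames)

requests = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
pairs = zip(fifo_resident_sets(requests, 3), fifo_resident_sets(requests, 4))
for step, (small, large) in enumerate(pairs, start=1):
    if not small <= large:  # the inclusion property is violated
        print(f"step {step}: {small} is not a subset of {large}")

At steps 7, 8, and 11 the 3-frame contents are not a subset of the 4-frame contents, which is exactly the opening the anomaly needs.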

Exacerbate answered 6/4, 2017 at 4:37 Comment(0)

I could not understand Belady's anomaly even after reading the Wikipedia article and the accepted answer. After writing out the trace I kind of got it. Here I'd like to share my understanding.

Keys to understanding Belady's anomaly:

  • Unlike LRU, FIFO just pushes out the oldest element, regardless of frequency of use. So staying in the FIFO longer means getting closer to eviction.

From here on, I refer to the 3-frame and 4-frame FIFO queues as FIFO3 and FIFO4.

To understand the Wikipedia example, I divided it into two parts: when FIFO3 catches up with FIFO4, and when FIFO3 overtakes FIFO4.

How FIFO3 catches up with FIFO4 on the 9th request

[trace image: FIFO3 and FIFO4 through the 9th request]

Look at 3 in both FIFOs. In FIFO3, 3 is evicted on the 4th request and comes back on the 5th, so it is still there on the 8th and a cache hit happens. In FIFO4, 3 is a HIT on the 5th, but since a hit does not refresh its position in the queue, 3 stays old and is evicted on the 7th, right before the next 3 on the 8th.

2 behaves the same way as 3. The second 2 (6th) is a MISS on FIFO3 and a HIT on FIFO4, but the third 2 (9th) is a HIT on FIFO3 and a MISS on FIFO4. It might help to think of it like this: on FIFO3, 3 was re-inserted on the 5th, so it stayed until the 8th; on FIFO4, 3 was old and was evicted on the 7th, right before the next 3 came.

How FIFO3 overtakes FIFO4

[trace image: FIFO3 and FIFO4 through the 12th request]

Because there are two cache misses on the 8th and 9th requests in FIFO4, 4 is pushed down and evicted on the 11th. FIFO3 still retains 4 on the 12th because the 8th and 9th are cache hits there, so 4 was not pushed down. I think this is why Wikipedia's article says "penny wise, pound foolish".

Conclusion

FIFO is a simple and naive algorithm that does not take recency or frequency of use into account. You might get a better understanding by applying LRU (Least Recently Used) to Wikipedia's example: with LRU, 4 frames do better than 3.
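
Following that suggestion, here is a minimal LRU sketch in Python (my own code, run against the same reference string as the Wikipedia example). The one-line difference from FIFO is that a hit refreshes the page's position:

from collections import OrderedDict

def lru_faults(requests, num_frames):
    cache = OrderedDict()  # leftmost entry is the least recently used
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)  # a hit refreshes the position; FIFO lacks this
        else:
            faults += 1
            if len(cache) == num_frames:
                cache.popitem(last=False)  # evict the least recently used page
            cache[page] = True
    return faults

requests = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print([lru_faults(requests, n) for n in range(1, 6)])
# [12, 12, 10, 8, 5]: with LRU, more frames never cost more faults here.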

Transducer answered 2/11, 2020 at 12:53 Comment(2)
Good insight. So if I get your point correctly: for FIFO, having a smaller cache size is likely to make the oldest page be kicked out early. However, this is NOT ALWAYS harmful. Later, when the same page is loaded back due to a page fault, its position in the queue is "refreshed" and it is positioned at the head, which makes it likely to survive until the next access. On the other hand, a larger cache size can DELAY the time when a page is kicked out. When that eventually happens, it could be too late, as the same page is soon going to be accessed again. – Lingerie
The general idea: FIFO does not take frequency/history/access pattern into account. When a page is recently accessed, common sense says it should remain longer in the cache. To be able to remain longer, the page needs to be in a "secure" (i.e. near the head) position in the queue. However, a larger cache size can sometimes produce the adverse effect: the recently accessed page will be removed before other pages, so a page fault will come again on the next access. – Lingerie

Belady's anomaly happens in a FIFO scheme only when the page that is currently being referenced is the page that was most recently removed from main memory. Only in this case, even though you add more page space, will you have more page faults.

Coughlin answered 20/5, 2013 at 17:25 Comment(1)
Does it only happen in FIFO? – Subsequence