What pagination schemes can handle rapidly-changing content lists?
Pagination is hard when your content rankings can change quickly, and even harder when those rankings differ per-user. (Let's treat infinite scroll as a type of pagination where the links are invisible.) There are two hard problems: newly-added content at the top, and reranked content.

Let's forget about newly-added content, and accept that you'll have to refresh page 1 to see it. Let's also pretend we're doing pure ORDER BY position; if you're ordering by something else, you might have to use window functions. We show 4 animals per page. They start out:

+----+----------+-----------+
| id | position |  animal   |
+----+----------+-----------+
|  1 |        1 | Alpacas   |
|  2 |        2 | Bats      |
|  3 |        3 | Cows      |
|  4 |        4 | Dogs      |
|  5 |        5 | Elephants |
|  6 |        6 | Foxes     |
|  7 |        7 | Giraffes  |
|  8 |        8 | Horses    |
+----+----------+-----------+

After we fetch page 1, and before we fetch page 2, a lot of items move around. The DB is now:

+----+----------+-----------+
| id | position |  animal   |
+----+----------+-----------+
|  4 |        1 | Dogs      |
|  2 |        2 | Bats      |
|  1 |        3 | Alpacas   |
|  5 |        4 | Elephants |
|  6 |        5 | Foxes     |
|  7 |        6 | Giraffes  |
|  3 |        7 | Cows      |
|  8 |        8 | Horses    |
+----+----------+-----------+

There are three common approaches:

Offset/limit approach

This is the typical naive approach; in Rails, it's how will_paginate and Kaminari work. If I want to fetch page 2, I'll do

SELECT * FROM animals
ORDER BY animals.position
OFFSET ((:page_num - 1) * :page_size) 
LIMIT :page_size;

which gets rows 5-8. I'll never see Elephants, and I'll see Cows twice.
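To make that failure concrete, here's a minimal runnable sketch (mine, not from any pagination library) using Python's sqlite3 and the two tables above:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE animals (id INTEGER PRIMARY KEY, position INTEGER, animal TEXT)")
db.executemany("INSERT INTO animals VALUES (?, ?, ?)",
               [(i, i, name) for i, name in enumerate(
                   ["Alpacas", "Bats", "Cows", "Dogs",
                    "Elephants", "Foxes", "Giraffes", "Horses"], start=1)])

PAGE_SIZE = 4

def fetch_page(page_num):
    # Naive offset/limit pagination against the *current* positions.
    return [row[0] for row in db.execute(
        "SELECT animal FROM animals ORDER BY position LIMIT ? OFFSET ?",
        (PAGE_SIZE, (page_num - 1) * PAGE_SIZE))]

page1 = fetch_page(1)   # ['Alpacas', 'Bats', 'Cows', 'Dogs']

# Rerank everything between the two requests (the second table above).
for animal_id, pos in {4: 1, 2: 2, 1: 3, 5: 4, 6: 5, 7: 6, 3: 7, 8: 8}.items():
    db.execute("UPDATE animals SET position = ? WHERE id = ?", (pos, animal_id))

page2 = fetch_page(2)   # ['Foxes', 'Giraffes', 'Cows', 'Horses']
# Cows appears on both pages; Elephants never appears at all.
```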

Last seen ID approach

Reddit takes a different approach. Instead of calculating the first row based on page size, the client tracks the ID of the last item you've seen, like a bookmark. When you hit "next", they start looking from that bookmark onward:

SELECT * FROM animals
WHERE position > (
  SELECT position FROM animals 
  WHERE id = :last_seen_id
) 
ORDER BY position
LIMIT :page_size;

In some cases, this works better than page/offset. But in our case, Dogs, the last-seen post, zoomed right to #1. So the client sends up ?last_seen_id=4, and my page 2 is Bats, Alpacas, Elephants and Foxes. I haven't missed any animals, but I saw Bats and Alpacas twice.
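The same failure can be reproduced with a short sketch (my code, not Reddit's) run against the already-reranked table:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE animals (id INTEGER PRIMARY KEY, position INTEGER, animal TEXT)")
# The reranked table from the question.
db.executemany("INSERT INTO animals VALUES (?, ?, ?)",
               [(4, 1, "Dogs"), (2, 2, "Bats"), (1, 3, "Alpacas"),
                (5, 4, "Elephants"), (6, 5, "Foxes"), (7, 6, "Giraffes"),
                (3, 7, "Cows"), (8, 8, "Horses")])

PAGE_SIZE = 4

def fetch_after(last_seen_id):
    # Keyset pagination: resume from the *current* position of the bookmark.
    return [r[0] for r in db.execute(
        """SELECT animal FROM animals
           WHERE position > (SELECT position FROM animals WHERE id = ?)
           ORDER BY position
           LIMIT ?""", (last_seen_id, PAGE_SIZE))]

# Page 1 ended with Dogs (id 4), which has since jumped to position 1,
# so "everything after Dogs" re-serves Bats and Alpacas.
page2 = fetch_after(4)   # ['Bats', 'Alpacas', 'Elephants', 'Foxes']
```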

Server side state

HackerNews (and our site, right now) solves this with server-side continuations; they store the entire result set for you (or at least several pages in advance?), and the "More" link references that continuation. When I fetch page 2, I ask for "page 2 of my original query". It uses the same offset/limit calculation, but since it's against the original query, I simply don't care that things have now moved around. I see Elephants, Foxes, Giraffes, and Horses. No dups, no missed items.

The downside is that we have to store a lot of state on the server. On HN, that state lives in RAM, and in reality those continuations often expire before you can press the "More" button, forcing you all the way back to page 1 to find a valid link. In most applications, you can store it in memcached, or even in the database itself (in your own table, or, in Oracle or PostgreSQL, using holdable cursors). Depending on your application, there may be a performance hit; in PostgreSQL, at least, you have to find a way to hit the same database connection again, which requires sticky sessions or some clever back-end routing.
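As an illustration of the continuation idea, here's a minimal sketch (invented names, an in-memory dict standing in for memcached or a continuation table): freeze the ordered ID list on the first query, then page against that frozen list.

```python
import sqlite3
import uuid

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE animals (id INTEGER PRIMARY KEY, position INTEGER, animal TEXT)")
db.executemany("INSERT INTO animals VALUES (?, ?, ?)",
               [(i, i, name) for i, name in enumerate(
                   ["Alpacas", "Bats", "Cows", "Dogs",
                    "Elephants", "Foxes", "Giraffes", "Horses"], start=1)])

continuations = {}  # stand-in for memcached or a server-side continuation table

def start_query():
    # Freeze the full ordered ID list at query time; hand back a token.
    ids = [r[0] for r in db.execute("SELECT id FROM animals ORDER BY position")]
    token = uuid.uuid4().hex
    continuations[token] = ids
    return token

def fetch_page(token, page_num, page_size=4):
    ids = continuations[token][(page_num - 1) * page_size:page_num * page_size]
    marks = ",".join("?" * len(ids))
    by_id = dict(db.execute(f"SELECT id, animal FROM animals WHERE id IN ({marks})", ids))
    return [by_id[i] for i in ids]  # keep the frozen order, not the current one

token = start_query()
page1 = fetch_page(token, 1)   # ['Alpacas', 'Bats', 'Cows', 'Dogs']

# Rerank everything between the two requests.
for animal_id, pos in {4: 1, 2: 2, 1: 3, 5: 4, 6: 5, 7: 6, 3: 7, 8: 8}.items():
    db.execute("UPDATE animals SET position = ? WHERE id = ?", (pos, animal_id))

page2 = fetch_page(token, 2)   # still ['Elephants', 'Foxes', 'Giraffes', 'Horses']
```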

Are these the only three possible approaches? If not, are there computer-science concepts that would give me Google juice to read about this? Are there ways to approximate the continuation approach without storing the entire result set? Long term, there's complex event-streaming/point-in-time systems, where "the result set as of the moment I fetched page 1" is forever derivable. Short of that... ?

Woollen answered 7/3, 2012 at 13:15 Comment(4)
I'd suggest looking at it from a different angle. Maybe it's possible to avoid pagination at all: just use infinite scroll plus some extensive scripting that updates the list without a page reload and displays appropriate ↑/↓ symbols for user convenience. It depends on your use case, though. Upd: FWIW, here's a related question from UX StackExchange. – Leyla
Yeah, that doesn't work for our use case... things are continually reranked, and you wouldn't want the display to be continually updating. Great idea, though. – Woollen
You can store state on the client, and send all the IDs of seen records. – Negotiable
Thanks for providing the answer, in your question. – Niccolite
W
5

We're going with the server-side state approach for now, caching the entire result on the first query so we always return a consistent list. This will work as long as our query already returns all rows; eventually we'll need to use a nearest-neighbor approach, and that won't work.

But I think there's a fourth possibility, which scales very well, as long as:

  1. You don't need a guarantee of no duplicates, only a high likelihood
  2. You're okay with missing some content during scrolls, as long as you avoid duplicates

The solution is a variant of the "last seen ID" solution: Have the client keep not one, but 5 or 10 or 20 bookmarks - few enough that you can store them efficiently. The query ends up looking like:

SELECT * FROM posts
WHERE position > (SELECT position FROM posts WHERE id = :bookmark_1)
  AND position > (SELECT position FROM posts WHERE id = :bookmark_2)
  ...
ORDER BY position
LIMIT :page_size;

As the number of bookmarks grows, the odds rapidly diminish that you are (a) starting at some point past all n bookmarks but (b) seeing duplicate content anyway because they were all reranked.
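Here's a runnable sketch of the multi-bookmark idea (my names, not from the answer), assuming the comparisons are done on current position via subqueries, as in the last-seen-ID query earlier. Starting past all n bookmarks means starting past the highest-ranked current position among them:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, position INTEGER, title TEXT)")
# The reranked table from the question.
db.executemany("INSERT INTO posts VALUES (?, ?, ?)",
               [(4, 1, "Dogs"), (2, 2, "Bats"), (1, 3, "Alpacas"),
                (5, 4, "Elephants"), (6, 5, "Foxes"), (7, 6, "Giraffes"),
                (3, 7, "Cows"), (8, 8, "Horses")])

def fetch_past_bookmarks(bookmarks, page_size=4):
    # One "position > (...)" clause per bookmark.
    where = " AND ".join(
        "position > (SELECT position FROM posts WHERE id = ?)" for _ in bookmarks)
    sql = f"SELECT title FROM posts WHERE {where} ORDER BY position LIMIT ?"
    return [r[0] for r in db.execute(sql, (*bookmarks, page_size))]

# Page 1 showed Alpacas, Bats, Cows, Dogs (ids 1-4). Dogs alone would restart
# us at position 1; remembering all four bookmarks skips past Cows at position 7,
# avoiding duplicates at the cost of missing some reranked rows.
page2 = fetch_past_bookmarks([1, 2, 3, 4])   # ['Horses']
```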

If there are holes, or better answers in the future, I'll happily unaccept this answer.

Woollen answered 1/4, 2012 at 22:17 Comment(1)
What do you do here if your id is a UUID? – Thornie

Oracle handles this nicely. As long as a cursor is open, you can fetch as many times as necessary and your results will always reflect the point in time at which the cursor was opened. It uses data from the undo logs to virtually rollback changes that were committed after the cursor was opened.

It will work as long as the required rollback data is still available. Eventually the logs get recycled and the rollback data is no longer available, so there is some limit, depending on the log space, system activity, etc.

Unfortunately (IMO), I don't know of any other DB that works like this. The other databases I've worked with use locks to ensure read consistency, which is problematic if you want read consistency over more than a very short duration.

Tenebrific answered 16/3, 2012 at 23:56 Comment(1)
Turns out PostgreSQL has holdable cursors as well. On Oracle, can you hit that cursor from a different connection, slave, etc.? PostgreSQL holdable cursors are disk-based (so you're not chewing up RAM) and they also work off the transaction log, but they are only available on the same connection, so you have to either ensure stickiness or do some back-end routing. – Woollen

Solution 1: "the hacky solution"

One solution could consist of your client keeping track of the content it has already seen, a list of IDs for example. Each time you need another page, you add this ID list to the parameters of your server call. Your server can then order the content, remove the already-seen content, and apply the offset to get the right page.

I would not recommend it, though, and I insist on "hacky". I only write it down here because it's quick and could fit some needs. Here are the problems I can think of:

1) It needs some work on the client side to get it right (what does "already seen" mean in my sentence above? what if I go back to a previous page?)

2) The resulting order doesn't reflect your true ordering policy. A piece of content could be displayed on page 2 although the policy should have put it on page 1, which can lead to user misunderstanding. Take Stack Overflow with its former ordering policy, i.e. most-upvoted answers first: an answer with 6 upvotes could end up on page 2 while an answer with 4 upvotes sits on page 1. This happens when the 2 or more extra upvotes occurred while the user was still on page 1, which can be surprising for the user.

Solution 2: "the client solution"

It's basically the client-side equivalent of the solution you call "server-side state". It's useful only when keeping track of the full order on the server side is not convenient enough, and it works only if the item list is finite.

  • Call your server to get the full (finite) order list + the number of items/page
  • Save it on client side
  • Retrieve items directly through the ids of your content.
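The three steps above can be sketched as follows (my names; sqlite3 stands in for the server, a plain list for the client's saved state):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE animals (id INTEGER PRIMARY KEY, position INTEGER, animal TEXT)")
db.executemany("INSERT INTO animals VALUES (?, ?, ?)",
               [(i, i, name) for i, name in enumerate(
                   ["Alpacas", "Bats", "Cows", "Dogs",
                    "Elephants", "Foxes", "Giraffes", "Horses"], start=1)])

PAGE_SIZE = 4

# Steps 1+2: one call returns the full (finite) ordered ID list; the client saves it.
saved_order = [r[0] for r in db.execute("SELECT id FROM animals ORDER BY position")]

# Step 3: each "page" is resolved by ID, so later reranking can't shift it.
def client_page(page_num):
    ids = saved_order[(page_num - 1) * PAGE_SIZE:page_num * PAGE_SIZE]
    marks = ",".join("?" * len(ids))
    by_id = dict(db.execute(f"SELECT id, animal FROM animals WHERE id IN ({marks})", ids))
    return [by_id[i] for i in ids]

db.execute("UPDATE animals SET position = 1 WHERE id = 8")  # Horses jump to the top
page2 = client_page(2)   # ['Elephants', 'Foxes', 'Giraffes', 'Horses']
```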
Prevailing answered 29/5, 2014 at 4:54 Comment(0)

Very late to the party but here's something we experimented with. We are using continuous loading, not pages that the user would go back and forth between.

The client builds a list of all the IDs it has displayed, so after the first set it might be: 4,7,19,2,1,72,3

When we load more content, we run the same query with the same sort, but add this to it: WHERE id NOT IN (4,7,19,2,1,72,3)

The NOT IN list can grow rather quickly. For us this isn't an issue as our internal tool doesn't typically have tons of results.
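A minimal runnable sketch of the exclusion-list idea (my code, using sqlite3 and the reranked animals table rather than this answer's tool):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE animals (id INTEGER PRIMARY KEY, position INTEGER, animal TEXT)")
# The reranked table from the question.
db.executemany("INSERT INTO animals VALUES (?, ?, ?)",
               [(4, 1, "Dogs"), (2, 2, "Bats"), (1, 3, "Alpacas"),
                (5, 4, "Elephants"), (6, 5, "Foxes"), (7, 6, "Giraffes"),
                (3, 7, "Cows"), (8, 8, "Horses")])

def load_more(seen_ids, batch_size=4):
    # Exclude everything already displayed, regardless of how it was reranked.
    marks = ",".join("?" * len(seen_ids))
    return [r[0] for r in db.execute(
        f"SELECT animal FROM animals WHERE id NOT IN ({marks}) "
        "ORDER BY position LIMIT ?", (*seen_ids, batch_size))]

# The first batch showed ids 1-4; excluding them gives no dups and no gaps.
batch2 = load_more([1, 2, 3, 4])   # ['Elephants', 'Foxes', 'Giraffes', 'Horses']
```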

I want to add another idea. Maybe a server-side addition could be applied to this: when the user searches, add all the IDs they got to a table with a link to their search. When the client wants more, it only has to provide the search ID (or use server-side state), and the query can join against their search data.

Kellam answered 11/4, 2015 at 19:8 Comment(0)

If the rows include a creation timestamp, then the query can include a "before" filter. This ensures that any rows created after the timestamp are not included and hence the pagination is consistent (provided that the rows are sorted on constant column(s)). Here is an example SQL query that assumes that the values in the animals.position column are constant.

SELECT
   a.*
FROM
   animals a
WHERE
   a.creation < :before
ORDER BY
   a.position
OFFSET ((:page_num - 1) * :page_size)
LIMIT :page_size

When the client makes the initial request (e.g. http://some.server.com/animals), the server sets :before to the current time, :page_num to 1 and :page_size to say 20. The server's response includes a link to request the next page with all 3 parameters set (e.g. http://some.server.com/animals?before=2020-04-08T10:40:34.833Z&page_num=2&page_size=20). Thus, the client keeps all of the state necessary to request the next page and the server can remain stateless with regards to pagination.

Note: If the user refreshes a URL without the before parameter (i.e. http://some.server.com/animals), they will see new data. If the user refreshes a URL with the before parameter (i.e. http://some.server.com/animals?before=2020-04-08T10:40:34.833Z&page_num=2&page_size=20), they will see the same data. The user can always change or remove the before parameter to see new data.
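A runnable sketch of the before-filter (my code, using sqlite3 with a numeric creation column standing in for the timestamp):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE animals (id INTEGER PRIMARY KEY, position INTEGER, "
           "animal TEXT, creation REAL)")
db.executemany("INSERT INTO animals VALUES (?, ?, ?, ?)",
               [(i, i, name, 1000.0) for i, name in enumerate(
                   ["Alpacas", "Bats", "Cows", "Dogs", "Elephants", "Foxes"],
                   start=1)])

PAGE_SIZE = 4

def fetch_page(before, page_num):
    # Only rows created before the first request's timestamp are visible.
    return [r[0] for r in db.execute(
        "SELECT animal FROM animals WHERE creation < ? "
        "ORDER BY position LIMIT ? OFFSET ?",
        (before, PAGE_SIZE, (page_num - 1) * PAGE_SIZE))]

before = 2000.0   # the server stamps this on the initial request
page1 = fetch_page(before, 1)   # ['Alpacas', 'Bats', 'Cows', 'Dogs']

# A row created after the first request cannot shift page 2,
# because the before-filter excludes it.
db.execute("INSERT INTO animals VALUES (9, 0, 'Aardvarks', 3000.0)")
page2 = fetch_page(before, 2)   # ['Elephants', 'Foxes']
```

Note that this keeps pagination consistent only against insertions; as the answer says, it assumes the position values themselves are constant.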

Aeneid answered 8/4, 2020 at 16:51 Comment(0)