The key isn't avoiding pagination but not relying on it. When you rely on pagination to display large lists of content, you force this "many clicks" style of navigation. What you want is for users (and hence also robots) to have easier and more meaningful ways to reach your content.
Generally, when you're looking at pagination, you're at a point in the IA that doesn't break down easily into a hierarchical structure. At that point, the best approach for getting through a large amount of content is filtering using tags.
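The idea can be sketched in a few lines: treat each item's tags as a set, and keep only the items that carry every selected tag, so each added tag narrows the list further. This is a minimal illustration, not SO's actual implementation; the sample questions and field names are made up.

```python
# Hypothetical sketch of tag filtering: each selected tag must be a
# subset of the item's tag set, so adding tags shrinks the result.
questions = [
    {"title": "Blind RSA signature using .NET cryptography API?",
     "tags": {"encryption", "cryptography", "rsa", ".net"}},
    {"title": "How do I center a div?",
     "tags": {"html", "css"}},
    {"title": "AES key sizes explained",
     "tags": {"encryption", "cryptography", "aes"}},
]

def filter_by_tags(items, selected):
    """Keep only items carrying every tag in `selected`."""
    return [q for q in items if selected <= q["tags"]]

# Each extra tag acts like SO's "related tags" links:
step1 = filter_by_tags(questions, {"encryption"})          # 2 matches
step2 = filter_by_tags(questions, {"encryption", "rsa"})   # 1 match
```

Each filtering step is one click for the user, but it can cut the candidate list by orders of magnitude, which is exactly what a prev/next paginator cannot do.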
Take SO as a good example: it essentially has no IA beyond one giant paginated list of questions. The main question page currently has 142,414 pages. If this were the only way to find relevant content, it would be a NIGHTMARE, but a good tag system suddenly makes it usable. For the sake of simplicity, let's pretend the paginator only has prev and next and that there's a single sort order. In reality, page skipping and alternative sorts also provide shortcuts through the list, improving the depth of questions in a similar way, but nowhere near as strongly as tags do.
Once you click on a tag, you get links that add related tags, so you can narrow down the question list very quickly. Let's navigate to a question somewhere in the middle: I picked "Blind RSA signature using .NET cryptography API?", which was on page 70,000.
Reaching that normally would take 70,000 clicks, which is obviously terrible for SEO. From the Tags page (1 click), "Encryption" is on page 6 (6 clicks); add in "Cryptography" (7 clicks), add in "rsa" (8 clicks), add in ".net" (9 clicks), and the question appears on the page. Navigating there has gone from a depth of 70,000 to 10. Relaxing the assumptions I made earlier (page skipping and different sort orders) would likely shift these numbers by only a few places.
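The click arithmetic above can be laid out step by step. The breakdown of the tag route below follows the walkthrough's cumulative counts; the exact split of clicks within each step is my assumption.

```python
# Prev/next-only pagination: one click per page, question on page 70,000.
pagination_only = 70_000

# The tag route, roughly matching the cumulative counts above:
tag_route = [
    1,  # open the Tags page                       (total: 1)
    5,  # page through to "Encryption" on page 6   (total: 6)
    1,  # add the "Cryptography" tag               (total: 7)
    1,  # add the "rsa" tag                        (total: 8)
    1,  # add the ".net" tag                       (total: 9)
    1,  # click the question itself                (total: 10)
]

depth = sum(tag_route)
print(f"depth {pagination_only} -> {depth}, "
      f"about {pagination_only // depth:,}x fewer clicks")
```

The point isn't the exact ratio but the shape of it: pagination depth grows linearly with list size, while each tag click divides the remaining list, so the tag route stays shallow even as the site grows.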
Add in some other basic SEO, such as meaningful URLs, meaningful titles, and keywords in headers, and you're pretty much there.
/category?page=2, not /category/2, and no problems. – Forethought