First off, as you may or may not be aware, some languages, including Haskell, implement sharing, which alleviates some of the problems you might be thinking of.
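As a minimal sketch of what sharing buys you (the names here are mine): prepending to a list in Haskell doesn't copy it, the new list simply points at the old one.

```haskell
-- A minimal sharing sketch: "extended" is a new list, but its tail is
-- the very same "original", not a copy, so building it is O(1).
original :: [Int]
original = [1, 2, 3]

extended :: [Int]
extended = 0 : original   -- the three-element tail is shared

main :: IO ()
main = print (original, extended)
```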
While Andrew's answer points at Turing completeness, it doesn't really answer the question of which algorithms are hard to implement in functional languages. The question people typically ask instead is which data structures are hard to implement in functional languages.
The simple answer to this: things that involve pointers.
There's no functional equivalent to pointers. When you drill down to the machine level there is a mapping, of course: the compiler can lower certain data structures safely to arrays or to pointer-based representations. But at a high level you just can't express pointer-based data structures as easily as you can in imperative languages.
To get around this, a number of things have been done:
- Since pointers form the basis for a hash table, and since hash tables really just implement a map, efficient functional maps have been studied comprehensively. In fact, Chris Okasaki has a book ("Purely Functional Data Structures") that details many, many ways to implement functional maps, deques, etc. (a small example follows this list).
- Since pointers can be used to represent a position inside the traversal of some larger data structure, there has also been work in this area. The result is the zipper, an efficient structure that succinctly represents the functional equivalent of the "pointer inside of a deeper structure" technique (again, see the sketch after this list).
- Since pointers can be used to implement side-effecting computations, monads have been used to express this kind of computation in a clean way. Because threading state around explicitly is awkward, one use for monads is to isolate the imperative-behaving part of your program and use the type system to make sure it is chained together correctly (through monadic binds); a State monad sketch also appears after this list.
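To make the first point concrete, here's a small sketch using Data.Map from the standard containers package (the map contents are made up): inserting into a persistent map gives you a new map, and the old one remains intact, sharing most of its structure with the new one.

```haskell
import qualified Data.Map as Map

main :: IO ()
main = do
  let m1 = Map.fromList [("x", 1), ("y", 2)]
      m2 = Map.insert "z" 3 m1       -- a new map; m1 is untouched
  print (Map.lookup "z" m1)          -- Nothing
  print (Map.lookup "z" m2)          -- Just 3
```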
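For the second point, a minimal list zipper sketch (the type and function names are mine, not from any particular library): a focus element plus the elements to its left and right, so moving the focus and editing at the focus both take constant time.

```haskell
-- Elements to the left of the focus are kept reversed, so stepping
-- right just moves one element from the right list to the left list.
data Zipper a = Zipper [a] a [a]
  deriving Show

fromList :: [a] -> Maybe (Zipper a)
fromList (x:xs) = Just (Zipper [] x xs)
fromList []     = Nothing

right :: Zipper a -> Maybe (Zipper a)
right (Zipper ls x (r:rs)) = Just (Zipper (x:ls) r rs)
right _                    = Nothing

modify :: (a -> a) -> Zipper a -> Zipper a
modify f (Zipper ls x rs) = Zipper ls (f x) rs

main :: IO ()
main = print (fmap (modify (* 10)) (right =<< fromList [1, 2, 3 :: Int]))
```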
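And for the third point, a small sketch using the State monad (from the mtl/transformers packages) to thread a counter through a computation; the "mutation" is really just each monadic bind passing the updated state along.

```haskell
import Control.Monad.State (State, evalState, get, put)

-- Pair each element with its index, carrying the counter in the monad.
label :: [a] -> State Int [(Int, a)]
label []     = return []
label (x:xs) = do
  n <- get
  put (n + 1)
  rest <- label xs
  return ((n, x) : rest)

main :: IO ()
main = print (evalState (label "abc") 0)   -- [(0,'a'),(1,'b'),(2,'c')]
```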
While I'd like to say that any algorithm can be translated from an imperative one to a functional one very easily, this is simply not the case. However, I'm fairly convinced that the problem isn't the algorithms per se, but the data structures they manipulate, which are based on an imperative notion of state. You can find a long list of functional data structures in this post.
The flip side to all of this is that if you start using a more purely functional style, much of the complexity in your program goes down, and many of the needs for heavily imperative data structures disappear (for example, a very common use of pointers in imperative languages is to implement nasty design patterns, which usually translate into clever uses of polymorphism and typeclasses at the functional level).
EDIT: I believe the essence of this question deals with how to express computation in a functional manner. However, it should be noted that there are ways of defining stateful computation in a functional way. Or rather, it is possible to use functional techniques to reason about stateful computation. For example, the Ynot project does this using a parameterized monad where facts about the heap (in the form of separation logic) are tracked by the monadic binds.
By the way, even in ML, I don't see why dynamic programming is that hard. Dynamic programming problems, which usually build up collections of some sequence to compute a final answer, can accumulate the constructed values via arguments to the function, perhaps requiring a continuation in some circumstances. Using tail recursion, there's no reason this can't be just as pretty and efficient as in imperative languages. Now sure, you may run into the argument that if these values are lists (for example), imperatively we can implement them as arrays, but for that, see the content of the post proper :-)
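As a sketch of what I mean (in Haskell, though the same shape works in ML), here's a tail-recursive, accumulator-passing version of a classic DP-style computation; no mutable table, just the last two values carried along as arguments.

```haskell
-- Fibonacci via an accumulating pair: go carries the "table" (the last
-- two values) in its arguments, and the recursion is a tail call.
fib :: Int -> Integer
fib n = go n 0 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)

main :: IO ()
main = print (map fib [0 .. 10])   -- [0,1,1,2,3,5,8,13,21,34,55]
```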