There are several things at play here, and I'll try to address them one at a time.
First, you're probably using the toy development server. This server has many limitations; chief among them is that it can only handle one request at a time. When you make a second request from inside your first request, you deadlock your application: the requests.post() call is waiting for Flask to respond, but Flask itself is waiting for post() to return! The solution to this particular problem is to run your WSGI application in a multithreaded or multiprocess environment. I prefer http://twistedmatrix.com/trac/wiki/TwistedWeb for this, but there are several other options.
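As one illustration of what "multithreaded environment" means, here is a minimal sketch of a threaded WSGI server built from the stdlib alone; the simple_app below is a placeholder, and in practice you would pass your Flask app object (which is itself a WSGI callable) instead:

```python
from socketserver import ThreadingMixIn
from wsgiref.simple_server import WSGIServer, make_server

class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
    """WSGI server that handles each request in its own thread."""
    daemon_threads = True  # worker threads won't block interpreter exit

def simple_app(environ, start_response):
    # Stand-in WSGI application for demonstration purposes only.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

# To serve (this call blocks forever):
#   make_server('127.0.0.1', 8080, simple_app,
#               server_class=ThreadingWSGIServer).serve_forever()
```

Because each request runs in its own thread, a view that issues a nested HTTP request to the same server no longer starves the only worker. (Twisted, or any production WSGI container, does this more robustly.)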
With that out of the way... This is an antipattern. You almost certainly don't want to incur all of the overhead of an HTTP request just to share some functionality between two views. The correct thing to do is to refactor so that a separate function does the shared work, and both views call it. I can't really refactor your particular example, because what you have is very simple and doesn't really even merit two views. What did you want to build, exactly?
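To make the refactor concrete, here is a hedged sketch of the pattern: both views call an ordinary Python function instead of one view issuing an HTTP request to the other. All names here (compute_summary, the routes, the sample data) are made up for illustration:

```python
from flask import Flask, jsonify

app = Flask(__name__)

def compute_summary(data):
    # The shared work lives in a plain function, callable from any view.
    return {'total': sum(data), 'count': len(data)}

@app.route('/api/summary')
def api_summary():
    # One view exposes the result as JSON...
    return jsonify(compute_summary([1, 2, 3]))

@app.route('/report')
def report():
    # ...and the other calls the helper directly: no second HTTP
    # request, no serialization overhead, no deadlock.
    summary = compute_summary([1, 2, 3])
    return 'Total: %(total)s over %(count)s items' % summary
```

The function call costs nanoseconds where the nested HTTP round trip costs milliseconds at best, and it works identically under any server, single-threaded or not.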
Edit: A comment asks whether multithreaded mode in the toy stdlib server would be sufficient to keep the deadlock from occurring. I'm going to say "maybe." Yes, if there are no dependencies keeping both threads from making progress, and both threads make sufficient progress to finish their networking tasks, then the requests will complete correctly. However, determining whether two arbitrary threads will deadlock each other is undecidable in general (proof omitted as obtuse), and I'm not willing to say for sure that the stdlib server can do it right.