I am writing an application that does some real-time processing on image data it gets from AVCaptureVideoDataOutput within an AVCaptureSession.
I can currently start the session, add the input and output, retrieve the image data, convert it to a UIImage, and display it on screen live (roughly as in the sketch below).
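Here is a minimal sketch of that conversion step, just to show what I mean (in Swift, for brevity). This is a simplified illustration rather than my exact code: the class name `FrameHandler` and the `imageView` property are placeholders, and it assumes the data output is configured for `kCVPixelFormatType_32BGRA`.

```swift
import AVFoundation
import UIKit

final class FrameHandler: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let imageView: UIImageView  // placeholder for my live on-screen view

    init(imageView: UIImageView) {
        self.imageView = imageView
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

        // Assumes the output's videoSettings request 32BGRA pixels.
        guard let context = CGContext(
            data: CVPixelBufferGetBaseAddress(pixelBuffer),
            width: CVPixelBufferGetWidth(pixelBuffer),
            height: CVPixelBufferGetHeight(pixelBuffer),
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                      | CGBitmapInfo.byteOrder32Little.rawValue
        ), let cgImage = context.makeImage() else { return }

        // This wrapper attaches no orientation metadata at all, which I
        // suspect is part of my problem: the raw frame appears to be in
        // the sensor's native (landscape) orientation.
        let image = UIImage(cgImage: cgImage)
        DispatchQueue.main.async {
            self.imageView.image = image
        }
    }
}
```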
The main problem is that the image's orientation is wrong: it comes out rotated and mirrored, and it also looks skewed. I've done some research, found some related questions, and tried the code suggested there (sketched below), but it doesn't fix the rotation problem.
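For reference, the suggested fixes were along these lines. This is only a sketch of the kind of thing I tried (using the `cgImage` and `connection` from the sketch above); the exact orientation values varied:

```swift
// Re-wrapping the CGImage with an explicit orientation, as the related
// questions suggest (I tried .right, .up, and others, with no luck):
let oriented = UIImage(cgImage: cgImage, scale: 1.0, orientation: .right)

// Also tried forcing the connection's orientation before frames arrive:
if connection.isVideoOrientationSupported {
    connection.videoOrientation = .portrait
}
```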
I think the linked questions assume the UIImages came from somewhere else (perhaps a higher-level API that automatically attaches more information, such as orientation, to the image). Or perhaps it's because I'm getting mine from a video output feed?
I am not really looking for the code that fixes it (although an annotated example would be really helpful), but rather a good explanation of how the lifecycle of an image obtained this way works: What's the recommended way to deal with it so that it displays on screen in a way that matches the phone's orientation? What orientation does the CGImageRef come back in? And so on.
I have a previous question that has the code I'm using to set up the AVCaptureSession.