I am reading sample buffers from an iOS AVCaptureSession, performing some simple image manipulation on them, and then analyzing pixels from the resulting images. I have done this using OpenCV for the image processing, but I'd like to switch to Core Image, which I hope will be more efficient for these simple operations. However, I am completely stuck on how to read the pixel values from the resulting CIImage.
When I have a UIImage backed by a CGImage, I can use the CGImage's dataProvider to access the underlying pixel data (example below). But what is the analog for CIImage?
Here is my general flow:
// Getting sample video data
let session = AVCaptureSession()
// processing the sample buffer with core image
func handleSampleBuffer(sampleBuffer: CMSampleBuffer)
{
let cvImage: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
let ciImage: CIImage = CIImage(cvPixelBuffer: cvImage)
let filteredImage = processWithCoreImage(image: ciImage)
//
// How can I efficiently get pixel values from filtered image here?
//
}
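One approach I've come across is CIContext.render(_:toBitmap:rowBytes:bounds:format:colorSpace:), which renders directly into a preallocated buffer rather than creating a CGImage each frame. A sketch of what I mean (untested; `context` is the reused CIContext mentioned below, and I'm assuming a single-channel grayscale output so CIFormat.R8 applies):

```swift
// Sketch (untested): render the filtered CIImage straight into a
// preallocated byte buffer instead of creating a CGImage per frame.
let extent = filteredImage.extent
let width = Int(extent.width)
let height = Int(extent.height)
var bitmap = [UInt8](repeating: 0, count: width * height)
bitmap.withUnsafeMutableBytes { ptr in
    context.render(filteredImage,
                   toBitmap: ptr.baseAddress!,
                   rowBytes: width,          // no row padding in this buffer
                   bounds: extent,
                   format: .R8,              // single-channel 8-bit
                   colorSpace: nil)
}
// bitmap[y * width + x] should then hold the grayscale value at (x, y)
```

I don't know whether this avoids the per-frame overhead I'm seeing with createCGImage, which is part of what I'm asking.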
func processWithCoreImage(image: CIImage) -> CIImage
{
// process with a core image filter chain
let filter1 = CIFilter(name: "…")!
filter1.setValue(image, forKey: kCIInputImageKey)
filter1.setValue(…, forKey: …)
…
let outputImage = filter1.outputImage!   // outputImage is optional
return outputImage
}
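As a concrete illustration of the kind of chain I mean (my real filters differ; the filter names and parameters here are just examples, including the downscaling step I mention below):

```swift
// Illustrative only: scale the image way down, then blur it slightly.
func exampleChain(image: CIImage) -> CIImage {
    let scale = CIFilter(name: "CILanczosScaleTransform")!
    scale.setValue(image, forKey: kCIInputImageKey)
    scale.setValue(0.1, forKey: kCIInputScaleKey)        // shrink to 10%
    scale.setValue(1.0, forKey: kCIInputAspectRatioKey)

    let blur = CIFilter(name: "CIGaussianBlur")!
    blur.setValue(scale.outputImage!, forKey: kCIInputImageKey)
    blur.setValue(2.0, forKey: kCIInputRadiusKey)
    return blur.outputImage!
}
```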
// With a regular UIImage I was doing something like this to get pixel data
// However CIImage does not have a cgImage backing it in this case.
public extension UIImage {
    func getPixelValueGrayscale(x: Int, y: Int) -> UInt8 {
        let cgImage = self.cgImage!
        let pixelData = cgImage.dataProvider!.data
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        // Index by bytesPerRow, not size.width: rows may be padded,
        // and UIImage.size is in points rather than pixels.
        let pixelInfo = cgImage.bytesPerRow * y + x
        return data[pixelInfo]
    }
}
I have tried using CIContext to get a CGImage-backed UIImage as follows, but this proved horribly inefficient, taking a good fraction of a second per frame (hundreds of times longer than the equivalent OpenCV operations).
// re-used CIContext
let cgImage = context.createCGImage(filteredImage, from: filteredImage.extent)!
let img = UIImage(cgImage: cgImage)
I should also mention that my filtered image is tiny (I am scaling it way down). I don't know whether that is part of the problem.
What am I missing? Thanks.
UPDATE: After some experimentation, it turned out that Core Image's scaling operations are simply a lot slower than those available in OpenCV. It feels wrong to include OpenCV in my project just for scaling, but that is what I'm doing for the moment.
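One alternative I may try before settling on OpenCV is vImage from the Accelerate framework, which can scale planar 8-bit buffers. A sketch of what that might look like (untested; the function name and parameters are my own illustration, operating on a raw grayscale byte buffer):

```swift
import Accelerate

// Sketch (untested): scale an 8-bit grayscale buffer with vImage
// instead of pulling in OpenCV just for resizing.
func scaleGrayscale(_ src: [UInt8], width: Int, height: Int,
                    toWidth dw: Int, toHeight dh: Int) -> [UInt8] {
    var output = [UInt8](repeating: 0, count: dw * dh)
    src.withUnsafeBufferPointer { srcPtr in
        output.withUnsafeMutableBufferPointer { dstPtr in
            var srcBuf = vImage_Buffer(
                data: UnsafeMutableRawPointer(mutating: srcPtr.baseAddress!),
                height: vImagePixelCount(height),
                width: vImagePixelCount(width),
                rowBytes: width)
            var dstBuf = vImage_Buffer(
                data: dstPtr.baseAddress!,
                height: vImagePixelCount(dh),
                width: vImagePixelCount(dw),
                rowBytes: dw)
            vImageScale_Planar8(&srcBuf, &dstBuf, nil,
                                vImage_Flags(kvImageNoFlags))
        }
    }
    return output
}
```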