How can I efficiently read pixel values from a CIImage generated from CMSampleBuffer data?

I am reading sample buffers from an iOS AVCaptureSession, performing some simple image manipulation on them, and then analyzing pixels from the resulting images. I have done this using OpenCV for the image processing, but I'd like to switch to Core Image, which I hope will be more efficient for these simple operations. However, I am completely stuck on how to read the pixel values from the resulting CIImage.

When I have a UIImage backed by a CGImage, I can use the CGImage's dataProvider to access the underlying pixel data (example below). But what is the analog for CIImage?

Here is my general flow:

    // Getting sample video data
    var session: AVCaptureSession = AVCaptureSession()

    // Processing the sample buffer with Core Image
    func handleSampleBuffer(sampleBuffer: CMSampleBuffer)
    {
        let cvImage: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
        let ciImage: CIImage = CIImage(cvPixelBuffer: cvImage)

        let filteredImage = processWithCoreImage(image: ciImage)

        //
        // How can I efficiently get pixel values from filteredImage here?
        //
    }

    func processWithCoreImage(image: CIImage) -> CIImage
    {
        // Process with a Core Image filter chain
        let filter1 = CIFilter(name: "…")!
        filter1.setValue(image, forKey: kCIInputImageKey)
        filter1.setValue(…, forKey: …)
        …
        let outputImage = filter1.outputImage!   // outputImage is optional
        return outputImage
    }

  // With a regular UIImage I was doing something like this to get pixel data.
  // However, CIImage does not have a cgImage backing it in this case.
  public extension UIImage {
      func getPixelValueGrayscale(x: Int, y: Int) -> UInt8 {
          let pixelData = self.cgImage!.dataProvider!.data
          let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
          // Assumes an 8-bit grayscale bitmap whose bytes-per-row equals its width
          let pixelInfo: Int = (self.cgImage!.width * y) + x
          return data[pixelInfo]
      }
  }

I have tried using CIContext to get a CGImage-backed UIImage as follows, but this proved horribly inefficient: it was taking a good fraction of a second per frame (hundreds of times longer than the equivalent OpenCV operations).

// Re-used CIContext
let cgImage = context.createCGImage(filteredImage, from: filteredImage.extent)
let img = UIImage(cgImage: cgImage!)
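For reference, CIContext can also render straight into a caller-supplied byte buffer via `render(_:toBitmap:rowBytes:bounds:format:colorSpace:)`, skipping CGImage creation entirely. A minimal sketch (not from the original post; the RGBA8 format and the tightly packed row layout are assumptions):

```swift
import CoreImage

// Sketch: render a CIImage directly into a reusable [UInt8] buffer.
// Assumes an 8-bit-per-component RGBA target, so rowBytes = width * 4.
func renderToBytes(_ image: CIImage, context: CIContext) -> [UInt8] {
    let extent = image.extent.integral
    let width = Int(extent.width)
    let height = Int(extent.height)
    let rowBytes = width * 4
    var bytes = [UInt8](repeating: 0, count: rowBytes * height)
    bytes.withUnsafeMutableBytes { buffer in
        context.render(image,
                       toBitmap: buffer.baseAddress!,
                       rowBytes: rowBytes,
                       bounds: extent,
                       format: .RGBA8,
                       colorSpace: CGColorSpaceCreateDeviceRGB())
    }
    return bytes
}
```

For a tiny, scaled-down image the buffer can be allocated once and reused each frame, which avoids the per-frame CGImage and UIImage allocations above.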

I should also mention that my filtered image is tiny (I am scaling it way down). I don't know if this is causing some problem.

What am I missing? Thanks.

UPDATE: After some experimentation, it turned out that the scaling operations in Core Image are simply much slower than those available in OpenCV. It feels wrong to include OpenCV in my project just for scaling, but... for the moment.

Kelsiekelso answered 19/9, 2017 at 3:27

Comments (3):
I up-voted the "answer" from @matt even though it really didn't answer (sorry). It sounds like you wish to work with the camera in "real-time". (He was correct, this isn't an efficient way.) I'm no expert, but here's a link to someone who is: flexmonkey.blogspot.com/2015/12/…. If that link doesn't work, look at the rest of his blog posts. Technically, you do want to use CI, just not the way you posted. (Sorry for the use of italics - trying to emphasize.) – Riti
Thanks for reporting back! I was just wondering how this had turned out. Please feel free to answer your own question, and even (in a couple of days) to accept your own answer. – Helbonnah
Good question. Getting the UInt8 values from a CGImage is a good solution, but I think manipulating the values directly from a CIImage, or better still from the pixel buffer, would be more efficient. – Brutish
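The pixel-buffer approach suggested in the last comment can be sketched as follows (an assumption-laden sketch, not code from the thread: the names are illustrative and a 32-bit BGRA buffer layout is assumed). The idea is to render into one reusable CVPixelBuffer per frame, then lock it and read bytes directly:

```swift
import CoreImage
import CoreVideo

// Sketch: create one reusable BGRA pixel buffer sized to the scaled-down image.
let width = 64, height = 64        // illustrative; match your output extent
var reusableBuffer: CVPixelBuffer?
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32BGRA, nil, &reusableBuffer)

func grayValue(of image: CIImage, x: Int, y: Int,
               context: CIContext, buffer: CVPixelBuffer) -> UInt8 {
    context.render(image, to: buffer)              // render into the buffer
    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }
    let rowBytes = CVPixelBufferGetBytesPerRow(buffer)  // may exceed width * 4
    let base = CVPixelBufferGetBaseAddress(buffer)!
        .assumingMemoryBound(to: UInt8.self)
    return base[y * rowBytes + x * 4]              // blue byte of a BGRA pixel
}
```

Note that `CVPixelBufferGetBytesPerRow` must be used for row indexing, since CVPixelBuffer rows are often padded beyond `width * 4`.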

© 2022 - 2024 — McMap. All rights reserved.