CIImage extent in pixels or points?

I'm working with a CIImage, and while I understand it's not a linear image, it does hold some data.

My question: does a CIImage's extent property return pixels or points? According to the documentation, which says very little, it's in working-space coordinates. Does that mean there's no way to get pixel/point dimensions from a CIImage, and that I must convert to a UIImage and use its .size property to get points?

I have a UIImage with a certain size, and when I create a CIImage from that UIImage, the extent is shown in points. But if I run the CIImage through a CIFilter that scales it, I sometimes get the extent back in pixel values.
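
Roughly what I'm doing (the image and numbers are just placeholders):

import UIKit
import CoreImage

let uiImage = UIImage(named: "photo")!              // some source image
print(uiImage.size)                                 // size in points

let ciImage = CIImage(image: uiImage)!
print(ciImage.extent)                               // matches the size above for this image

let filter = CIFilter(name: "CILanczosScaleTransform")!
filter.setValue(ciImage, forKey: kCIInputImageKey)
filter.setValue(0.5, forKey: kCIInputScaleKey)
print(filter.outputImage!.extent)                   // scaled extent - pixels or points?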

Extraterritorial asked 28/3, 2017 at 23:52 Comment(0)

I'll answer the best I can.

If your source is a UIImage, its size will be the same as the extent. But note that this isn't a UIImageView (whose size is in points), and we're only talking about the source image.

Running something through a CIFilter means you are manipulating things. If all you are doing is manipulating color, the size/extent shouldn't change (the same as writing your own CIColorKernel, which works pixel by pixel).

But, depending on the CIFilter, you may well be changing the size/extent. Certain filters create a mask or a tile, and these may actually have an infinite extent! Others (blurs are a great example) sample surrounding pixels, so their extent actually increases because they sample "pixels" beyond the source image's size. (Custom-wise, these are written as a general CIKernel or a CIWarpKernel rather than a CIColorKernel.)
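
For example, here's a quick sketch (illustration only, not production code) of a color-only filter leaving the extent alone while a blur grows it:

import CoreImage

let input = CIImage(color: CIColor(red: 1, green: 0, blue: 0))
    .cropped(to: CGRect(x: 0, y: 0, width: 100, height: 100))

// Color-only filter: works pixel-by-pixel, so the extent is unchanged.
let colorFilter = CIFilter(name: "CIColorControls")!
colorFilter.setValue(input, forKey: kCIInputImageKey)
colorFilter.setValue(1.2, forKey: kCIInputSaturationKey)
print(colorFilter.outputImage!.extent)      // (0, 0, 100, 100)

// Blur: samples surrounding pixels, so the extent grows beyond the source rect.
let blur = CIFilter(name: "CIGaussianBlur")!
blur.setValue(input, forKey: kCIInputImageKey)
blur.setValue(10.0, forKey: kCIInputRadiusKey)
print(blur.outputImage!.extent)             // larger than (0, 0, 100, 100)

// Crop back to the source rect if that's what you want to display.
let cropped = blur.outputImage!.cropped(to: input.extent)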

So yes, the extent can change quite a bit. Taking this to a bottom line:

  • What is the filter doing? Does it simply need to check a pixel's RGB and do something with it? Then the UIImage size should match the output CIImage extent.
  • Does the filter produce something that depends on a pixel's surrounding pixels? Then the output CIImage extent is slightly larger, and how much larger depends on the filter.
  • Some filters produce output with no regard to an input at all. Most of these have no true extent, as it can be infinite (see the sketch below).
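
To make that last bullet concrete: a generator filter takes no input image, and its output has an infinite extent until you crop it (again, just a sketch):

import CoreImage

let generator = CIFilter(name: "CIConstantColorGenerator")!
generator.setValue(CIColor(red: 0, green: 0, blue: 1), forKey: kCIInputColorKey)
let generated = generator.outputImage!
print(generated.extent.isInfinite)          // true
let tile = generated.cropped(to: CGRect(x: 0, y: 0, width: 64, height: 64))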

Points are what UIKit and CoreGraphics always work with. Pixels? Core Image works with them at some point, but it's low-level enough that (unless you want to write your own kernel) you shouldn't need to care. Extents can usually, keeping the above in mind, be equated to a UIImage's size.

EDIT

Many images (particularly RAW ones) can be large enough to affect performance. I use an extension on UIImage that resizes an image to fit within a bounding square, which helps keep Core Image performance consistent.

import UIKit

extension UIImage {
    /// Scales the image (up or down) so its longer side equals `boundingSquareSideLength`,
    /// preserving the aspect ratio.
    public func resizeToBoundingSquare(_ boundingSquareSideLength: CGFloat) -> UIImage {
        // Scale factor based on the longer side, so the result fits the bounding square.
        let imgScale = self.size.width > self.size.height
            ? boundingSquareSideLength / self.size.width
            : boundingSquareSideLength / self.size.height
        let newSize = CGSize(width: self.size.width * imgScale, height: self.size.height * imgScale)
        // Redraw the image into a bitmap context of the new size.
        UIGraphicsBeginImageContext(newSize)
        self.draw(in: CGRect(origin: .zero, size: newSize))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return resizedImage!
    }
}

Usage:

image = image.resizeToBoundingSquare(640)

In this example, an image of 3200x2000 would be reduced to 640x400, and an image of 320x200 would be enlarged to 640x400. I do this to an image before rendering it and before creating a CIImage to use in a CIFilter.

Basel answered 29/3, 2017 at 0:17 Comment(4)
Does this mean that extent cannot be used to accurately predict the pixel size when drawing to something like a GLKView or an MTKView? The target sizes I'm using are image.extent, but since these are so varied, it's not always correct. So far, anything coming out of a CIFilter with an inputScaleFactor shows pixels, but a plain UIImage only shows points. – Extraterritorial
When I use a GLKView, it's a subclass very similar to this: github.com/objcio/issue-21-core-image-explorer/blob/master/… I pretty much treat a CIImage like I do a UIImage. That is, I lay out a GLKView like I would a UIImageView, set things up for aspect-fit in draw(rect:), give the GLKView an image, and call setNeedsDisplay() to trigger things. I don't really concern myself with pixels/points, except for layout, where I use Auto Layout. Hope that helps. – Basel
Thanks for the response! I'm working with photos that come from different sources: one is a UIImage from an imageRequest, the other is a RAW loaded with CIRAWFilterImpl. To keep things simple, they use the same code to render to the view. The problem is that image.extent differs depending on where the image came from, which is my issue now. The solution will be to track where each image originated so I can give the right target size. – Extraterritorial
I'll add something to my answer. It probably won't help, but you never know. – Basel

I suggest you think of them as points. There is no scale and no screen (a CIImage is not something that is drawn), so there are no pixels.

A UIImage backed by a CGImage is the basis for drawing, and in addition to the CGImage it has a scale; together with the screen resolution, that gives us our translation from points to pixels.
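
A minimal sketch of that distinction (the numbers are only for illustration):

import UIKit
import CoreImage

// UIImage: size is in points; the backing bitmap is in pixels; scale ties the two together.
let image = UIImage(named: "photo")!               // hypothetical asset, e.g. a 3x image
let pointWidth = image.size.width                  // e.g. 100
let pixelWidth = image.size.width * image.scale    // e.g. 300

// CIImage: no scale and no screen, so the extent is just a rect in its own working space.
let ciImage = CIImage(image: image)!
print(ciImage.extent)                              // a plain CGRect, neither points nor pixels per se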

Siegfried answered 29/3, 2017 at 0:9 Comment(0)
