CIPixellate image output size varies
I'm doing some tests with the CIPixellate filter and I have it working, but the resulting images vary in size. I suppose that makes sense since I am varying the inputScale, but it was not what I was expecting – I thought the filter would work within the rect of the input image. From the output below, each dimension grows by exactly the inputScale value.

Am I misunderstanding the filter, using it wrong, or do I just have to crop the output image to the size I want?

Also, the inputCenter parameter is not clear to me from reading the header or from trial and error. Can anyone explain what that parameter is about?

NSMutableArray * tmpImages = [[NSMutableArray alloc] init];
for (int i = 0; i < 10; i++) {
    double scale = i * 4.0;
    UIImage* tmpImg = [self applyCIPixelateFilter:self.faceImage withScale:scale];
    printf("tmpImg    width: %f height: %f\n",  tmpImg.size.width, tmpImg.size.height);
    [tmpImages addObject:tmpImg];
}

tmpImg    width: 480.000000 height: 640.000000
tmpImg    width: 484.000000 height: 644.000000
tmpImg    width: 488.000000 height: 648.000000
tmpImg    width: 492.000000 height: 652.000000
tmpImg    width: 496.000000 height: 656.000000
tmpImg    width: 500.000000 height: 660.000000
tmpImg    width: 504.000000 height: 664.000000
tmpImg    width: 508.000000 height: 668.000000
tmpImg    width: 512.000000 height: 672.000000
tmpImg    width: 516.000000 height: 676.000000

- (UIImage *)applyCIPixelateFilter:(UIImage*)fromImage withScale:(double)scale
{
    /*
     Makes an image blocky by mapping the image to colored squares whose color is defined by the replaced pixels.
     Parameters

     inputImage: A CIImage object whose display name is Image.

     inputCenter: A CIVector object whose attribute type is CIAttributeTypePosition and whose display name is Center.
     Default value: [150 150]

     inputScale: An NSNumber object whose attribute type is CIAttributeTypeDistance and whose display name is Scale.
     Default value: 8.00
     */
    CIContext *context = [CIContext contextWithOptions:nil];
    CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
    CIImage *inputImage = [[CIImage alloc] initWithImage:fromImage];
    CIVector *vector = [CIVector vectorWithX:fromImage.size.width / 2.0f Y:fromImage.size.height / 2.0f];
    [filter setDefaults];
    [filter setValue:vector forKey:@"inputCenter"];
    [filter setValue:[NSNumber numberWithDouble:scale] forKey:@"inputScale"];
    [filter setValue:inputImage forKey:@"inputImage"];

    CGImageRef cgiimage = [context createCGImage:filter.outputImage fromRect:filter.outputImage.extent];
    UIImage *newImage = [UIImage imageWithCGImage:cgiimage scale:1.0f orientation:fromImage.imageOrientation];

    CGImageRelease(cgiimage);

    return newImage;
}
Aspergillosis answered 15/3, 2013 at 5:31
Comment from Lorylose: not sure if this will help, but try NSNumber numberWithFloat for the input scaling

The problem is with the scale only: UIImage(cgImage:) assumes a scale of 1.0, so pass along the original image's scale and orientation.

Simply do:

let result = UIImage(cgImage: cgimgresult!,
                     scale: (originalImageView.image?.scale)!,
                     orientation: (originalImageView.image?.imageOrientation)!)
originalImageView.image = result
Getaway answered 3/9, 2017 at 18:45

As mentioned in the Apple Core Image Programming Guide and in this post,

By default, a blur filter also softens the edges of an image by blurring image pixels together with the transparent pixels that (in the filter’s image processing space) surround the image

So your output image size varies with your scale.

For inputCenter, as mentioned by Joshua Sullivan in the comments of this post on CIFilter, "it adjusts the offset of the pixel grid from the source image". So if the inputCenter coordinates are not a multiple of your CIPixellate inputScale, the pixel squares will be slightly offset (mostly visible at large values of inputScale).
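Putting both points together, here is a minimal Objective-C sketch (not from the linked posts; names like blockSize are my own). It crops the filter output back to the input's extent so the output size stays constant, and snaps inputCenter to a multiple of inputScale so the pixel grid stays aligned with the image origin:

// fromImage is the source UIImage, as in the question.
CIImage *input = [[CIImage alloc] initWithImage:fromImage];
CGFloat blockSize = 20.0; // the inputScale value

// Snap the desired center to a multiple of the scale so the squares line up.
CGFloat cx = floor(input.extent.size.width / 2.0 / blockSize) * blockSize;
CGFloat cy = floor(input.extent.size.height / 2.0 / blockSize) * blockSize;

CIFilter *pixellate = [CIFilter filterWithName:@"CIPixellate"];
[pixellate setValue:input forKey:kCIInputImageKey];
[pixellate setValue:@(blockSize) forKey:kCIInputScaleKey];
[pixellate setValue:[CIVector vectorWithX:cx Y:cy] forKey:kCIInputCenterKey];

// The filter pads the image outward; crop back to the original extent.
CIImage *cropped = [pixellate.outputImage imageByCroppingToRect:input.extent];

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:cropped fromRect:cropped.extent];
UIImage *output = [UIImage imageWithCGImage:cgImage
                                      scale:fromImage.scale
                                orientation:fromImage.imageOrientation];
CGImageRelease(cgImage);

The crop is the key step: CIPixellate's output extent is larger than the input extent, so rendering the cropped image yields an image the same size as the source.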

Coble answered 24/4, 2019 at 9:10

Sometimes inputScale will not evenly divide your image, and that is when I've found I get different-sized output images. For example, if inputScale is 0 or 1, the output image size is exactly right (with the question's 480 × 640 input, an inputScale of 4 comes back as 484 × 644).

I have found that the way the extra space around the image is distributed varies opaquely with inputCenter. That is, I haven't taken the time to figure out exactly how (I set it from a tap location in the view).

My solution to the different sizes is to re-render the filter output into an extent the size of the input image. I do so over a black background, since this was for an Apple Watch:

// editImage (UIImage), amount (double), vector (a CIVector for inputCenter),
// backgroundFillColor (UIColor), and returnImage are defined elsewhere.
CIFilter *pixelateFilter = [CIFilter filterWithName:@"CIPixellate"];
[pixelateFilter setDefaults];
[pixelateFilter setValue:[CIImage imageWithCGImage:editImage.CGImage] forKey:kCIInputImageKey];
[pixelateFilter setValue:@(amount) forKey:@"inputScale"];
[pixelateFilter setValue:vector forKey:@"inputCenter"];
CIImage *result = [pixelateFilter valueForKey:kCIOutputImageKey];
CIContext *context = [CIContext contextWithOptions:nil];
CGRect extent = [result extent];
CGImageRef cgImage = [context createCGImage:result fromRect:extent];

// Draw into a context the size of the input image, flipping the coordinate
// system because CGContextDrawImage draws with a bottom-left origin.
UIGraphicsBeginImageContextWithOptions(editImage.size, YES, [editImage scale]);
CGContextRef ref = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ref, 0, editImage.size.height);
CGContextScaleCTM(ref, 1.0, -1.0);

CGContextSetFillColorWithColor(ref, backgroundFillColor.CGColor);
CGRect drawRect = (CGRect){{0, 0}, editImage.size};
CGContextFillRect(ref, drawRect);
CGContextDrawImage(ref, drawRect, cgImage);
UIImage *filledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
returnImage = filledImage;

CGImageRelease(cgImage);

If you're going to stick with your implementation, I'd suggest at least changing the way you create your UIImage so that it uses the scale of the original image (not to be confused with the CIFilter's inputScale):

UIImage *newImage = [UIImage imageWithCGImage:cgiimage scale:fromImage.scale orientation:fromImage.imageOrientation];
Betrothed answered 22/4, 2015 at 3:37
